Orthogonal Polynomials
Indre Skripkauskaite
F10GP Project
supervised by
Dr. M. Dreher
March 31, 2016
Contents
1 Introduction
2 Hilbert Spaces and Self-Adjoint Operators
  2.1 Hermitian Matrices
  2.2 The Space L2[a, b] and Differential Operators
3 Legendre Polynomials
  3.1 Generating Function
  3.2 Recurrence Relation
  3.3 The Differential Equation
  3.4 Orthogonality
4 Chebyshev's Polynomials
  4.1 Generating Function
  4.2 The Differential Equation
  4.3 Recurrence Relation
  4.4 Orthogonality
5 Conclusion
1 Introduction
In mathematics, a set of polynomials is said to be orthogonal under some inner product if any
two distinct polynomials from the given set are orthogonal, i.e. their scalar product equals zero.
There are quite a few families of orthogonal polynomials, but in this project we will be focusing
only on Legendre and Chebyshev's polynomials.
Legendre polynomials are widely used in physics, and Chebyshev's polynomials are applicable
in finance. In this project, however, using our knowledge of various topics from Complex Analysis,
Functional Analysis, Linear Algebra and Calculus, we will be focusing only on the mathematical,
i.e. theoretical, side: understanding the behavior of these families of polynomials in their
respective L2 Hilbert spaces, which is the aim of this project.
2 Hilbert Spaces and Self-Adjoint Operators
In this section we will be discussing properties of Hilbert spaces and self-adjoint operators,
starting with Hermitian matrices, an understanding of which we will be recalling constantly in later chapters.
We will also mention the famous Fourier series, which plays an important role in the orthogonality
of functions.
2.1 Hermitian Matrices
Definition 2.1.1. Let A be an n×n matrix and let A^T be its transpose. A is said to be a
Hermitian matrix [2] if it is equal to its conjugate transpose. This means
\overline{A}^T = A,
and we write
A^H = A.
Definition 2.1.2. Let u, v ∈ C^n. We define a scalar product on C^n as
⟨u, v⟩ = Σ_{k=1}^{n} u_k \overline{v_k}.  (2.1)
Proposition 2.1.3. Let u, v ∈ C^n and let A = A^H. Then A is self-adjoint, i.e.
⟨Au, v⟩ = ⟨u, Av⟩.
PROOF:
Assume A = A^H. We claim that ⟨Au, v⟩ = ⟨u, Av⟩, for all u, v ∈ C^n.
Then we have
⟨Au, v⟩ = Σ_{k=1}^{n} (Au)_k \overline{v_k}
= Σ_{k=1}^{n} ( Σ_{j=1}^{n} a_{kj} u_j ) \overline{v_k}
= Σ_{j=1}^{n} u_j Σ_{k=1}^{n} a_{kj} \overline{v_k}
= Σ_{j=1}^{n} u_j \overline{(A^H v)_j}   (since (A^H)_{jk} = \overline{a_{kj}})
= ⟨u, A^H v⟩.
Therefore, if A = A^H, then
⟨Au, v⟩ = ⟨u, Av⟩.  (2.2)
Proposition 2.1.4. The eigenvalues of a Hermitian matrix are real.
PROOF: Let A be a Hermitian matrix, let λ be an eigenvalue of A and let u be an eigenvector
of A to the eigenvalue λ. Suppose λ ∈ C.
By (2.2) we have that
⟨Au, u⟩ = ⟨u, Au⟩,
⟨Au, u⟩ = ⟨λu, u⟩ = λ⟨u, u⟩ = λ‖u‖²,
⟨u, Au⟩ = ⟨u, λu⟩ = \overline{λ}⟨u, u⟩ = \overline{λ}‖u‖²,
hence
λ‖u‖² = \overline{λ}‖u‖².
Note that ‖u‖² > 0 since u ≠ 0, therefore λ = \overline{λ} ∈ R.
Theorem 2.1.5. Let A = A^H and let λ, µ be eigenvalues of A with λ ≠ µ. Let u, v ∈ C^n be
eigenvectors of A to the eigenvalues λ, µ, hence
Au = λu,
Av = µv.
Then ⟨u, v⟩ = 0.
PROOF: Consider ⟨Au, v⟩. From (2.2) we already know that
⟨Au, v⟩ = ⟨u, Av⟩,
⟨λu, v⟩ = ⟨u, µv⟩,
λ⟨u, v⟩ = \overline{µ}⟨u, v⟩ = µ⟨u, v⟩   (µ is real by Proposition 2.1.4),
(λ − µ)⟨u, v⟩ = 0,
⟨u, v⟩ = 0,
since λ ≠ µ.
Hence the eigenvectors of distinct eigenvalues of a Hermitian matrix are orthogonal.
Theorem 2.1.6 (Spectral Theorem). [1] Let V be a finite-dimensional inner product space and
let A be a Hermitian matrix acting on V. Then there exists an orthonormal basis of V consisting of
eigenvectors of A.
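As a quick numerical illustration of Proposition 2.1.4, Theorem 2.1.5 and the Spectral Theorem, the following sketch (assuming NumPy is available; the matrix below is an arbitrary example, not one used in this report) builds a Hermitian matrix and checks that its eigenvalues are real and its eigenvectors form an orthonormal basis.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary Hermitian matrix A = B + B^H (example data, not from the report).
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = B + B.conj().T
assert np.allclose(A, A.conj().T)          # A is Hermitian

# eigh is specialised to Hermitian matrices and returns real eigenvalues.
eigenvalues, eigenvectors = np.linalg.eigh(A)

print("eigenvalues are real:", np.all(np.isreal(eigenvalues)))
# Columns of `eigenvectors` form an orthonormal basis (cf. Theorem 2.1.6).
print("eigenvectors orthonormal:",
      np.allclose(eigenvectors.conj().T @ eigenvectors, np.eye(4)))
```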
2.2 The Space L2[a, b] and Differential Operators
Definition 2.2.1. Let a, b ∈ R and let f(x), g(x) be two functions in L2[a, b]. Then the scalar
product in L2[a, b] [2] is defined as
⟨f, g⟩ = ∫_a^b f(x) g(x) dx.  (2.3)
We can also set A to act as a differential operator.
Example 2.2.2. Let A be the differential operator A := d²/dx² and let g_n(x) := sin(nx),
g_n ∈ L2[−π, π], n ∈ N. Then we get Ag_n = −n² g_n.
By the use of (2.3) and integration by parts
⟨Ag_n, g_m⟩ = ∫_{−π}^{π} g_n″(x) g_m(x) dx
= − ∫_{−π}^{π} g_n′(x) g_m′(x) dx + [g_n′(x) g_m(x)]_{−π}^{π}
= ∫_{−π}^{π} g_n(x) g_m″(x) dx − [g_n(x) g_m′(x)]_{−π}^{π}
= ⟨g_n, Ag_m⟩,
where both boundary terms vanish because sin(±nπ) = sin(±mπ) = 0. Hence
⟨Ag_n, g_m⟩ = ⟨g_n, Ag_m⟩  (2.4)
for n, m ∈ N.
Remark: If we compare this result to Proposition 2.1.3 in the previous section, we can see
that in this example A acts as a differential operator rather than a Hermitian matrix, and
g_n, g_m are treated as eigenfunctions rather than eigenvectors of A.
Continuing from (2.4) we have that
−n² ⟨g_n, g_m⟩ = ⟨g_n, −m² g_m⟩,
n² ⟨g_n, g_m⟩ = m² ⟨g_n, g_m⟩,
(n² − m²) ⟨g_n, g_m⟩ = 0,
⟨g_n, g_m⟩ = 0  (2.5)
for n ≠ m.
Definition 2.2.3. The set of functions {f_1(x), f_2(x), . . .} is orthogonal in L2[a, b] [2] if
⟨f_n(x), f_m(x)⟩ = 0  (2.6)
for n ≠ m.
Proposition 2.2.4. The set {sin(nx), cos(mx) : n ∈ N+, m ∈ N0} is an orthogonal set of
functions [2] in the Hilbert space L2[−π, π].
PROOF:
By use of (2.3) we can see that
⟨1, cos(nx)⟩ = ∫_{−π}^{π} cos(nx) dx = [sin(nx)/n]_{−π}^{π} = 0,
⟨1, sin(nx)⟩ = ∫_{−π}^{π} sin(nx) dx = [−cos(nx)/n]_{−π}^{π} = 0.  (2.7)
By (2.5),
⟨sin(nx), sin(mx)⟩ = 0,  ⟨cos(nx), cos(mx)⟩ = 0  for n ≠ m.  (2.8)
Note that cos(x) is an even function and sin(x) is an odd function, so their product is odd.
Hence
⟨cos(nx), sin(mx)⟩ = ∫_{−π}^{π} cos(nx) sin(mx) dx = 0.
Theorem 2.2.5. Let n ∈ N and let f(t) be a 2L-periodic function in the Hilbert space L2[−L, L].
Then the Fourier series expansion [4] of f(t) is represented as
f(t) = a_0 + Σ_{n=1}^{∞} a_n cos(nπt/L) + Σ_{n=1}^{∞} b_n sin(nπt/L),  (2.9)
where
a_n = (1/L) ∫_{−L}^{L} f(t) cos(nπt/L) dt,
b_n = (1/L) ∫_{−L}^{L} f(t) sin(nπt/L) dt,
a_0 = (1/(2L)) ∫_{−L}^{L} f(t) dt.
Remark: We can make a comparison of the above to Theorem 2.1.5, i.e. we can say that
cos(nπt/L), sin(nπt/L) play the role of eigenfunctions, with corresponding coefficients a_n, b_n, a_0,
in the Hilbert space L2[−L, L]. However, we do not claim that these functions span the full space
L2[−L, L], since that requires deeper analysis, so we will not be focusing on it in this project.
Example 2.2.6 (Heaviside step function).
f(t) = −1 for −π < t < 0,   f(t) = 1 for 0 < t < π.  (2.10)
Our function f(t) is 2π-periodic and it can be approximated by the use of (2.9) as
f(t) = (4/π) sin(t) + (4/(3π)) sin(3t) + (4/(5π)) sin(5t) + · · · + (4/((2n + 1)π)) sin((2n + 1)t) + · · · ,  (2.11)
n ∈ N.
We can now compare the original function with the approximated result, where the blue line
indicates the Heaviside step function f(t) and the red line indicates the Fourier approximation
with n = 6.
Figure 1: Fourier series approximation of the Heaviside step function (2.10) with n = 6
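A minimal sketch of how Figure 1 could be reproduced, assuming NumPy and Matplotlib are available (the plotting details are our own choices, not taken from the report):

```python
import numpy as np
import matplotlib.pyplot as plt

def square_wave(t):
    """The 2*pi-periodic step function of (2.10): -1 on (-pi, 0), +1 on (0, pi)."""
    return np.sign(np.sin(t))

def fourier_partial_sum(t, n_terms=6):
    """Partial sum of (2.11): odd sine harmonics with coefficients 4 / ((2k+1) * pi)."""
    s = np.zeros_like(t)
    for k in range(n_terms):
        s += 4.0 / ((2 * k + 1) * np.pi) * np.sin((2 * k + 1) * t)
    return s

t = np.linspace(-np.pi, np.pi, 1000)
plt.plot(t, square_wave(t), "b", label="f(t)")
plt.plot(t, fourier_partial_sum(t, 6), "r", label="Fourier approximation, n = 6")
plt.legend()
plt.show()
```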
3 Legendre Polynomials
3.1 Generating Function
Legendre polynomials were introduced by Legendre in the theory of potential, where they are
related to the expansion of the reciprocal of the distance 1/R, where R is the distance between two
points r and r′.
Figure 2: the two points r and r′ and the angle θ between them
R = |r − r′| = (r² + r′² − 2rr′ cos θ)^{1/2},
where θ is the angle between r and r′. If we let t = r′/r, x = cos(θ), we have that
1/R = (1/r)(1 − 2xt + t²)^{−1/2}  (3.1)
for −1 ≤ x ≤ 1. If we rewrite
(1 − 2xt + t²)^{−1/2}
as
(t − x − √(x² − 1))^{−1/2} (t − x + √(x² − 1))^{−1/2}  (3.2)
and treat it as a function of t, we see that (3.2) has two singularities at
x ± √(x² − 1),
for −1 ≤ x ≤ 1. Therefore, if |t| ≤ min |x ± √(x² − 1)|, we have the following Taylor series
expansion:
(1 − 2xt + t²)^{−1/2} = Σ_{n=0}^{∞} P_n(x) t^n,
t ∈ C. This is called the generating function [3] for the functions P_n(x), which will be discussed
now.
Definition 3.1.1. The Legendre polynomials are defined by Rodrigues' formula [5]
P_n(x) = (1/(2^n n!)) d^n/dx^n (x² − 1)^n,  n ∈ N,  (3.3)
for arbitrary values of x.
Later we will see that these P_n's are the same as the coefficients in the generating-function
expansion above.
To obtain the general expression for the nth Legendre polynomial we will use the binomial
expansion
(x² − 1)^n = Σ_{k=0}^{n} ((−1)^k n!)/(k!(n − k)!) x^{2n−2k}.  (3.4)
Substituting (3.4) into (3.3) implies
P_n(x) = Σ_{k=0}^{[n/2]} ((−1)^k (2n − 2k)!)/(2^n k!(n − k)!(n − 2k)!) x^{n−2k}.  (3.5)
By the use of (3.5) we can see that the first eleven Legendre polynomials [8] are:
P0(x) = 1,
P1(x) = x,
P2(x) = (1/2)(3x² − 1),
P3(x) = (1/2)(5x³ − 3x),
P4(x) = (1/8)(35x⁴ − 30x² + 3),
P5(x) = (1/8)(63x⁵ − 70x³ + 15x),
P6(x) = (1/16)(231x⁶ − 315x⁴ + 105x² − 5),
P7(x) = (1/16)(429x⁷ − 693x⁵ + 315x³ − 35x),
P8(x) = (1/128)(6435x⁸ − 12012x⁶ + 6930x⁴ − 1260x² + 35),
P9(x) = (1/128)(12155x⁹ − 25740x⁷ + 18018x⁵ − 4620x³ + 315x),
P10(x) = (1/256)(46189x¹⁰ − 109395x⁸ + 90090x⁶ − 30030x⁴ + 3465x² − 63).
Figure 3: Legendre polynomials of degree 0 through 10
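To sanity-check the list above, Rodrigues' formula (3.3) can be evaluated symbolically. A small sketch, assuming SymPy is available (the helper name rodrigues_legendre is ours):

```python
import sympy as sp

x = sp.symbols("x")

def rodrigues_legendre(n):
    """P_n(x) from Rodrigues' formula (3.3): (1 / (2^n n!)) d^n/dx^n (x^2 - 1)^n."""
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

for n in range(6):
    print(f"P_{n}(x) =", rodrigues_legendre(n))

# Cross-check against SymPy's built-in Legendre polynomials.
assert all(sp.simplify(rodrigues_legendre(n) - sp.legendre(n, x)) == 0 for n in range(11))
```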
We can also approach the generating function via the Cauchy Integral Formula in the following way:
Theorem 3.1.2 (Cauchy Integral Formula [5]). Let f(z) be analytic in a simply connected
domain D. Let C be a closed contour going in the counter-clockwise direction inside D, and let
z be a point in the interior of C. Then
(1/(2πi)) ∮_C f(ζ)/(ζ − z) dζ = f(z),
(n!/(2πi)) ∮_C f(ζ)/(ζ − z)^{n+1} dζ = d^n/dz^n f(z).  (3.6)
Proposition 3.1.3. For n ∈ N, t ∈ C, |x| ≤ 1, x ∈ R, with (1 − 2xt + t²) ≠ 0 (hence |t| < 1/3),
g(x, t) = (1 − 2xt + t²)^{−1/2}  (3.7)
is the generating function for the Legendre polynomials.
PROOF: We wish to show that
Σ_{n=0}^{∞} P_n(x) t^n = g(x, t).  (3.8)
Replacing P_n(x) by Rodrigues' formula and using the Cauchy integral formula (3.6), we obtain
P_n(x) = ((−1)^n/(2^n n!)) d^n/dx^n (1 − x²)^n = ((−1)^n/2^n) (1/(2πi)) ∮_C (1 − z²)^n / (z − x)^{n+1} dz,  (3.9)
where C is any curve enclosing x, going in a counter-clockwise direction. Now inserting (3.9)
into (3.8) we get
Σ_{n=0}^{∞} P_n(x) t^n = Σ_{n=0}^{∞} ((−1)^n/2^n) (1/(2πi)) t^n ∮_C (1 − z²)^n / (z − x)^{n+1} dz.  (3.10)
Now interchanging summation and integration in (3.10) we get
Σ_{n=0}^{∞} P_n(x) t^n = (1/(2πi)) ∮_C dz/(z − x) Σ_{n=0}^{∞} ((−1)^n/2^n) t^n ((1 − z²)/(z − x))^n.  (3.11)
The resulting series is a simple geometric series, which is readily summed. We then obtain the
following integral:
− (1/(2πi)) ∮_C (2/t) dz / (z² − (2/t)z + (2/t)x − 1).  (3.12)
Now we evaluate this integral by using the familiar residue integration technique. We can clearly
see that the denominator of (3.12) has two roots:
z_1 = 1/t + √(1/t² − (2/t)x + 1) = 1/t + (1/t)√(1 − 2xt + t²)
and
z_2 = 1/t − √(1/t² − (2/t)x + 1) = 1/t − (1/t)√(1 − 2xt + t²).
We can rewrite z_2 as
z_2 = (1/t² − (1/t² − (2/t)x + 1)) / (1/t + √(1/t² − (2/t)x + 1)) = ((2/t)x − 1) / (1/t + √(1/t² − (2/t)x + 1)).  (3.13)
Now if we multiply both numerator and denominator of (3.13) by t we obtain the following:
z_2 = (2x − t) / (1 + √(1 − 2xt + t²)) ≈ x  (3.14)
for t ≈ 0.
We now choose C to be a path surrounding the points x and z_2 (but not z_1), and by applying the
Residue Theorem we see that (3.12) equals
− (2/t) lim_{z→z_2} (z − z_2) / ((z − z_1)(z − z_2)) = − (2/t) · 1/(z_2 − z_1) = 1/√(1 − 2xt + t²).
Consequently,
Σ_{n=0}^{∞} P_n(x) t^n = 1/√(1 − 2xt + t²).  (3.15)
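As a quick numerical check of (3.15), one can compare a partial sum of Σ P_n(x) t^n with 1/√(1 − 2xt + t²). A small sketch, assuming NumPy and its numpy.polynomial.legendre module are available (the values of x and t below are arbitrary test values):

```python
import numpy as np
from numpy.polynomial import legendre

x, t = 0.4, 0.3   # arbitrary test values with |x| <= 1 and |t| small

# P_n(x) for n = 0..N, evaluated via numpy's Legendre-series machinery.
N = 30
pn_values = [legendre.legval(x, [0] * n + [1]) for n in range(N + 1)]

partial_sum = sum(p * t**n for n, p in enumerate(pn_values))
closed_form = 1.0 / np.sqrt(1.0 - 2.0 * x * t + t**2)

print(partial_sum, closed_form)          # should agree to many digits
assert abs(partial_sum - closed_form) < 1e-12
```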
3.2 Recurrence Relation
Proposition 3.2.1. The Legendre polynomials satisfy the recurrence relation [5]
(n + 1) P_{n+1}(x) − (2n + 1) x P_n(x) + n P_{n−1}(x) = 0  (3.16)
for n ∈ N+.
PROOF:
We are going to show it by first differentiating the generating function (3.7) with respect to t:
∂g/∂t = −(1/2)(1 + t² − 2xt)^{−3/2} (2t − 2x) = (x − t)/(1 + t² − 2xt)^{3/2} = Σ_{n=0}^{∞} n P_n(x) t^{n−1}.  (3.17)
Multiply (3.17) by (1 + t² − 2xt) to obtain
(1 + t² − 2xt) Σ_{n=0}^{∞} n P_n(x) t^{n−1} = (x − t)(1 + t² − 2xt)^{−1/2} = (x − t) Σ_{n=0}^{∞} P_n(x) t^n,
(1 + t² − 2xt) Σ_{n=0}^{∞} n P_n(x) t^{n−1} − (x − t) Σ_{n=0}^{∞} P_n(x) t^n = 0.
Setting the coefficient of t^n equal to zero, we find that
(n + 1) P_{n+1}(x) − 2nx P_n(x) + (n − 1) P_{n−1}(x) − x P_n(x) + P_{n−1}(x) = 0,
or equivalently
(n + 1) P_{n+1}(x) − (2n + 1) x P_n(x) + n P_{n−1}(x) = 0,  n ∈ N.
Legendre polynomials can thus be calculated step by step, starting from P_0(x) = 1, P_1(x) = x.
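The step-by-step computation suggested above is easy to implement. A small sketch, assuming NumPy is available (the helper name legendre_via_recurrence is ours); it builds P_0, ..., P_4 from the recurrence (3.16) and compares P_4 with the closed form from the table above:

```python
import numpy as np

def legendre_via_recurrence(n_max, x):
    """Evaluate P_0(x), ..., P_{n_max}(x) using (3.16):
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}, starting from P_0 = 1, P_1 = x."""
    p = [np.ones_like(x), x]
    for n in range(1, n_max):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[: n_max + 1]

x = np.linspace(-1.0, 1.0, 5)
p = legendre_via_recurrence(4, x)

# Compare P_4 with the closed form listed earlier.
p4_explicit = (35 * x**4 - 30 * x**2 + 3) / 8
assert np.allclose(p[4], p4_explicit)
```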
3.3 The Differential Equation
Proposition 3.3.1. The Legendre polynomials solve the differential equation [7]
((1 − x²) P_n′(x))′ + n(n + 1) P_n(x) = 0,  n ∈ N0.  (3.18)
PROOF:
By the use of the generating function (3.15)
∂/∂x:  Σ_{n=0}^{∞} P_n′(x) t^n = −(1/2) · (−2t)/(1 − 2xt + t²)^{3/2} = t/(1 − 2xt + t²)^{3/2},
∂/∂t:  Σ_{n=0}^{∞} n P_n(x) t^{n−1} = −(1/2) · (−2x + 2t)/(1 − 2xt + t²)^{3/2} = (x − t)/(1 − 2xt + t²)^{3/2},  (3.19)
∂²/∂t²:  Σ_{n=0}^{∞} n(n − 1) P_n(x) t^{n−2} = −1/(1 − 2xt + t²)^{3/2} + (x − t)(−3/2)(−2x + 2t)/(1 − 2xt + t²)^{5/2}
= −1/(1 − 2xt + t²)^{3/2} + 3(x − t)²/(1 − 2xt + t²)^{5/2},  (3.20)
and, multiplying the ∂/∂x relation by (1 − x²),
Σ_{n=0}^{∞} (1 − x²) P_n′(x) t^n = t(1 − x²)/(1 − 2xt + t²)^{3/2},
then differentiating with respect to x once more,
Σ_{n=0}^{∞} ((1 − x²) P_n′(x))′ t^n = −2xt/(1 − 2xt + t²)^{3/2} + t(1 − x²)(−3/2)(−2t)/(1 − 2xt + t²)^{5/2}
= −2xt/(1 − 2xt + t²)^{3/2} + 3t²(1 − x²)/(1 − 2xt + t²)^{5/2}.  (3.21)
Now we multiply (3.19) by t and obtain
Σ_{n=0}^{∞} n P_n(x) t^n = t(x − t)/(1 − 2xt + t²)^{3/2}.  (3.22)
If we multiply (3.20) by t² we get
Σ_{n=0}^{∞} n(n − 1) P_n(x) t^n = −t²/(1 − 2xt + t²)^{3/2} + 3t²(x − t)²/(1 − 2xt + t²)^{5/2}.  (3.23)
Now if we multiply (3.22) by 2 and add it to (3.23) we obtain the following:
Σ_{n=0}^{∞} 2n P_n(x) t^n + Σ_{n=0}^{∞} n(n − 1) P_n(x) t^n = 2t(x − t)/(1 − 2xt + t²)^{3/2} − t²/(1 − 2xt + t²)^{3/2} + 3t²(x − t)²/(1 − 2xt + t²)^{5/2},
Σ_{n=0}^{∞} (2n + n(n − 1)) P_n(x) t^n = (2xt − 3t²)/(1 − 2xt + t²)^{3/2} + 3t²(x − t)²/(1 − 2xt + t²)^{5/2},
Σ_{n=0}^{∞} n(n + 1) P_n(x) t^n = (2xt − 3t²)/(1 − 2xt + t²)^{3/2} + 3t²(x − t)²/(1 − 2xt + t²)^{5/2}.  (3.24)
Now by adding (3.21) to (3.24) we see that
Σ_{n=0}^{∞} ((1 − x²) P_n′(x))′ t^n + Σ_{n=0}^{∞} n(n + 1) P_n(x) t^n
= −2xt/(1 − 2xt + t²)^{3/2} + 3t²(1 − x²)/(1 − 2xt + t²)^{5/2} + (2xt − 3t²)/(1 − 2xt + t²)^{3/2} + 3t²(x − t)²/(1 − 2xt + t²)^{5/2}
= (−2xt + 2xt − 3t²)/(1 − 2xt + t²)^{3/2} + 3t²((1 − x²) + (x − t)²)/(1 − 2xt + t²)^{5/2}
= −3t²/(1 − 2xt + t²)^{3/2} + 3t²(1 − 2xt + t²)/(1 − 2xt + t²)^{5/2}
= (−3t² + 3t²)/(1 − 2xt + t²)^{3/2} = 0.
Hence
Σ_{n=0}^{∞} [((1 − x²) P_n′(x))′ + n(n + 1) P_n(x)] t^n = 0.
Now setting the coefficients of t^n equal to zero, we finally obtain
((1 − x²) P_n′(x))′ + n(n + 1) P_n(x) = 0,  n ∈ N.  (3.25)
Remark: Let us compare the above result to Theorem 2.1.5, which states the following:
Au = λu  (3.26)
for eigenvectors u ∈ C^n.
We shall prove the following proposition first:
Proposition 3.3.2. A = d/dx ((1 − x²) d/dx) is a self-adjoint operator in the Hilbert space L2[−1, 1],
i.e. ⟨Au, v⟩ = ⟨u, Av⟩, for all twice differentiable functions u(x), v(x), x ∈ R.
PROOF: Using integration by parts we see that
⟨Au, v⟩ = ∫_{−1}^{1} (Au)(x) · v(x) dx
= ∫_{−1}^{1} ((1 − x²) u′(x))′ v(x) dx
= [(1 − x²) u′(x) v(x)]_{−1}^{1} − ∫_{−1}^{1} (1 − x²) u′(x) v′(x) dx
= 0 − ∫_{−1}^{1} u′(x) · (1 − x²) v′(x) dx
= −[u(x)(1 − x²) v′(x)]_{−1}^{1} + ∫_{−1}^{1} u(x) ((1 − x²) v′(x))′ dx
= 0 + ∫_{−1}^{1} u(x) · (Av)(x) dx
= ⟨u, Av⟩,
where the boundary terms vanish because (1 − x²) = 0 at x = ±1; hence A is indeed self-adjoint.
If we set λ = −n(n + 1), u = P_n(x) and treat A as a differential operator in L2, i.e.
A = d/dx ((1 − x²) d/dx) in (3.26), then comparing to Theorem 2.1.5 we can write (3.25) as
d/dx ((1 − x²) d/dx P_n(x)) = −n(n + 1) P_n(x).  (3.27)
Consequently,
((1 − x²) P_n′(x))′ = −n(n + 1) P_n(x),  (3.28)
which is the self-adjoint form of the Legendre differential equation.
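A short symbolic check of (3.28), assuming SymPy is available; it verifies the self-adjoint form of the Legendre differential equation for the first few degrees:

```python
import sympy as sp

x = sp.symbols("x")

for n in range(8):
    Pn = sp.legendre(n, x)
    lhs = sp.diff((1 - x**2) * sp.diff(Pn, x), x)   # ((1 - x^2) P_n')'
    rhs = -n * (n + 1) * Pn
    assert sp.simplify(lhs - rhs) == 0
print("(3.28) holds for n = 0, ..., 7")
```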
3.4 Orthogonality
Proposition 3.4.1. Legendre polynomials are orthogonal in the Hilbert space L2[(−1, 1), dx].
PROOF:
We wish to show ∫_{−1}^{1} P_n(x) P_m(x) dx = 0 for n ≠ m. If we multiply both sides of (3.28) by
P_m(x) and take the scalar product as defined in (2.3), we get
n(n + 1) ∫_{−1}^{1} P_n(x) P_m(x) dx = − ∫_{−1}^{1} P_m(x) ((1 − x²) P_n′(x))′ dx
= − [P_m(x)(1 − x²) P_n′(x)]_{−1}^{1} + ∫_{−1}^{1} P_m′(x)(1 − x²) P_n′(x) dx
= [P_m′(x)(1 − x²) P_n(x)]_{−1}^{1} − ∫_{−1}^{1} ((1 − x²) P_m′(x))′ P_n(x) dx
= m(m + 1) ∫_{−1}^{1} P_m(x) P_n(x) dx,
where the boundary terms vanish because (1 − x²) = 0 at x = ±1. Hence
(n(n + 1) − m(m + 1)) ∫_{−1}^{1} P_m(x) P_n(x) dx = 0,
and therefore
∫_{−1}^{1} P_m(x) P_n(x) dx = 0  (3.29)
whenever n ≠ m.
Proposition 3.4.2. If n = m, then the scalar product of two Legendre polynomials satisfies
∫_{−1}^{1} P_n²(x) dx = 2/(2n + 1).
PROOF:
We substitute (3.15) into the L2([−1, 1]) scalar product by the use of (2.3):
⟨ Σ_{n=0}^{∞} P_n(x) t^n , Σ_{m=0}^{∞} P_m(x) t^m ⟩ = ⟨ 1/√(1 − 2xt + t²) , 1/√(1 − 2xt + t²) ⟩,
Σ_{n=0}^{∞} Σ_{m=0}^{∞} ⟨P_n(x), P_m(x)⟩ t^{n+m} = ∫_{−1}^{1} 1/(1 − 2xt + t²) dx.
By use of (3.29) we drop the terms where n ≠ m and consider only the cases where n = m,
obtaining the following:
Σ_{n=0}^{∞} ⟨P_n(x), P_n(x)⟩ t^{2n} = ∫_{−1}^{1} 1/(1 − 2xt + t²) dx,
Σ_{n=0}^{∞} t^{2n} ∫_{−1}^{1} P_n²(x) dx = ∫_{−1}^{1} 1/(1 − 2xt + t²) dx
= − (1/(2t)) [ln(1 − 2xt + t²)]_{x=−1}^{x=1}
= − (1/(2t)) (ln(1 − 2t + t²) − ln(1 + 2t + t²)).
Note that ln(t^α) = α ln(t), so
Σ_{n=0}^{∞} t^{2n} ∫_{−1}^{1} P_n²(x) dx = − (1/(2t)) (ln((1 − t)²) − ln((1 + t)²)) = − (1/t) (ln(1 − t) − ln(1 + t)).
Note that ln(a) − ln(b) = ln(a/b), so
Σ_{n=0}^{∞} t^{2n} ∫_{−1}^{1} P_n²(x) dx = − (1/t) ln((1 − t)/(1 + t)) = (1/t) ln(((1 − t)/(1 + t))^{−1}) = (1/t) ln((1 + t)/(1 − t)).
Using the Taylor expansion we obtain
ln((1 + t)/(1 − t)) = 2t + 2t³/3 + 2t⁵/5 + 2t⁷/7 + · · · + 2t^{2n+1}/(2n + 1) + · · · = Σ_{n=0}^{∞} 2t^{2n+1}/(2n + 1)  (3.30)
and so
(1/t) ln((1 + t)/(1 − t)) = 2 + 2t²/3 + 2t⁴/5 + 2t⁶/7 + · · · + 2t^{2n}/(2n + 1) + · · · = Σ_{n=0}^{∞} 2t^{2n}/(2n + 1).  (3.31)
Finally,
Σ_{n=0}^{∞} 2t^{2n}/(2n + 1) = Σ_{n=0}^{∞} t^{2n} ∫_{−1}^{1} P_n²(x) dx,
hence
∫_{−1}^{1} P_n²(x) dx = 2/(2n + 1).
Remark:
We have shown that Legendre polynomials are orthogonal, hence linearly independent. We can
refer back to Theorem 2.1.6 and make a comparison: we can say that in the Hilbert space
L2[(−1, 1), dx], the Legendre polynomials act as eigenfunctions. However, we do not claim that the
P_n's span the full space L2[(−1, 1), dx], because proving it would require much deeper understanding,
so we will not be focusing on that in this project.
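A quick numerical confirmation of Propositions 3.4.1 and 3.4.2, assuming NumPy and SciPy are available (the degrees checked below are an arbitrary sample):

```python
import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

def P(n, x):
    """Evaluate the Legendre polynomial P_n at x."""
    return legendre.legval(x, [0] * n + [1])

for n in range(5):
    for m in range(5):
        value, _ = quad(lambda x: P(n, x) * P(m, x), -1.0, 1.0)
        expected = 2.0 / (2 * n + 1) if n == m else 0.0
        assert abs(value - expected) < 1e-10
print("orthogonality and normalisation confirmed for n, m = 0, ..., 4")
```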
4 Chebyshev’s Polynomials
Definition 4.0.1. Chebyshev's polynomials are defined as
T_n(x) = cos(n arccos(x)),  n ∈ N0,  −1 ≤ x ≤ 1.  (4.1)
The first eleven Chebyshev's polynomials [9] are:
T0(x) = 1,
T1(x) = x,
T2(x) = 2x² − 1,
T3(x) = 4x³ − 3x,
T4(x) = 8x⁴ − 8x² + 1,
T5(x) = 16x⁵ − 20x³ + 5x,
T6(x) = 32x⁶ − 48x⁴ + 18x² − 1,
T7(x) = 64x⁷ − 112x⁵ + 56x³ − 7x,
T8(x) = 128x⁸ − 256x⁶ + 160x⁴ − 32x² + 1,
T9(x) = 256x⁹ − 576x⁷ + 432x⁵ − 120x³ + 9x,
T10(x) = 512x¹⁰ − 1280x⁸ + 1120x⁶ − 400x⁴ + 50x² − 1.
Figure 4: Chebyshev’s polynomials of degree 0 through 10
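The defining formula (4.1) is easy to test numerically. A small sketch, assuming NumPy is available (the comparison against T_5 from the table above is an arbitrary choice):

```python
import numpy as np

def chebyshev_T(n, x):
    """T_n(x) = cos(n * arccos(x)) from (4.1), valid for -1 <= x <= 1."""
    return np.cos(n * np.arccos(x))

x = np.linspace(-1.0, 1.0, 201)

# Compare with the explicit polynomial T_5 from the table above.
t5_explicit = 16 * x**5 - 20 * x**3 + 5 * x
assert np.allclose(chebyshev_T(5, x), t5_explicit)
```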
4.1 Generating Function
Proposition 4.1.1. Let x = cos(θ) and T_n(x) = cos(nθ). Then the generating function [5] for
Chebyshev's polynomials is
(1 − xt)/(1 − 2xt + t²) = Σ_{n=0}^{∞} T_n(x) t^n
for t ∈ C, |t| < 1, n ∈ N0.
PROOF:
Let |t| < 1, then
Σ_{n=0}^{∞} T_n(x) t^n = Σ_{n=0}^{∞} cos(nθ) t^n.  (4.2)
By the use of Euler's formulas:
e^{inθ} = cos(nθ) + i sin(nθ),  (4.3)
e^{−inθ} = cos(nθ) − i sin(nθ).  (4.4)
If we add (4.3) to (4.4) we get
e^{inθ} + e^{−inθ} = 2 cos(nθ).  (4.5)
Now dividing both sides of (4.5) by 2, we can see that
Σ_{n=0}^{∞} T_n(x) t^n = (1/2) Σ_{n=0}^{∞} [(t e^{iθ})^n + (t e^{−iθ})^n] = (1/2) [ Σ_{n=0}^{∞} (t e^{iθ})^n + Σ_{n=0}^{∞} (t e^{−iθ})^n ].
Note that Σ_{n=0}^{∞} (t e^{iθ})^n = 1/(1 − t e^{iθ}) and Σ_{n=0}^{∞} (t e^{−iθ})^n = 1/(1 − t e^{−iθ}), so
Σ_{n=0}^{∞} T_n(x) t^n = (1/2) [ 1/(1 − t e^{iθ}) + 1/(1 − t e^{−iθ}) ]
= (1/2) (1 − t e^{−iθ} + 1 − t e^{iθ}) / ((1 − t e^{iθ})(1 − t e^{−iθ}))
= (1/2) (2 − t(e^{iθ} + e^{−iθ})) / (1 − t(e^{iθ} + e^{−iθ}) + t²)
= (1 − t cos(θ)) / (1 − 2t cos(θ) + t²).
And since x = cos(θ),
Σ_{n=0}^{∞} T_n(x) t^n = (1 − xt)/(1 − 2xt + t²).  (4.6)
4.2 The Differential Equation
Proposition 4.2.1. Chebyshev's polynomials solve Chebyshev's differential equation [6]
(1 − x²) T_n″(x) − x T_n′(x) + n² T_n(x) = 0,  −1 ≤ x ≤ 1.  (4.7)
PROOF:
We differentiate the generating function (4.6) with respect to x and t:
∂/∂x:  Σ_{n=0}^{∞} T_n′(x) t^n = 2t(1 − tx)/(1 − 2xt + t²)² − t/(1 − 2xt + t²) = t(1 − t²)/(1 − 2xt + t²)²,  (4.8)
∂²/∂x²:  Σ_{n=0}^{∞} T_n″(x) t^n = 4t²(1 − t²)/(1 − 2xt + t²)³,  (4.9)
∂/∂t:  Σ_{n=0}^{∞} n T_n(x) t^{n−1} = −x/(1 − 2xt + t²) − (2t − 2x)(1 − xt)/(1 − 2xt + t²)² = (xt² − 2t + x)/(1 − 2xt + t²)²,  (4.10)
∂²/∂t²:  Σ_{n=0}^{∞} n(n − 1) T_n(x) t^{n−2} = (2xt − 2)/(1 − 2xt + t²)² − 2(2t − 2x)(xt² − 2t + x)/(1 − 2xt + t²)³ = −2(xt³ − 3t² + 3xt − 2x² + 1)/(1 − 2xt + t²)³,  (4.11)
and from (4.8) and (4.9),
Σ_{n=0}^{∞} (1 − x²) T_n″(x) t^n = 4t²(1 − x²)(1 − t²)/(1 − 2xt + t²)³,  (4.12)
Σ_{n=0}^{∞} x T_n′(x) t^n = xt(1 − t²)/(1 − 2xt + t²)².  (4.13)
Multiplying (4.10) by t we get
Σ_{n=0}^{∞} n T_n(x) t^n = t(xt² − 2t + x)/(1 − 2xt + t²)²,  (4.14)
then multiplying (4.11) by t²,
Σ_{n=0}^{∞} n(n − 1) T_n(x) t^n = −2t²(xt³ − 3t² + 3xt − 2x² + 1)/(1 − 2xt + t²)³.  (4.15)
Now adding (4.14) to (4.15) we obtain the following:
Σ_{n=0}^{∞} n² T_n(x) t^n = t(xt² − 2t + x)/(1 − 2xt + t²)² − 2t²(xt³ − 3t² + 3xt − 2x² + 1)/(1 − 2xt + t²)³.  (4.16)
Taking (4.12), subtracting (4.13) and then adding (4.16) we see that
Σ_{n=0}^{∞} [(1 − x²) T_n″(x) − x T_n′(x) + n² T_n(x)] t^n
= 4t²(1 − x²)(1 − t²)/(1 − 2xt + t²)³ − xt(1 − t²)/(1 − 2xt + t²)² + t(xt² − 2t + x)/(1 − 2xt + t²)² − 2t²(xt³ − 3t² + 3xt − 2x² + 1)/(1 − 2xt + t²)³.  (4.17)
Now writing everything under the common denominator (1 − 2xt + t²)³ we obtain the numerator
4t²(1 − x²)(1 − t²) − xt(1 − t²)(1 − 2xt + t²) + t(xt² − 2t + x)(1 − 2xt + t²) − 2t²(xt³ − 3t² + 3xt − 2x² + 1),  (4.18)
and expanding term by term one checks that every monomial cancels, so (4.18) is identically zero.
Hence
Σ_{n=0}^{∞} [(1 − x²) T_n″(x) − x T_n′(x) + n² T_n(x)] t^n = 0.
Now setting the coefficients of t^n equal to zero we obtain
(1 − x²) T_n″(x) − x T_n′(x) + n² T_n(x) = 0.
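A short symbolic check of (4.7), assuming SymPy is available; it verifies Chebyshev's differential equation for the first few degrees:

```python
import sympy as sp

x = sp.symbols("x")

for n in range(8):
    Tn = sp.chebyshevt(n, x)
    residual = (1 - x**2) * sp.diff(Tn, x, 2) - x * sp.diff(Tn, x) + n**2 * Tn
    assert sp.simplify(residual) == 0
print("(4.7) holds for n = 0, ..., 7")
```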
Proposition 4.2.2. Let u(x), v(x) be continuous and twice differentiable functions, x ∈ R. Let
A be the differential operator
A = √(1 − x²) d/dx ( √(1 − x²) d/dx ).
Then
⟨Au, v⟩ = ⟨u, Av⟩
in L2([−1, 1], dx/√(1 − x²)), i.e. A is self-adjoint.
PROOF:
⟨Au, v⟩ = ∫_{−1}^{1} (Au)(x) · v(x) dx/√(1 − x²)
= ∫_{−1}^{1} √(1 − x²) d/dx ( √(1 − x²) u′(x) ) · v(x) dx/√(1 − x²)
= ∫_{−1}^{1} ( √(1 − x²) u′(x) )′ v(x) dx
= [ √(1 − x²) u′(x) v(x) ]_{−1}^{1} − ∫_{−1}^{1} √(1 − x²) u′(x) v′(x) dx
= 0 − ∫_{−1}^{1} u′(x) · √(1 − x²) v′(x) dx
= −[ u(x) √(1 − x²) v′(x) ]_{−1}^{1} + ∫_{−1}^{1} u(x) ( √(1 − x²) v′(x) )′ dx
= 0 + ∫_{−1}^{1} u(x) · (Av)(x) dx/√(1 − x²)
= ⟨u, Av⟩,
where the boundary terms vanish because √(1 − x²) = 0 at x = ±1, i.e. A is self-adjoint.
Remark:
If we divide (4.7) by √(1 − x²) we obtain the following:
√(1 − x²) T_n″(x) − (x/√(1 − x²)) T_n′(x) + (n²/√(1 − x²)) T_n(x) = 0.  (4.19)
Proposition 4.2.3. The expression
( √(1 − x²) T_n′(x) )′ + (n²/√(1 − x²)) T_n(x) = 0  (4.20)
is the self-adjoint form of the Chebyshev differential equation.
PROOF:
If we multiply (4.20) by √(1 − x²), apply A as in Proposition 4.2.2 and let u(x) = T_n(x) and
λ = −n², we obtain
Au = λu,
√(1 − x²) d/dx ( √(1 − x²) d/dx T_n(x) ) = −n² T_n(x),
or, dividing again by √(1 − x²),
( √(1 − x²) T_n′(x) )′ = −(n²/√(1 − x²)) T_n(x),  (4.21)
hence A is a self-adjoint operator and (4.21) is a self-adjoint form of Chebyshev's differential
equation.
Remark: We compare the above to Theorem 2.1.5 and see that the same idea repeats
once again, but instead of having an eigenvector u, we have an eigenfunction u(x) and an
eigenvalue λ = −n² ∈ R.
4.3 Recurrence Relation
Proposition 4.3.1. Chebyshev’s Polynomials satisfy the following recurrence relation [7]:
T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x).
PROOF:
Let us introduce the notation θ = arccos(x).
Then (4.1) becomes T_n(x(θ)) = T_n(θ) = cos(nθ), where 0 ≤ θ ≤ π.
Replacing n by n ± 1 we observe that
T_{n+1}(θ) = cos((n + 1)θ) = cos(nθ) cos(θ) − sin(nθ) sin(θ),  (4.22)
T_{n−1}(θ) = cos((n − 1)θ) = cos(nθ) cos(θ) + sin(nθ) sin(θ).  (4.23)
Now adding (4.22) to (4.23) we obtain the following:
T_{n+1} + T_{n−1} = 2 cos(nθ) cos(θ),
T_{n+1}(θ) = 2 cos(nθ) cos(θ) − T_{n−1}(θ),
or equivalently, since cos(θ) = x and cos(nθ) = T_n(x),
T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x),  (4.24)
which is the recurrence relation for Chebyshev's Polynomials.
Observation: Now we know that Tn’s are indeed polynomials.
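The observation above can be illustrated symbolically: iterating (4.24) from T_0 = 1, T_1 = x produces polynomials with integer coefficients. A small sketch, assuming SymPy is available (the helper name chebyshev_via_recurrence is ours):

```python
import sympy as sp

x = sp.symbols("x")

def chebyshev_via_recurrence(n_max):
    """Build T_0, ..., T_{n_max} from (4.24): T_{n+1} = 2x T_n - T_{n-1}."""
    T = [sp.Integer(1), x]
    for _ in range(n_max - 1):
        T.append(sp.expand(2 * x * T[-1] - T[-2]))
    return T[: n_max + 1]

for n, Tn in enumerate(chebyshev_via_recurrence(10)):
    print(f"T_{n}(x) =", Tn)

# Agreement with the trigonometric definition (4.1) at a sample angle.
theta = sp.Rational(1, 3)
diff = chebyshev_via_recurrence(5)[5].subs(x, sp.cos(theta)) - sp.cos(5 * theta)
assert abs(diff.evalf()) < 1e-12
```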
4.4 Orthogonality
Proposition 4.4.1. Chebyshev's Polynomials are orthogonal in L2([−1, 1], dx/√(1 − x²)).
PROOF:
We shall prove this by using (4.20) for T_n and for T_m:
( √(1 − x²) T_n′ )′ + (n²/√(1 − x²)) T_n = 0,  (4.25)
( √(1 − x²) T_m′ )′ + (m²/√(1 − x²)) T_m = 0.  (4.26)
Multiplying (4.25) by T_m and (4.26) by T_n and then subtracting the results we obtain
( √(1 − x²) T_n′ )′ T_m − ( √(1 − x²) T_m′ )′ T_n + ((n² − m²)/√(1 − x²)) T_m T_n = 0,
[ √(1 − x²) (T_n′ T_m − T_m′ T_n) ]′ + ((n² − m²)/√(1 − x²)) T_m T_n = 0.  (4.27)
Now if we integrate (4.27) over the interval [−1, 1] with respect to x we get
∫_{−1}^{1} T_m T_n / √(1 − x²) dx = − [ ( √(1 − x²) / (n² − m²) ) (T_n′ T_m − T_m′ T_n) ]_{−1}^{1} = 0  (4.28)
for n ≠ m, since √(1 − x²) vanishes at x = ±1.
We can clearly see that the value of (4.28) is zero whenever m ≠ n. But what happens
when m = n ≠ 0 and m = n = 0?
Let
θ = arccos(x),  dθ = − dx/√(1 − x²),
T_n(x) = cos(n arccos(x)) = cos(nθ),  (4.29)
T_m(x) = cos(m arccos(x)) = cos(mθ),  (4.30)
where θ ∈ [0, π], n, m ∈ N.
Now if we take the scalar product in L2[0, π] of (4.29) and (4.30) we obtain the following:
⟨T_n(x), T_m(x)⟩ = ∫_0^π cos(nθ) cos(mθ) dθ  (4.31)
= ∫_0^π (cos((n + m)θ) + cos((n − m)θ))/2 dθ.  (4.32)
Now if n = m ≠ 0 we can see that (4.32) equals
(1/2) ∫_0^π (cos(2nθ) + cos(0)) dθ = (1/2) [ sin(2nθ)/(2n) + θ ]_0^π = π/2,  (4.33)
and if n = m = 0 we have
(1/2) ∫_0^π (cos(0) + cos(0)) dθ = (1/2) [ 2θ ]_0^π = π.  (4.34)
Now summarizing (4.28), (4.33) and (4.34) we finally get
∫_{−1}^{1} T_n(x) T_m(x)/√(1 − x²) dx =  0 for m ≠ n,   π/2 for m = n ≠ 0,   π for m = n = 0.
Remark:
We can refer back to Theorem 2.1.6 once again and, to summarize the above, we can say that
the T_n's are eigenfunctions, since they are orthogonal, i.e. linearly independent. However, similarly
as for the Legendre polynomials, we do not claim that they span the entire L2([−1, 1], dx/√(1 − x²)), since
showing it would require much further analysis and we shall not be focusing on that in this
project.
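A numerical cross-check of the weighted orthogonality relation above, assuming NumPy and SciPy are available; the substitution x = cos(θ) from the proof is reused to avoid the endpoint singularity of the weight:

```python
import numpy as np
from scipy.integrate import quad

def weighted_inner_product(n, m):
    """Integral of T_n T_m / sqrt(1 - x^2) over [-1, 1], computed via x = cos(theta)."""
    value, _ = quad(lambda theta: np.cos(n * theta) * np.cos(m * theta), 0.0, np.pi)
    return value

for n in range(4):
    for m in range(4):
        expected = np.pi if n == m == 0 else (np.pi / 2 if n == m else 0.0)
        assert abs(weighted_inner_product(n, m) - expected) < 1e-10
print("Chebyshev orthogonality confirmed for n, m = 0, ..., 3")
```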
5 Conclusion
We can see that we have been following the same pattern throughout the paper. Whether we are
examining Hermitian matrices, Legendre polynomials or Chebyshev's polynomials, in each case we
have some scalar product and orthogonality in some Hilbert space. We have also noticed a strong
connection between Hermitian matrices and orthogonal polynomials in general: Hermitian
matrices are self-adjoint, and the differential equations of the orthogonal polynomials can also be
expressed in self-adjoint form. It is also important to stress that the Spectral Theorem plays an
important role in the analysis of orthogonal polynomials.
References
[1] Wikipedia, Spectral Theorem, (https://en.wikipedia.org/wiki/Spectral_theorem), (Accessed on 30/03/2016).
[2] M. Youngson, Functional Analysis Lecture Notes, Heriot-Watt University, Edinburgh, 2015.
[3] Z. X. Wang, D. R. Guo. Special Functions, chapter 4, Hypergeometric Function, page 176. Singapore.
[4] H. D. Sterck, Week 4 - Discrete Fourier Methods, Introduction to Computational Mathematics Course Notes, University of Waterloo, Waterloo, 2015.
[5] Holt, Rinehart and Winston, Special Functions of Mathematical Physics, New York, 1961.
[6] G. Szego, Orthogonal Polynomials, American Mathematical Society Colloquium Publications, Volume XXIII, Chapter II, Definition of Orthogonal Polynomials; Principal Examples, pages 23-29. Providence, Rhode Island, 1939.
[7] R. A. Silverman (ed.). Special Functions and Their Applications, chapter 4, Orthogonal Polynomials, pages 43-50. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1965.
[8] Wikipedia, Legendre Polynomials, (https://en.wikipedia.org/wiki/Legendre_polynomials), (Accessed on 17/03/2016).
[9] Wikipedia, Chebyshev Polynomials, (https://en.wikipedia.org/wiki/Chebyshev_polynomials), (Accessed on 21/03/2016).
[10] M. Dreher, Mathematics for Physics III, (https://sites.google.com/site/michaeldreher7/home/lecture-notes), (Accessed on 15/03/2016).