The Spectral Theory Of Periodic Differential Equations
Patrick T M Hough
December 6, 2014
Chapter 1
Floquet Theory
1.1 Introduction
In this text I examine periodic differential equations and the nature of their solutions through the methods of spectral theory. By viewing the problem of solving such an equation as an eigenvalue problem, I hope to call on results already developed in the theory of self-adjoint and Sturm-Liouville problems. In doing so I aim to develop tools that allow the nature of the solutions to be determined from the category of equation under consideration, and to define these categories as concisely as possible.
1.2 Floquet Theory
We start with the following general second-order differential equation. Note that x is a real variable and lies on (−∞, ∞):

a0(x)y''(x) + a1(x)y'(x) + a2(x)y(x) = 0. (1.1)

Here we state the key properties of the coefficients ar(x) for r = 0, 1, 2.

• Complex-valued;

• Piecewise continuous;

• Periodic with period a, i.e. ar(x + a) = ar(x);

• a0(x) is assumed to be strictly positive.
We now study the nature of solutions to equation (1.1). These results with their proofs
are known as Floquet theory, after G. Floquet (1883).
Note that since the ar(x) are periodic with period a, if ψ(x) is a solution of equation (1.1) then so is ψ(x + a), though these two solutions need not be the same and, indeed, the existence of a periodic solution is not assured. We do, however, have the following important result.
Theorem 1.2.1. There exists a non-trivial solution ψ(x) of (1.1) and non-zero constant
ρ such that
ψ(x + a) = ρψ(x), (1.2)
for x ∈ R.
Before I give the proof, I note a standard result from the theory of ordinary differential
equations.
Remark. Since equation (1.1) is of order 2, there exist two linearly independent solutions
φ1(x) and φ2(x) and we can choose
φ1(0) = 1, φ1'(0) = 0, φ2(0) = 0, φ2'(0) = 1. (1.3)
I will refer to these two solutions many times throughout the text and so the reader should
recognise them as φ1(x) and φ2(x) with the above conditions.
Proof. Since φ1(x + a) and φ2(x + a) are also linearly independent solutions of (1.1), there
exist constants Aij (i, j = 1, 2) such that
φ1(x + a) = A11φ1(x) + A12φ2(x),
φ2(x + a) = A21φ1(x) + A22φ2(x),
(1.4)
where the matrix A = (Aij) is non-singular. Further, any solution ψ(x) of (1.1) has the form
ψ(x) = c1φ1(x) + c2φ2(x),
where c1 and c2 are constants. Now (1.2) implies that
c1φ1(x + a) + c2φ2(x + a) = ρc1φ1(x) + ρc2φ2(x),
and hence
c1(A11φ1(x) + A12φ2(x)) + c2(A21φ1(x) + A22φ2(x)) = ρc1φ1(x) + ρc2φ2(x).
Comparing coefficients of φi(x) (i = 1, 2) gives

(A11 − ρ)c1 + A21c2 = 0,
A12c1 + (A22 − ρ)c2 = 0,

that is, (A^T − ρI)(c1, c2)^T = 0. This system has a solution with c1 and c2 not both zero if and only if the determinant of the coefficient matrix is zero, i.e.

ρ² − (A11 + A22)ρ + det A = 0. (1.5)
This quadratic equation has two roots given by

ρ1,2 = {D ± √(D² − 4 det A)}/2,

where D = A11 + A22. Since det A ≠ 0, both roots are non-zero. This proves the theorem.
Substituting the conditions (1.3) into (1.4), we find
A11 = φ1(a), A12 = φ1'(a), A21 = φ2(a), A22 = φ2'(a). (1.6)
Let us note a useful result that is used throughout the text.
The Wronskian and Abel’s Identity
Given the equation

y''(x) + p(x)y'(x) + q(x)y(x) = 0, (1.7)

where p(x) and q(x) are continuous, and two solutions y1(x) and y2(x), the Wronskian

W(y1, y2)(x) = y1(x)y2'(x) − y1'(x)y2(x)

satisfies Abel's identity

W(y1, y2)(x) = W(y1, y2)(x0) exp(−∫_{x0}^x p(ξ) dξ), (1.8)

for any x, x0 ∈ R.
Using (1.6) and (1.8), and re-writing (1.1) in the form (1.7), we get

ρ² − {φ1(a) + φ2'(a)}ρ + exp(−∫_0^a a1(x)/a0(x) dx) = 0. (1.9)
Here we have also used that W(φ1, φ2)(0) = 1.
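As a purely illustrative aside (my own addition, not part of the original development), the entries (1.6) and the multipliers ρ1, ρ2 can be computed numerically by integrating (1.1) over one period with the initial conditions (1.3); the coefficients below and the use of scipy are sample assumptions, and (1.9) serves as a check on det A.

import numpy as np
from scipy.integrate import solve_ivp, quad

# Sample periodic coefficients (assumptions for illustration only), period a = 2*pi.
a = 2.0 * np.pi
a0 = lambda x: 2.0 + np.cos(x)          # strictly positive
a1 = lambda x: np.sin(x)
a2 = lambda x: -1.0 + 0.5 * np.cos(x)

def rhs(x, y):
    # first-order system for (1.1): y = (y, y')
    return [y[1], -(a1(x) * y[1] + a2(x) * y[0]) / a0(x)]

def at_a(init):
    sol = solve_ivp(rhs, (0.0, a), init, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]                  # (phi(a), phi'(a))

phi1_a, dphi1_a = at_a([1.0, 0.0])       # conditions (1.3) for phi_1
phi2_a, dphi2_a = at_a([0.0, 1.0])       # conditions (1.3) for phi_2

A = np.array([[phi1_a, dphi1_a],         # entries (1.6)
              [phi2_a, dphi2_a]])
rho = np.linalg.eigvals(A.T)             # roots of (1.5), the multipliers

integral, _ = quad(lambda x: a1(x) / a0(x), 0.0, a)
print("multipliers:", rho)
print("det A =", np.linalg.det(A), " exp(-int a1/a0) =", np.exp(-integral))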
We now come to the significant theorem of the chapter, which examines the values of ρ given by Theorem 1.2.1.
Theorem 1.2.2. There exist two linearly independent solutions of (1.1) such that either

(i) ψ1(x) = e^{m1 x} p1(x), ψ2(x) = e^{m2 x} p2(x),

where m1 and m2 are constants, not necessarily distinct, and p1(x) and p2(x) are periodic with period a, or

(ii) ψ1(x) = e^{mx} p1(x), ψ2(x) = e^{mx} {x p1(x) + p2(x)},

where m is a constant and p1(x) and p2(x) are periodic with period a.

The former occurs when either the roots ρ1 and ρ2 of the quadratic equation (1.5) are distinct, or ρ1 = ρ2 and rank(A^T − ρI) = 0. The latter occurs when ρ1 = ρ2 and rank(A^T − ρI) = 1.
We see that the mk are given by ρk = e^{a mk}, i.e. mk = (1/a) log ρk. Here we call ρ1 and ρ2 the characteristic multipliers and m1 and m2 the characteristic exponents.
Proof. We begin by considering (1.5) and find that two cases arise.

• Case 1: ρ1, ρ2 distinct. Theorem 1.2.1 tells us that there are two non-trivial solutions ψ1(x) and ψ2(x) such that

ψk(x + a) = ρk ψk(x) (k = 1, 2).

We can easily verify that the ψk(x) are linearly independent. Since det A ≠ 0, ρ1 and ρ2 are non-zero also, and so we can find (in general complex) numbers m1 and m2 such that

ρk = e^{a mk}.

Let

pk(x) = e^{−mk x} ψk(x).

Then

pk(x + a) = e^{−mk(x+a)} ψk(x + a) = ρk^{−1} ρk e^{−mk x} ψk(x) = pk(x),

and so p1(x) and p2(x) have period a. We conclude that there exist two linearly independent solutions ψ1(x) and ψ2(x) such that

ψk(x) = e^{mk x} pk(x).
• Case 2: ρ1 = ρ2 = ρ. Again, we know that ρ is non-zero and so we can find a number m such that

e^{am} = ρ.

By Theorem 1.2.1 there exists a solution Ψ1(x) of equation (1.1) such that

Ψ1(x + a) = ρΨ1(x).

Let Ψ2(x) be any other linearly independent solution of (1.1). We know that Ψ2(x + a) is also a solution of (1.1), so we can write

Ψ2(x + a) = d1Ψ1(x) + d2Ψ2(x),

where d1 and d2 are constants. Let us now calculate d2. Note that

W(Ψ1, Ψ2)(x + a) = ρΨ1(x){d1Ψ1'(x) + d2Ψ2'(x)} − ρΨ1'(x){d1Ψ1(x) + d2Ψ2(x)}
               = ρ d2 W(Ψ1, Ψ2)(x). (1.10)
Rearranging (1.10) and using Abel's identity we have that

ρ d2 = exp(−∫_x^{x+a} {a1(t)/a0(t)} dt) = exp(−∫_0^a {a1(t)/a0(t)} dt) = det A.

Here we have used that the integrand has period a. Equation (1.5) tells us that det A = ρ², and so we conclude that d2 = ρ. Having found d2, we have

Ψ2(x + a) = d1Ψ1(x) + ρΨ2(x). (1.11)
We now have two sub-cases.

Suppose d1 = 0. Then equation (1.11) gives Ψ2(x + a) = ρΨ2(x) and we proceed as in Case 1, since we have two linearly independent solutions with property (1.2), where m1 = m2 = m and ψk(x) = Ψk(x) (k = 1, 2).

If d1 ≠ 0, let

P1(x) = e^{−mx} Ψ1(x),

and

P2(x) = e^{−mx} {Ψ2(x) − (d1/aρ)xΨ1(x)}.

Now, by (1.2) and (1.11), P1(x) and P2(x) have period a. Finally, we arrive at the two solutions

Ψ1(x) = e^{mx} P1(x),

and

Ψ2(x) = e^{mx} {(d1/aρ)xP1(x) + P2(x)}.

On taking ψ1(x) = Ψ1(x) and ψ2(x) = (aρ/d1)Ψ2(x), we end up in Part (ii) of the Theorem.
On examining the above working we can say precisely which of the two situations occurs, based on the roots of (1.5). If the roots are distinct then we know immediately that we are working within Part (i) of the Theorem. If the roots are equal, however, we must look a little more deeply. Consider the system of linear equations that arises in the proof of Theorem 1.2.1. In order to find a solution with property (1.2), we must find an eigenvector of A^T corresponding to the eigenvalue ρ. We may have either one or two linearly independent such eigenvectors. If we can find two, then we have two independent pairs (c1, c2) and thus two linearly independent solutions with property (1.2). From the working above we can see that this leads us back to Part (i) of the Theorem. If we cannot find a second eigenvector, then we have only one solution with property (1.2) and we end up in Part (ii) of the Theorem.

We conclude that Part (i) of the theorem occurs when either the ρ's are equal and rank(A^T − ρI) = 0, or the ρ's are distinct. Part (ii) occurs when the ρ's are equal and rank(A^T − ρI) = 1. This follows from the fact that if rank(A^T − ρI) = 0 then the matrix maps every two-dimensional vector to the origin, so every non-zero (c1, c2) is an eigenvector; if instead rank(A^T − ρI) = 1 then the eigenvector (c1, c2) is unique up to scalar multiples.
Remark. Note that if the roots of equation (1.5) are equal, then Part (i) of Theorem 1.2.2 occurs if and only if

φ1(a) = φ2'(a) = ρ, φ1'(a) = φ2(a) = 0.
Proof. If ρ1 = ρ2 then Part (i) of Theorem 1.2.2 occurs if and only if rank(AT − ρI) = 0.
If the rank of a matrix is zero, it must itself be the zero matrix. It follows that if ρ1 = ρ2
then
A11 = A22 = ρ, A12 = A21 = 0. (1.12)
The remark follows from property (1.6).
Until now we have developed theory relating to the solutions of (1.1); we now turn our attention to a standard form of equation (1.1), Hill's equation. It is in fact the case that equations of the form (1.1) can always be transformed into a Hill's equation, so that the results already developed here apply to it also.
1.3 Hill’s Equation
These equations, named after G. W. Hill (1877), have the form

{P(x)y'(x)}' + Q(x)y(x) = 0. (1.13)

Once again we state the key properties of the coefficients P(x) and Q(x).

• P(x) and Q(x) are real-valued and periodic with period a;

• P(x) is continuous and nowhere zero;

• P'(x) and Q(x) are piecewise continuous.
As mentioned, (1.1) may be transformed into an equation of the form (1.13). We now examine two methods for doing so.

Method 1

Suppose ∫_0^a a1(t)/a0(t) dt = 0. Multiply (1.1) by A(x) = {a0(x)}^{−1} exp(∫_0^x a1(t)/a0(t) dt) to give

{A(x)y'(x)}' + {a2(x)/a0(x)}A(x)y(x) = 0.

Method 2

Make the substitution y(x) = z(x) exp(−(1/2)∫_0^x a1(t)/a0(t) dt), assuming that a1(x)/a0(x) has a piecewise continuous derivative. We get

z''(x) + ν(x)z(x) = 0, (1.14)

where ν(x) = a2(x)/a0(x) − (1/4){a1(x)/a0(x)}² − (1/2){a1(x)/a0(x)}'.
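As a quick symbolic check of Method 2 (my own addition; the coefficients a0 = 1, a1 = sin x, a2 = cos x are arbitrary sample choices and sympy is assumed available), one can verify that the substitution removes the first-derivative term and produces exactly the ν(x) above.

import sympy as sp

x = sp.symbols('x')
z = sp.Function('z')
a1 = sp.sin(x)                           # sample coefficient (a0 = 1 here)
a2 = sp.cos(x)                           # sample coefficient

E = sp.exp(-sp.Rational(1, 2) * sp.integrate(a1, x))
y = z(x) * E                             # the substitution of Method 2

# form y'' + a1*y' + a2*y and strip the common factor E
expr = sp.simplify(sp.expand(sp.diff(y, x, 2) + a1 * sp.diff(y, x) + a2 * y) / E)

nu = a2 - a1**2 / 4 - sp.diff(a1, x) / 2
print(sp.simplify(expr - (sp.diff(z(x), x, 2) + nu * z(x))))   # prints 0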
Clearly these both give an equation of the form (1.13) but note that, using the second
method, our equation does not contain the first derivative. This equation will be the
main focus of the text.
Since equation (1.14) is of the form (1.1), we can apply all of the results from the previous section, and in particular Theorem 1.2.1. As there is no term in the first derivative, our quadratic equation (1.5) becomes
ρ² − {φ1(a) + φ2'(a)}ρ + 1 = 0. (1.15)

We now define D, the discriminant of (1.15), as

D = φ1(a) + φ2'(a). (1.16)

As we will see, the value of this quantity will prove crucial in determining the nature of solutions to (1.14), and so we examine its implications case by case. Also note that our quadratic (1.15) implies that ρ1ρ2 = 1.
The Five Cases of D

1. D > 2. Here D² − 4 > 0, so ρ1, ρ2 are real and distinct. Clearly they are both positive and cannot equal unity. The property ρ1ρ2 = 1 implies that there exists a real number m such that

ρ1 = e^{ma}, ρ2 = e^{−ma}.

By Theorem 1.2.2, distinct values of ρ give rise to the two linearly independent solutions

ψ1(x) = e^{mx} p1(x), ψ2(x) = e^{−mx} p2(x),

where p1(x) and p2(x) are periodic functions with period a.

2. D < −2. Similar to Case 1, but ρ1 and ρ2 are now negative. Our solutions, again by Part (i) of Theorem 1.2.2, are

ψ1(x) = e^{x(m+πi/a)} p1(x), ψ2(x) = e^{−x(m+πi/a)} p2(x).
3. −2 < D < 2. Here D² − 4 < 0, so ρ1, ρ2 are non-real and distinct. Since they are roots of the real quadratic (1.15), and ρ1ρ2 = 1, they are complex conjugates with |ρ1| = |ρ2| = 1. Therefore

ρ1 = e^{iaα}, ρ2 = e^{−iaα},

for some α ∈ R. We always assume that 0 < aα < π. Again we work within Part (i) of Theorem 1.2.2 and

ψ1(x) = e^{iαx} p1(x), ψ2(x) = e^{−iαx} p2(x).
4. D = 2. Here D² − 4 = 0, so ρ1 = ρ2 = 1. We now examine two sub-cases.

(i) φ2(a) = φ1'(a) = 0: Using Abel's identity, W(φ1, φ2)(a) = W(φ1, φ2)(0) = 1. So

φ1(a)φ2'(a) = 1,
D = φ1(a) + φ2'(a) = 2.

These give φ1(a) = φ2'(a) = 1. Thus rank(A^T − I) = 0, the matrix A^T − I being the zero matrix. Since ρ1 = ρ2 = 1, the characteristic exponents m1 and m2 are both zero and Theorem 1.2.2 tells us that

ψ1(x) = p1(x), ψ2(x) = p2(x).

(ii) φ2(a), φ1'(a) not both zero: Now rank(A^T − I) ≠ 0, since rank(A^T − I) = 0 if and only if

φ2(a) = φ1'(a) = 0,

so we work within Part (ii) of the Theorem, again with m = 0, to conclude that

ψ1(x) = p1(x), ψ2(x) = xp1(x) + p2(x).
5. D = −2. Here our quadratic equation (1.15) tells us that ρ1 = ρ2 = −1. Again we examine two sub-cases.

(i) φ2(a) = φ1'(a) = 0: As in the above case, rank(A^T + I) = 0 and so working within Part (i) of Theorem 1.2.2 gives us that, with m1 = m2 = πi/a,

ψ1(x) = e^{πix/a} p1(x), ψ2(x) = e^{πix/a} p2(x),

where p1(x) and p2(x) have period a. It follows that all solutions of (1.14) have semi-period a, since

ψk(x + a) = −ψk(x) (k = 1, 2).

(ii) φ2(a), φ1'(a) not both zero: Here rank(A^T + I) ≠ 0 and so we are in Part (ii) of Theorem 1.2.2, with m = πi/a, giving us

ψ1(x) = e^{πix/a} p1(x), ψ2(x) = e^{πix/a} {xp1(x) + p2(x)}.
From the above case analysis, we can simply state the following theorem by noting that,
since ψ1(x) and ψ2(x) are linearly independent, any solution to equation (1.14) may be
written as a linear combination of them.
Theorem 1.3.1. (i) If |D| > 2, all non-trivial solutions of (1.14) are unbounded in
(−∞ < x < ∞).
(ii) If |D| < 2, all solutions of (1.14) are bounded in (−∞ < x < ∞).
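For a concrete illustration (my own numerical sketch, not from the text), one can compute D = φ1(a) + φ2'(a) for a given equation of the form (1.14) and apply Theorem 1.3.1 directly; the coefficient ν below is an arbitrary sample choice with period a = π.

import numpy as np
from scipy.integrate import solve_ivp

a = np.pi
nu = lambda x: 1.0 - np.cos(2.0 * x)     # sample nu(x) with period pi (assumption)

def at_a(init):
    sol = solve_ivp(lambda x, y: [y[1], -nu(x) * y[0]], (0.0, a), init,
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

phi1 = at_a([1.0, 0.0])                  # (phi1(a), phi1'(a))
phi2 = at_a([0.0, 1.0])                  # (phi2(a), phi2'(a))
D = phi1[0] + phi2[1]

if abs(D) > 2:
    print("D = %.6f: all non-trivial solutions unbounded" % D)
elif abs(D) < 2:
    print("D = %.6f: all solutions bounded" % D)
else:
    print("D = %.6f: boundary case |D| = 2" % D)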
1.4 Boundedness and Periodicity of Solutions
We have now seen that the boundedness of solutions may be determined by simple examination of the equation in question. It is therefore possible to categorise equations of the form (1.14) into those whose solutions have certain properties.
Definition 1.4.1. The equation (1.14) is said to be stable if all solutions are bounded in
(−∞ < x < ∞).
It is called unstable if all non-trivial solutions are unbounded in (−∞ < x < ∞)
and conditionally stable if there exists a non-trivial solution which is bounded in (−∞ <
x < ∞).
Diagram 1.4 displays the results of our case-wise examination of D. We remind ourselves
that the value of α is restricted by
0 < aα < π.
We can apply Theorem 1.3.1 to Diagram 1.4 to deduce, for which values of D, equation
(1.14) is stable/unstable/conditionally stable.
Clearly if |D| > 2 then there does not exist a non-trivial solution that is bounded, so we
can say that (1.14) is unstable here.
If D = −2, we have two cases:

• If φ2(a) = φ1'(a) = 0 then all solutions of (1.14) are bounded. This is because |e^{xπi/a}| = 1 for all x, and the pi(x) (i = 1, 2) are periodic and therefore bounded. We also note that all solutions are semi-periodic with semi-period a.
• If φ2(a) and φ1'(a) are not both zero then our equation is conditionally stable. To see this, we note that, in this case, ψ1(x) is bounded by the previous argument but ψ2(x) is not.

If D = 2, we again have two cases.

• If φ2(a) = φ1'(a) = 0 then all solutions of (1.14) are bounded, since p1(x) and p2(x) are periodic and therefore bounded. Thus (1.14) is stable here and all solutions are periodic with period a.

• If φ2(a) and φ1'(a) are not both zero then (1.14) is conditionally stable. Note that only ψ1(x) is bounded.
Notice that when −2 < D < 2 we have two linearly independent solutions of equation (1.14),

ψ1(x) = e^{iαx} p1(x) and ψ2(x) = e^{−iαx} p2(x),

where 0 < aα < π. This implies that there is no non-trivial solution of equation (1.14) with period a or with period 2a.
We now state a theorem that encompasses the behaviour of solutions in the case |D| = 2,
which follows from the above observations.
Theorem 1.4.1. The equation (1.14) has non-trivial solutions with period a if and only if D = 2, and with semi-period a if and only if D = −2. All solutions of (1.14) (when |D| = 2) have period a or semi-period a if and only if, in addition, φ2(a) = φ1'(a) = 0.
The following theorem investigates the existence of solutions with period ka.
Theorem 1.4.2. Let k be a positive integer. Then (1.14) has a non-trivial solution with
period ka if and only if there exists an integer l such that
D = 2 cos(2lπ/k).
Proof. Since periodic solutions are bounded, we are not working in Cases 1 or 2, where |D| > 2.

The case k = 1 is covered by Case 4, choosing l = 0.

If k = 2, Case 3 does not occur, since no non-trivial linear combination of ψ1(x) = e^{iαx}p1(x) and ψ2(x) = e^{−iαx}p2(x) has period 2a; this follows from the fact that 0 < aα < π. The case k = 2 is therefore covered by Cases 4 and 5 (where |D| = 2), choosing l = 0 and l = 1 respectively.
[Diagram 1.4: the D-axis, marked at D = −2 and D = +2, with the regions D < −2 (unstable), −2 < D < 2 (stable) and D > 2 (unstable) and the corresponding solution pairs: for D < −2, ψ1(x) = e^{x(m+iπ/a)}p1(x), ψ2(x) = e^{−x(m+iπ/a)}p2(x); for −2 < D < 2, ψ1(x) = e^{iαx}p1(x), ψ2(x) = e^{−iαx}p2(x); for D > 2, ψ1(x) = e^{mx}p1(x), ψ2(x) = e^{−mx}p2(x). At D = −2: if φ2(a) = φ1'(a) = 0 then ψ1(x) = e^{xπi/a}p1(x), ψ2(x) = e^{xπi/a}p2(x); otherwise ψ1(x) = e^{xπi/a}p1(x), ψ2(x) = e^{xπi/a}{xp1(x) + p2(x)}. At D = +2: if φ2(a) = φ1'(a) = 0 then ψ1(x) = p1(x), ψ2(x) = p2(x); otherwise ψ1(x) = p1(x), ψ2(x) = xp1(x) + p2(x).]
If k > 2 then, by inspecting Diagram 1.4, our solution does not have period a or 2a, so Case 3 occurs. We conclude that a non-trivial solution of (1.14) has period ka if and only if

c1p1(x)(1 − e^{ikaα}) + c2p2(x)(1 − e^{−ikaα}) = 0,

which implies

e^{ikaα} = 1, and therefore kaα = 2lπ for some l ∈ Z. (1.17)

From our quadratic (1.15) we have that

D = ρ1 + ρ2 = 2 cos(aα) = 2 cos(2lπ/k).
From above, we can see that if k = 2, it is Cases 4 and 5 that occur. The only periodic
solutions that can occur here are ones with either period a or semi-period a. Hence we
have the following Corollary.
Corollary 1.4.1. A non-trivial solution of (1.14) with period 2a has either period a or
semi-period a.
Looking again at the proof, we see that if k > 2 then our solution cannot have period a or 2a, so we are in Case 3 with (1.17) holding. Under these conditions both of ψ1,2(x) = e^{±iαx}p1,2(x) have period ka. The next Corollary follows directly.
Corollary 1.4.2. If (1.14) has a non-trivial solution with period ka where k is a positive
integer and k > 2, then all solutions have period ka.
Furthermore, in the circumstances of Corollary 1.4.2, any solution ψ(x) of equation (1.14) has the form

ψ(x) = c1 e^{2lπix/ka} p1(x) + c2 e^{−2lπix/ka} p2(x),

where c1 and c2 are constants.
If, for (1.14), all solutions have period ka, then we say that solutions with period ka coexist. The coexistence problem for (1.14) is the task of deciding whether, whenever one non-trivial solution of a given periodic type exists, all solutions are of that type.
Corollary 1.4.2 has already given us an answer to the coexistence problem for k > 2. We
will touch on the coexistence problem for period a and semi-period a later on.
1.5 Even and odd periodic solutions
If the coefficient ν(x) in equation (1.14) is even then it is possible that we have even and odd periodic solutions. The circumstances under which each occurs are summarised in the following theorem.

Theorem 1.5.1. Let ν(x) be even. Then (1.14) has a non-trivial solution which is

(i) even with period a if and only if φ1'(a/2) = 0;

(ii) odd with period a if and only if φ2(a/2) = 0;

(iii) even with semi-period a if and only if φ1(a/2) = 0;

(iv) odd with semi-period a if and only if φ2'(a/2) = 0.
Proof. We will prove parts (i) and (iii), since the other parts use similar methods.

First note that if ν(x) is even then ψ(x) is a solution of (1.14) if and only if ψ(−x) is also. In particular, φ1(x) and φ1(−x) are solutions satisfying the same conditions at x = 0. Hence

φ1(x) = φ1(−x), (1.18)

so φ1(x) is even. Similarly, by the conditions (1.3), φ2(x) = −φ2(−x), so φ2(x) is odd. We can now deduce that every even solution must be a multiple of φ1(x) and every odd solution must be a multiple of φ2(x).

For part (i): by uniqueness of solutions, φ1(x) has period a if and only if its value and derivative at −a/2 and a/2 agree, i.e. φ1(−a/2) = φ1(a/2) and φ1'(−a/2) = φ1'(a/2). The first condition holds automatically since φ1(x) is even, while φ1'(x) is odd, so

φ1'(−a/2) = −φ1'(a/2). (1.19)

The two conditions therefore hold if and only if φ1'(a/2) = 0, and part (i) follows. For part (iii), φ1(x) has semi-period a if and only if φ1(a/2) = −φ1(−a/2) and φ1'(a/2) = −φ1'(−a/2). The second condition holds automatically by (1.19), and, by (1.18), the first holds if and only if φ1(a/2) = 0.
Chapter 2
Stability and Instability Intervals
2.1 Introduction
We now focus on a special case of (1.14) in which the coefficient ν(x) depends on a real parameter λ as follows:

ν(x) = λs(x) − q(x). (2.1)

The properties of s(x) and q(x) are as follows.

• s(x), q(x) are piecewise continuous with period a;

• There exists a number s > 0 such that s(x) ≥ s for all x ∈ (−∞, ∞).
Writing the leading coefficient as p(x), the corresponding Hill equation (1.13) becomes

{p(x)y'(x)}' + {λs(x) − q(x)}y(x) = 0. (2.2)

To illustrate the dependence on λ, we shall write our two linearly independent solutions φ1(x) and φ2(x) satisfying (1.3) as φ1(x, λ) and φ2(x, λ), and write

D(λ) = φ1(a, λ) + φ2'(a, λ). (2.3)
For now we regard λ as real, although we will examine the scenario in which it may be complex later on. D(λ) is an analytic function of λ, since φi(a, λ) and φi'(a, λ) (i = 1, 2) are analytic in λ for fixed x = a.

Since an analytic function is in particular continuous, the set of λ such that |D(λ)| < 2 forms an open subset of the real λ-axis, and can therefore be expressed as a countable union of disjoint open intervals.
We now examine the implications of our new parameter λ in the setting of Theorem 1.3.1.
Stability intervals: Theorem 1.3.1 implies that (2.2) is stable when λ lies within the intervals where |D(λ)| < 2.

Instability intervals: The intervals on which |D(λ)| > 2 are where (2.2) is unstable.

Conditional stability intervals: These are the closures of the stability intervals, i.e. where |D(λ)| ≤ 2.
We now develop some theory which will allow us to determine and investigate the existence
of the above intervals of λ.
2.2 Interlude into Sturm-Liouville theory
The Sturm-Liouville operator is the most general second-order differential operator that is self-adjoint under appropriate boundary conditions. The operator and boundary conditions are

L ≡ (1/w(x)) { d/dx (p(x) d/dx) + r(x) },

α1y(a) + β1y'(a) = 0,
α2y(b) + β2y'(b) = 0,

where α1, β1 are not both zero and α2, β2 are not both zero.

We now examine two sets of boundary conditions under which (2.2) may be considered a Sturm-Liouville eigenvalue problem of the form Ly = −λy. Firstly note that our operator, in the case of (2.2), is

L ≡ (1/s(x)) { d/dx (p(x) d/dx) − q(x) }.
The periodic eigenvalue problem
This comprises equation (2.2), considered to hold on [0, a], with the boundary conditions

y(0) = y(a), y'(0) = y'(a). (2.4)

The natural associated inner product space is the set of continuous functions on [0, a] with inner product

⟨f1, f2⟩ = ∫_0^a f1(x)f2(x)s(x) dx.
A standard result from functional analysis about self-adjoint eigenvalue problems is the
existence of a countable infinity of eigenvalues (counting double eigenvalues). We note two
further properties of the problem that are also standard results:

• The eigenvalues form an unbounded set.

• The eigenfunctions ψn(x) corresponding to distinct eigenvalues form an orthonormal set over [0, a] with weight function s(x), so that

⟨ψm, ψn⟩ = 1 if m = n, and 0 if m ≠ n.

We denote the eigenvalues by λn (n = 0, 1, 2, ...), where

λ0 ≤ λ1 ≤ λ2 ≤ ... , and λn → ∞ as n → ∞.
The boundary conditions (2.4) mean that we can extend each ψn(x) to (−∞, ∞) as a
continuously differentiable function with period a. This means that the λn are the values
of λ for which equation (2.2) has a non-trivial solution with period a. Furthermore, any
double eigenvalues are values of λ for which all solutions to (2.2) have period a. From
Case 4 in the examination of D(λ), it follows that the λn are the zeros of the function D(λ) − 2 and that a given λn is a double eigenvalue if and only if

φ2(a, λn) = φ1'(a, λn) = 0.
The semi-periodic eigenvalue problem
This is another Sturm-Liouville problem, with equation (2.2) considered on [0, a], where our boundary conditions are

y(0) = −y(a), y'(0) = −y'(a). (2.5)

Since this is a Sturm-Liouville eigenvalue problem we again have an unbounded, countable infinity of eigenvalues. We will denote these by µn, where

µ0 ≤ µ1 ≤ µ2 ≤ ... , and µn → ∞ as n → ∞.

We will denote the corresponding eigenfunctions by ξn(x) and note that the standard results stated above also apply to the ξn(x). This time, however, the boundary conditions (2.5) mean that the ξn(x) can be extended to (−∞, ∞) as continuously differentiable functions with semi-period a.

We conclude that the µn are the values of λ for which equation (2.2) has a non-trivial solution with semi-period a. Furthermore, any double eigenvalues are values of λ for which all solutions have semi-period a. From Case 5 in the D(λ) case analysis, we see that the µn are the zeros of the function D(λ) + 2 and that a given µn is a double eigenvalue if and only if

φ2(a, µn) = φ1'(a, µn) = 0.
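As a numerical aside (my own addition), both eigenvalue problems can be approximated by a finite-difference discretisation of (2.2) on [0, a]; the sketch below takes p = s = 1 and a sample q(x), imposes (2.4) through wrap-around coupling, and returns approximations to the first few λn. Flipping the sign of the two wrap-around entries imposes (2.5) instead and yields approximations to the µn.

import numpy as np

# Sample problem (assumptions): p = s = 1, q(x) = cos(2*pi*x/a), so that (2.2)
# reads -y'' + q(x) y = lambda y with the periodic conditions (2.4).
a, N = 1.0, 400
h = a / N
x = np.arange(N) * h
q = np.cos(2 * np.pi * x / a)

M = np.zeros((N, N))
for j in range(N):
    M[j, j] = 2.0 / h**2 + q[j]
    M[j, (j - 1) % N] -= 1.0 / h**2       # wrap-around entries impose (2.4)
    M[j, (j + 1) % N] -= 1.0 / h**2

lam = np.sort(np.linalg.eigvalsh(M))
print("lambda_0..lambda_4 approx:", lam[:5])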
Let F denote the set of all complex-valued functions f(x) which are continuous in [0, a]
and have a piecewise continuous derivative in [0, a].
2.3 Variational Results
Later on, we will require some results of a variational nature related to the previous two
problems, so I take a short interlude here to develop what we need.
We work with the λn since the results for µn are similar.
The Dirichlet Integral
Let f(x), g(x) ∈ F. We then define the Dirichlet integral J(f, g) of f and g by

J(f, g) = ∫_0^a {p(x)f'(x)g'(x) + q(x)f(x)g(x)} dx. (2.6)
We use this integral to derive clear relationships between functions in F and the eigenfunctions ψn(x), and the corresponding eigenvalues λn, of equation (2.2). It will become clear that the structure of this integral lends itself very naturally to working with (2.2).
If it is also true that g''(x) exists and is piecewise continuous then, after integrating by parts,

J(f, g) = −∫_0^a f(x)[{p(x)g'(x)}' − q(x)g(x)] dx + [p(x)f(x)g'(x)]_0^a. (2.7)
When f(x) and g(x) satisfy the periodic boundary conditions (2.4), the boundary term vanishes. In particular, when g(x) = ψn(x),

J(f, ψn) = −∫_0^a f(x)[−λnψn(x)s(x)] dx = λn ∫_0^a f(x)ψn(x)s(x) dx = λn fn, (2.8)

where fn is the Fourier coefficient appearing in the middle expression. If f(x) = ψm(x) we get

J(ψm, ψn) = λn if m = n, and 0 if m ≠ n. (2.9)
Thus the Dirichlet integral of an eigenfunction with itself gives the corresponding eigen-
value.
Proposition 2.3.1. A Lower Bound for J(f, f).

Let f(x) ∈ F satisfy the periodic boundary conditions (2.4). Then

Σ_{n=0}^∞ λn |fn|² ≤ J(f, f). (2.10)
Proof. We first prove this result assuming that q(x) ≥ 0. Then

J(g, g) = ∫_0^a {p(x)|g'(x)|² + q(x)|g(x)|²} dx ≥ 0 for all g ∈ F,

since p(x) > 0. In particular

J(f − Σ_{n=0}^N fnψn , f − Σ_{n=0}^N fnψn) ≥ 0,

where N ∈ Z+. Expanding the left-hand side by the linearity of J in each argument gives J(f, f), two groups of cross terms involving J(f, ψn) and J(ψn, f), and a double sum over the ψn. The double sum can be manipulated, using properties (2.8) and (2.9), to show that it equals

Σ_{n=0}^N |fn|² J(ψn, ψn) = Σ_{n=0}^N λn |fn|².

All of the other integrals are recognisable from the definition of the Dirichlet integral, and, since J(ψn, f) and J(f, ψn) are complex conjugates with J(f, ψn) = λn fn by property (2.8) and the λn real, the cross terms contribute −2Σ_{n=0}^N λn |fn|². Hence

J(f, f) − 2Σ_{n=0}^N λn |fn|² + Σ_{n=0}^N λn |fn|² ≥ 0, that is, Σ_{n=0}^N λn |fn|² ≤ J(f, f),

which, on letting N → ∞, gives the desired result.
Now suppose that we do not have the condition q(x) ≥ 0. We may choose a constant q0 sufficiently large that

q(x) + q0 s(x) ≥ 0 (2.11)

in [0, a], and shift the parameter λ to transform the general case into the ‘q(x) ≥ 0’ case just proven: equation (2.2) can be written as

{p(x)y'(x)}' + {Λs(x) − Q(x)}y(x) = 0,

where Λ = λ + q0 and Q(x) = q(x) + q0 s(x). Since Q(x) ≥ 0, we can use the first part of the proof to write

Σ_{n=0}^∞ (λn + q0)|fn|² ≤ ∫_0^a [p(x)|f'(x)|² + {q(x) + q0 s(x)}|f(x)|²] dx.
We now state a useful result that is used directly.

Parseval's Formula. Let f(x) be in L²([0, a]), and let fn = ∫_0^a f(x)ψn(x)s(x) dx. Then

Σ_{n=0}^∞ |fn|² = ∫_0^a |f(x)|² s(x) dx.

This is from Titchmarsh, Eigenfunction Expansions, Part II, §14.14 [3].

It follows from this formula that the terms involving q0 on the two sides are equal, and so our result is proven in the general case.
Also note that, since λn ≥ λ0, we have

J(f, f) ≥ λ0 Σ_{n=0}^∞ |fn|² = λ0 ∫_0^a |f(x)|² s(x) dx. (2.12)

Equality here clearly holds only when f(x) is an eigenfunction corresponding to λ0. Thus

λ0 = min { J(f, f) / ∫_0^a |f(x)|² s(x) dx },

where we remind ourselves that the minimum is taken over f ∈ F satisfying (2.4).
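This characterisation of λ0 suggests a simple Rayleigh-Ritz computation (my own sketch, with sample coefficients): minimising J(f, f)/∫_0^a |f|²s dx over a truncated Fourier basis of period-a functions reduces to a small generalised matrix eigenvalue problem, whose lowest eigenvalue approximates λ0.

import numpy as np
from scipy.linalg import eigh

a, K = 1.0, 8                                   # period and number of Fourier modes
x = np.linspace(0.0, a, 2001)
p = 1.0 + 0.3 * np.cos(2 * np.pi * x / a)       # sample p(x) > 0 (assumption)
q = np.cos(2 * np.pi * x / a)                   # sample q(x) (assumption)
s = np.ones_like(x)                             # sample s(x) (assumption)

basis, dbasis = [np.ones_like(x)], [np.zeros_like(x)]
for j in range(1, K + 1):
    w = 2 * np.pi * j / a
    basis += [np.cos(w * x), np.sin(w * x)]
    dbasis += [-w * np.sin(w * x), w * np.cos(w * x)]

n = len(basis)
A = np.zeros((n, n))                            # A[m, k] = J(e_m, e_k)
B = np.zeros((n, n))                            # B[m, k] = int e_m e_k s dx
for m in range(n):
    for k in range(n):
        A[m, k] = np.trapz(p * dbasis[m] * dbasis[k] + q * basis[m] * basis[k], x)
        B[m, k] = np.trapz(basis[m] * basis[k] * s, x)

vals = eigh(A, B, eigvals_only=True)
print("lambda_0 approx:", vals[0], "  next few:", vals[1:4])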
Our final variational result examines the behaviour of the eigenvalues λn in the periodic problem as we increase the functions p(x) and q(x) and decrease s(x) slightly. In the following sections we will adopt Eastham's notation of writing ‘p.p.’ where we mean “equal everywhere but at isolated points”.

Proposition 2.3.2. Let λ1,n (n ≥ 0) denote the eigenvalues in the periodic problem over [0, a] where p(x), q(x) and s(x) are replaced by p1(x), q1(x) and s1(x) respectively, such that

p1(x) ≥ p(x), q1(x) ≥ q(x) and s1(x) ≤ s(x). (2.13)

Then

(i) if s1(x) = s(x) p.p., we have λ1,n ≥ λn for all n;

(ii) otherwise, we have λ1,n ≥ λn only for those n with λn ≥ 0.
This result says that, in the first case, all eigenvalues increase (or stay the same), and in the second case they increase (or stay the same) only for those n with λn non-negative.
Proof. Let ψ1,n(x) denote the eigenfunction corresponding to λ1,n and let J1(f, g) be the Dirichlet integral with p(x), q(x) replaced by p1(x), q1(x). By (2.13), it is clear that

J1(f, f) ≥ J(f, f). (2.14)

We begin by examining the case n = 0. We aim to bound λ1,0 from below and show that one such bound is λ0. Let f(x) = ψ1,0(x). From the relation (2.9),

λ1,0 = J1(ψ1,0, ψ1,0) ≥ J(ψ1,0, ψ1,0) ≥ λ0 ∫_0^a ψ1,0(x)² s(x) dx.

Here we have also used (2.14) and (2.12). As s1(x) ≤ s(x), we have

∫_0^a ψ1,0(x)² s(x) dx ≥ ∫_0^a ψ1,0(x)² s1(x) dx = 1,
since the ψ1,n form an orthonormal set over [0, a] with weight s1(x). If s1(x) = s(x) p.p. then equality holds above and λ1,0 ≥ λ0. If, however, s1(x) < s(x) on part of [0, a], then the inequality above is strict, and if in addition λ0 < 0 then

λ0 = λ0 ∫_0^a ψ1,0(x)² s1(x) dx > λ0 ∫_0^a ψ1,0(x)² s(x) dx,

so we are unable to make the appropriate lower bound. We conclude that if s1(x) < s(x) then λ1,0 ≥ λ0 only if λ0 ≥ 0. This is all that need be said for n = 0.
For n = 1, let f(x) = c0ψ1,0(x) + c1ψ1,1(x). We may always choose the constants c0 and c1 so that

c0² + c1² = 1

and

c0 ∫_0^a ψ1,0(x)ψ0(x)s(x) dx + c1 ∫_0^a ψ1,1(x)ψ0(x)s(x) dx = 0.

Now

f(x)² = c0²ψ1,0(x)² + c1²ψ1,1(x)² + 2c0c1ψ1,0(x)ψ1,1(x),

and therefore

∫_0^a f(x)²s1(x) dx = c0² ∫_0^a ψ1,0(x)²s1(x) dx + c1² ∫_0^a ψ1,1(x)²s1(x) dx = c0² + c1² = 1,

by the orthonormality of ψ1,0 and ψ1,1 with weight s1(x). Also, by our choice of c0 and c1,

f0 = ∫_0^a f(x)ψ0(x)s(x) dx = 0.

Now

J1(f, f) = ∫_0^a {p1(x)f'(x)² + q1(x)f(x)²} dx
         = c0²J1(ψ1,0, ψ1,0) + c1²J1(ψ1,1, ψ1,1)
         = λ1,0c0² + λ1,1c1²
         ≤ λ1,1(c0² + c1²)
         = λ1,1,

since λ1,n ≥ λ1,n−1 for all n (the cross term vanishes by the analogue of (2.9) for the modified problem). From our previous variational result, and by Parseval's formula,

J(f, f) ≥ Σ_{n=1}^∞ λn fn² ≥ λ1 Σ_{n=1}^∞ fn² = λ1 ∫_0^a f(x)²s(x) dx.

Here we have used that f0 = 0. From (2.14) we then have

λ1,1 ≥ J1(f, f) ≥ J(f, f) ≥ λ1 ∫_0^a f(x)²s(x) dx.
The argument at the end of the n = 0 case can be applied here to yield the result for n = 1.
If we take f(x) = c0ψ1,0(x) + ... + cnψ1,n(x), where the cr are real constants such that

c0² + ... + cn² = 1

and

fr = 0 for 0 ≤ r ≤ n − 1,

then the previous argument may be extended to the general case.

Finally, note that unless p1(x) = p(x) and q1(x) = q(x) p.p., we have λ1,n > λn, so the eigenvalues then increase strictly.
We conclude the study of the two problems in this section by examining an example in which we can determine the λn and µn explicitly.

We take p(x) = s(x) = 1 and q(x) = 0. The general solution of equation (2.2) is then

y(x) = A cos(x√λ) + B sin(x√λ).
In this instance we will solve the semi-periodic problem, since this tends to be neglected in other texts. Applying the conditions (2.5) we have

−A = A cos(a√λ) + B sin(a√λ), (2.15)

−B = B cos(a√λ) − A sin(a√λ). (2.16)

Rearranging (2.16) gives B = A sin(a√λ)/{cos(a√λ) + 1}. Substituting this value of B into (2.15) gives

A{cos(a√λ) + 1} = 0.

Writing λ = µ we have

µ2m = µ2m+1 = (2m + 1)²π²/a²,

for m = 0, 1, 2, .... Also note that λ0 = 0.
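For comparison (my own added remark), the same conclusions follow from the discriminant: with p = s = 1 and q = 0 we have φ1(x, λ) = cos(x√λ) and φ2(x, λ) = sin(x√λ)/√λ, so that

D(λ) = φ1(a, λ) + φ2'(a, λ) = 2 cos(a√λ).

The zeros of D(λ) + 2 occur where a√λ = (2m + 1)π, reproducing the double eigenvalues µ2m = µ2m+1 = (2m + 1)²π²/a² found above, while the zeros of D(λ) − 2 give λ0 = 0 and the double eigenvalues λ2m+1 = λ2m+2 = (2m + 2)²π²/a²; in this example every instability interval other than (−∞, 0) is therefore absent.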
Before we come to the next result we develop a little theory that will prove useful in the proof of the following theorem. In particular, we focus on a few standard methods and results for ordinary differential equations that are covered by Eastham in his text Theory of Ordinary Differential Equations (1970) [2]. Let us look at the homogeneous linear ordinary differential equation

a0(x)y^(n)(x) + ... + an(x)y(x) = 0, (2.17)

where the ai(x) are continuous for 0 ≤ i ≤ n on some interval I and a0(x) ≠ 0 on I. Now consider its inhomogeneous relative

a0(x)y^(n)(x) + ... + an(x)y(x) = b(x), (2.18)

where b(x) is continuous on I.
I will simply state the following result without proof as it is widely known.
Proposition 2.3.3. Let φ1(x),...,φn(x) form a fundamental set for (2.17) and let ψ0(x) be
a particular solution of (2.18). Then if ψ(x) is a solution of (2.18) then there are unique
constants c1, ..., cn such that
ψ(x) = c1φ1(x) + ... + cnφn(x) + ψ0(x).
The reader might have previously met a few situations in which ψ0(x) could be predicted
from the form of b(x). We now present a tool called the method of variation of constants
for finding ψ0(x) that works whenever the ai(x) (0 ≤ i ≤ n) and b(x) are any continuous functions of x.

Suppose φ1(x), ..., φn(x) form a fundamental set for (2.17). We know that every solution φ(x) of (2.17) is of the form

φ(x) = c1φ1(x) + ... + cnφn(x),

where the cr are constants for 1 ≤ r ≤ n.
Since we have b(x) on the right-hand side, we try to find a solution of the form

ψ0(x) = c1(x)φ1(x) + ... + cn(x)φn(x),

where the cr(x) are to be found. The result of this method is that the cr(x) are given by

cr(x) = ∫_a^x {Wr(φ1, ..., φn)(t)/W(φ1, ..., φn)(t)} · {b(t)/a0(t)} dt,

where Wr(φ1, ..., φn) is the determinant obtained from the Wronskian determinant by replacing its r-th column by (0, ..., 0, 1). In the case n = 2 we find

ψ0(x) = −∫_a^x {(φ1(x)φ2(t) − φ2(x)φ1(t))/W(φ1, φ2)(t)} · {b(t)/a0(t)} dt.
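As a quick illustration of the n = 2 formula (my own added example): for y''(x) + y(x) = b(x) we may take φ1(x) = cos x and φ2(x) = sin x, so that W(φ1, φ2) = 1 and, with a0 = 1,

ψ0(x) = −∫_a^x {cos x sin t − sin x cos t} b(t) dt = ∫_a^x sin(x − t) b(t) dt,

which is the familiar particular solution for this equation.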
We now come to the most important result of the chapter. In the following Theorem, the
function D(λ) is examined through the existence of the eigenvalues λn and µn in the two
eigenvalue problems previously mentioned.
Theorem 2.3.1. (i) The numbers λn and µn occur in the order

λ0 < µ0 ≤ µ1 < λ1 ≤ λ2 < µ2 ≤ µ3 < λ3 ≤ λ4 < ... .
(ii) In [λ2m, µ2m], D(λ) decreases from 2 to −2.
(iii) In [µ2m+1, λ2m+1], D(λ) increases from −2 to 2.
(iv) In (−∞, λ0) and (λ2m+1, λ2m+2), D(λ) > 2.
(v) In (µ2m, µ2m+1), D(λ) < −2.
Proof. The proof must be given in several stages.

a) There exists a number Λ such that D(λ) > 2 for all λ ≤ Λ.

Since s(x) ≥ s > 0 we can choose Λ such that

q(x) − Λs(x) > 0 in (−∞, ∞). (2.19)

This is possible because q(x) is periodic on (−∞, ∞) and therefore bounded.

Let y(x) be any non-trivial solution of equation (2.2) such that either y(0) ≥ 0 and y'(0) > 0, or y(0) > 0 and y'(0) ≥ 0. Then there is an interval (0, δ) in which y(x) > 0. Now consider any interval (0, X0) in which y(x) > 0. By (2.19) we have

{p(x)y'(x)}' = {q(x) − λs(x)}y(x) > 0 for all λ ≤ Λ.

This implies that p(x)y'(x) is increasing in (0, X0). Since p(x) > 0 and y'(0) ≥ 0, we have p(x)y'(x) > 0 on (0, X0), and thus y'(x) > 0 on (0, X0). So y(x) is increasing on (0, X0) and, being continuous with y(X0) > 0, there exists a number X1 > X0 such that y(x) > 0 on (0, X1). The same argument applies on (0, X1). We conclude that y(x) has no zero in (0, ∞), so p(x)y'(x) and y(x) are increasing in (0, ∞).

Now φ1(x) and φ2(x) satisfy the conditions imposed on the arbitrary y(x) at x = 0, and in particular

φ1(a, λ) > φ1(0, λ) = 1, φ2'(a, λ) > φ2'(0, λ) = 1,

where we have used that p(0) = p(a) in the second inequality. Hence

D(λ) = φ1(a, λ) + φ2'(a, λ) > 2 for all λ ≤ Λ.
b) D'(λ) is not zero at values of λ for which |D(λ)| < 2.

We differentiate Hill's equation with respect to λ, taking y(x) = φ1(x, λ). This gives

p(x) d²/dx²(∂φ1(x, λ)/∂λ) + p'(x) d/dx(∂φ1(x, λ)/∂λ) + {λs(x) − q(x)} ∂φ1(x, λ)/∂λ = −s(x)φ1(x, λ).
In the context of our theory of inhomogeneous equations, we first note that φ1(x, λ) and φ2(x, λ) form a fundamental set for the homogeneous form. Thus, by the method of variation of constants, any solution φ0(x) can be written as

φ0(x) = c1(x)φ1(x, λ) + c2(x)φ2(x, λ),

where c1(x) and c2(x) can be found as previously described. In particular we can write

∂φ1(x, λ)/∂λ = −∫_0^x {(φ1(x, λ)φ2(t, λ) − φ2(x, λ)φ1(t, λ))/W(φ1, φ2)(t)} · {−s(t)φ1(t, λ)/p(t)} dt.
Now

W(φ1, φ2)(x) = W(φ1, φ2)(0) exp(−∫_0^x {p'(ξ)/p(ξ)} dξ) = 1 · exp(−[log p(ξ)]_0^x) = exp(log{p(0)/p(x)}) = p(0)/p(x).
Thus

∂φ1(x, λ)/∂λ = {p(0)}^{−1} ∫_0^x {φ1(x, λ)φ2(t, λ) − φ2(x, λ)φ1(t, λ)}s(t)φ1(t, λ) dt. (2.20)

Similarly

∂φ2(x, λ)/∂λ = {p(0)}^{−1} ∫_0^x {φ1(x, λ)φ2(t, λ) − φ2(x, λ)φ1(t, λ)}s(t)φ2(t, λ) dt. (2.21)
Differentiating (2.21) with respect to x gives

∂φ2'(x, λ)/∂λ = {p(0)}^{−1} ∫_0^x {φ1'(x, λ)φ2(t, λ) − φ2'(x, λ)φ1(t, λ)}s(t)φ2(t, λ) dt.
Now

D'(λ) = ∂φ1(a, λ)/∂λ + ∂φ2'(a, λ)/∂λ = {p(0)}^{−1} ∫_0^a {φ1'φ2(t, λ)² + (φ1 − φ2')φ1(t, λ)φ2(t, λ) − φ2φ1(t, λ)²}s(t) dt, (2.22)

writing φi for φi(a, λ) and φi' for φi'(a, λ).
Also

D(λ)² = {φ1(a, λ) + φ2'(a, λ)}²
      = φ1² + φ2'² + 2φ1φ2'
      = (φ1 − φ2')² + 4φ1φ2'
      = (φ1 − φ2')² + 4(1 + φ1'φ2)
      = 4 + (φ1 − φ2')² + 4φ1'φ2,

since W(φ1, φ2)(a) = φ1φ2' − φ1'φ2 = 1.
So

4φ2 p(0) D'(λ) = −∫_0^a {2φ2φ1(t, λ) + (φ1 − φ2')φ2(t, λ)}² s(t) dt − {4 − D(λ)²} ∫_0^a φ2(t, λ)² s(t) dt. (2.23)

Since p(0) > 0 and s(x) > 0, we have that when |D(λ)| < 2, φ2D'(λ) < 0. In particular D'(λ) ≠ 0, as required.
c) At a zero λn of D(λ) − 2, D'(λn) = 0 if and only if φ2(a, λn) = φ1'(a, λn) = 0. Also, if D'(λn) = 0, then D''(λn) < 0.

If φ2(a, λn) = φ1'(a, λn) = 0 then Case 4 tells us that

φ1(a, λn) = φ2'(a, λn) = 1.

Now from (2.22) we have that D'(λn) = 0.

Conversely, if D'(λn) = 0, then (2.23) gives us

{2φ2φ1(t, λn) + (φ1 − φ2')φ2(t, λn)}² s(t) ≡ 0.

It follows that φ2(a, λn) = 0 and φ1(a, λn) = φ2'(a, λn), since φ1(t, λn) and φ2(t, λn) are linearly independent. Substituting these results into (2.22) we get φ1'(a, λn) = 0, as required.
We now approach the result about D''(λn). Differentiating (2.22) with respect to λ gives D''(λ) in terms of the λ-derivatives of the φi and φi'. Then (2.20) and (2.21), with λ = λn (so that φ2 = φ1' = 0 and φ1 = φ2' = 1), give

D''(λn) = 2{p(0)}^{−2} [ ( ∫_0^a φ1(t, λn)φ2(t, λn)s(t) dt )² − ∫_0^a φ1(t, λn)²s(t) dt · ∫_0^a φ2(t, λn)²s(t) dt ].

Now for Riemann-integrable functions f(t), g(t) on [0, a] the Cauchy-Schwarz inequality (squared),

( ∫_0^a f(t)g(t) dt )² ≤ ∫_0^a f(t)² dt · ∫_0^a g(t)² dt,

holds; applying it with f = φ1(t, λn)√s(t) and g = φ2(t, λn)√s(t) gives D''(λn) ≤ 0. Furthermore, the case of equality is ruled out since φ1(t, λn) and φ2(t, λn) are linearly independent, so D''(λn) < 0.

There is a corresponding result to c) for the zeros µn of D(λ) + 2, the only difference being that D''(µn) > 0 if D'(µn) = 0.
We now examine the implications of a), b) and c) on the function D(λ) as λ moves from −∞ to ∞.

• When λ is large and negative, D(λ) > 2 by a), and it remains greater than 2 until λ reaches the first zero of D(λ) − 2, namely λ0.

• Since D(λ0) is not a local maximum (D(λ) > 2 to the left of λ0), λ0 is a simple zero of D(λ) − 2: by c), D'(λ0) ≠ 0. Thus D(λ) < 2 immediately to the right of λ0.

• Since |D(λ)| < 2 immediately to the right of λ0, b) tells us that D'(λ) ≠ 0 there, and so D(λ) is strictly decreasing.

• D(λ) continues in this fashion until it reaches the first zero of D(λ) + 2, namely µ0.

• In general, µ0 is a simple zero of D(λ) + 2, so D(λ) < −2 immediately to the right of µ0. For λ increasing from µ0, we have D(λ) < −2 until λ reaches the next zero of D(λ) + 2, namely µ1.

• Since µ1 is not a local minimum for D(λ), it is a simple zero of D(λ) + 2. Thus D(λ) > −2 immediately to the right of µ1, and D(λ) is strictly increasing until it reaches the next zero of D(λ) − 2, namely λ1.

• In general, λ1 will be a simple zero of D(λ) − 2, and so D(λ) > 2 immediately to the right of λ1.

• D(λ) remains greater than 2 to the right of λ1 until it reaches the next zero of D(λ) − 2, namely λ2. The argument then repeats as λ → ∞.

This proves parts (i) and (ii), except when we have double zeros of D(λ) ± 2. Let us examine the case in which we have a double zero of D(λ) − 2 at λ1. Here D(λ) < 2 immediately to the right of λ1 and the previous analysis still holds, except that the interval (λ1, λ2) does not appear. It follows from the examination of the periodic eigenvalue problem that λ1 = λ2, since the condition in part c) of the proof holds.
Presence of Stability and Instability Intervals

The previous theorem shows us that the stability intervals for equation (2.2) are (λ2m, µ2m) and (µ2m+1, λ2m+1). Further, the conditional stability intervals are the closures of these intervals.

The instability intervals are (−∞, λ0), together with (µ2m, µ2m+1) and (λ2m+1, λ2m+2). Note that no stability interval of (2.2) is ever absent, nor is (−∞, λ0). Any other instability interval can be absent, as a result of D(λ) ∓ 2 having a double zero.

Finally, it is clear that the absence of an instability interval means that its two coinciding endpoints give a value of λ for which all solutions have period a or semi-period a.
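To make this picture concrete (a numerical aside of my own, with p = s = 1 and a sample q), one can trace D(λ) over a range of λ and read off the bands on which |D(λ)| ≤ 2; these are the conditional stability intervals, and the gaps between them are the instability intervals.

import numpy as np
from scipy.integrate import solve_ivp

a = 1.0
q = lambda x: np.cos(2 * np.pi * x / a)   # sample coefficient (assumption)

def D(lam):
    rhs = lambda x, y: [y[1], (q(x) - lam) * y[0]]
    s1 = solve_ivp(rhs, (0.0, a), [1.0, 0.0], rtol=1e-9, atol=1e-11).y[:, -1]
    s2 = solve_ivp(rhs, (0.0, a), [0.0, 1.0], rtol=1e-9, atol=1e-11).y[:, -1]
    return s1[0] + s2[1]                  # phi1(a) + phi2'(a)

lams = np.linspace(-5.0, 60.0, 600)
inside = np.array([abs(D(l)) <= 2.0 for l in lams])
edges = np.where(np.diff(inside.astype(int)) != 0)[0]
print("approximate band edges:", lams[edges])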
We now return to a previous idea, in which we generalised our interest in non-trivial solutions of period a to those of period ka, and we go one step further, looking for solutions that are not necessarily periodic but have property (1.2). We re-state this property here: there exists a non-zero constant ρ such that our solution ψ(x) satisfies

ψ(x + a) = ρψ(x).
These two generalisations will take the form of the following two problems.
The periodic eigenvalue problem over [0, ka].
This is made up of equation (2.2), holding on [0, ka], where k ∈ Z+, with

y(ka) = y(0), y'(ka) = y'(0). (2.24)

We may view this as a generalisation of the original periodic eigenvalue problem over [0, a] (the case k = 1). It is easily shown that this is also a self-adjoint problem, and we denote the unbounded, countable infinity of eigenvalues by Λn(k) (n = 0, 1, ...). By the obvious extension of the eigenfunctions to (−∞, ∞) as continuously differentiable functions, we see that the Λn(k) are the values of λ for which (2.2) has a non-trivial solution with period ka. Clearly {λn} ⊂ {Λn(k)}. We let Σ denote the set of all Λn(k) for n = 0, 1, ... and k = 1, 2, ....
The t-periodic eigenvalue problem
This comprises equation (2.2), assumed to hold on [0, a], with

y(a) = y(0)exp(iπt), y'(a) = y'(0)exp(iπt),

where t ∈ R and −1 < t ≤ 1.

This is, again, a self-adjoint problem and we denote the eigenvalues by λn(t) (n = 0, 1, ...). We can extend the eigenfunctions in the natural way to (−∞, ∞) so that the resulting functions have property (1.2) in the form

y(x + a) = y(x)exp(iπt).
Thus the eigenfunctions satisfy (1.2) with ρ = exp(iπt). Firstly we note that the periodic and semi-periodic eigenvalue problems are recovered from this more general case when t = 0 and t = 1 respectively. Substituting this value of ρ into (1.15) gives

e^{2πit} − D(λ)e^{iπt} + 1 = 0,

which implies

D(λ) = e^{iπt} + e^{−iπt} = 2 cos(πt).

We denote by S′ the set of all λn(t) for n = 0, 1, ... and −1 < t ≤ 1, and we let S denote the set consisting of the conditional stability intervals of equation (2.2).
Theorem 2.3.2. The closure of Σ is S.
Proof. By Theorem 1.4.2, the Λn(k) are the values of λ such that

D(λ) = 2 cos(2lπ/k),

where l is a variable integer. We must show that Σ is dense in S. If we choose d with |d| ≤ 2, we can choose k and l so that 2 cos(2lπ/k) is arbitrarily close to d. Thus the Λn(k) form a dense subset of the set of λ for which |D(λ)| ≤ 2, i.e. of S.
Theorem 2.3.3. The sets S′ and S are identical.

Proof. The λn(t) are the solutions of D(λ) = 2 cos(πt). As t increases continuously from 0 to 1, 2 cos(πt) decreases continuously from 2 to −2. By Theorem 2.3.1, the corresponding solution in [λ2m, µ2m] increases continuously from λ2m to µ2m, and the corresponding solution in [µ2m+1, λ2m+1] decreases continuously from λ2m+1 to µ2m+1. Thus, as t runs over (−1, 1], the λn(t) sweep out precisely the conditional stability intervals, so S′ = S. Finally, we note that in the cases t = t′ and t = −t′, for 0 < t′ < 1, the eigenvalues are the same.
Now we examine a result that links the previous two problems. Let ψn(x; t) denote the
eigenfunctions in the t-periodic problem.
Theorem 2.3.4. Let k ∈ N and let exp(iπtr) (r = 0, 1, ..., k − 1) be the k-th roots of unity, where −1 < tr ≤ 1. Then the set of all functions ψn(x; tr), where n ≥ 0 and 0 ≤ r ≤ k − 1, is a complete set of eigenfunctions for the periodic problem over [0, ka].

Proof. Firstly, from property (1.2) in the form y(x + a) = y(x)exp(iπt),

ψn(x + ka; tr) = ψn(x + (k − 1)a; tr)exp(itrπ),

and continuing in this fashion gives

ψn(x + ka; tr) = ψn(x; tr)exp(iktrπ) = ψn(x; tr),

since exp(iπtr) is a k-th root of unity.
The stated functions are therefore eigenfunctions of the periodic problem over [0, ka], since (2.24) is satisfied. Suppose, by way of contradiction, that the set of ψn(x; tr) is not complete: then there exists f(x) ∈ L²(0, ka), not identically zero, with

∫_0^{ka} f(x)ψn(x; tr)s(x) dx = 0 for all n ≥ 0 and 0 ≤ r ≤ k − 1, (2.25)

i.e. f(x) is orthogonal to all of the stated eigenfunctions. We may re-write the integral in (2.25) as

Σ_{m=0}^{k−1} ∫_{ma}^{(m+1)a} f(x)ψn(x; tr)s(x) dx = Σ_{m=0}^{k−1} ∫_0^a f(x + ma)ψn(x + ma; tr)s(x) dx

= ∫_0^a [ Σ_{m=0}^{k−1} f(x + ma)exp(imπtr) ] ψn(x; tr)s(x) dx, (2.26)

using the t-periodic property of ψn(x; tr) and the periodicity of s(x).

Considered on [0, a], the ψn(x; tr) (n ≥ 0) form a complete set. Thus (2.25) and (2.26) give us that

Σ_{m=0}^{k−1} f(x + ma)exp(imπtr) = 0, 0 ≤ r ≤ k − 1.
This system of linear, homogeneous equations can be expressed in matrix form as

[ 1   exp(iπt0)       ...   exp((k−1)iπt0)     ] [ f(x)            ]
[ 1   exp(iπt1)       ...   exp((k−1)iπt1)     ] [ f(x + a)        ]
[ .   .               ...   .                  ] [ ...             ]  =  0.
[ 1   exp(iπt_{k−1})  ...   exp((k−1)iπt_{k−1})] [ f(x + (k − 1)a) ]

Each row of the coefficient matrix consists of the successive powers of exp(iπtr); since the exp(iπtr) are the k distinct k-th roots of unity, this is a Vandermonde matrix and its determinant is non-zero. It follows that f(x + ma) = 0 in [0, a] for every m, i.e. f(x) = 0 on [0, ka]. We conclude that no non-trivial f(x) is orthogonal to all of the stated functions, and so the ψn(x; tr) form a complete set of eigenfunctions over [0, ka].
2.4 The Mathieu Equation
This is the name given to the equation
y''(x) + {λ − 2q cos(2x)}y(x) = 0, (2.27)

where q is a non-zero real constant. Here the period is a = π.

Unlike the previous examples, the solutions cannot be given in terms of elementary functions; however, the existence of all of its instability intervals can be established.
In order to deduce this, we first discuss the circumstances under which an instability interval may be absent. In the examination of Theorem 1.2.2, we saw that an instability interval is absent in the presence of a double eigenvalue, which, in our introduction of the periodic and semi-periodic problems, was seen to give rise to the coexistence of solutions of period a and semi-period a respectively. It therefore suffices to show that the solutions of equation (2.27) do not all have period a or semi-period a for any values of λ and q (q ≠ 0). We show this directly.
Theorem 2.4.1. For no values of λ and q (q ≠ 0) do the solutions of (2.27) all have either period a or semi-period a.

The proof of this result is by contradiction: one supposes that all solutions have period π (or semi-period π) and uses Theorem 1.5.1 to deduce a contradiction involving the coefficients of the sine and cosine Fourier series of φ1(x, λ) and φ2(x, λ).
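Numerically, the non-degeneracy of every instability interval of (2.27) can be seen from its band edges (my own sketch; it uses scipy's Mathieu characteristic values, whose normalisation matches (2.27), and q = 1 as a sample value). In the notation of Theorem 2.3.1 the edges are λ0 = a0, µ0 = b1, µ1 = a1, λ1 = b2, λ2 = a2, and so on, and each gap has positive length when q ≠ 0.

from scipy.special import mathieu_a, mathieu_b

q = 1.0                                   # sample value (assumption)

edges = [("lambda_0", mathieu_a(0, q))]
for m in range(1, 5):
    edges.append(("edge b_%d" % m, mathieu_b(m, q)))
    edges.append(("edge a_%d" % m, mathieu_a(m, q)))
for name, value in edges:
    print(name, "=", float(value))

# widths of the first few instability intervals (the gaps (b_m, a_m))
for m in range(1, 5):
    print("gap %d width:" % m, float(mathieu_a(m, q) - mathieu_b(m, q)))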
2.5 Summary
We now reflect on the material we have covered.
The most significant equation so far has been (1.14). In Section 1.3, we saw how every equation of the form

a0(x)y''(x) + a1(x)y'(x) + a2(x)y(x) = 0, (2.28)

may be converted into one of the form

z''(x) + ν(x)z(x) = 0. (2.29)
The significance of this transformation was that equation (2.29) did not contain the first
derivative, which made the constant term in our quadratic equation (1.5) equal to unity.
From Theorem 1.2.2 and the accompanying proof, we saw that the roots of the quadratic (1.5) were crucial in determining the nature of solutions to (2.28).
It became clear that the significance of this quadratic equation was growing and the simpli-
fication that the transformation of section 1.3 gave would become routine as a preliminary
step in examining any equation of the form (2.28). Indeed, the Mathieu equation is already
in this form.
The nature of solutions to equation (2.29) depends on the two linearly independent solutions φ1(x) and φ2(x); we have seen that it is the value of the discriminant

D = φ1(a) + φ2'(a)
that precisely dictates this dependence. Toward the end of chapter one we encountered the
coexistence problem attached to our equation. Further, we saw that this problem can be
answered in the case of the ka-periodic problem where k > 2. In particular, we concluded
that if (2.29) has a non-trivial solution with period ka where k is a positive integer and
k > 2, then all solutions have period ka.
In Chapter 2, we narrowed our focus slightly, by looking at a particular type of equation
(2.28) and thus of (2.29). We introduced the real parameter λ, and since the discriminant,
now denoted D(λ), was dependent on λ only, this new parameter became the dictating
value in our equation. More significantly, the introduction of λ allows us to apply many
standard results in Sturm-Liouville Theory since the tasks of finding periodic or semi-
periodic solutions reduce to self-adjoint eigenvalue problems.
Self-adjointness gives us an unbounded, countable infinity of eigenvalues for both problems over [0, a].
Theorem 2.3.1 and the observations within the proof form the significant results of the
chapter, showing how the stability or otherwise of equation (2.2) varies as we move from one
eigenvalue to the next. The stability intervals for equation (2.29), we saw, are (λ2m, µ2m)
and (µ2m+1, λ2m+1), with the conditional stability intervals being the closures of these
intervals. Returning to the more general problem over [0, ka] we see that the eigenvalues
here form a countable set contained within these intervals.
Taking one step further, we examined solutions ψ(x) with the property
ψ(x + a) = ρψ(x),
in the ‘t-periodic’ problem. Theorem 2.3.4 shows clearly that this problem also covers the ‘ka-periodic’ problem, and we come to the beautiful result that is Theorem 2.3.3. This theorem tells us that the eigenvalues giving solutions to the ‘t-periodic’ problem form an uncountable set that fills out each conditional stability interval. We can picture this with the following analogy.

Taking any given stability interval, eigenvalues giving solutions to the periodic and semi-periodic problems can be found at the two ends of the interval, like ‘integer values’. Eigenvalues giving solutions to the ka-periodic problem can be found as a countable dense set, like the rational numbers, scattered throughout the interval. Eigenvalues giving solutions to the t-periodic problem ‘fill’ the interval as an uncountable set, like the real numbers, making up the whole interval.
The graph opposite gives a rough idea of how the function D(λ) fluctuates as the value of λ moves from −∞ to ∞. To the left of λ0 and in the intervals (λ2m+1, λ2m+2), D(λ) remains greater than 2, while in the intervals (µ2m, µ2m+1) it remains less than −2. The behaviour shown in the graph within these intervals is not necessarily indicative of the behaviour of D(λ) there; all that is known so far is that |D(λ)| > 2.
Bibliography
[1] M. S. P. Eastham, The Spectral Theory of Periodic Differential Equations (Scottish Academic Press, 1973).

[2] M. S. P. Eastham, Theory of Ordinary Differential Equations (Van Nostrand Reinhold, 1970).

[3] E. C. Titchmarsh, Eigenfunction Expansions, Part II (Oxford University Press, 1958).
 
11.solution of a subclass of singular second order
11.solution of a subclass of singular second order11.solution of a subclass of singular second order
11.solution of a subclass of singular second orderAlexander Decker
 
AEM Integrating factor to orthogonal trajactories
AEM Integrating factor to orthogonal trajactoriesAEM Integrating factor to orthogonal trajactories
AEM Integrating factor to orthogonal trajactoriesSukhvinder Singh
 
Multivriada ppt ms
Multivriada   ppt msMultivriada   ppt ms
Multivriada ppt msFaeco Bot
 
What are free particles in quantum mechanics
What are free particles in quantum mechanicsWhat are free particles in quantum mechanics
What are free particles in quantum mechanicsbhaskar chatterjee
 

Similar to Summer Proj. (20)

OrthogonalFunctionsPaper
OrthogonalFunctionsPaperOrthogonalFunctionsPaper
OrthogonalFunctionsPaper
 
Series solutions at ordinary point and regular singular point
Series solutions at ordinary point and regular singular pointSeries solutions at ordinary point and regular singular point
Series solutions at ordinary point and regular singular point
 
Sol75
Sol75Sol75
Sol75
 
Sol75
Sol75Sol75
Sol75
 
Ma 104 differential equations
Ma 104 differential equationsMa 104 differential equations
Ma 104 differential equations
 
Solution set 3
Solution set 3Solution set 3
Solution set 3
 
Cs jog
Cs jogCs jog
Cs jog
 
first order ode with its application
 first order ode with its application first order ode with its application
first order ode with its application
 
Thesis
ThesisThesis
Thesis
 
Ph 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICSPh 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICS
 
Physical Chemistry Assignment Help
Physical Chemistry Assignment HelpPhysical Chemistry Assignment Help
Physical Chemistry Assignment Help
 
Week 8 [compatibility mode]
Week 8 [compatibility mode]Week 8 [compatibility mode]
Week 8 [compatibility mode]
 
Solution to schrodinger equation with dirac comb potential
Solution to schrodinger equation with dirac comb potential Solution to schrodinger equation with dirac comb potential
Solution to schrodinger equation with dirac comb potential
 
Solution of a subclass of singular second order
Solution of a subclass of singular second orderSolution of a subclass of singular second order
Solution of a subclass of singular second order
 
11.solution of a subclass of singular second order
11.solution of a subclass of singular second order11.solution of a subclass of singular second order
11.solution of a subclass of singular second order
 
eigenvalue
eigenvalueeigenvalue
eigenvalue
 
Tesi
TesiTesi
Tesi
 
AEM Integrating factor to orthogonal trajactories
AEM Integrating factor to orthogonal trajactoriesAEM Integrating factor to orthogonal trajactories
AEM Integrating factor to orthogonal trajactories
 
Multivriada ppt ms
Multivriada   ppt msMultivriada   ppt ms
Multivriada ppt ms
 
What are free particles in quantum mechanics
What are free particles in quantum mechanicsWhat are free particles in quantum mechanics
What are free particles in quantum mechanics
 

Summer Proj.

ρ^2 − {φ1(a) + φ2'(a)}ρ + exp(−∫_0^a a1(x)/a0(x) dx) = 0.    (1.9)

Here we have also used that W(φ1, φ2)(0) = 1.

We now come to the significant theorem of the chapter, which examines the values ρ of Theorem 1.2.1.

Theorem 1.2.2. There exist two linearly independent solutions of (1.1) such that either

(i) ψ1(x) = e^{m1 x} p1(x),  ψ2(x) = e^{m2 x} p2(x),

where m1 and m2 are constants, not necessarily distinct, and p1(x) and p2(x) are periodic with period a, or

(ii) ψ1(x) = e^{mx} p1(x),  ψ2(x) = e^{mx} {x p1(x) + p2(x)},

where m is a constant and p1(x) and p2(x) are periodic with period a. The former occurs when either the solutions ρ1 and ρ2 of the quadratic equation (1.5) are distinct, or ρ1 = ρ2 and rank(A^T − ρI) = 0. The latter occurs when ρ1 = ρ2 and rank(A^T − ρI) = 1.
We see that the mk are given by ρk = e^{a mk}, i.e. mk = (1/a) log ρk. Here we call ρ1 and ρ2 the characteristic multipliers and m1 and m2 the characteristic exponents.

Proof. We begin by considering (1.5) and find that two cases arise.

• Case 1: ρ1, ρ2 distinct. Theorem 1.2.1 tells us that there are two non-trivial solutions ψ1(x) and ψ2(x) such that ψk(x + a) = ρk ψk(x) (k = 1, 2). One can easily verify that the ψk(x) are linearly independent. Since det A ≠ 0, ρ1 and ρ2 are non-zero, and so we can find constants m1 and m2 such that ρk = e^{a mk}. Let pk(x) = e^{−mk x} ψk(x). Then

pk(x + a) = e^{−mk(x+a)} ψk(x + a) = ρk ρk^{−1} e^{−mk x} ψk(x) = pk(x),

and so p1(x) and p2(x) have period a. We conclude that there exist two linearly independent solutions ψ1(x) and ψ2(x) such that ψk(x) = e^{mk x} pk(x).

• Case 2: ρ1 = ρ2 = ρ. Again, ρ is non-zero and so we can find a constant m such that e^{am} = ρ. By Theorem 1.2.1 there exists a solution Ψ1(x) of (1.1) such that Ψ1(x + a) = ρΨ1(x). Let Ψ2(x) be any other linearly independent solution of (1.1). Since Ψ2(x + a) is also a solution of (1.1), we can write

Ψ2(x + a) = d1Ψ1(x) + d2Ψ2(x),

where d1 and d2 are constants. Let us now calculate d2. Note that

W(Ψ1, Ψ2)(x + a) = ρΨ1(x){d1Ψ1'(x) + d2Ψ2'(x)} − ρΨ1'(x){d1Ψ1(x) + d2Ψ2(x)} = ρ d2 W(Ψ1, Ψ2)(x).    (1.10)
Rearranging (1.10) and using Abel's identity, we have

ρ d2 = exp(−∫_x^{x+a} {a1(t)/a0(t)} dt) = exp(−∫_0^a {a1(t)/a0(t)} dt) = det A,

where we have used that the integrand has period a. Equation (1.5) tells us that det A = ρ^2, and so we conclude that d2 = ρ. Having found d2, we have

Ψ2(x + a) = d1Ψ1(x) + ρΨ2(x).    (1.11)

We now have two sub-cases. Suppose d1 = 0. Then (1.11) gives Ψ2(x + a) = ρΨ2(x) and we proceed as in Case 1, since we have two linearly independent solutions with property (1.2), with m1 = m2 = m and ψk(x) = Ψk(x) (k = 1, 2).

If d1 ≠ 0, let

P1(x) = e^{−mx} Ψ1(x),  P2(x) = e^{−mx} {Ψ2(x) − (d1/(aρ)) x Ψ1(x)}.

By (1.2) and (1.11), P1(x) and P2(x) have period a. Finally, we arrive at the two solutions

Ψ1(x) = e^{mx} P1(x),  Ψ2(x) = e^{mx} {(d1/(aρ)) x P1(x) + P2(x)}.

On taking ψ1(x) = Ψ1(x) and ψ2(x) = (aρ/d1)Ψ2(x), we end up in part (ii) of the theorem.

On examining the above working we can say which of the two situations occurs based on the solutions of (1.5). If the solutions are distinct, we know immediately that we are within part (i) of the theorem. If the solutions are equal, however, we must look a little more deeply. Consider the system of linear equations that arises in the proof of Theorem 1.2.1. In order to find a solution with property (1.2), we must find an eigenvector of A^T corresponding to the eigenvalue ρ. We may have either one or two linearly independent such eigenvectors. If we can find two, then we have two choices of (c1, c2) and thus two linearly independent solutions with property (1.2); from the working above this leads us back to part (i). If we cannot find a second eigenvector, then we have only one solution with property (1.2) and we end up in part (ii).

We conclude that part (i) of the theorem occurs when either the ρ's are equal and rank(A^T − ρI) = 0, or the ρ's are distinct. Part (ii) occurs when the ρ's are equal and rank(A^T − ρI) = 1. This follows from the fact that if rank(A^T − ρI) = 0 then this matrix maps every two-dimensional vector to the origin, so every (c1, c2) is admissible, whereas if rank(A^T − ρI) = 1 then the choice of (c1, c2) is unique up to scalar multiples.
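Before moving on, here is a minimal numerical sketch (my own illustration, not part of the original text) of how the matrix A of (1.4) and (1.6) and the characteristic multipliers ρ1, ρ2 of (1.5) can be estimated for a sample equation y''(x) + ν(x)y(x) = 0; the coefficient ν and all names in the code are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

a = 2 * np.pi                                   # period of the coefficient (illustrative)
nu = lambda x: 1.0 + 0.3 * np.cos(x)            # a sample a-periodic coefficient

def rhs(x, y):
    # first-order system for y'' + nu(x) y = 0
    return [y[1], -nu(x) * y[0]]

def value_and_slope_at_a(y0, dy0):
    sol = solve_ivp(rhs, (0.0, a), [y0, dy0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1], sol.y[1, -1]

phi1_a, dphi1_a = value_and_slope_at_a(1.0, 0.0)   # phi1(0)=1, phi1'(0)=0, as in (1.3)
phi2_a, dphi2_a = value_and_slope_at_a(0.0, 1.0)   # phi2(0)=0, phi2'(0)=1

A = np.array([[phi1_a, dphi1_a],                   # A11 = phi1(a), A12 = phi1'(a)
              [phi2_a, dphi2_a]])                  # A21 = phi2(a), A22 = phi2'(a)
rho = np.linalg.eigvals(A)                         # roots of the quadratic (1.5)
m = np.log(rho.astype(complex)) / a                # characteristic exponents, rho_k = e^(a m_k)
print("multipliers:", rho, "exponents:", m, "det A:", np.linalg.det(A))

Because this sample equation has no first-derivative term, det A should come out close to 1, in agreement with (1.9); distinct multipliers put us in part (i) of Theorem 1.2.2.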
Remark. Note that if the solutions of equation (1.5) are equal, then part (i) of Theorem 1.2.2 occurs if and only if

φ1(a) = φ2'(a) = ρ,  φ1'(a) = φ2(a) = 0.

Proof. If ρ1 = ρ2 then part (i) of Theorem 1.2.2 occurs if and only if rank(A^T − ρI) = 0. If the rank of a matrix is zero, it must itself be the zero matrix. It follows that if ρ1 = ρ2 then

A11 = A22 = ρ,  A12 = A21 = 0.    (1.12)

The remark follows from property (1.6).

Until now we have developed theory related to the solutions of (1.1), but we now turn our attention to a particular form of equation (1.1): Hill's equation. It is in fact the case that equations of the form (1.1) can always be transformed into a Hill's equation, so that the results already developed here apply to it as well.

1.3 Hill's Equation

These equations, named after G. W. Hill (1877), have the form

{P(x)y'(x)}' + Q(x)y(x) = 0.    (1.13)

Once again we state the key properties of the coefficients P(x) and Q(x).

• P(x) and Q(x) are real valued and periodic with period a;
• P(x) is continuous and nowhere zero;
• P'(x) and Q(x) are piecewise continuous.

As mentioned, (1.1) may be transformed into an equation of the form (1.13). We now examine two methods for doing so.

Method 1. Suppose that ∫_0^a a1(t)/a0(t) dt = 0. Multiply (1.1) by

A(x) = {a0(x)}^{-1} exp(∫_0^x a1(t)/a0(t) dt)

to give

{A(x)y'(x)}' + {a2(x)/a0(x)}A(x)y(x) = 0.

Method 2. Make the substitution

y(x) = z(x) exp(−(1/2)∫_0^x a1(t)/a0(t) dt),

assuming that a1(x)/a0(x) has a piecewise continuous derivative. We get

z''(x) + ν(x)z(x) = 0,    (1.14)
where

ν(x) = a2(x)/a0(x) − (1/4){a1(x)/a0(x)}^2 − (1/2){a1(x)/a0(x)}'.

Clearly both methods give an equation of the form (1.13), but note that, using the second method, our equation does not contain the first derivative. This equation will be the main focus of the text. Since equation (1.14) is of the form (1.1), we can apply all of the results from the previous section, and in particular Theorem 1.2.1. As there is no term in the first derivative, our quadratic equation (1.5) becomes

ρ^2 − {φ1(a) + φ2'(a)}ρ + 1 = 0.    (1.15)

We now define D, the discriminant of (1.15), as

D = φ1(a) + φ2'(a).    (1.16)

As we will see, the value of this quantity will prove crucial in determining the nature of solutions to (1.14), and so we examine its implications case by case. Also note that our quadratic (1.15) implies ρ1ρ2 = 1.

The five cases of D

1. D > 2. Here D^2 − 4 > 0, so ρ1, ρ2 are real and distinct. Clearly they are both positive and neither can equal unity. The property ρ1ρ2 = 1 implies that there exists a real number m ≠ 0 such that

ρ1 = e^{ma},  ρ2 = e^{−ma}.

By Theorem 1.2.2, distinct values of ρ give rise to two linearly independent solutions

ψ1(x) = e^{mx} p1(x),  ψ2(x) = e^{−mx} p2(x),

where p1(x) and p2(x) are periodic functions with period a.

2. D < −2. Similar to Case 1, but ρ1 and ρ2 are now negative. Our solutions, again by part (i) of Theorem 1.2.2, are

ψ1(x) = e^{x(m + πi/a)} p1(x),  ψ2(x) = e^{x(−m + πi/a)} p2(x).

3. −2 < D < 2. Here D^2 − 4 < 0, so ρ1, ρ2 are non-real and distinct. Since they are solutions of the quadratic (1.15), the identity ρ1ρ2 = 1 gives |ρ1| = |ρ2| = 1. Therefore ρ1 and ρ2 are complex conjugates and

ρ1 = e^{iaα},  ρ2 = e^{−iaα},
with some α ∈ R. We always assume that 0 < aα < π. Again we work within part (i) of Theorem 1.2.2 and

ψ1(x) = e^{iαx} p1(x),  ψ2(x) = e^{−iαx} p2(x).

4. D = 2. Here D^2 − 4 = 0, so ρ1 = ρ2 = 1. We now examine two sub-cases.

(i) φ2(a) = φ1'(a) = 0. Using Abel's identity, W(φ1, φ2)(a) = W(φ1, φ2)(0) = 1, so

φ1(a)φ2'(a) = 1,  D = φ1(a) + φ2'(a) = 2.

These give φ1(a) = φ2'(a) = 1. Thus A^T − I is the zero matrix and rank(A^T − I) = 0. Since ρ1 = ρ2 = 1, the characteristic exponents m1 and m2 are both zero, and Theorem 1.2.2 tells us that

ψ1(x) = p1(x),  ψ2(x) = p2(x).

(ii) φ2(a), φ1'(a) not both zero. Now rank(A^T − I) ≠ 0, since rank(A^T − I) = 0 if and only if φ2(a) = φ1'(a) = 0, so we work within part (ii) of the theorem, again with m1 = m2 = 0, to conclude that

ψ1(x) = p1(x),  ψ2(x) = x p1(x) + p2(x).

5. D = −2. Here our quadratic (1.15) tells us that ρ1 = ρ2 = −1. Again we examine two sub-cases.

(i) φ2(a) = φ1'(a) = 0. As in the above case, rank(A^T + I) = 0 and so, working within part (i) of Theorem 1.2.2 with m1 = m2 = πi/a, we obtain

ψ1(x) = e^{πix/a} p1(x),  ψ2(x) = e^{πix/a} p2(x),

where p1(x) and p2(x) have period a. It follows that all solutions of (1.14) have semi-period a, since ψk(x + a) = −ψk(x) (k = 1, 2).
(ii) φ2(a), φ1'(a) not both zero. Here rank(A^T + I) ≠ 0 and so we are in part (ii) of Theorem 1.2.2, with m = πi/a, giving

ψ1(x) = e^{πix/a} p1(x),  ψ2(x) = e^{πix/a} {x p1(x) + p2(x)}.

From the above case analysis we can state the following theorem, noting that, since ψ1(x) and ψ2(x) are linearly independent, any solution of (1.14) may be written as a linear combination of them.

Theorem 1.3.1. (i) If |D| > 2, all non-trivial solutions of (1.14) are unbounded on (−∞, ∞). (ii) If |D| < 2, all solutions of (1.14) are bounded on (−∞, ∞).

1.4 Boundedness and Periodicity of Solutions

We have now seen that the boundedness of solutions may be determined by simple examination of the equation in question. It is therefore possible to categorise equations of the form (1.14) according to the properties of their solutions.

Definition 1.4.1. The equation (1.14) is said to be stable if all solutions are bounded on (−∞, ∞). It is called unstable if all non-trivial solutions are unbounded on (−∞, ∞), and conditionally stable if there exists a non-trivial solution which is bounded on (−∞, ∞).

Diagram 1.4 displays the results of our case-wise examination of D. We remind ourselves that the value of α is restricted by 0 < aα < π. We can apply Theorem 1.3.1 to Diagram 1.4 to deduce for which values of D equation (1.14) is stable, unstable or conditionally stable. Clearly if |D| > 2 then there does not exist a non-trivial bounded solution, so (1.14) is unstable there. If D = −2, we have two cases:

• If φ2(a) = φ1'(a) = 0 then all non-trivial solutions of (1.14) are bounded. This is because |e^{xπi/a}| = 1 for all x and the pi(x) (i = 1, 2) are periodic and therefore bounded. We also note that all solutions are semi-periodic with semi-period a.
• If φ2(a) and φ1'(a) are not both zero then our equation is conditionally stable. To see this, we note that, in this case, ψ1(x) is bounded by the previous argument but ψ2(x) is not.

If D = 2, we again have two cases.

• If φ2(a) = φ1'(a) = 0 then all solutions of (1.14) are bounded, since p1(x) and p2(x) are periodic and therefore bounded. Thus (1.14) is stable here and all solutions are periodic with period a.

• If φ2(a) and φ1'(a) are not both zero then (1.14) is conditionally stable. Note that only ψ1(x) is bounded.

Notice that when −2 < D < 2 we have the two linearly independent solutions ψ1(x) = e^{iαx} p1(x) and ψ2(x) = e^{−iαx} p2(x) of (1.14), where 0 < aα < π. This implies that there does not exist a non-trivial solution of (1.14) with period a or with period 2a.

We now state a theorem that encompasses the behaviour of solutions in the case |D| = 2, which follows from the above observations.

Theorem 1.4.1. The equation (1.14) has non-trivial solutions with period a if and only if D = 2, and with semi-period a if and only if D = −2. All solutions of (1.14) (when |D| = 2) have period a or semi-period a if and only if, in addition, φ2(a) = φ1'(a) = 0.

The following theorem investigates the existence of solutions with period ka.

Theorem 1.4.2. Let k be a positive integer. Then (1.14) has a non-trivial solution with period ka if and only if there exists an integer l such that D = 2 cos(2lπ/k).

Proof. Since periodic solutions are bounded, we are not working in Cases 1 or 2, where |D| > 2. In the instance that k = 1, the situation is covered by Case 4, choosing l = 0. If k = 2, Case 3 does not occur, since no non-trivial linear combination of ψ1(x) = e^{iαx}p1(x) and ψ2(x) = e^{−iαx}p2(x) has period 2a; this follows from the fact that 0 < aα < π. The case k = 2 is therefore covered by Cases 4 and 5 (where |D| = 2), choosing l = 0 and l = 1 respectively.
If k > 2 then, by inspecting Diagram 1.4, our solution need not have period a or 2a, so Case 3 occurs. We conclude that a non-trivial solution of (1.14) has period ka if and only if

c1 p1(x)(1 − e^{ikaα}) + c2 p2(x)(1 − e^{−ikaα}) = 0,

which implies e^{ikaα} = 1, and therefore

kaα = 2lπ,  l ∈ Z.    (1.17)

From our quadratic (1.15) we have

D = ρ1 + ρ2 = 2 cos(aα) = 2 cos(2lπ/k).

From the above, we see that if k = 2, it is Cases 4 and 5 that occur. The only periodic solutions that can occur here are ones with either period a or semi-period a. Hence we have the following corollary.

Corollary 1.4.1. A non-trivial solution of (1.14) with period 2a has either period a or semi-period a.

Looking again at the proof, we see that if k > 2 then our solution cannot have period a or 2a, so we are in Case 3 with (1.17) holding. Under these conditions both of ψ1,2(x) = e^{±iαx} p1,2(x) have period ka. The next corollary follows directly.

Corollary 1.4.2. If (1.14) has a non-trivial solution with period ka, where k is a positive integer and k > 2, then all solutions have period ka.

Furthermore, in the circumstances of Corollary 1.4.2, any solution ψ(x) of equation (1.14) has the form

ψ(x) = c1 e^{2lπix/(ka)} p1(x) + c2 e^{−2lπix/(ka)} p2(x),

where c1 and c2 are constants. If, for (1.14), all solutions have period ka, then we say that solutions with period ka coexist. The coexistence problem for (1.14) is the task of deciding whether, if a solution of one of these types exists, all solutions are of that type. Corollary 1.4.2 has already given us an answer to the coexistence problem for k > 2. We will touch on the coexistence problem for period a and semi-period a later on.
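As a quick illustration of Theorem 1.4.2 and Corollary 1.4.2 (an added example, not from the original text), take k = 3. A non-trivial solution of period 3a exists if and only if D = 2 cos(2lπ/3) for some integer l, i.e. D = 2 (the period-a case) or D = 2 cos(2π/3) = −1. In the latter case −2 < D < 2, so Case 3 applies and, by Corollary 1.4.2, every solution then has period 3a.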
1.5 Even and odd periodic solutions

If the coefficient ν(x) in equation (1.14) is even then it is possible that we have even and odd periodic solutions. The circumstances under which each occurs are summarised in the following theorem.

Theorem 1.5.1. Let ν(x) be even. Then (1.14) has a non-trivial solution which is

(i) even with period a if and only if φ1'(a/2) = 0;
(ii) odd with period a if and only if φ2(a/2) = 0;
(iii) even with semi-period a if and only if φ1(a/2) = 0;
(iv) odd with semi-period a if and only if φ2'(a/2) = 0.

Proof. We prove parts (i) and (iii), since the other parts use similar methods.

For part (i), note that if ν(x) is even then ψ(x) is a solution of (1.14) if and only if ψ(−x) is also. In particular, φ1(x) and φ1(−x) are solutions satisfying the same conditions at x = 0. Hence

φ1(x) = φ1(−x).    (1.18)

Thus φ1(x) is even. Similarly, by the conditions (1.3), φ2(x) = −φ2(−x), so φ2(x) is odd. We can now deduce that every even solution must be a multiple of φ1(x) and every odd solution must be a multiple of φ2(x).

Since φ1(x) is even, and therefore φ1(−a/2) = φ1(a/2), φ1(x) has period a if and only if φ1'(−a/2) = φ1'(a/2). But φ1'(x) is odd, so

φ1'(−a/2) = −φ1'(a/2).    (1.19)

The last two formulae hold together if and only if φ1'(a/2) = 0, and part (i) follows.

For part (iii), note that φ1(x) has semi-period a if and only if φ1(−a/2) = −φ1(a/2) and φ1'(−a/2) = −φ1'(a/2). The second of these holds automatically by (1.19), and, since φ1(x) is even by (1.18), the first holds if and only if φ1(a/2) = 0.
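The conditions of Theorem 1.5.1 are easy to test numerically: integrate (1.14) over [0, a/2] from the initial data (1.3) and inspect the four quantities φ1'(a/2), φ2(a/2), φ1(a/2), φ2'(a/2). The sketch below (my own illustration; the even coefficient chosen is an arbitrary assumption) does exactly this.

import numpy as np
from scipy.integrate import solve_ivp

a = 2 * np.pi
nu = lambda x: 1.0 + 0.5 * np.cos(x)            # an even, a-periodic coefficient (illustrative)

def rhs(x, y):
    return [y[1], -nu(x) * y[0]]

def at_half_period(y0, dy0):
    s = solve_ivp(rhs, (0.0, a / 2), [y0, dy0], rtol=1e-10, atol=1e-12)
    return s.y[0, -1], s.y[1, -1]

phi1_h, dphi1_h = at_half_period(1.0, 0.0)      # phi1 and phi1' at a/2
phi2_h, dphi2_h = at_half_period(0.0, 1.0)      # phi2 and phi2' at a/2

# By Theorem 1.5.1, each quantity vanishes exactly when the corresponding
# even/odd solution with period a or semi-period a exists.
print("(i)   phi1'(a/2) =", dphi1_h)
print("(ii)  phi2(a/2)  =", phi2_h)
print("(iii) phi1(a/2)  =", phi1_h)
print("(iv)  phi2'(a/2) =", dphi2_h)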
Chapter 2

Stability and Instability Intervals

2.1 Introduction

We now focus on the case in which the coefficient ν(x) of (1.14) depends on a real parameter λ as follows:

ν(x) = λ s(x) − q(x).    (2.1)

The properties of s(x) and q(x) are as follows.

• s(x), q(x) are piecewise continuous with period a;
• there exists a number s > 0 such that s(x) ≥ s for all x ∈ (−∞, ∞).

Writing p(x) in place of the P(x) of (1.13) and taking Q(x) = λs(x) − q(x), our equation becomes

{p(x)y'(x)}' + {λs(x) − q(x)}y(x) = 0.    (2.2)

To illustrate the dependence on λ, we shall write our two linearly independent solutions φ1(x) and φ2(x) satisfying (1.3) as φ1(x, λ) and φ2(x, λ), and write

D(λ) = φ1(a, λ) + φ2'(a, λ).    (2.3)

For now we regard λ as real, although we will examine the scenario in which it may be complex later on. Clearly D(λ) is an analytic function of λ, since, with x fixed at x = a, φi(a, λ) and φi'(a, λ) (i = 1, 2) are analytic functions of λ. An analytic function is in particular continuous, so the set of λ for which |D(λ)| < 2, being the preimage of an open set, is an open subset of the real λ-axis. We shall see that it can be expressed as a countable collection of disjoint open intervals. We now examine the implications of our new parameter λ in the setting of Theorem 1.3.1.
Stability intervals: Theorem 1.3.1 implies that (2.2) is stable when λ lies within the intervals where |D(λ)| < 2.

Instability intervals: the intervals for which |D(λ)| > 2 are where (2.2) is unstable.

Conditional stability intervals: these are the closures of the stability intervals, i.e. the sets where |D(λ)| ≤ 2.

We now develop some theory which will allow us to determine and investigate the existence of these intervals of λ.

2.2 Interlude into Sturm-Liouville theory

The Sturm-Liouville operator is the most general second-order differential operator that is self-adjoint under the appropriate boundary conditions. These are

L ≡ (1/w(x)) { d/dx (p(x) d/dx) + r(x) },

α1 y(a) + β1 y'(a) = 0,  α2 y(b) + β2 y'(b) = 0,

where α1, β1 are not both zero and α2, β2 are not both zero. We now examine two sets of boundary conditions under which (2.2) may be considered a Sturm-Liouville eigenvalue problem of the form Ly = −λy. First note that our operator, in the case of (2.2), is

L ≡ (1/s(x)) { d/dx (p(x) d/dx) − q(x) }.

The periodic eigenvalue problem

This comprises equation (2.2), considered to hold on [0, a], with the boundary conditions

y(0) = y(a),  y'(0) = y'(a).    (2.4)

The natural associated inner product space is the set of continuous functions on [0, a] with inner product

⟨f1, f2⟩ = ∫_0^a f1(x) f̄2(x) s(x) dx,

a bar denoting complex conjugation. A standard result from functional analysis about self-adjoint eigenvalue problems is the existence of a countable infinity of eigenvalues (counting double eigenvalues). We note two further properties of the problem that are also standard results:

• the eigenvalues form an unbounded set;
• the eigenfunctions ψn(x) corresponding to distinct eigenvalues form an orthonormal set over [0, a] with weight function s(x), so that

⟨ψm, ψn⟩ = 1 if m = n, and 0 if m ≠ n.

We denote the eigenvalues by λn (n = 0, 1, 2, ...), where λ0 ≤ λ1 ≤ λ2 ≤ ... and λn → ∞ as n → ∞. The boundary conditions (2.4) mean that we can extend each ψn(x) to (−∞, ∞) as a continuously differentiable function with period a. This means that the λn are the values of λ for which equation (2.2) has a non-trivial solution with period a. Furthermore, any double eigenvalues are values of λ for which all solutions of (2.2) have period a. From Case 4 in the examination of D(λ), it follows that the λn are the zeros of the function D(λ) − 2, and that a given λn is a double eigenvalue if and only if φ2(a, λn) = φ1'(a, λn) = 0.

The semi-periodic eigenvalue problem

This is another Sturm-Liouville problem, with equation (2.2) considered on [0, a] and boundary conditions

y(0) = −y(a),  y'(0) = −y'(a).    (2.5)

Since this is a Sturm-Liouville eigenvalue problem, we again have an unbounded, countable infinity of eigenvalues. We denote these by μn, where μ0 ≤ μ1 ≤ μ2 ≤ ... and μn → ∞ as n → ∞. We denote the corresponding eigenfunctions by ξn(x) and note that the standard results stated above also apply to the ξn(x). This time, however, the boundary conditions (2.5) mean that the ξn(x) can be extended to (−∞, ∞) as continuously differentiable functions with semi-period a. We conclude that the μn are the values of λ for which equation (2.2) has a non-trivial solution with semi-period a. Furthermore, any double eigenvalues are values of λ for which all solutions have semi-period a. From Case 5 in the D(λ) case analysis, we see that the μn are the zeros of the function D(λ) + 2, and that a given μn is a double eigenvalue if and only if φ2(a, μn) = φ1'(a, μn) = 0.

Let F denote the set of all complex-valued functions f(x) which are continuous on [0, a] and have a piecewise continuous derivative on [0, a].
2.3 Variational Results

Later on we will require some results of a variational nature related to the previous two problems, so I take a short interlude here to develop what we need. We work with the λn, since the results for the μn are similar.

The Dirichlet Integral

Let f(x), g(x) ∈ F. We define the Dirichlet integral of f and g, J(f, g), by

J(f, g) = ∫_0^a {p(x) f'(x) ḡ'(x) + q(x) f(x) ḡ(x)} dx.    (2.6)

We use this integral to derive clear relationships between functions in F, the eigenfunctions ψn(x), and the corresponding eigenvalues λn of equation (2.2). It will become clear that the structure of this integral lends itself very naturally to working with (2.2). If, in addition, g''(x) exists and is piecewise continuous, then

J(f, g) = −∫_0^a f(x)[{p(x)ḡ'(x)}' − q(x)ḡ(x)] dx + [p(x)f(x)ḡ'(x)]_0^a,    (2.7)

after integrating by parts. When f(x) and g(x) satisfy the periodic boundary conditions (2.4), the boundary term vanishes. In particular, when g(x) = ψn(x),

J(f, ψn) = −∫_0^a f(x)[−λn ψ̄n(x) s(x)] dx = λn ∫_0^a f(x) ψ̄n(x) s(x) dx = λn fn,    (2.8)

where fn = ∫_0^a f(x) ψ̄n(x) s(x) dx is the Fourier coefficient appearing in the middle expression. If f(x) = ψm(x) we get

J(ψm, ψn) = λn if m = n, and 0 if m ≠ n.    (2.9)

Thus the Dirichlet integral of an eigenfunction with itself gives the corresponding eigenvalue.

Proposition 2.3.1 (A lower bound for J(f, f)). Let f(x) ∈ F satisfy the periodic boundary conditions (2.4). Then

Σ_{n=0}^∞ λn |fn|^2 ≤ J(f, f).    (2.10)
Proof. We first prove the result assuming that q(x) ≥ 0. Then

J(g, g) = ∫_0^a {p(x)|g'(x)|^2 + q(x)|g(x)|^2} dx ≥ 0 for all g ∈ F,

since p(x) > 0. In particular

J(f − Σ_{n=0}^N fn ψn, f − Σ_{n=0}^N fn ψn) ≥ 0,

where N is a positive integer. Expanding the left-hand side term by term, each resulting integral is recognisable from the definition of the Dirichlet integral, and the expansion becomes

J(f, f) − Σ_{n=0}^N f̄n J(f, ψn) − Σ_{n=0}^N fn J(ψn, f) + Σ_{m=0}^N Σ_{n=0}^N fm f̄n J(ψm, ψn).

By (2.9) the double sum reduces to Σ_{n=0}^N λn fn f̄n = Σ_{n=0}^N λn |fn|^2, so we have

J(f, f) − Σ_{n=0}^N f̄n J(f, ψn) − Σ_{n=0}^N fn J(ψn, f) + Σ_{n=0}^N λn |fn|^2 ≥ 0.
Since J(ψn, f) is the complex conjugate of J(f, ψn) and the λn are real, property (2.8) gives

J(f, f) − Σ_{n=0}^N λn |fn|^2 − Σ_{n=0}^N λn |fn|^2 + Σ_{n=0}^N λn |fn|^2 ≥ 0.

Thus

Σ_{n=0}^N λn |fn|^2 ≤ J(f, f),

which, on letting N → ∞, gives the desired result.

Now suppose that we do not have the condition q(x) ≥ 0. We may choose a constant q0 sufficiently large that

q(x) + q0 s(x) ≥ 0    (2.11)

on [0, a]. A simple shift of the parameter λ then transforms the general case into the 'q(x) ≥ 0' case just proven: equation (2.2) can be written as

{p(x)y'(x)}' + {Λ s(x) − Q(x)}y(x) = 0,

where Λ = λ + q0 and Q(x) = q(x) + q0 s(x). Since Q(x) ≥ 0, we can use the first part of the proof to write

Σ_{n=0}^∞ (λn + q0)|fn|^2 ≤ ∫_0^a [p(x)|f'(x)|^2 + {q(x) + q0 s(x)}|f(x)|^2] dx.

We now state a useful result that we shall use directly.

Parseval's Formula. Let f(x) be in L^2([0, a]) and let fn = ∫_0^a f(x) ψ̄n(x) s(x) dx. Then

Σ_{n=0}^∞ |fn|^2 = ∫_0^a |f(x)|^2 s(x) dx.

This is from Titchmarsh, 'Eigenfunction Expansions', Part II, §14.14 [3].

It follows from this formula that the terms involving q0 on each side are equal, and so the result is proven in the general case.
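As a concrete check of (2.9) (an added example, not in the original text), take the periodic problem with p(x) = s(x) = 1 and q(x) = 0 on [0, a]. The function ψ(x) = √(2/a) cos(2πnx/a) is a normalised eigenfunction with eigenvalue (2πn/a)^2, and

J(ψ, ψ) = ∫_0^a (2/a)(2πn/a)^2 sin^2(2πnx/a) dx = (2πn/a)^2,

which is indeed the corresponding eigenvalue.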
Also note that, since λn ≥ λ0, we have

J(f, f) ≥ λ0 Σ_{n=0}^∞ |fn|^2 = λ0 ∫_0^a |f(x)|^2 s(x) dx.    (2.12)

Equality holds here only when f(x) is an eigenfunction corresponding to λ0. Thus

λ0 = min { J(f, f) / ∫_0^a |f(x)|^2 s(x) dx },

where we remind ourselves that the minimum is taken over f ∈ F satisfying (2.4).

Our final variational result examines the behaviour of the eigenvalues λn of the periodic problem as we increase the functions p(x) and q(x) slightly. In the following sections we adopt Eastham's notation "p.p" to mean "equal everywhere except at isolated points".

Proposition 2.3.2. Let λ1,n (n ≥ 0) denote the eigenvalues of the periodic problem over [0, a] in which p(x), q(x) and s(x) are replaced by p1(x), q1(x) and s1(x) respectively, where

p1(x) ≥ p(x),  q1(x) ≥ q(x)  and  s1(x) ≤ s(x).    (2.13)

Then
(i) if s1(x) = s(x) p.p, we have λ1,n ≥ λn for all n;
(ii) otherwise, we have λ1,n ≥ λn only for those n with λn ≥ 0.

This result claims that, in the first case, all eigenvalues increase (or stay the same) and, in the second case, they increase (or stay the same) only where the λn are non-negative.

Proof. Let ψ1,n(x) denote the eigenfunction corresponding to λ1,n, and let J1(f, g) be the Dirichlet integral with p(x), q(x) replaced by p1(x), q1(x). By (2.13) it is clear that

J1(f, f) ≥ J(f, f).    (2.14)

We begin by examining the case n = 0. We aim to bound λ1,0 from below and show that one such bound is λ0. Let f(x) = ψ1,0(x). From the relation (2.9),

λ1,0 = J1(ψ1,0, ψ1,0) ≥ J(ψ1,0, ψ1,0) ≥ λ0 ∫_0^a ψ1,0^2(x) s(x) dx.

Here we have also used (2.14) and (2.12). As s1(x) ≤ s(x), we have

∫_0^a ψ1,0^2(x) s(x) dx ≥ ∫_0^a ψ1,0^2(x) s1(x) dx = 1,
since the ψ1,n form an orthonormal set over [0, a] with weight s1(x). If s1(x) = s(x) p.p then equality holds and λ1,0 ≥ λ0. However, if s1(x) < s(x) somewhere, then the inequality above is strict. If, in this case, λ0 < 0, then

λ0 = λ0 ∫_0^a ψ1,0^2(x) s1(x) dx > λ0 ∫_0^a ψ1,0^2(x) s(x) dx,

so we are unable to make the appropriate lower bound. We conclude that if s1(x) < s(x) then λ1,0 ≥ λ0 only if λ0 ≥ 0. This is all that need be said for n = 0.

For n = 1, let f(x) = c0ψ1,0(x) + c1ψ1,1(x). We may always choose constants c0 and c1 such that

c0^2 + c1^2 = 1

and

c0 ∫_0^a ψ1,0(x)ψ0(x)s(x) dx + c1 ∫_0^a ψ1,1(x)ψ0(x)s(x) dx = 0.

Now

f^2(x) = c0^2 ψ1,0^2(x) + c1^2 ψ1,1^2(x) + 2c0c1 ψ1,0(x)ψ1,1(x),

and therefore

∫_0^a f^2(x) s1(x) dx = c0^2 ∫_0^a ψ1,0^2(x) s1(x) dx + c1^2 ∫_0^a ψ1,1^2(x) s1(x) dx,

by orthogonality. Thus

∫_0^a f^2(x) s1(x) dx = c0^2 + c1^2 = 1,

by orthonormality. Also we have

f0 = ∫_0^a f(x)ψ0(x)s(x) dx = 0.

Now

J1(f, f) = ∫_0^a {p1(x) f'(x)^2 + q1(x) f(x)^2} dx
= c0^2 J1(ψ1,0, ψ1,0) + c1^2 J1(ψ1,1, ψ1,1)   (the cross terms vanishing by (2.9) for the new problem)
= λ1,0 c0^2 + λ1,1 c1^2
≤ λ1,1 (c0^2 + c1^2)
= λ1,1,
since λ1,n ≥ λ1,n−1 for all n. From our previous variational result and Parseval's formula,

J(f, f) ≥ Σ_{n=1}^∞ λn fn^2 ≥ λ1 Σ_{n=1}^∞ fn^2 = λ1 ∫_0^a f^2(x) s(x) dx.

Here we have used that f0 = 0. From (2.14) we have

λ1,1 ≥ J1(f, f) ≥ J(f, f) ≥ λ1 ∫_0^a f^2(x) s(x) dx.

The argument at the end of the n = 0 case can be applied here to yield the result for n = 1.

If we take f(x) = c0ψ1,0(x) + ... + cnψ1,n(x), where the ci are real constants such that c0^2 + ... + cn^2 = 1 and fr = 0 for 0 ≤ r ≤ n − 1, then the previous argument may be extended to the general case. Finally, note that unless p1(x) = p(x) and q1(x) = q(x) p.p, we have λ1,n > λn; the eigenvalues then increase strictly.

We conclude the study of the two problems in this section by examining an example in which we can determine the λn and μn explicitly. We take p(x) = s(x) = 1 and q(x) = 0. Equation (2.2) now becomes y''(x) + λy(x) = 0, with general solution

y(x) = A cos(x√λ) + B sin(x√λ).

In this instance we solve the semi-periodic problem, since this tends to be neglected in other texts. Applying the conditions (2.5), we have

−A = A cos(a√λ) + B sin(a√λ),    (2.15)
−B = B cos(a√λ) − A sin(a√λ).    (2.16)

Rewriting these as

A{cos(a√λ) + 1} + B sin(a√λ) = 0,  −A sin(a√λ) + B{cos(a√λ) + 1} = 0,

a non-trivial pair (A, B) exists if and only if the determinant {cos(a√λ) + 1}^2 + sin^2(a√λ) = 2{1 + cos(a√λ)} vanishes, i.e. if and only if cos(a√λ) = −1, so that a√λ = (2m + 1)π. In that case both equations hold for every A and B, so each such λ is a double eigenvalue. Writing λ = μ, we have

μ_{2m} = μ_{2m+1} = (2m + 1)^2 π^2 / a^2,  m = 0, 1, 2, ....

Also note that λ0 = 0.
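As a check (added here, not in the original text): in this example φ1(x, λ) = cos(x√λ) and φ2(x, λ) = sin(x√λ)/√λ, so D(λ) = φ1(a, λ) + φ2'(a, λ) = 2 cos(a√λ) for λ > 0, while D(λ) = 2 cosh(a√(−λ)) > 2 for λ < 0. The zeros of D(λ) − 2 are λ = (2mπ/a)^2 and the zeros of D(λ) + 2 are λ = ((2m + 1)π/a)^2, in agreement with λ0 = 0 and the values of μn just found, and consistent with the interlacing to be established in Theorem 2.3.1 below. Note also that, apart from (−∞, λ0), every instability interval is absent in this example, since |D(λ)| ≤ 2 for all λ ≥ 0.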
Before we come to the next result, we develop a little theory that will prove useful in the proof of the following theorem. In particular we focus on a few standard methods and results for ordinary differential equations that are covered by Eastham in his text Theory of Ordinary Differential Equations (1970) [2].

Consider the homogeneous linear ordinary differential equation

a0(x)y^(n)(x) + ... + an(x)y(x) = 0,    (2.17)

where the ai(x) are continuous for 0 ≤ i ≤ n on some interval I and a0(x) ≠ 0 on I. Now consider its inhomogeneous relative

a0(x)y^(n)(x) + ... + an(x)y(x) = b(x),    (2.18)

where b(x) is continuous on I. I state the following result without proof, as it is widely known.

Proposition 2.3.3. Let φ1(x), ..., φn(x) form a fundamental set for (2.17) and let ψ0(x) be a particular solution of (2.18). If ψ(x) is a solution of (2.18), then there are unique constants c1, ..., cn such that

ψ(x) = c1φ1(x) + ... + cnφn(x) + ψ0(x).

The reader may previously have met situations in which ψ0(x) can be predicted from the form of b(x). We now present a tool, the method of variation of constants, for finding ψ0(x) which works whenever the ai(x) (0 ≤ i ≤ n) and b(x) are any continuous functions of x.

Suppose φ1(x), ..., φn(x) form a fundamental set for (2.17). We know that every solution φ(x) of (2.17) is of the form

φ(x) = c1φ1(x) + ... + cnφn(x),

where the cr are constants (1 ≤ r ≤ n). Since we have b(x) on the right-hand side, we try to find a solution of the form

ψ0(x) = c1(x)φ1(x) + ... + cn(x)φn(x),

where the cr(x) are to be found. The result of this method is that the cr(x) are given by

cr(x) = ∫_a^x {Wr(φ1, ..., φn)(t) / W(φ1, ..., φn)(t)} · {b(t)/a0(t)} dt,

where Wr(φ1, ..., φn) is the determinant obtained from W(φ1, ..., φn) by replacing the r-th column by (0, ..., 0, 1). In the case n = 2 we find

ψ0(x) = −∫_a^x {φ1(x)φ2(t) − φ2(x)φ1(t)} / W(φ1, φ2)(t) · {b(t)/a0(t)} dt.
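As a quick sanity check of the n = 2 formula (my own illustration, not from the text), take y'' + y = x with fundamental set φ1(x) = cos x, φ2(x) = sin x, so that W(φ1, φ2) = 1 and a0 = 1, and take 0 as the lower limit of integration. The sketch below evaluates the formula symbolically and verifies that the result, x − sin x, satisfies the equation.

import sympy as sp

x, t = sp.symbols('x t', real=True)
phi1, phi2 = sp.cos, sp.sin                     # fundamental set for y'' + y = 0
b = t                                           # right-hand side b(t) = t; a0 = 1, W = 1

# psi0(x) = -Integral_0^x [phi1(x)phi2(t) - phi2(x)phi1(t)] b(t) dt
psi0 = sp.simplify(-sp.integrate((phi1(x) * phi2(t) - phi2(x) * phi1(t)) * b, (t, 0, x)))

residual = sp.simplify(sp.diff(psi0, x, 2) + psi0 - x)
print(psi0, residual)                           # expect: x - sin(x) and 0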
We now come to the most important result of the chapter. In the following theorem, the function D(λ) is examined through the existence of the eigenvalues λn and μn of the two eigenvalue problems previously introduced.

Theorem 2.3.1.
(i) The numbers λn and μn occur in the order

λ0 < μ0 ≤ μ1 < λ1 ≤ λ2 < μ2 ≤ μ3 < λ3 ≤ λ4 < ....

(ii) On [λ2m, μ2m], D(λ) decreases from 2 to −2.
(iii) On [μ2m+1, λ2m+1], D(λ) increases from −2 to 2.
(iv) On (−∞, λ0) and (λ2m+1, λ2m+2), D(λ) > 2.
(v) On (μ2m, μ2m+1), D(λ) < −2.

Proof. The proof is given in several stages.

a) There exists a number Λ such that D(λ) > 2 for all λ ≤ Λ.

Since s(x) ≥ s > 0, we can choose Λ such that

q(x) − Λs(x) > 0 in (−∞, ∞).    (2.19)

This is possible because q(x) is periodic on (−∞, ∞) and therefore bounded. Let y(x) be any non-trivial solution of equation (2.2) such that either y(0) ≥ 0 and y'(0) > 0, or y(0) > 0 and y'(0) ≥ 0. Then there is an interval (0, δ) in which y(x) > 0. Now consider an interval (0, X0) in which y(x) > 0. By (2.19) we have

{p(x)y'(x)}' = {q(x) − λs(x)}y(x) > 0 for all λ ≤ Λ.

This implies that p(x)y'(x) is increasing on (0, X0). Since p(x) > 0 and y'(0) ≥ 0, we have p(x)y'(x) > 0 on (0, X0); thus y'(x) > 0 on (0, X0). So y(x) is increasing on (0, X0) and y(x) > 0 on (0, X0). Since y(x) is continuous, there exists X1 > X0 > 0 such that y(x) > 0 on (0, X1). The above argument can be applied again to show that y(x) is increasing on (0, X1). We conclude that y(x) has no zero in (0, ∞), and that p(x)y'(x) and y(x) are increasing on (0, ∞).

Now φ1(x) and φ2(x) satisfy the conditions imposed on the arbitrary y(x) at x = 0, and in particular

φ1(a, λ) > φ1(0, λ) = 1,  φ2'(a, λ) > φ2'(0, λ) = 1,

where we have used that p(0) = p(a) in the second inequality. Hence

D(λ) = φ1(a, λ) + φ2'(a, λ) > 2 for all λ ≤ Λ.
b) D'(λ) is not zero at values of λ for which |D(λ)| < 2.

We differentiate Hill's equation with respect to λ, taking y(x) = φ1(x, λ). This gives

p(x) d^2/dx^2 {∂φ1(x, λ)/∂λ} + p'(x) d/dx {∂φ1(x, λ)/∂λ} + {λs(x) − q(x)} ∂φ1(x, λ)/∂λ = −s(x)φ1(x, λ).

In the context of our theory on inhomogeneous equations, through the method of variation of constants, we first note that φ1(x, λ) and φ2(x, λ) form a fundamental set for the homogeneous form. Thus a solution φ0(x) of the inhomogeneous equation can be written as

φ0(x) = c1(x)φ1(x, λ) + c2(x)φ2(x, λ),

where c1(x) and c2(x) can be found as previously described. In particular, since ∂φ1/∂λ vanishes together with its derivative at x = 0, we can write

∂φ1(x, λ)/∂λ = −∫_0^x [{φ1(x, λ)φ2(t, λ) − φ2(x, λ)φ1(t, λ)} / W(φ1, φ2)(t)] · {−s(t)φ1(t, λ)/p(t)} dt.

Now

W(φ1, φ2)(x) = W(φ1, φ2)(0) exp(−∫_0^x p'(ξ)/p(ξ) dξ) = 1 · exp(−[log p(ξ)]_0^x) = exp(log{p(0)/p(x)}) = p(0)/p(x).

Thus

∂φ1(x, λ)/∂λ = {p(0)}^{-1} ∫_0^x {φ1(x, λ)φ2(t, λ) − φ2(x, λ)φ1(t, λ)} s(t) φ1(t, λ) dt.    (2.20)

Similarly

∂φ2(x, λ)/∂λ = {p(0)}^{-1} ∫_0^x {φ1(x, λ)φ2(t, λ) − φ2(x, λ)φ1(t, λ)} s(t) φ2(t, λ) dt.    (2.21)

Differentiating (2.21) with respect to x (the contribution from the variable upper limit vanishing, since the integrand is zero at t = x) gives

∂φ2'(x, λ)/∂λ = {p(0)}^{-1} ∫_0^x {φ1'(x, λ)φ2(t, λ) − φ2'(x, λ)φ1(t, λ)} s(t) φ2(t, λ) dt.
Now

D'(λ) = ∂φ1(a, λ)/∂λ + ∂φ2'(a, λ)/∂λ = {p(0)}^{-1} ∫_0^a {φ1' φ2^2(t, λ) + (φ1 − φ2')φ1(t, λ)φ2(t, λ) − φ2 φ1^2(t, λ)} s(t) dt,    (2.22)

writing φi = φi(a, λ) and φi' = φi'(a, λ). Also

D^2(λ) = {φ1(a, λ) + φ2'(a, λ)}^2 = φ1^2 + φ2'^2 + 2φ1φ2' = (φ1 − φ2')^2 + 4φ1φ2' = (φ1 − φ2')^2 + 4(1 + φ2φ1') = 4 + (φ1 − φ2')^2 + 4φ2φ1',

since W(φ1, φ2)(a) = φ1φ2' − φ2φ1' = 1. So

4φ2 p(0) D'(λ) = −∫_0^a {2φ2 φ1(t, λ) + (φ2' − φ1)φ2(t, λ)}^2 s(t) dt − {4 − D^2(λ)} ∫_0^a φ2^2(t, λ) s(t) dt.    (2.23)

Since s(x) > 0, we have that when |D(λ)| < 2, φ2 D'(λ) < 0. In particular D'(λ) ≠ 0, as required.

c) At a zero λn of D(λ) − 2, D'(λn) = 0 if and only if φ2(a, λn) = φ1'(a, λn) = 0. Also, if D'(λn) = 0, then D''(λn) < 0.

If φ2(a, λn) = φ1'(a, λn) = 0 then Case 4 tells us that φ1(a, λn) = φ2'(a, λn) = 1. From (2.22) we then have D'(λn) = 0. Conversely, if D'(λn) = 0, then, since D(λn) = 2, (2.23) gives

{2φ2 φ1(t, λn) + (φ2' − φ1)φ2(t, λn)}^2 s(t) ≡ 0.

It follows that φ2(a, λn) = 0 and φ1(a, λn) = φ2'(a, λn), since φ1(t, λn) and φ2(t, λn) are linearly independent. Substituting these results into (2.22) we get φ1'(a, λn) = 0, as required.
We now approach the result about D''(λn). Differentiating (2.22) with respect to λ, we obtain D''(λ) in terms of the λ-derivatives of the φi and φi'. Then (2.20) and (2.21) give

D''(λn) = 2{p(0)}^{-2} [ (∫_0^a φ1(t, λn)φ2(t, λn)s(t) dt)^2 − ∫_0^a φ1^2(t, λn)s(t) dt · ∫_0^a φ2^2(t, λn)s(t) dt ],

where we have put λ = λn. Now, for Riemann-integrable functions f(t), g(t) on [0, a], the Cauchy-Schwarz inequality

(∫_0^a f(t)g(t) dt)^2 ≤ ∫_0^a f^2(t) dt · ∫_0^a g^2(t) dt

holds; applying it with f = φ1√s and g = φ2√s gives D''(λn) ≤ 0. Furthermore, the case of equality is ruled out, since φ1(t, λn) and φ2(t, λn) are linearly independent. Thus D''(λn) < 0.

There is a corresponding result to c) for the zeros μn of D(λ) + 2, the only difference being that D''(μn) > 0 if D'(μn) = 0.

We now examine the implications of a), b) and c) for the function D(λ) as λ moves from −∞ to ∞.

• When λ is large and negative, D(λ) > 2 by a), and it remains greater than 2 until λ reaches the first zero λ0 of D(λ) − 2.

• Since D(λ) > 2 to the left of λ0, D(λ0) is not a local maximum; by c), therefore, D'(λ0) ≠ 0, so λ0 is a simple zero of D(λ) − 2 and D(λ) < 2 immediately to the right of λ0.

• Since D(λ) < 2 immediately to the right of λ0, b) tells us that D'(λ) ≠ 0 there, so D(λ) is strictly decreasing.

• D(λ) continues to decrease in this fashion until it reaches the first zero μ0 of D(λ) + 2.

• In general, μ0 is a simple zero of D(λ) + 2, so D(λ) < −2 immediately to the right of μ0. For λ increasing from μ0, D(λ) remains less than −2 until it reaches the next zero μ1 of D(λ) + 2.

• Since μ1 is not a local minimum of D(λ), it is a simple zero of D(λ) + 2. Thus D(λ) > −2 immediately to the right of μ1, and D(λ) is strictly increasing until it reaches the next zero λ1 of D(λ) − 2.

• In general, λ1 will be a simple zero of D(λ) − 2, and so D(λ) > 2 immediately to the right of λ1.
• D(λ) remains greater than 2 to the right of λ1 until it reaches the next zero λ2 of D(λ) − 2.

The argument is such that this behaviour repeats as λ → ∞. This proves parts (i) and (ii) except when D(λ) ± 2 has double zeros. Let us examine the case in which we have a double zero of D(λ) − 2 at λ1. Here D(λ) < 2 immediately to the right of λ1, and the previous analysis still holds except that the interval (λ1, λ2) does not figure. It follows from the examination of the periodic eigenvalue problem that λ1 = λ2, since the condition in part c) of the proof holds.

Presence of stability and instability intervals. The previous theorem shows that the stability intervals of equation (2.2) are (λ2m, μ2m) and (μ2m+1, λ2m+1). Further, the conditional stability intervals are the closures of these intervals. The instability intervals are (−∞, λ0) together with (μ2m, μ2m+1) and (λ2m+1, λ2m+2). Note that no stability interval of (2.2) is ever absent, nor is (−∞, λ0). Any other instability interval can be absent, as a result of D(λ) − 2 or D(λ) + 2 having a double zero. Finally, it is clear that the absence of an instability interval at a value of λ means that this is a value of λ for which all solutions have period a or semi-period a.

We now return to a previous idea, in which we generalised our interest in non-trivial solutions of period a to those of period ka, and indeed go one step further to look for solutions that are not necessarily periodic but have property (1.2). We re-state this property here: there exists a non-zero constant ρ such that our solution ψ(x) has the property

ψ(x + a) = ρψ(x).

These two generalisations take the form of the following two problems.

The periodic eigenvalue problem over [0, ka]

This is made up of equation (2.2), holding on [0, ka] where k ∈ Z+, with

y(ka) = y(0),  y'(ka) = y'(0).    (2.24)

We may view this as a generalisation of the original periodic eigenvalue problem over [0, a] (k = 1). It is easily shown that this is also a self-adjoint problem, and we denote the unbounded, countable infinity of eigenvalues by Λn(k) (n = 0, 1, ...). By the obvious extension of the eigenfunctions to (−∞, ∞) as continuously differentiable functions, we see that the Λn(k) are the values of λ for which (2.2) has a non-trivial solution with period ka. Clearly {λn} ⊂ {Λn(k)}. We let Σ denote the set of all Λn(k) for n = 0, 1, ... and k = 1, 2, ....
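For example (an added remark, not in the original text), Theorem 1.4.2 with k = 2 gives D(λ) = 2 cos(lπ) = ±2, so the eigenvalues of the periodic problem over [0, 2a] are exactly the values of λ admitting a solution of period a or semi-period a; that is, {Λn(2)} = {λn} ∪ {μn}.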
The t-periodic eigenvalue problem

This comprises equation (2.2), assumed to hold on [0, a], with

y(a) = y(0)exp(iπt),  y'(a) = y'(0)exp(iπt),

where t ∈ R and −1 < t ≤ 1. This is, again, a self-adjoint problem, and we denote the eigenvalues by λn(t) (n = 0, 1, ...). We can extend the eigenfunctions in the natural way to (−∞, ∞) so that the resulting function has property (1.2) in the form

y(x + a) = y(x)exp(iπt).

Thus the eigenfunctions satisfy (1.2) with ρ = exp(iπt). First we note that the periodic and semi-periodic eigenvalue problems are recovered from this more general case when t = 0 and t = 1 respectively. Substituting this value of ρ into (1.15) gives

e^{2πit} − D(λ)e^{iπt} + 1 = 0,

which implies D(λ) = e^{iπt} + e^{−iπt}, giving

D(λ) = 2 cos(πt).

We denote by S' the set of all λn(t) for n = 0, 1, ... and −1 < t ≤ 1, and we let S denote the union of the conditional stability intervals of equation (2.2).

Theorem 2.3.2. The closure of Σ is S.

Proof. By Theorem 1.4.2, the Λn(k) are the values of λ such that D(λ) = 2 cos(2lπ/k), where l is a variable integer. We must show that Σ is dense in S. If we choose d with |d| ≤ 2, we can clearly choose k and l such that 2 cos(2lπ/k) is arbitrarily close to d. Since D(λ) is continuous and, by Theorem 2.3.1, takes every value in [−2, 2] on each conditional stability interval, the Λn(k) form a dense subset of the set of values of λ such that |D(λ)| ≤ 2.

Theorem 2.3.3. The sets S and S' are identical.

Proof. The λn(t) are the solutions of D(λ) = 2 cos(πt). As t increases continuously from 0 to 1, 2 cos(πt) decreases continuously from 2 to −2. By Theorem 2.3.1, λ2m(t) then increases continuously from λ2m to μ2m, and λ2m+1(t) decreases continuously from λ2m+1 to μ2m+1; thus, as t runs over [0, 1], the λn(t) sweep out exactly the conditional stability intervals. Finally, we note that in the cases t = t' and t = −t' (0 < t' < 1) the eigenvalues are the same.
Now we examine a result that links the previous two problems. Let ψn(x; t) denote the eigenfunctions of the t-periodic problem.

Theorem 2.3.4. Let k ∈ N and let exp(iπtr) (r = 0, 1, ..., k − 1) be the k-th roots of unity, where −1 < tr ≤ 1. Then the set of all functions ψn(x; tr), where n ≥ 0 and 0 ≤ r ≤ k − 1, is a complete set of eigenfunctions for the periodic problem over [0, ka].

Proof. Firstly, from the extension property y(x + a) = y(x)exp(iπtr),

ψn(x + ka; tr) = ψn(x + (k − 1)a; tr)exp(itrπ),

and continuing in this fashion gives

ψn(x + ka; tr) = ψn(x; tr)exp(iktrπ) = ψn(x; tr),

since exp(iπtr) is a k-th root of unity. The stated functions are therefore eigenfunctions of the periodic problem over [0, ka], since (2.24) is satisfied.

We suppose, by way of contradiction, that the set of the ψn(x; tr) is not complete, so that there is a non-zero f(x) ∈ L^2(0, ka) with

∫_0^{ka} f(x) ψ̄n(x; tr) s(x) dx = 0 for all n ≥ 0, 0 ≤ r ≤ k − 1,    (2.25)

i.e. f(x) is orthogonal to all of the eigenfunctions. Using the periodicity of s(x) and the relation displayed above, we may rewrite the integral in (2.25) as

Σ_{m=0}^{k−1} ∫_{ma}^{(m+1)a} f(x) ψ̄n(x; tr) s(x) dx = Σ_{m=0}^{k−1} ∫_0^a f(x + ma) ψ̄n(x + ma; tr) s(x) dx = ∫_0^a [Σ_{m=0}^{k−1} f(x + ma) exp(−imπtr)] ψ̄n(x; tr) s(x) dx.    (2.26)

Considered on [0, a], the ψn(x; tr) (n ≥ 0) form a complete set for each fixed r. Thus (2.25) and (2.26) give us that

Σ_{m=0}^{k−1} f(x + ma) exp(−imπtr) = 0,  0 ≤ r ≤ k − 1.

This is a system of k homogeneous linear equations in the k unknowns f(x), f(x + a), ..., f(x + (k − 1)a), with coefficient matrix (exp(−imπtr)), 0 ≤ r, m ≤ k − 1.
Since the exp(iπtr) are the k distinct k-th roots of unity, this coefficient matrix is of Vandermonde type and has a non-zero determinant. It follows that f(x + ma) = 0 on [0, a] for every m, i.e. f ≡ 0 on [0, ka], contradicting our assumption. We conclude that the ψn(x; tr) form a complete set of eigenfunctions over [0, ka].

2.4 The Mathieu Equation

This is the name given to the equation

y''(x) + {λ − 2q cos(2x)}y(x) = 0,    (2.27)

where q is a non-zero, real constant. Here our period is a = π. Unlike the examples considered earlier, the solutions cannot be given in terms of elementary functions; however, the existence of all its instability intervals can be established. In order to deduce this, we first discuss the circumstances under which an instability interval may be absent. In the discussion following Theorem 2.3.1, we saw that an instability interval is absent in the presence of a double eigenvalue, and double eigenvalues, in our introduction of the periodic and semi-periodic problems, were seen to give rise to the coexistence of solutions of period a and semi-period a respectively. It therefore suffices to show that the solutions of equation (2.27) do not all have period a or semi-period a for any values of λ or q (q ≠ 0). We show this directly.

Theorem 2.4.1. For no values of λ and q (q ≠ 0) do the solutions of (2.27) all have either period a or semi-period a.

The proof of this result uses proof by contradiction, supposing that all solutions have period π and using Theorem 1.5.1 to deduce a contradiction involving the coefficients of the sine and cosine Fourier series of φ1(x, λ) and φ2(x, λ).
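To make the instability intervals of the Mathieu equation concrete, the following minimal numerical sketch (my own illustration, not part of the original text) evaluates D(λ) = φ1(π, λ) + φ2'(π, λ) on a grid of λ for a fixed q and flags the values where |D(λ)| > 2; the parameter values and names are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

a, q = np.pi, 1.0                                # period and Mathieu parameter (illustrative)

def discriminant(lam):
    # D(lam) = phi1(a, lam) + phi2'(a, lam) for y'' + (lam - 2 q cos 2x) y = 0
    def rhs(x, y):
        return [y[1], -(lam - 2.0 * q * np.cos(2.0 * x)) * y[0]]
    phi1 = solve_ivp(rhs, (0.0, a), [1.0, 0.0], rtol=1e-10, atol=1e-12).y[:, -1]
    phi2 = solve_ivp(rhs, (0.0, a), [0.0, 1.0], rtol=1e-10, atol=1e-12).y[:, -1]
    return phi1[0] + phi2[1]

lams = np.linspace(-1.0, 12.0, 400)
D = np.array([discriminant(l) for l in lams])
unstable = np.abs(D) > 2.0                       # |D(lambda)| > 2 marks the instability intervals
print("approximate unstable lambda values:", lams[unstable])

Refining the grid and locating the sign changes of D(λ) ∓ 2 gives numerical approximations to the band edges λn and μn.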
2.5 Summary

We now reflect on the material we have covered. The most significant equation so far has been (1.14). In section 1.3, we saw how every equation of the form

a0(x)y''(x) + a1(x)y'(x) + a2(x)y(x) = 0    (2.28)

may be converted into one of the form

z''(x) + ν(x)z(x) = 0.    (2.29)

The significance of this transformation was that equation (2.29) does not contain the first derivative, which made the constant term in our quadratic equation (1.5) equal to unity. From Theorem 1.2.2 and the accompanying proof, we saw that the solutions of the quadratic (1.5) were crucial in determining the nature of solutions to (2.28). It became clear that the significance of this quadratic equation was growing, and that the simplification given by the transformation of section 1.3 would become routine as a preliminary step in examining any equation of the form (2.28). Indeed, the Mathieu equation is already in this form.

It is widely known that the nature of solutions to equation (2.29) will depend on the two linearly independent solutions φ1(x) and φ2(x). We have seen that it is the value of the discriminant

D = φ1(a) + φ2'(a)

that precisely dictates this dependence. Toward the end of chapter one we encountered the coexistence problem attached to our equation. Further, we saw that this problem can be answered in the case of the ka-periodic problem where k > 2. In particular, we concluded that if (2.29) has a non-trivial solution with period ka, where k is a positive integer and k > 2, then all solutions have period ka.

In Chapter 2 we narrowed our focus slightly, by looking at a particular type of equation (2.28) and thus of (2.29). We introduced the real parameter λ, and since the discriminant, now denoted D(λ), depends on λ only, this new parameter became the dictating value in our equation. More significantly, the introduction of λ allows us to apply many standard results in Sturm-Liouville theory, since the tasks of finding periodic or semi-periodic solutions reduce to self-adjoint eigenvalue problems. Self-adjointness gives us an unbounded infinity of eigenvalues for both problems over [0, a]. Theorem 2.3.1 and the observations within its proof form the significant results of the chapter, showing how the stability or otherwise of equation (2.2) varies as we move from one eigenvalue to the next. The stability intervals for equation (2.2), we saw, are (λ2m, μ2m) and (μ2m+1, λ2m+1), with the conditional stability intervals being the closures of these intervals. Returning to the more general problem over [0, ka], we saw that the eigenvalues there form a countable set contained within these intervals. Taking one step further, we examined solutions ψ(x) with the property

ψ(x + a) = ρψ(x),
in the 't-periodic' problem. Theorem 2.3.4 shows clearly that this problem also covers the 'ka-periodic' problem, and we come to the beautiful result that is Theorem 2.3.3. This theorem tells us that the eigenvalues giving solutions to the 't-periodic' problem form an uncountable set that fills out each conditional stability interval. We can picture this with the following analogy. Taking any given stability interval, eigenvalues giving solutions to the periodic and semi-periodic problems can be found at each end of the interval (separately), like 'integer values'. Eigenvalues giving solutions to the ka-periodic problem can be found as a countable set, like the rational numbers, scattered throughout the interval. Eigenvalues giving solutions to the t-periodic problem 'fill' the interval as an uncountable set, like the real numbers, thus asserting their equivalence to the whole interval.

The graph opposite gives a rough idea of how the function D(λ) fluctuates as the value of λ moves from −∞ to ∞. To the left of λ0 and in the intervals (λ2m+1, λ2m+2), D(λ) remains greater than 2, while in the intervals (μ2m, μ2m+1) it remains less than −2. The behaviour shown in the graph within these intervals is not necessarily indicative of the behaviour of D(λ) there; all that is known so far is that |D(λ)| > 2 there.
Bibliography

[1] M. S. P. Eastham, The Spectral Theory of Periodic Differential Equations (Scottish Academic Press, 1973).

[2] M. S. P. Eastham, Theory of Ordinary Differential Equations (Van Nostrand Reinhold, 1970).

[3] E. C. Titchmarsh, Eigenfunction Expansions, Part II (Oxford University Press, 1958).