A Second Course
in Elementary
Differential Equations
Paul Waltman
Emory University, Atlanta
Academic Press, Inc.
(Harcourt Brace Jovanovich, Publishers)
Orlando San Diego San Francisco New York
London Toronto Montreal Sydney Tokyo São Paulo
Copyright © 1986 by Academic Press, Inc.
All rights reserved.
No part of this publication may be reproduced or transmitted
in any form or by any means, electronic or mechanical, including
photocopy, recording, or any information storage and retrieval
system, without permission in writing from the publisher.
Academic Press, Inc.
Orlando, Florida 32887
United Kingdom Edition Published by Academic Press, Inc.
(London) Ltd., 24/28 Oval Road, London NW1 7DX
ISBN: 0-12-733910-8
Library of Congress Catalog Card Number: 85-70251
Printed in the United States of America
TO RUTH
For her patience
and understanding
Preface
The once-standard course in elementary differential equations has undergone a considerable transition in the past two decades. In an effort to bring the subject to a wider audience, a gentle introduction to differential equations is frequently incorporated into the basic calculus course—particularly in the last semester of a calculus-for-engineers course—or given separately, but strictly as a problem-solving course. With this approach, students frequently learn how to solve constant coefficient scalar differential equations and little else. If systems of differential equations are treated, the treatment is usually incomplete. Laplace transform techniques, series solutions, or some existence theorems are sometimes included in such a course, but seldom is any of the flavor of modern differential equations imparted to the student. Graduates of such a course are often ill-equipped to take the next step in their education, which all too frequently is a graduate-level differential equations course with considerable analytical prerequisites.
Even when a "good" elementary course in ordinary differential equations is offered, the student who needs to know more sophisticated topics may find his or her way to further study blocked by the need to first study real and complex variables, functional analysis, and so on. Yet many of the more modern topics can be taught along with a marginal amount of the necessary analysis. This book is for a course directed toward students who need to know more about ordinary differential equations; who, perhaps as mathematics or physics students, have not yet had the time to study sufficient analysis to be able to master an honest course, or who, perhaps as biologists, engineers, economists, and so on, cannot take the necessary time to master the prerequisites for a graduate course in mathematics but who need to know more of the subject.
This book, then, is a second course in (elementary) ordinary differential
equations, a course that may be taken by those with minimal—but not zero—
preparation in ordinary differential equations, and yet which treats some topics from a
sufficiently advanced point of view so that even those students with good preparation
will find something of interest and value. I have taught the subject at this level at
Arizona State University and The University of Iowa to classes with extremely varied
backgrounds and levels of mathematical sophistication with very satisfactory results.
This book is the result of those courses.
Before describing the contents, I wish to further emphasize a topic alluded to
above. For some students from other disciplines, this may be the only analysis course
they will see in their mathematical education. Thus, whenever possible, basic real
analysis, as well as differential equations, is taught. The concepts of analysis are
brought into play wherever possible—ideas such as norms, metric spaces, completeness, inner products, asymptotic behavior, and so on, are introduced in the
natural setting of a need to solve, or to set, a problem in differential equations. For
example, metric spaces could be avoided in the proof of the existence theorem but
they are deliberately used, because the idea of an abstract space is important in much
of applied mathematics and it can be introduced easily and naturally in the context of
very simple operators.
The book has applications as well. However, rather than tossing in trivial
applications of dubious practical use, few, but detailed, applications are treated, with
some attention given to the mathematical modeling that leads to the equation. By and
large, however, the book is about applicable, rather than truly applied, mathematics.
Chapter 1 gives a thorough treatment of linear systems of differential equations. Necessary concepts from linear algebra are reviewed and the basic theory is presented. The constant coefficient case is presented in detail, and all cases are treated,
even that of repeated eigenvalues. The novelty here is the treatment of the case of the
nondiagonalizable coefficient matrix without the use of the Jordan form. I have had
good results substituting the Putzer algorithm—it gives a computational procedure
that students can master. This part of the course, which goes rather quickly, is
computational and helps pull students with different backgrounds to the same level.
Topics in stability of systems and the case of periodic coefficients are included for a
more able class.
Chapter 2 is the heart of the course, where the ideas of stability and qualitative
behavior are developed. Two-dimensional linear systems form the starting point—the
phase plane concepts. Polar coordinate techniques play a role here. Liapunov
stability and elementary ideas from dynamic systems are treated. Limit cycles appear
here as an example of a truly nonlinear phenomenon. In a real sense, this is "applied
topology" and some topological ideas are gently introduced. The Poincaré-Bendixson
theorem is stated and its significance discussed. Of course, proofs at this stage are too difficult to present; so, if the first section can be described as computational, then this
section is geometrical and intuitive.
Chapter 3 presents existence and uniqueness theorems in a rigorous way. Not all
students will profit from this, but many can. The ideas of metric spaces and operators
defined on them are important in applied mathematics and appear here in an elementary and natural way. Moreover, the contraction mapping theorem finds application in
many parts of mathematics, and this seems to be a good place for the student to learn
about it. To contrast this chapter with the previous ones, the approach here is
analytical. Although everything up to this point pertained to initial value problems, a
simple boundary value problem appears in this chapter as an application of the
contraction mapping technique.
Chapter 4 treats linear boundary value problems, particularly the Sturm-
Liouville problem, in one of the traditional ways—polar coordinate transformations.
Ideas of inner products and orthogonality appear here in developing the rudiments of
eigenfunction expansions. A nonlinear eigenvalue problem—a bifurcation problem—
also appears, just to emphasize the effect of nonlinearities.
The book contains more material than can be covered in a semester. The
instructor can pick and choose among the topics. Students in a course at this level will
differ in ability and the material can be adjusted for this. I have usually taught Chapter
1 through the Putzer algorithm, skipped ahead and taught all of Chapter 2, presented
the scalar existence theory in Chapter 3, and spent the remaining time in Chapter 4
(never completing it). Other routes through the material are possible, however, and
the chapters are relatively independent. For example, Chapter 1 can be skipped
entirely if students have a good background in systems (although a brief discussion of
norms in $\mathbb{R}^n$ would help). I have made an effort to alert the reader when understanding
of previous material is critical.
Professor John Baxley of Wake Forest University, Professor Juan Gatica of the
University of Iowa, and Dr. Gail Wolkowicz of Emory University read the entire
manuscript and made detailed comments. The presentation has benefited considerably from their many suggestions, and I wish to acknowledge their contributions and express my gratitude for their efforts. Several others—Gerald Armstrong of Brigham Young University, N. Cac of the University of Iowa, T. Y. Chow of California State University, Sacramento, Donald Smith of the University of California, San Diego, Joseph So of Emory University, and Monty Straus of Texas Tech University—read portions of the manuscript and made constructive comments. I gratefully acknowledge their assistance and express my appreciation for their efforts.
1

Systems of Linear Differential Equations
1. Introduction
Many problems in physics, biology, and engineering involve rates of change dependent on the interaction of the basic elements—particles, populations, charges, etc.—on each other. This interaction is frequently expressed as a system of ordinary differential equations, a system of the form

$$\begin{aligned}
y_1' &= f_1(t, y_1, y_2, \ldots, y_n)\\
y_2' &= f_2(t, y_1, y_2, \ldots, y_n)\\
&\ \vdots\\
y_n' &= f_n(t, y_1, y_2, \ldots, y_n).
\end{aligned} \tag{1.1}$$

Here the functions $f_i(t, y_1, \ldots, y_n)$ take values in $\mathbb{R}$ (the real numbers) and are defined on a set in $\mathbb{R}^{n+1}$ ($\mathbb{R} \times \mathbb{R} \times \cdots \times \mathbb{R}$, $n+1$ times). We seek a set of $n$ unknown functions $(y_1(t), y_2(t), \ldots, y_n(t))$ defined on a real interval $I$ such that, when these functions are inserted into the equations above, an identity results for every $t \in I$. In addition, certain other constraints (initial conditions or boundary conditions) may need to be satisfied. In this chapter we will be concerned with the special case that the functions $f_i$ are linear in the variables $y_i$, $i = 1, \ldots,$
$n$. The problem takes the form

$$\begin{aligned}
y_1' &= a_{11}(t)y_1 + a_{12}(t)y_2 + \cdots + a_{1n}(t)y_n + e_1(t)\\
y_2' &= a_{21}(t)y_1 + a_{22}(t)y_2 + \cdots + a_{2n}(t)y_n + e_2(t)\\
&\ \vdots\\
y_n' &= a_{n1}(t)y_1 + a_{n2}(t)y_2 + \cdots + a_{nn}(t)y_n + e_n(t).
\end{aligned} \tag{1.2}$$
In many applications the equations occur naturally in this form, or (1.2) may be an approximation to the nonlinear system (1.1). Moreover, some problems already familiar to the reader may be put into the form (1.2). For example, solving the second-order linear differential equation

$$y'' + a(t)y' + b(t)y = e(t) \tag{1.3}$$

is equivalent to solving a system of the form

$$\begin{aligned}
y_1' &= y_2\\
y_2' &= -b(t)y_1 - a(t)y_2 + e(t).
\end{aligned} \tag{1.4}$$

To see the equivalence, suppose $(y_1(t), y_2(t))$ is a solution of (1.4). Then $y_1(t)$ is a solution of (1.3), since $y_1'' = y_2' = -b(t)y_1 - a(t)y_1' + e(t)$, which is (1.3). On the other hand, if $y(t)$ is a solution of (1.3), then define $y_1(t) = y(t)$ and $y_2(t) = y'(t)$. This yields a solution of (1.4). Equation (1.3) is called a scalar equation; (1.4) is called a system.
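The equivalence can also be checked numerically. The following Python sketch (an addition to the text, not part of it; the coefficient choices $a(t) = 0$, $b(t) = 1$, $e(t) = 0$ are illustrative assumptions) integrates the system (1.4) with SciPy and compares the first component with the known scalar solution.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative coefficients: y'' + y = 0, whose solution with
    # y(0) = 0, y'(0) = 1 is y = sin(t).
    a = lambda t: 0.0
    b = lambda t: 1.0
    e = lambda t: 0.0

    def system(t, y):
        # y[0] = y1 = y, y[1] = y2 = y', exactly the system (1.4)
        return [y[1], -b(t) * y[0] - a(t) * y[1] + e(t)]

    sol = solve_ivp(system, (0.0, 2 * np.pi), [0.0, 1.0], dense_output=True)
    t = np.linspace(0.0, 2 * np.pi, 5)
    print(sol.sol(t)[0])   # first component of the system's solution
    print(np.sin(t))       # scalar solution; the two agree closely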
The study of systems of the form (1.2) is made simpler by the use of matrix
algebra. In the next section the basic notation, conventions, and theorems from
linear algebra that are needed for the study of differential equations are collected
for the convenience of the reader. Few proofs are given, and the reader meeting
these concepts for the first time may wish to consult a textbook on linear algebra
for an expanded development.
2. Some Elementary Matrix Algebra
If $m$ and $n$ are positive integers, an $m \times n$ matrix $A$ is defined to be a set of $mn$ numbers $a_{ij}$, $1 \le i \le m$, $1 \le j \le n$. (This is properly written as $a_{i,j}$, but the comma is omitted.) For notational purposes we write

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & & & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix};$$

that is, $a_{ij}$ occupies a position in the $i$th row and the $j$th column of $A$. It is convenient to write

$$A = [a_{ij}]$$

to save space when the specific entries are not important or when they share a common property that can be illustrated by the bracket. For example,
$$A = \left[\frac{i+j}{j}\right], \quad i = 1, 2, \; j = 1, 2,$$

denotes the matrix

$$A = \begin{bmatrix} 2 & \tfrac{3}{2}\\ 3 & 2 \end{bmatrix}.$$
First, we will develop an algebra for these matrices. Then we will consider
matrices with functions as entries and define continuous, differentiable, and
integrable matrices.
Two $m \times n$ matrices $A = [a_{ij}]$, $B = [b_{ij}]$ are defined to be equal, written $A = B$, if $a_{ij} = b_{ij}$ for every $i$ and $j$. Given $m \times n$ matrices $A$ and $B$, we define their sum, $A + B$, by

$$A + B = [a_{ij} + b_{ij}].$$

For example, if

$$A = \begin{bmatrix} 1 & 0 & 1\\ 2 & 2 & 7 \end{bmatrix}, \qquad B = \begin{bmatrix} 4 & -2 & -1\\ 6 & 2 & -4 \end{bmatrix},$$

then

$$A + B = \begin{bmatrix} 1+4 & 0-2 & 1-1\\ 2+6 & 2+2 & 7-4 \end{bmatrix} = \begin{bmatrix} 5 & -2 & 0\\ 8 & 4 & 3 \end{bmatrix}.$$
From this definition of addition, it is obvious that if $A$, $B$, $C$ are $m \times n$ matrices, then

$$A + B = B + A$$

and

$$A + (B + C) = (A + B) + C,$$

since numbers have these properties. We define multiplication of a matrix $A$ by the number $\lambda$ (called scalar multiplication) by

$$\lambda A = [\lambda a_{ij}].$$

Thus, for example, $-A = (-1)A = [-a_{ij}]$; or, if $A$ is as in the example above and $\lambda = 2$, then

$$2A = \begin{bmatrix} 2 & 0 & 2\\ 4 & 4 & 14 \end{bmatrix}.$$

If $0$ denotes the matrix with all zero entries, that is, $a_{ij} = 0$ for all $i$ and $j$, then

$$A - A = 0$$

and

$$A + 0 = A = 0 + A.$$
If $A$ is an $m \times p$ matrix and $B$ is a $p \times n$ matrix, the product of $A$ and $B$, written $AB$, is defined to be the $m \times n$ matrix whose entries are given by

$$AB = \left[\sum_{k=1}^{p} a_{ik}b_{kj}\right] = [c_{ij}], \quad 1 \le i \le m, \; 1 \le j \le n.$$

The $ij$th element of the product is the sum of the products of the elements in the $i$th row of $A$ with the corresponding elements in the $j$th column of $B$. A simple example illustrates this definition. If

$$A = \begin{bmatrix} 0 & 1\\ 2 & 3 \end{bmatrix}, \qquad B = \begin{bmatrix} 4 & 5 & 6\\ 7 & 8 & 9 \end{bmatrix},$$

then

$$AB = \begin{bmatrix} 0\cdot 4 + 1\cdot 7 & 0\cdot 5 + 1\cdot 8 & 0\cdot 6 + 1\cdot 9\\ 2\cdot 4 + 3\cdot 7 & 2\cdot 5 + 3\cdot 8 & 2\cdot 6 + 3\cdot 9 \end{bmatrix} = \begin{bmatrix} 7 & 8 & 9\\ 29 & 34 & 39 \end{bmatrix}.$$
Note first that $BA$ is not defined, since $A$ is $2 \times 2$ and $B$ is $2 \times 3$. The product is defined only when the number of columns of $A$ is equal to the number of rows of $B$. If $n = m$, the matrix is said to be a square matrix. If $A$ and $B$ are square matrices of the same size, then both $AB$ and $BA$ are defined, but these need not be the same matrix. For example, if

$$A = \begin{bmatrix} 0 & 1\\ 2 & 3 \end{bmatrix}, \qquad B = \begin{bmatrix} 2 & 3\\ 4 & 5 \end{bmatrix},$$

then

$$AB = \begin{bmatrix} 0\cdot 2 + 1\cdot 4 & 0\cdot 3 + 1\cdot 5\\ 2\cdot 2 + 3\cdot 4 & 2\cdot 3 + 3\cdot 5 \end{bmatrix} = \begin{bmatrix} 4 & 5\\ 16 & 21 \end{bmatrix},$$

while

$$BA = \begin{bmatrix} 2 & 3\\ 4 & 5 \end{bmatrix}\begin{bmatrix} 0 & 1\\ 2 & 3 \end{bmatrix} = \begin{bmatrix} 2\cdot 0 + 3\cdot 2 & 2\cdot 1 + 3\cdot 3\\ 4\cdot 0 + 5\cdot 2 & 4\cdot 1 + 5\cdot 3 \end{bmatrix} = \begin{bmatrix} 6 & 11\\ 10 & 19 \end{bmatrix},$$

and $AB \ne BA$.
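These computations are easy to reproduce; a short NumPy sketch (an illustration added here, not from the original text):

    import numpy as np

    A = np.array([[0, 1], [2, 3]])
    B = np.array([[2, 3], [4, 5]])

    print(A + B)                          # entrywise sum
    print(A @ B)                          # [[ 4  5] [16 21]]
    print(B @ A)                          # [[ 6 11] [10 19]]
    print(np.array_equal(A @ B, B @ A))   # False: AB != BA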
The matrix $B = [b_{ij}]$, where $b_{ij} = a_{ji}$, is called the transpose of $A = [a_{ij}]$ and is denoted by $A^T$. If

$$A = \begin{bmatrix} 1 & 0 & 1\\ 4 & 2 & 5\\ -1 & -2 & 3 \end{bmatrix},$$

then

$$A^T = \begin{bmatrix} 1 & 4 & -1\\ 0 & 2 & -2\\ 1 & 5 & 3 \end{bmatrix}.$$

The rows and columns have been interchanged.
THEOREM 2.1
The matrix product has the properties

i. $A(BC) = (AB)C$
ii. $\alpha(AB) = (\alpha A)B = A(\alpha B)$
iii. $(A + B)C = AC + BC$
iv. $C(A + B) = CA + CB$
v. $(AB)^T = B^T A^T$

where $A$, $B$, $C$ are matrices, $\alpha$ is a real or complex number, and the above products are defined.
The proofs of these properties are exercises in manipulating subscripts and
are omitted.
A matrix consisting of a single column (that is, an $n \times 1$ matrix)

$$x = \begin{bmatrix} a_{11}\\ a_{21}\\ \vdots\\ a_{n1} \end{bmatrix}$$

is called an $n$-dimensional (column) vector. The matrix $I$ (denoted $I_p$ if the dimension $p \times p$ is important), called the identity matrix, is defined by

$$I = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix};$$

that is, $a_{ii} = 1$ and $a_{ij} = 0$, $i \ne j$. If $A$ is an $m \times n$ matrix and $I_n$ is the $n \times n$ identity matrix, then, from the definition of multiplication, it follows that

$$AI_n = A.$$

If $I_m$ is the $m \times m$ identity matrix,

$$I_m A = A.$$
Our interest is principally in vectors and square matrices.
Let $A$ be an $n \times n$ matrix. A real number called the determinant of $A$ is associated with each square matrix. The definition of this real number is inductive on $n$. If $n = 1$, $\det A = a_{11}$. Suppose that $\det A$ has been defined for $n = k \ge 1$. Given an element $a_{ij}$ of a matrix $A$, $M_{ij}$, the minor of $a_{ij}$, is the matrix obtained from $A$ by deleting the $i$th row and the $j$th column. $A_{ij}$, the cofactor of $a_{ij}$, is defined by

$$A_{ij} = (-1)^{i+j}\det M_{ij}.$$

For $n = k + 1$, we define

$$\det A = a_{11}A_{11} + a_{21}A_{21} + \cdots + a_{n1}A_{n1}.$$
The following examples clarify this definition. If

$$A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22} \end{bmatrix},$$

then $\det A = a_{11}a_{22} - a_{21}a_{12}$. If

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix},$$

then

$$\begin{aligned}
\det A &= a_{11}\det\begin{bmatrix} a_{22} & a_{23}\\ a_{32} & a_{33} \end{bmatrix} - a_{21}\det\begin{bmatrix} a_{12} & a_{13}\\ a_{32} & a_{33} \end{bmatrix} + a_{31}\det\begin{bmatrix} a_{12} & a_{13}\\ a_{22} & a_{23} \end{bmatrix}\\
&= a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} - a_{21}a_{12}a_{33} + a_{21}a_{13}a_{32} + a_{31}a_{12}a_{23} - a_{31}a_{13}a_{22}.
\end{aligned}$$
By definition, a determinant, for n > 1, is the sum of all of the elements of
the first column multiplied by their cofactors. Actually, the first column need not
necessarily be used to find a determinant; in fact, the following is also true.
THEOREM 2.2

$$\det A = \sum_{i=1}^{n} a_{ij}A_{ij} = \sum_{j=1}^{n} a_{ij}A_{ij}.$$
The content of this theorem is that in the preceding inductive definition of a
determinant, the first column can be replaced by an arbitrary column or an
arbitrary row. We will accept this theorem without proof. An important property
of expansions of the type in the theorem is that

$$\sum_{i=1}^{n} a_{ij}A_{ik} = \sum_{j=1}^{n} a_{ij}A_{kj} = 0,$$

if $j \ne k$ on the left and $i \ne k$ on the right. That is, if the cofactors are taken from a different column or a different row, the resulting sum is zero.
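The inductive definition translates directly into a (very inefficient, but faithful) program. The following Python function is a sketch of the first-column expansion added for illustration, not an excerpt from the book.

    def det(A):
        # Expand along the first column: det A = sum_i a_{i1} A_{i1}.
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for i in range(n):
            # Minor M_{i1}: delete row i and the first column.
            minor = [row[1:] for k, row in enumerate(A) if k != i]
            # Cofactor A_{i1} = (-1)^{i+1} det M_{i1} (1-based indices).
            total += (-1) ** i * A[i][0] * det(minor)
        return total

    print(det([[1, 2], [3, 4]]))                     # -2
    print(det([[2, -1, 0], [0, 4, 0], [2, 5, 3]]))   # 24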
An important property of determinants (which also will not be proved
here) is given in the following.
THEOREM 2.3

If $A$ and $B$ are $n \times n$ matrices,

$$\det(AB) = (\det A)(\det B);$$

that is, the determinant of the product of two matrices is the product of the determinants.

A square matrix is said to be singular if $\det A = 0$ and nonsingular if $\det A \ne 0$.
A matrix $B$ is called the inverse of the square matrix $A$ if $AB = I$. Suppose $\det A \ne 0$. We define $B$ by

$$B = \frac{1}{\det A}[A_{ij}]^T; \tag{2.1}$$

that is, $B$ is the transpose of the matrix obtained by replacing each element by its cofactor and then dividing by the scalar $\det A$. Then

$$AB = \frac{1}{\det A}\left[\sum_{k} a_{ik}A_{jk}\right].$$

If $i = j$, then the entry is $\det A$; if $i \ne j$, the product is zero, as noted previously. Hence, the matrix $\left[\sum_{k} a_{ik}A_{jk}\right]$ looks like

$$\begin{bmatrix} \det A & 0 & 0 & \cdots & 0\\ 0 & \det A & 0 & \cdots & 0\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & 0 & \cdots & \det A \end{bmatrix}.$$

Therefore, $AB = I$, so $B$ is the inverse of $A$. This matrix $B$ is usually written $A^{-1}$. Since $AA^{-1} = I$, then $1 = \det(AA^{-1}) = (\det A)(\det A^{-1})$, and it cannot be the case that $A$ has an inverse if $\det A = 0$. The arguments above can be used to show the following.
THEOREM 2.4

A necessary and sufficient condition that $A^{-1}$ exists is that $\det A \ne 0$.

It can also be shown that $A^{-1}A = I$, that is, that $A$ and $A^{-1}$ commute. Further, $A^{-1}$ is unique. To see this, suppose there exists a matrix $X$ such that $AX = I$. Then, multiplying both sides of this equation on the left by $A^{-1}$ yields

$$A^{-1}AX = A^{-1}I = A^{-1}$$

or

$$X = A^{-1}.$$

Similarly, if $XA = I$, then $XAA^{-1} = A^{-1}$ or $X = A^{-1}$.

If $A_1$ and $A_2$ are nonsingular, then

$$(A_1A_2)^{-1} = A_2^{-1}A_1^{-1},$$

since $A_1A_2A_2^{-1}A_1^{-1} = I$.
To fit initial conditions for solutions of systems of differential equations, it will be necessary to consider systems of linear algebraic equations in the variables $x_1, \ldots, x_n$ of the form

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n &= c_1\\
a_{21}x_1 + \cdots + a_{2n}x_n &= c_2\\
&\ \vdots\\
a_{n1}x_1 + \cdots + a_{nn}x_n &= c_n.
\end{aligned}$$
This can be written

$$Ax = c, \tag{2.2}$$

where $A = [a_{ij}]$ is an $n \times n$ matrix and $x = [x_i]$ and $c = [c_i]$ are vectors. If $\det A \ne 0$, then

$$x = A^{-1}c$$

is a solution, since

$$Ax = A(A^{-1}c) = AA^{-1}c = c.$$

It is not difficult to see that this is the only solution. Suppose $x$ and $y$ are both solution vectors of (2.2). Since $Ax = c$ and $Ay = c$, $Ax - Ay = c - c = 0$, the null vector (all entries zero). The distributive law ($x$ and $y$ are $n \times 1$ matrices) says that

$$A(x - y) = 0.$$

Since $A$ is invertible and $A^{-1}0 = 0$, we have

$$A^{-1}A(x - y) = A^{-1}0 = 0;$$

hence

$$x - y = 0$$

or

$$x = y.$$

Thus there is only one solution. The converse of this result is also true.
THEOREM 2.5
A necessary and sufficient condition for the system (2.2) to have a unique
solution is that A be nonsingular.
In particular, note that if the vector c is null, that is, has all its components
zero, x = 0 is the only solution vector if A is nonsingular. (If A is singular, then
x = 0 is one solution and there must be another solution, since solutions are
not unique.)
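In practice one solves (2.2) numerically without ever forming $A^{-1}$; a brief NumPy illustration (an addition to the text, with arbitrarily chosen numbers):

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    c = np.array([5.0, 10.0])

    x = np.linalg.solve(A, c)     # the unique solution, since det A = 5 != 0
    print(x)                      # [1. 3.]
    print(np.allclose(A @ x, c))  # True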
A finite set of $n$-dimensional vectors $x_1, \ldots, x_k$, that is, $n \times 1$ matrices, is said to be linearly dependent if there exist constants $c_1, \ldots, c_k$, not all zero, such that

$$c_1x_1 + c_2x_2 + \cdots + c_kx_k = 0.$$

A set of vectors that is not linearly dependent is said to be linearly independent. An expression of the form $c_1x_1 + c_2x_2 + \cdots + c_kx_k$ is said to be a linear combination of the vectors $x_1, \ldots, x_k$. The following theorem offers a way to check
whether a matrix is singular.
THEOREM 2.6
A necessary and sufficient condition for a matrix to be nonsingular is that its
columns are linearly independent vectors.
Proof. Let $A$ be a matrix and let $x^1, \ldots, x^n$ denote the $n$-dimensional column vectors of $A$. We inquire whether there exist real numbers $c_1, \ldots, c_n$ such that

$$c_1x^1 + c_2x^2 + \cdots + c_nx^n = 0. \tag{2.3}$$

If we let $C$ be the vector

$$C = \begin{bmatrix} c_1\\ c_2\\ \vdots\\ c_n \end{bmatrix},$$

then (2.3) may be written

$$AC = 0, \tag{2.4}$$

since $A = [x^1, \ldots, x^n]$. The equation $AC = 0$ has a nontrivial solution (Theorem 2.5) if and only if $A$ is singular. Thus, if $A$ is nonsingular, the only solution of (2.3) is $c_1 = c_2 = \cdots = c_n = 0$ and the vectors $x^1, \ldots, x^n$ are linearly independent. If $A$ is singular, there exists a nontrivial solution $C$ of (2.4) and $x^1, \ldots, x^n$ are linearly dependent, using the components of $C$ as the constants in the definition of linear dependence.
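Theorem 2.6 gives a practical test that is easy to automate; a small sketch (not from the text) with one independent and one dependent set of columns:

    import numpy as np

    A_indep = np.array([[1.0, 0.0], [1.0, 1.0]])   # det = 1
    A_dep = np.array([[1.0, 2.0], [2.0, 4.0]])     # column 2 = 2 * column 1

    for A in (A_indep, A_dep):
        d = np.linalg.det(A)
        # nonzero determinant <=> linearly independent columns
        print(d, not np.isclose(d, 0.0))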
Finally, it will be necessary to consider matrices whose elements are functions. We can think of a matrix $A(t)$ as a mapping from the set of real numbers into the set of $n \times n$ matrices. It is simpler, however, to think of such a matrix as $n^2$ functions labeled $a_{ij}(t)$ and make our definitions in terms of the entries rather than in terms of the mapping. Proceeding this way, a matrix of functions $A(t) = [a_{ij}(t)]$ is said to be

1. continuous at a point $t_0$ if each $a_{ij}(t)$ is continuous at $t_0$,
2. differentiable at a point $t_0$ if each $a_{ij}(t)$ is differentiable at $t_0$,
3. integrable over $[a, b]$ if each $a_{ij}(t)$ is integrable over $[a, b]$.

If $A(t)$ is differentiable, define $A'(t) = [a_{ij}'(t)]$. From the definition of the product of two matrices, we have at once that if $A(t)$ and $B(t)$ are differentiable and the product $A(t)B(t)$ is defined, then $A(t)B(t)$ is differentiable. Further,

$$(AB)' = A'B + AB'. \tag{2.5}$$

To see this, note that

$$(AB)' = \left[\sum_k a_{ik}b_{kj}\right]' = \left[\sum_k a_{ik}'b_{kj}\right] + \left[\sum_k a_{ik}b_{kj}'\right] = A'B + AB'.$$

This fact will be very important in our development of a theory for systems of differential equations. Note that the order of multiplication is important. Similarly, we define

$$\int_a^b A(s)\,ds = \left[\int_a^b a_{ij}(s)\,ds\right],$$

and the usual rules for integration apply. For example,

$$\int_0^t [A(s) + B(s)]\,ds = \int_0^t A(s)\,ds + \int_0^t B(s)\,ds.$$
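The product rule (2.5) can be confirmed symbolically. The following SymPy sketch (an illustration added here, with arbitrarily chosen entries) differentiates a product entrywise:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[sp.cos(t), t], [1, sp.exp(t)]])
    B = sp.Matrix([[t**2, 0], [sp.sin(t), t]])

    lhs = (A * B).diff(t)                  # (AB)'
    rhs = A.diff(t) * B + A * B.diff(t)    # A'B + AB'
    print(sp.simplify(lhs - rhs))          # the zero matrix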
EXERCISES
1. Find $A + B$, $AB$, and $BA$ when

(a) $A = \begin{bmatrix} 1 & 0\\ 2 & 1 \end{bmatrix}$, $B = \begin{bmatrix} -1 & 1\\ 0 & 2 \end{bmatrix}$

(b) $A = \begin{bmatrix} 1 & -1\\ 2 & 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 & 1\\ 1 & -2 \end{bmatrix}$

(c) $A = \begin{bmatrix} 1 & 2 & 3\\ 0 & 1 & 0\\ 4 & 5 & 6 \end{bmatrix}$, $B = \begin{bmatrix} 2 & 0 & 1\\ 0 & -1 & 4\\ 2 & 2 & 0 \end{bmatrix}$

(d) $A = \begin{bmatrix} -1 & -2 & -3\\ 1 & 2 & 3\\ 0 & 1 & -4 \end{bmatrix}$, $B = \begin{bmatrix} 2 & 1 & 3\\ -1 & 4 & 2\\ 6 & 1 & 0 \end{bmatrix}$

(e) $A = \begin{bmatrix} 1 & 1 & 0 & 0\\ 0 & 1 & 1 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 2 \end{bmatrix}$, $B = \begin{bmatrix} 2 & 0 & 0 & 0\\ 0 & 3 & 1 & 0\\ 0 & 0 & 3 & 0\\ 0 & 0 & 0 & 2 \end{bmatrix}$
2. A matrix is said to be diagonal if $a_{ij} = 0$ when $i \ne j$. Show that the product of two diagonal matrices is diagonal.
3. Show that $A(BC) = (AB)C$ by using the definition of product for matrices.
4. If $\alpha$ is a scalar and $A$ and $B$ are matrices, show that $\alpha(AB) = (\alpha A)B = A(\alpha B)$, that is, scalars (numbers) "factor through" matrix multiplication.
5. Establish the distributive laws for matrix multiplication,

$$(A + B)C = AC + BC,$$
$$C(A + B) = CA + CB.$$

6. Prove that $(AB)^T = B^T A^T$.
7. Construct $A^{-1}$, if $A =$

(a) $\begin{bmatrix} 2 & 1 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3 \end{bmatrix}$  (b) $\begin{bmatrix} 2 & 0 & 0\\ 0 & 3 & 0\\ 0 & 0 & 4 \end{bmatrix}$  (c) $\begin{bmatrix} 2 & 1 & 0 & 0\\ 0 & 2 & 1 & 0\\ 0 & 0 & 2 & 0\\ 0 & 0 & 0 & 2 \end{bmatrix}$  (d) $\begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 0\\ 1 & 1 & 0 \end{bmatrix}$  (e) $\begin{bmatrix} 1 & 1 & 1\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$
8. Find solutions of the system $Ax = c$ when $A$ is as in Exercise 7(b), (c), and (d).
9. Determine whether the following sets of vectors are linearly independent or linearly dependent.

(a) $\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix}$, $\begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}$, $\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$  (b) $\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$, $\begin{bmatrix} 0\\ 1\\ 1 \end{bmatrix}$, $\begin{bmatrix} 1\\ 2\\ 1 \end{bmatrix}$  (c) $\begin{bmatrix} 1\\ 2 \end{bmatrix}$, $\begin{bmatrix} 2\\ 4 \end{bmatrix}$

10. Find $A'(t)$ and $\int_0^t A(s)\,ds$ if $A(t) =$

(a) $\begin{bmatrix} \cos t & \sin t\\ -\sin t & \cos t \end{bmatrix}$  (b) $\begin{bmatrix} e^t & te^t\\ 1 & t^2 \end{bmatrix}$  (c) $\begin{bmatrix} t & 1\\ e^{2t} & \cos t \end{bmatrix}$

11. Verify equation (2.5) where $A(t)$ is given by Exercise 10(a) and $B(t)$ by Exercise 10(b).
3. The Structure of Solutions of Homogeneous Linear Systems
Let $x$ represent an $n$-dimensional vector, let $A$ be an $n \times n$ matrix of continuous functions defined on an interval $I$, and let $e(t)$ be an $n$-vector of continuous functions defined on $I$. The system (1.2), using matrix notation, can be written

$$x' = A(t)x + e(t). \tag{3.1}$$

A solution of (3.1) on $I$ is a differentiable vector function $\varphi(t)$ such that

$$\varphi'(t) = A(t)\varphi(t) + e(t)$$

for every $t \in I$. If $x_0$ is a constant vector and $t_0 \in I$, then the initial value problem for (3.1) is to find a solution of (3.1) that satisfies, in addition,

$$\varphi(t_0) = x_0. \tag{3.2}$$

We state the basic existence theorem, which is a special case of a theorem to be proved in Chapter 3.
THEOREM 3.1
Let $A(t)$ be a continuous $n \times n$ matrix defined on an interval $I$ and let $e(t)$ be a continuous $n$-vector defined on $I$. For every constant $n$-vector $x_0$ and every $t_0 \in I$, there exists a unique differentiable vector $\varphi(t)$ defined on $I$ such that

$$\varphi'(t) = A(t)\varphi(t) + e(t), \quad t \in I,$$
$$\varphi(t_0) = x_0.$$
For the remainder of this section we consider the case $e(t) = 0$, called the homogeneous case. (The quantity $e(t)$ sometimes represents an external force, so this case is also called the unforced case.) Here we attempt to develop a structure similar to that for scalar equations. It will be convenient to think of Equation (3.1) in a slightly different way. Let $\mathscr{A}$ be one set of functions and $\mathscr{B}$ another. Suppose that for each element $x$ in the set $\mathscr{A}$ we associate a unique element of the set $\mathscr{B}$, called $Tx$. $T$ is a mapping from the set $\mathscr{A}$ into the set $\mathscr{B}$, and symbolically we write $T: \mathscr{A} \to \mathscr{B}$. The mapping $T$ is also called an operator. $\mathscr{A}$ is called the domain of $T$ and $\mathscr{B}$, the set of all $y$ such that $y = Tx$, is called the range of $T$. (Sometimes it is convenient to indicate a larger set, a set containing the range, in the symbolic definition.) For example, let $\mathscr{A}$ be the set of all continuous $n$-vectors on $[0, 1]$. Define $Tx = y$, $x \in \mathscr{A}$, by

$$y(t) = \int_0^t x(s)\,ds.$$

The set $\mathscr{B}$, in this case, could be the set of continuous vectors defined on $[0, 1]$ that have a continuous derivative on $(0, 1)$. As another example, let $\mathscr{A}$ be as above and let $\Omega$ be an $n \times n$ constant matrix; then an operator can be defined by $y = \Omega x$. All of our sets will have the property that if $\alpha$ is a number and $x$ and $y$ are in the set, $\alpha x$ and $x + y$ are in the set.

Now let $x(t)$ be a continuously differentiable vector. Define an operator $L$ on the set of all such functions by

$$L[x] = x' - Ax,$$

where $A$ is an $n \times n$ continuous matrix. $L$ maps continuously differentiable functions into continuous ones, and a solution of (3.1) is just a function that is mapped by $L$ onto the function $e(t)$, or, in the homogeneous case, onto the constant function that is everywhere zero.

An operator $T$ is said to be linear if for any two elements $x$, $y$ in its domain, and any two numbers (scalars) $\alpha$ and $\beta$,

$$T(\alpha x + \beta y) = \alpha T(x) + \beta T(y).$$
THEOREM 3.2
The operator L is a linear operator.
Proof. Let $x_1(t)$, $x_2(t)$ be differentiable vector functions and let $c_1$, $c_2$ be real constants. Then

$$\begin{aligned}
L[c_1x_1 + c_2x_2](t) &= (c_1x_1(t) + c_2x_2(t))' - A(t)(c_1x_1(t) + c_2x_2(t))\\
&= c_1x_1'(t) + c_2x_2'(t) - c_1A(t)x_1(t) - c_2A(t)x_2(t)\\
&= c_1(x_1'(t) - A(t)x_1(t)) + c_2(x_2'(t) - A(t)x_2(t))\\
&= c_1L[x_1](t) + c_2L[x_2](t).
\end{aligned}$$
THEOREM 3.3
Every linear combination of solutions of
$$L[x] = 0 \tag{3.3}$$
is a solution of (3.3).
Proof. If $x_1(t)$, $x_2(t)$ are solutions of (3.3) and $c_1$ and $c_2$ are constants,

$$L[c_1x_1 + c_2x_2] = c_1L[x_1] + c_2L[x_2] = 0,$$

since $L[x_1] = 0$ and $L[x_2] = 0$.
Suppose now that we have $n$ solution vectors, $x_i(t)$, of (3.3) defined on an interval $I$. Then we can form a matrix $\Phi$ whose columns are these solutions. We write

$$\Phi = [x_1, x_2, \ldots, x_n].$$

Since the elements of $\Phi$ are differentiable, we can compute $\Phi'$. Now the $i$th column of $\Phi'(t)$ is $x_i'(t) = A(t)x_i(t)$, so that

$$\Phi'(t) = [A(t)x_1(t), A(t)x_2(t), \ldots, A(t)x_n(t)] = A(t)[x_1(t), x_2(t), \ldots, x_n(t)] = A(t)\Phi(t).$$

That is, $\Phi$ satisfies

$$\Phi'(t) = A(t)\Phi(t). \tag{3.4}$$

Equation (3.4) is a shorthand method for writing $n$ vector differential equations ($n^2$ scalar differential equations). For this reason, Theorem 3.1—the existence and uniqueness theorem—applies if we specify an initial condition $\Phi(t_0) = C$, where $C$ is a constant matrix. Using (3.4), we have the following useful fact.
If $\Phi$ is a matrix whose columns are solutions of (3.3) and $c$ is a constant vector, then $\Phi c$ is a solution of (3.3).

Proof. The proof is a straightforward computation,

$$L[\Phi c] = (\Phi c)' - A\Phi c = \Phi'c - A\Phi c = (\Phi' - A\Phi)c = 0,$$

since $\Phi$ satisfies (3.4).
If $\Phi(t)$ is a matrix that is nonsingular for each $t$ and that satisfies the matrix differential equation (3.4), then $\Phi$ is said to be a fundamental matrix for the differential equation $x' - Ax = 0$.
We are now equipped to prove the principal theorem of this section. The
content of this theorem is that finding any fundamental matrix for (3.3) allows us
to find all of the solutions of (3.3).
THEOREM 3.4
If $\Phi$ is a fundamental matrix for (3.3) on an interval $I$ where $A(t)$ is continuous, then every solution of (3.3) can be written $\Phi c$ for an appropriate constant vector $c$.
Proof. Let $x(t)$ be an arbitrary solution defined on an interval $I$, let $t_0 \in I$, and let $\Phi$ be a given fundamental matrix for (3.3). Now $\Phi(t_0)$ is a nonsingular constant matrix, so $\Phi^{-1}(t_0)$ exists (Theorem 2.4). Let $c = \Phi^{-1}(t_0)x(t_0)$. Since $c$ is a vector, $y(t) = \Phi(t)c$ is a solution of (3.3). Furthermore,

$$y(t_0) = \Phi(t_0)c = \Phi(t_0)(\Phi^{-1}(t_0)x(t_0)) = x(t_0).$$

Since solutions of the initial value problem are unique (Theorem 3.1), it follows that $x(t) = y(t)$ for $t \in I$, or that $x(t) = \Phi(t)c$, as claimed.
The reader will recognize that Theorem 3.4 is a wholesale way of doing what was done in particular for scalar equations. For example, an arbitrary solution of a second-order scalar equation can be represented as a linear combination of two linearly independent solutions. The quantity $y(t) = \Phi(t)c$ represents the $i$th component of $y(t)$ as a linear combination of the $i$th components of $n$ given solutions—each row is in fact the same linear combination, and the components of $c$ are the coefficients.

We illustrate Theorem 3.4 by comparing it with the theory for second-order scalar equations. Consider

$$\begin{aligned}
y_1' &= y_2\\
y_2' &= -y_1.
\end{aligned} \tag{3.5}$$

This system is the system we obtain from

$$y'' + y = 0. \tag{3.6}$$

By substitution in (3.5) it may be verified that the vectors $\begin{bmatrix} \sin t\\ \cos t \end{bmatrix}$ and $\begin{bmatrix} \cos t\\ -\sin t \end{bmatrix}$ are two solutions of (3.5). Let

$$\Phi = \begin{bmatrix} \sin t & \cos t\\ \cos t & -\sin t \end{bmatrix}.$$

Since $\det\Phi = -\sin^2 t - \cos^2 t = -1$, $\Phi$ is nonsingular and hence is a fundamental matrix. By Theorem 3.4, any solution $y(t)$ of (3.5) can be written

$$y(t) = \Phi c = \begin{bmatrix} \sin t & \cos t\\ \cos t & -\sin t \end{bmatrix}\begin{bmatrix} c_1\\ c_2 \end{bmatrix}$$

for appropriate $c_1$ and $c_2$.

Note that two linearly independent solutions of (3.6) are $\varphi_1 = \sin t$, $\varphi_2 = \cos t$. The Wronskian of $\varphi_1$ and $\varphi_2$, $W(\varphi_1, \varphi_2)(t)$, is the determinant of the fundamental matrix $\Phi$. That is,

$$\det\Phi(t_0) = W(\varphi_1, \varphi_2)(t_0).$$

This is the connecting link between the theory for second-order scalar equations and the theory for the equivalent systems.
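The claims about $\Phi$ are easy to check numerically; in the sketch below (not part of the text) the derivative $\Phi'$ is approximated by a centered difference and compared with $A\Phi$.

    import numpy as np

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the matrix of system (3.5)

    def Phi(t):
        return np.array([[np.sin(t), np.cos(t)],
                         [np.cos(t), -np.sin(t)]])

    t, h = 0.7, 1e-6
    dPhi = (Phi(t + h) - Phi(t - h)) / (2 * h)       # approximates Phi'(t)
    print(np.allclose(dPhi, A @ Phi(t), atol=1e-6))  # True: Phi' = A Phi
    print(np.linalg.det(Phi(t)))                     # -1.0 for every t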
Consider now the system ($n = 3$),

$$\begin{aligned}
y_1' &= y_2 + 4y_3\\
y_2' &= -y_1 - 2y_3\\
y_3' &= y_3,
\end{aligned}$$

or

$$\begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix}' = \begin{bmatrix} 0 & 1 & 4\\ -1 & 0 & -2\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix}. \tag{3.7}$$

It can be verified by substitution that three solutions are

$$\begin{bmatrix} \sin t\\ \cos t\\ 0 \end{bmatrix}, \quad \begin{bmatrix} \cos t\\ -\sin t\\ 0 \end{bmatrix}, \quad \text{and} \quad \begin{bmatrix} e^t\\ -3e^t\\ e^t \end{bmatrix}.$$

Then $\Phi$ is given by

$$\Phi(t) = \begin{bmatrix} \sin t & \cos t & e^t\\ \cos t & -\sin t & -3e^t\\ 0 & 0 & e^t \end{bmatrix}. \tag{3.8}$$

A computation shows that

$$\det\Phi(t) = e^t(-\sin^2 t - \cos^2 t) = -e^t \ne 0,$$

so $\Phi$ is nonsingular and hence is a fundamental matrix. Every solution of (3.7) can be written $\Phi(t)c$ for an appropriate constant vector $c$.

Suppose, for example, we desire to find the solution of (3.7) that satisfies the initial conditions $y_1(0) = 1$, $y_2(0) = 1$, $y_3(0) = 1$. It is necessary to choose

$$c = \begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix}$$

such that

$$\Phi(0)c = \begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix},$$

or

$$\begin{bmatrix} 0 & 1 & 1\\ 1 & 0 & -3\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = \begin{bmatrix} 1\\ 1\\ 1 \end{bmatrix}.$$

In equation form, this is

$$\begin{aligned}
c_2 + c_3 &= 1\\
c_1 - 3c_3 &= 1\\
c_3 &= 1.
\end{aligned}$$

(We could, of course, invert the matrix, but that would involve more labor.) Thus

$$c = \begin{bmatrix} 4\\ 0\\ 1 \end{bmatrix},$$

and

$$\Phi(t)c = \begin{bmatrix} \sin t & \cos t & e^t\\ \cos t & -\sin t & -3e^t\\ 0 & 0 & e^t \end{bmatrix}\begin{bmatrix} 4\\ 0\\ 1 \end{bmatrix} = \begin{bmatrix} 4\sin t + e^t\\ 4\cos t - 3e^t\\ e^t \end{bmatrix}$$

is the desired solution.
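The back-substitution above can be replaced by a linear solve; a NumPy sketch (added for illustration):

    import numpy as np

    Phi0 = np.array([[0.0, 1.0, 1.0],     # Phi(0) for the matrix (3.8)
                     [1.0, 0.0, -3.0],
                     [0.0, 0.0, 1.0]])
    c = np.linalg.solve(Phi0, np.array([1.0, 1.0, 1.0]))
    print(c)   # [4. 0. 1.], matching the hand computation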
The principal issue, however, the question of the existence of a fundamental matrix (and how to find it), has been sidestepped. In the examples above the matrix was exhibited explicitly, but the question as to whether a fundamental matrix can always be found thus far remains open. Let

$$\varphi^i(t) = \begin{bmatrix} \varphi_1^i(t)\\ \vdots\\ \varphi_n^i(t) \end{bmatrix}$$

be the solution of (3.3) satisfying $\varphi_j^i(t_0) = 0$, $j \ne i$, and $\varphi_i^i(t_0) = 1$, that is,

$$\varphi^i(t_0) = \begin{bmatrix} 0\\ \vdots\\ 1\\ \vdots\\ 0 \end{bmatrix} \longleftarrow i\text{th place}.$$

The set of solutions $\varphi^i(t)$, $i = 1, 2, \ldots, n$, which exists by Theorem 3.1, can be used to form a solution matrix

$$\Phi(t) = [\varphi^1(t), \varphi^2(t), \ldots, \varphi^n(t)].$$

Further, for $t = t_0$, $\det\Phi(t_0) = 1$. If the determinant should remain nonzero on an interval $I$, then $\Phi$ would be a fundamental matrix on this interval. This is indeed the case, as given by Theorem 3.5. The trace of a matrix $A(t) = [a_{ij}(t)]$ (written $\operatorname{tr} A(t)$) is defined to be the sum of the diagonal elements, that is, $\operatorname{tr} A(t) = \sum_{i=1}^n a_{ii}(t)$. Note that $\operatorname{tr} A(t)$ is a scalar function.
THEOREM 3.5 (Abel's formula)

Let $A(t)$ be an $n \times n$ matrix of continuous functions on $I = [a, b]$ and let $\Phi(t)$ be a matrix of differentiable functions such that

$$\Phi'(t) = A(t)\Phi(t).$$

Then for $t, t_0 \in I$,

$$\det\Phi(t) = \det\Phi(t_0)\,e^{\int_{t_0}^{t}\operatorname{tr}A(s)\,ds}.$$

Since an exponential is never zero (note that this exponential is a real-valued function, not a matrix), Theorem 3.5 says that if $\Phi$ is a matrix whose columns are solutions of (3.3), then $\det\Phi(t)$ is identically zero or never zero. Thus, to find a fundamental matrix, we need only to find $n$ solutions that at some point $t_0$ are linearly independent vectors. We omit the proof of Theorem 3.5, although the exercises following this section indicate how it can be done.
Combining the preceding arguments, we have
THEOREM 3.6
If A(t) is continuous, there exists a fundamental matrix for the system (3.3).
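Abel's formula is easy to illustrate with the matrix (3.8): there $\operatorname{tr} A(s) = 1$, so the theorem predicts $\det\Phi(t) = \det\Phi(0)\,e^t$. A numerical sketch (mine, not the book's):

    import numpy as np

    def Phi(t):   # the fundamental matrix (3.8)
        return np.array([[np.sin(t), np.cos(t), np.exp(t)],
                         [np.cos(t), -np.sin(t), -3 * np.exp(t)],
                         [0.0, 0.0, np.exp(t)]])

    for t in (0.0, 0.5, 2.0):
        lhs = np.linalg.det(Phi(t))
        rhs = np.linalg.det(Phi(0.0)) * np.exp(t)   # det Phi(0) = -1
        print(lhs, rhs)                             # both equal -e^t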
EXERCISES
1. If $\Phi(t)$ is a fundamental matrix for $x' = Ax$ and $C$ is a nonsingular constant matrix of the same dimension, show that $\Phi(t)C$ is a fundamental matrix. (Recall that $C' = 0$ if $C$ is a constant matrix.)
2. Show that if $\Phi(t)$ and $\Psi(t)$ are fundamental matrices for $x' = Ax$, then there is a constant, nonsingular matrix $C$ such that $\Phi(t)C = \Psi(t)$.
3. Verify that the matrix (3.8) is a fundamental matrix for (3.7). Illustrate Theorem 3.5
with the matrix (3.8).
4. Define an operator $T$ on the set of continuous functions on $[0, 1]$ by $(Tx)(t) = \int_0^t x(s)\,ds$, $t \in [0, 1]$. Show that $T$ is a linear operator. Can you define $T$ on a larger domain?
5. Show that multiplication of vectors in $\mathbb{R}^n$ by a (fixed) matrix $A$ defines a linear operator.
6. Let $B(t) = \begin{bmatrix} b_{11}(t) & b_{12}(t)\\ b_{21}(t) & b_{22}(t) \end{bmatrix}$. Let $b_{ij}(t)$ be differentiable. Compute $(\det B(t))'$ by first expanding $\det B(t)$ and then differentiating. Then show that

$$(\det B(t))' = \det\begin{bmatrix} b_{11}'(t) & b_{12}'(t)\\ b_{21}(t) & b_{22}(t) \end{bmatrix} + \det\begin{bmatrix} b_{11}(t) & b_{12}(t)\\ b_{21}'(t) & b_{22}'(t) \end{bmatrix}.$$

7. Let $\Phi(t) = \begin{bmatrix} \varphi_{11}(t) & \varphi_{12}(t)\\ \varphi_{21}(t) & \varphi_{22}(t) \end{bmatrix}$ be a fundamental matrix for $x' = A(t)x$, where $A = [a_{ij}(t)]$. Show that

$$(\det\Phi)' = \det\begin{bmatrix} \sum_k a_{1k}\varphi_{k1} & \sum_k a_{1k}\varphi_{k2}\\ \varphi_{21} & \varphi_{22} \end{bmatrix} + \det\begin{bmatrix} \varphi_{11} & \varphi_{12}\\ \sum_k a_{2k}\varphi_{k1} & \sum_k a_{2k}\varphi_{k2} \end{bmatrix} = \left(\sum_{i=1}^{2} a_{ii}\right)\det\Phi.$$

8. Let $z(t) = \det\Phi(t)$, where $\Phi(t)$ is as given in Exercise 7. Use Exercise 7 to conclude that $z(t) = z(t_0)e^{\int_{t_0}^{t}\operatorname{tr}A(s)\,ds}$ and establish Theorem 3.5 for this special case.
4. Matrix Analysis and the Matrix Exponential
In Section 2 some of the basic ideas of matrices and their algebra were
developed. Matrices were used in Section 3, but mostly for notational convenience; they were used to give a simple representation to complicated expressions. We now need to take a further step, to learn how to take a limit of a
sequence of matrices. The power of the simple idea of limit is familiar to every
calculus student and lies at the heart of all of analysis. It is possible to carry over
to matrices (and to more general settings) many of the ideas from elementary
calculus. We limit our scope to one simple idea, convergence. We will use this
notion to define a very useful matrix, which is a sum of an infinite series of
matrices. This matrix, called the exponential of another matrix, has many (and
fails to have many other) properties of the real exponential function. It is an
example of one of the fundamental themes of mathematics, taking an idea in one
setting and developing it in another—the process called generalization. As we
shall see in the material that follows, the concept of the exponential of a matrix
is a very useful generalization.
The reader is assumed to be familiar with limits of sequences and sums of series from calculus. The notion of making the absolute value (or modulus, in the complex case) of a quantity small is crucial. The first step on the way to the
definition we need is to replace absolute value with a notion applicable to vectors
and matrices. This concept is called the norm of a matrix or a vector. There are
many ways to do this, and for some applications the clever choice of a norm is
very important. For what we need here, any of the usual notions of norm would
be satisfactory, so we choose one that makes the proofs easy.
Define, for an $r \times r$ matrix $A$, a real number, called the norm of $A$, written $\|A\|$, by

$$\|A\| = \sum_{i,j}|a_{ij}|.$$

If $A$ is a vector $(a_1, \ldots, a_r)^T$, define $\|A\| = \sum_{i=1}^{r}|a_i|$. The norm of a matrix has the following properties:

1. $\|A\| > 0$, $A \ne 0$, and $\|0\| = 0$;
2. $\|cA\| = |c|\,\|A\|$;
3. $\|A + B\| \le \|A\| + \|B\|$; and
4. $\|AB\| \le \|A\|\,\|B\|$,

where $A$ and $B$ are $r \times r$ matrices and $c$ is a real or complex number.

If $x$ is an $n$-vector, the definitions for norms of matrices and vectors are so related that $\|Ax\| \le \|A\|\,\|x\|$. Note that the norm of a vector satisfies properties (1), (2), and (3). We also note that there are other "norms" for vectors and matrices.
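This particular norm is one line of NumPy; the sketch below (an added illustration, with arbitrary entries) spot-checks property (4):

    import numpy as np

    def norm(A):
        return np.abs(A).sum()    # sum of |a_ij| over all entries

    A = np.array([[1.0, -2.0], [0.0, 3.0]])
    B = np.array([[2.0, 1.0], [-1.0, 1.0]])
    print(norm(A @ B))            # 11.0
    print(norm(A) * norm(B))      # 30.0, so ||AB|| <= ||A|| ||B|| holds here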
The definitions that follow distinguish between the concept of convergence
and the concept of converging to a limit. The distinction is not really needed at
this point, but it will be important in Chapter 3 where these ideas must be
extended further.
A sequence of $r \times r$ matrices, $A_n$, is convergent (or is a Cauchy sequence) if for $\varepsilon > 0$ there exists a positive integer $N$ such that if $m, n > N$, then $\|A_n - A_m\| < \varepsilon$. The definition of convergence of a sequence of matrices is exactly the same as the definition of convergence of a sequence of real numbers except that norm, $\|\cdot\|$, has replaced absolute value, $|\cdot|$. This is true of the definition of limit of a sequence of matrices as well. A matrix $A$ is said to be the limit of a sequence of matrices $A_n$ if for each $\varepsilon > 0$ there exists a positive integer $N$ such that if $n > N$, then $\|A - A_n\| < \varepsilon$.
THEOREM 4.1
Every convergent sequence of matrices An has a limit.
Proof. The proof will follow from the fact that every convergent sequence of real numbers has a limit and the fact that $|a_{ij}^n - a_{ij}^m| \le \|A_n - A_m\|$, where $a_{ij}^p$ is the element in the $i$th row, $j$th column of the $p$th matrix in the sequence. Hence, if $A_n$ converges, so does $a_{ij}^n$. Let $\lim_{n\to\infty} a_{ij}^n = a_{ij}$, $1 \le i \le r$, $1 \le j \le r$, and let $A = [a_{ij}]$. For $\varepsilon > 0$, choose $N_{ij}$ such that $|a_{ij} - a_{ij}^n| < \varepsilon/r^2$ for $n > N_{ij}$, and let $N = \max_{ij} N_{ij}$. Then

$$\|A - A_n\| = \sum_{ij}|a_{ij} - a_{ij}^n| < r^2(\varepsilon/r^2) = \varepsilon$$

for $n > N$.
Given a sequence of matrices $A_n$, we can form another sequence (called the sequence of partial sums) by defining $S_n = A_1 + \cdots + A_n$. We denote the sequence $\{S_n\}$ by $\sum_{i=1}^{\infty} A_i$ and call $\sum_{i=1}^{\infty} A_i$ an infinite series. If $\lim_{n\to\infty} S_n = S$, then the series is said to converge and its sum is defined to be $S$. If $\{S_n\}$ does not converge, the series is said to diverge, and the sum is not defined. It is important to note that we can often show that a series converges without being able to find the limit. For example, we could investigate the (real) infinite series

$$\sum_{n=0}^{\infty}\frac{x^n}{n!}$$

or the sequence of partial sums

$$S_n = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!}$$

and deduce that it converges. Then we could define a new function by

$$e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}.$$
All of the common properties of the exponential can be deduced from this series. While this approach is not commonly used in calculus for the real exponential, it is often used to define the exponential of a complex number. We will take this approach to define the exponential of a matrix and then deduce some of its properties through the limiting process. The series of interest is

$$I + A + \frac{A^2}{2!} + \cdots. \tag{4.1}$$
THEOREM 4.2
The series (4.1) is convergent.
Proof. The $n$th partial sum is $S_n = \sum_{k=0}^{n} A^k/k!$, so

$$\|S_n - S_m\| = \left\|\sum_{k=0}^{n}\frac{A^k}{k!} - \sum_{k=0}^{m}\frac{A^k}{k!}\right\| = \left\|\sum_{k=m+1}^{n}\frac{A^k}{k!}\right\|,$$

where we have chosen labeling so that $n > m$. By properties (3) and (4) of the norm, the last quantity above is

$$\le \sum_{k=m+1}^{n}\frac{\|A^k\|}{k!} \le \sum_{k=m+1}^{n}\frac{\|A\|^k}{k!}.$$

Since $\|A\|$ is a real number, the right-hand side is a part of the convergent series of real numbers

$$\sum_{k=0}^{\infty}\frac{\|A\|^k}{k!}. \tag{4.2}$$

Hence, since (4.2) is convergent, if $\varepsilon > 0$, there is an $N$ such that for $n > m > N$,

$$\sum_{k=m+1}^{n}\frac{\|A\|^k}{k!} < \varepsilon.$$

This is sufficient to prove that $\{S_n\}$ is convergent. It has a limit, by Theorem 4.1.

The sum of this series is denoted $e^A$, that is,

$$e^A = \sum_{k=0}^{\infty}\frac{A^k}{k!}. \tag{4.3}$$

Similarly, for a real number $t$,

$$e^{At} = \sum_{k=0}^{\infty}\frac{A^kt^k}{k!}.$$
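The definition is directly computable. The sketch below (not from the text) sums the series truncated at $N$ terms and compares the result with SciPy's matrix exponential.

    import numpy as np
    from scipy.linalg import expm

    def exp_series(A, N=30):
        term = np.eye(A.shape[0])         # A^0 / 0! = I
        total = term.copy()
        for k in range(1, N + 1):
            term = term @ A / k           # A^k/k! from A^{k-1}/(k-1)!
            total += term
        return total

    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    print(exp_series(A))
    print(expm(A))    # agrees to machine precision for modest ||A||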
Since each entry of $e^{At}$ is defined as a convergent power series, it is differentiable (and hence continuous and integrable) and may be differentiated term by term. The $p$th term of the series is $A^pt^p/p!$. Hence, for $p \ge 1$ (for $p = 0$, $(d/dt)I = 0$; the derivative of a constant matrix is the null matrix), making use of (2.5),

$$\frac{d}{dt}\left(\frac{A^pt^p}{p!}\right) = \frac{A^p\,pt^{p-1}}{p!} = A\,\frac{A^{p-1}t^{p-1}}{(p-1)!}.$$

Hence,

$$(e^{At})' = \sum_{k=1}^{\infty}\frac{A^kt^{k-1}}{(k-1)!} = A\sum_{k=0}^{\infty}\frac{A^kt^k}{k!} = Ae^{At}.$$

THEOREM 4.3

$$(e^{At})' = Ae^{At} = e^{At}A.$$
A matrix $A$ is said to be similar to a matrix $B$ if there exists a nonsingular matrix $T$ such that $T^{-1}AT = B$. Similar matrices are related in ways that are useful to us in the study of differential equations. Here we note one such property involving the exponential of a matrix of the form $e^{TAT^{-1}}$. Since

$$(TAT^{-1})^n = \underbrace{(TAT^{-1})(TAT^{-1})\cdots(TAT^{-1})}_{n\text{ times}},$$

we have $(TAT^{-1})^n = TA^nT^{-1}$. If $S_n$ is the $n$th partial sum of $e^{TAT^{-1}}$, that is,

$$S_n = \sum_{k=0}^{n}\frac{(TAT^{-1})^k}{k!},$$

then

$$S_n = T\left(\sum_{k=0}^{n}\frac{A^k}{k!}\right)T^{-1}.$$

From this it follows that

$$\lim_{n\to\infty} S_n = Te^AT^{-1},$$

that is,

$$e^{TAT^{-1}} = Te^AT^{-1}. \tag{4.4}$$
We need an additional fact about $e^A$, whose proof we defer to the exercises.

THEOREM 4.4

$\det e^M \ne 0$ for any matrix $M$; that is, $e^M$ is always nonsingular.
EXERCISES
1. Let $x = \begin{bmatrix} x_1\\ x_2 \end{bmatrix}$ be a vector in $\mathbb{R}^2$. Describe (geometrically) the set of points $x$ such that $\|x\| = 1$ and such that $\|x\| < 1$.
2. If $A$ is an $n \times n$ matrix and $x$ is an $n$-vector, show that $\|Ax\| \le \|A\|\,\|x\|$. (Hint: Use the definition of multiplication.)
3. Compute $e^A$ by summing (4.3) when $A =$

(a) $\begin{bmatrix} 1 & 0\\ 0 & 2 \end{bmatrix}$  (b) $\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 3 \end{bmatrix}$  (c) $\begin{bmatrix} 2 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 2 \end{bmatrix}$

4. Show that if

$$A = \begin{bmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3 \end{bmatrix}, \quad\text{then}\quad e^A = \begin{bmatrix} e^{\lambda_1} & 0 & 0\\ 0 & e^{\lambda_2} & 0\\ 0 & 0 & e^{\lambda_3} \end{bmatrix}.$$

5. Let $A = \begin{bmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{bmatrix}$. Compute $A^2$, $A^3$, and $A^4$.
6. Find $e^A$ where $A$ is as in Exercise 5.
7. If $A$ and $B$ are $n \times n$ matrices such that $AB = BA$, show that $e^Ae^B = e^{A+B}$.
8. Combine Exercises 4 and 7 and the definition (4.3) to find $e^A$ where $A =$

(a) $\begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}$  (b) $\begin{bmatrix} 2 & 1 & 0\\ 0 & 2 & 0\\ 0 & 0 & 1 \end{bmatrix}$  (c) $\begin{bmatrix} 1 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$

9. Show that $(e^A)^{-1} = e^{-A}$. (Hint: See Exercise 7.)
10. Show that $\det e^M \ne 0$, for any $M$. (Hint: $e^Me^{-M} = I$, by Exercise 9.)
11. Let $A(t)$ be a continuous matrix defined on an interval $I$. Show that the scalar function $\|A(t)\|$ is a continuous function.
12. Let $v = (v_1, v_2, \ldots, v_n)^T$ be a vector in $\mathbb{R}^n$. Let $N(v) = \max_i\{|v_1|, |v_2|, \ldots, |v_n|\}$. Show that $N(v)$ satisfies properties (1), (2), and (3) of the listed properties for the norm of a matrix.
13. Let $N(v)$ be as in Exercise 12. Show that

$$N(v) \le \|v\| \le nN(v).$$

14. Let $A(t)$ be a continuous matrix defined on an interval $[a, b]$. Show that

$$\left\|\int_a^b A(t)\,dt\right\| \le \int_a^b \|A(t)\|\,dt.$$

(Hint: Use the definition of $\|\cdot\|$ and a similar property of absolute value for scalar functions.)
5. The Constant Coefficient Case: Real and Distinct Eigenvalues
The theory developed in Section 3 exhibited the rich structure of linear
systems of differential equations. Unfortunately, that theory gave no clues as to
how we construct the fundamental matrix on which that theory depends. This is
not unexpected since, even for the simple scalar equation $y'' + a(t)y = 0$, there is
no "formula" for the general solution. The class of equations that we can actually
solve is far smaller than that to which the theory of Section 3 applies. However,
using the material developed in Section 4, it is possible to construct solutions to
those systems where the coefficient matrix is constant, that is, systems of the form
$$y' = Ay,$$

where $A$ is a constant matrix. The amount of work required to do this is somewhat more than in the case of scalar equations that the reader may have encountered previously, but the added difficulties are those of linear algebra rather than of differential equations. Several new concepts from linear algebra appear here, particularly the important concepts of eigenvalue and eigenvector. The first theorem shows the importance of the concept of the exponential of a matrix, developed in Section 4. The remainder of this section is devoted to making this idea constructive.

THEOREM 5.1

Let $A$ be a constant matrix. A fundamental matrix $\Phi$ for

$$x' = Ax \tag{5.1}$$

is given by

$$\Phi(t) = e^{At}. \tag{5.2}$$

Proof. From Theorem 4.3, $(e^{At})' = Ae^{At}$, so $\Phi'(t) = A\Phi(t)$. Furthermore, $\det e^{At} \ne 0$ (Theorem 4.4), and thus $\Phi$ is a fundamental matrix.
The matrix $e^{At}$ is, however, not readily accessible, for since

$$e^{At} = I + At + \frac{A^2}{2!}t^2 + \cdots, \tag{5.3}$$

it is necessary to sum the series. There is one tractable case (see Exercise 3 of Section 4), the case where the matrix is diagonal, that is, $a_{ii} = \lambda_i$, $a_{ij} = 0$, $i \ne j$, or

$$A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0\\ 0 & \lambda_2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}, \tag{5.4}$$

for then $A^2$ is

$$A^2 = \begin{bmatrix} \lambda_1^2 & 0 & \cdots & 0\\ 0 & \lambda_2^2 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda_n^2 \end{bmatrix},$$
$A^3$ has $\lambda_i^3$ on the diagonal, and so on. The series (4.3) may be summed to obtain

$$e^{At} = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0\\ 0 & e^{\lambda_2 t} & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix}.$$

This is not surprising, of course, since the system (5.1) in this case is

$$\begin{aligned}
x_1' &= \lambda_1 x_1\\
x_2' &= \lambda_2 x_2\\
&\ \vdots\\
x_n' &= \lambda_n x_n.
\end{aligned}$$

The system is uncoupled (each equation does not involve any other) and each equation may be solved directly to yield $x_i(t) = e^{\lambda_i t}x_i(0)$.
If $A$ is not diagonal, it still may be the case that there is a matrix $T$ such that $TAT^{-1}$ has the form (5.4). (Recall that this means that $A$ is similar to a diagonal matrix.) From (4.4) it follows that

$$Te^{At}T^{-1} = e^{TAT^{-1}t}. \tag{5.5}$$

Thus, if we can find a matrix $T$ and the $\lambda$'s resulting after the transformation, we can, of course, recover $e^{At}$. Saying this another way, let $B = TAT^{-1}$ and have the form (5.4). Then the solution of

$$y' = By$$

can be found, since $e^{Bt}$ can be computed. From $e^{At} = T^{-1}e^{Bt}T$, we have $e^{At}$.

Rather than attempt to find the $T$, we reason as follows: $e^{Bt}$ has entries $e^{\lambda_i t}$. Premultiplication (multiplication on the left) by $T^{-1}$ and postmultiplication (multiplication on the right) by $T$ rearranges and combines these. Suppose we disregard $B$ and simply look for a solution of (5.1) of the form

$$y = \begin{bmatrix} c_1e^{\lambda_i t}\\ c_2e^{\lambda_i t}\\ \vdots\\ c_ne^{\lambda_i t} \end{bmatrix} = e^{\lambda_i t}\begin{bmatrix} c_1\\ \vdots\\ c_n \end{bmatrix} = e^{\lambda_i t}c,$$
where $\lambda_i$ is one of the diagonal elements of $B = TAT^{-1}$. Then

$$y' = \begin{bmatrix} \lambda_ic_1e^{\lambda_i t}\\ \vdots\\ \lambda_ic_ne^{\lambda_i t} \end{bmatrix} = \lambda_i y,$$

or

$$\lambda_i y = Ay,$$

which we write as

$$(A - \lambda_i I)y = 0. \tag{5.6}$$

If $y$ is not to be identically zero, then $A - \lambda_i I$ must be singular (Theorem 2.5). The (complex) numbers $\lambda$ such that

$$\det(A - \lambda I) = 0$$

are called the eigenvalues of the matrix $A$. Vectors $c$, not identically zero, such that

$$(A - \lambda I)c = 0,$$

are called eigenvectors. Equation (5.6) says that $\lambda_i$ must be an eigenvalue of $A$, and substitution for $y$ gives

$$(A - \lambda_i I)e^{\lambda_i t}c = 0,$$

or

$$(A - \lambda_i I)c = 0, \tag{5.7}$$

since $e^{\lambda_i t} \ne 0$. Equation (5.7) says that $c$ must be an eigenvector of $A$. Since the $\lambda_i$ are fixed, that is, they are the diagonal elements of $B$, it would seem that we have no hope of satisfying (5.6). (Equation (5.7) can be satisfied, since the constant vector $c$ has been arbitrary up to now.) This matter is resolved in the following.
THEOREM 5.2
If $TAT^{-1} = B$, then $A$ and $B$ have the same eigenvalues.
Proof. $B - \lambda I = TAT^{-1} - \lambda TT^{-1} = T(A - \lambda I)T^{-1}$. Thus, using Theorem 2.3,

$$\det(B - \lambda I) = (\det T)(\det(A - \lambda I))(\det T^{-1}).$$

Now both $\det T$ and $\det T^{-1}$ are $\ne 0$, since $T^{-1}$ exists. Thus, $\det(B - \lambda I) = 0$ if and only if $\det(A - \lambda I) = 0$, or $A$ and $B$ have the same eigenvalues.
Note that for a diagonal matrix $B$, the eigenvalues are the $n$ diagonal entries. Clearly, $\det(B - \lambda I)$ is a polynomial of degree $n$ in $\lambda$; hence, so is $\det(A - \lambda I)$. This polynomial is called the characteristic polynomial.

Thus, if $\lambda$ is an eigenvalue of $A$ and if $c$ is chosen as an eigenvector, $e^{\lambda t}c$ is a solution. The following theorem summarizes this argument.

THEOREM 5.3

If $A$ is a constant matrix, $\lambda$ an eigenvalue of $A$, and $c$ a corresponding eigenvector, then $y = e^{\lambda t}c$ is a solution of (5.1).
Since $A$ has $n$ eigenvalues, we can find $n$ such solutions, and it would seem then that we have found the columns for a fundamental matrix. The difficulty, however, is that the eigenvalues are not necessarily distinct and the eigenvectors corresponding to a repeated eigenvalue may not be linearly independent. (Eigenvectors corresponding to distinct eigenvalues are always linearly independent.) If this occurs, we have not found $n$ linearly independent column vectors to make a fundamental matrix. The analysis in this case is a good bit more complicated, and we defer it for the moment. However, it is the case that if all of the eigenvalues of $A$ are distinct, then $A$ is similar to a diagonal matrix, so the $n$ solutions obtained actually are linearly independent, and a fundamental matrix has been found.

THEOREM 5.4

Let $A$ be a constant $n \times n$ matrix with distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ and let $c_1, \ldots, c_n$ be corresponding eigenvectors. Then a fundamental matrix for (5.1) is given by

$$\Phi(t) = [e^{\lambda_1 t}c_1, e^{\lambda_2 t}c_2, \ldots, e^{\lambda_n t}c_n].$$
We illustrate the foregoing analysis with some examples, where the $\lambda_i$'s are real numbers. First, consider the system

$$\begin{aligned}
x_1' &= 2x_1 - x_2\\
x_2' &= 4x_2\\
x_3' &= 2x_1 + 5x_2 + 3x_3,
\end{aligned}$$

or, in matrix form,

$$\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}' = \begin{bmatrix} 2 & -1 & 0\\ 0 & 4 & 0\\ 2 & 5 & 3 \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}. \tag{5.8}$$

The eigenvalues are the solutions of

$$\det\left(\begin{bmatrix} 2 & -1 & 0\\ 0 & 4 & 0\\ 2 & 5 & 3 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}\right) = 0$$

or

$$\det\begin{bmatrix} 2-\lambda & -1 & 0\\ 0 & 4-\lambda & 0\\ 2 & 5 & 3-\lambda \end{bmatrix} = 0.$$

Expansion gives

$$(2-\lambda)\det\begin{bmatrix} 4-\lambda & 0\\ 5 & 3-\lambda \end{bmatrix} + 2\det\begin{bmatrix} -1 & 0\\ 4-\lambda & 0 \end{bmatrix} = 0$$

or

$$(2-\lambda)(4-\lambda)(3-\lambda) = 0.$$

Thus, the eigenvalues are $\lambda_1 = 2$, $\lambda_2 = 3$, and $\lambda_3 = 4$, all distinct. Hence, if we can find the eigenvectors, we can find a fundamental matrix.

An eigenvector can be determined by solving

$$\begin{bmatrix} 2-\lambda & -1 & 0\\ 0 & 4-\lambda & 0\\ 2 & 5 & 3-\lambda \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = 0$$

for $\lambda = 2, 3, 4$. If $\lambda = 2$, this becomes

$$\begin{bmatrix} 0 & -1 & 0\\ 0 & 2 & 0\\ 2 & 5 & 1 \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = 0.$$

This yields $c_2 = 0$ and

$$2c_1 + c_3 = 0.$$

Thus, an eigenvector (clearly there are infinitely many, since a constant multiple of an eigenvector satisfies the defining equation) is

$$c = \begin{bmatrix} 1\\ 0\\ -2 \end{bmatrix}.$$

For $\lambda = 3$, the system becomes

$$\begin{bmatrix} -1 & -1 & 0\\ 0 & 1 & 0\\ 2 & 5 & 0 \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = 0,$$

or

$$\begin{aligned}
c_1 + c_2 &= 0\\
c_2 &= 0\\
2c_1 + 5c_2 &= 0.
\end{aligned}$$

Here $c_1$ and $c_2$ are zero, and since $c_3$ does not appear in the equations, it may be chosen arbitrarily. Hence, an eigenvector is

$$c = \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}.$$

Finally, for $\lambda = 4$, the equations are

$$\begin{aligned}
2c_1 + c_2 &= 0\\
2c_1 + 5c_2 - c_3 &= 0.
\end{aligned}$$

Setting $c_2 = 2$ (arbitrarily), $c_1 = -1$ and $c_3 = 2c_1 + 5c_2 = -2 + 10 = 8$. Thus, a final eigenvector is

$$c = \begin{bmatrix} -1\\ 2\\ 8 \end{bmatrix}.$$

Three linearly independent solutions are

$$\begin{bmatrix} 1\\ 0\\ -2 \end{bmatrix}e^{2t}, \quad \begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}e^{3t}, \quad \begin{bmatrix} -1\\ 2\\ 8 \end{bmatrix}e^{4t},$$

and a fundamental matrix corresponding to (5.8) is

$$\Phi(t) = \begin{bmatrix} e^{2t} & 0 & -e^{4t}\\ 0 & 0 & 2e^{4t}\\ -2e^{2t} & e^{3t} & 8e^{4t} \end{bmatrix}. \tag{5.9}$$

We now have $\det\Phi(t) = -e^{3t}(e^{2t})(2e^{4t}) = -2e^{9t}$, which, as it must be, is $\ne 0$.
As another example, consider the system

$$\begin{aligned}
x_1' &= x_1 + x_3\\
x_2' &= x_1 + 2x_2 + 3x_3\\
x_3' &= 3x_3,
\end{aligned}$$

which, in matrix form, is

$$x' = \begin{bmatrix} 1 & 0 & 1\\ 1 & 2 & 3\\ 0 & 0 & 3 \end{bmatrix}x,$$

with $x = \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}$. The eigenvalues are the solutions of

$$\det(A - \lambda I) = \det\begin{bmatrix} 1-\lambda & 0 & 1\\ 1 & 2-\lambda & 3\\ 0 & 0 & 3-\lambda \end{bmatrix} = (1-\lambda)(2-\lambda)(3-\lambda) = 0,$$

or $\lambda_1 = 1$, $\lambda_2 = 2$, $\lambda_3 = 3$. For $\lambda = \lambda_1 = 1$ an eigenvector is the solution of

$$(A - I)c = \begin{bmatrix} 0 & 0 & 1\\ 1 & 1 & 3\\ 0 & 0 & 2 \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = 0.$$

In equation form, this is

$$\begin{aligned}
c_3 &= 0\\
c_1 + c_2 + 3c_3 &= 0\\
2c_3 &= 0.
\end{aligned}$$

A solution of this system of linear equations is given by $c_1 = 1$, $c_2 = -1$, $c_3 = 0$, and one solution of the system of differential equations takes the form

$$x_1(t) = e^t\begin{bmatrix} 1\\ -1\\ 0 \end{bmatrix}.$$

For $\lambda = \lambda_2 = 2$, an eigenvector is the solution of

$$(A - 2I)c = \begin{bmatrix} -1 & 0 & 1\\ 1 & 0 & 3\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = 0,$$

or

$$\begin{aligned}
-c_1 + c_3 &= 0\\
c_1 + 3c_3 &= 0\\
c_3 &= 0.
\end{aligned}$$

One solution of these equations is $c_1 = c_3 = 0$, $c_2 = 1$, yielding a solution to the differential equation

$$x_2(t) = e^{2t}\begin{bmatrix} 0\\ 1\\ 0 \end{bmatrix}.$$

Finally, for $\lambda = \lambda_3 = 3$, an eigenvector is a solution of

$$(A - 3I)c = \begin{bmatrix} -2 & 0 & 1\\ 1 & -1 & 3\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = 0,$$

or

$$\begin{aligned}
-2c_1 + c_3 &= 0\\
c_1 - c_2 + 3c_3 &= 0.
\end{aligned}$$

Since $c_3 = 2c_1$, then

$$7c_1 = c_2.$$

Choosing (arbitrarily) $c_1 = 1$ yields $c_2 = 7$ and $c_3 = 2$. Hence, a third solution of the system of differential equations is given by

$$x_3(t) = e^{3t}\begin{bmatrix} 1\\ 7\\ 2 \end{bmatrix}.$$

Using these three (linearly independent) solutions, a fundamental matrix $\Phi$ takes the form

$$\Phi(t) = \begin{bmatrix} e^t & 0 & e^{3t}\\ -e^t & e^{2t} & 7e^{3t}\\ 0 & 0 & 2e^{3t} \end{bmatrix}.$$

Suppose that, in addition to solving the system of differential equations, we want the solution through the vector $\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$ at time $t = 0$. Since $\Phi(t)$ is a fundamental matrix, any solution takes the form $\Phi(t)c$, for some vector $c$. To fit the initial condition it is then necessary to solve

$$\Phi(0)c = \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}$$

for $c = \begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix}$. Thus, we must solve

$$\begin{bmatrix} 1 & 0 & 1\\ -1 & 1 & 7\\ 0 & 0 & 2 \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = \begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix}.$$

In equation form this is

$$\begin{aligned}
c_1 + c_3 &= 1\\
-c_1 + c_2 + 7c_3 &= 1\\
2c_3 &= 0,
\end{aligned}$$

or

$$c_3 = 0, \quad c_1 = 1, \quad c_2 = 2.$$

The desired solution, $y(t)$, is then given by

$$y(t) = \begin{bmatrix} e^t & 0 & e^{3t}\\ -e^t & e^{2t} & 7e^{3t}\\ 0 & 0 & 2e^{3t} \end{bmatrix}\begin{bmatrix} 1\\ 2\\ 0 \end{bmatrix}.$$
Finally, consider the system

$$\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}' = \begin{bmatrix} 1 & 3 & -2\\ 0 & 1 & 4\\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix}.$$

The eigenvalues are the roots of

$$\det\begin{bmatrix} 1-\lambda & 3 & -2\\ 0 & 1-\lambda & 4\\ 0 & 0 & 1-\lambda \end{bmatrix} = 0$$

or

$$(1 - \lambda)(1 - \lambda)(1 - \lambda) = 0;$$

that is, $\lambda = 1$ is a triple root. An eigenvector can be found by solving

$$\begin{bmatrix} 0 & 3 & -2\\ 0 & 0 & 4\\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} c_1\\ c_2\\ c_3 \end{bmatrix} = 0.$$

An eigenvector is

$$c = \begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix},$$

and one solution is

$$x = \begin{bmatrix} e^t\\ 0\\ 0 \end{bmatrix}.$$

To find two additional linearly independent solutions requires additional analysis, which is presented in Section 7. It should be noted here that a theorem from linear algebra states that if the matrix $A$ is similar to a diagonal matrix, there will always be enough (that is, $n$) linearly independent eigenvectors to successfully carry out the procedure for finding a fundamental matrix. Two particular cases are worthy of note. For each eigenvalue there is always one nontrivial eigenvector, so the procedure above will work (as noted above) if there are $n$ distinct eigenvalues. If the matrix $A$ satisfies the additional property that $a_{ij} = a_{ji}$—such matrices are said to be symmetric—then $A$ is similar to a diagonal matrix and there will be "enough" eigenvectors to find a fundamental matrix.
EXERCISES

1. Compute eigenvalues of the following matrices:

(a) $\begin{bmatrix} 1 & 2\\ 0 & 2 \end{bmatrix}$  (b) $\begin{bmatrix} 5 & 1\\ -2 & 8 \end{bmatrix}$  (c) $\begin{bmatrix} -1 & 2\\ 2 & -1 \end{bmatrix}$  (d) $\begin{bmatrix} 3 & 1\\ 1 & 3 \end{bmatrix}$  (e) $\begin{bmatrix} -1 & -1 & 0\\ 0 & -2 & 0\\ 0 & 0 & 3 \end{bmatrix}$  (f) $\begin{bmatrix} 2 & 1 & 0\\ 0 & 2 & 0\\ 0 & 0 & 3 \end{bmatrix}$

2. Find eigenvectors corresponding to the eigenvalues found in Exercise 1.
3. Find a fundamental matrix for $x' = Ax$ for each $A$ given in Exercise 1.
4. Find the solution of $x' = Ax$, $x(0) = \begin{bmatrix} 3\\ 1 \end{bmatrix}$, where $A$ is given in Exercise 1(a)-(d).
5. Find the solution of $x' = Ax$, $x(0) = \begin{bmatrix} 3\\ 1\\ 1 \end{bmatrix}$, where $A$ is given in Exercise 1(f).
6. Show that the matrices $\begin{bmatrix} 1 & 1\\ 0 & 3 \end{bmatrix}$ and $\begin{bmatrix} 3 & 0\\ 1 & 1 \end{bmatrix}$ are similar. (Hint: Let $T = [t_{ij}]$ and try to solve $AT = TB$.)
7. Let $A = \begin{bmatrix} 5 & 1\\ -2 & 8 \end{bmatrix}$. Let $t_1$ and $t_2$ be the eigenvectors found in Exercise 2(b). Define $T = [t_1, t_2]$, a $2 \times 2$ matrix. Compute $T^{-1}AT$ and $TAT^{-1}$.
8. Let $A = \begin{bmatrix} 3 & 1\\ 1 & 3 \end{bmatrix}$. Find two linearly independent eigenvectors and repeat Exercise 7.
9. For $A$ given in equation (5.8), (5.9) is not $e^{At}$. (Why?) Find a matrix $C$ such that $\Phi(t)C = e^{At}$. (Hint: It is sufficient that $\Phi(0)C = I$.)
10. Show that eigenvectors corresponding to distinct eigenvalues of a matrix $A$ are linearly independent. (Hint: Suppose eigenvectors $u_1$ and $u_2$ correspond to eigenvalues $\lambda_1$, $\lambda_2$, $\lambda_1 \ne \lambda_2$, and that $c_1u_1 + c_2u_2 = 0$. Apply $A$ to both sides.)
6. The Constant Coefficient Case: Complex and Distinct Eigenvalues
Nowhere in the development of the theory in Section 5 was any explicit use made of the assumption that the eigenvalues of the matrix $A$ were real numbers. If some of the eigenvalues $\lambda_i$ turn out to be complex numbers, then the corresponding eigenvectors, $c_i$, will contain complex entries, but $e^{\lambda_i t}c_i$ will still be a solution. For most problems with real coefficients, we are interested in having real-valued solutions. Since all initial conditions can be satisfied, given a fundamental matrix $\Phi$, real solutions are of the form $\Phi(t)c$, where $\Phi(t)$ and $c$ may have complex entries. Representing a real vector as the product of a matrix with complex entries and a constant vector with complex entries is, at least, inelegant and frequently may be awkward. For this reason we seek a way to find a real fundamental matrix. That this can always be done is a consequence of the following theorem.
THEOREM 6.1
If $\varphi(t)$ is a solution of

$$x' = Ax, \tag{6.1}$$

where $A$ is a constant matrix with real-valued entries, then the real part of $\varphi(t)$ (written $\operatorname{Re}\varphi(t)$) and the imaginary part of $\varphi(t)$ (written $\operatorname{Im}\varphi(t)$) are both solutions of (6.1).

Proof. The complex-valued function $\varphi(t)$ can be written as

$$\varphi(t) = u(t) + iv(t),$$

where $u(t)$ and $v(t)$ are real-valued functions ($u(t) = \operatorname{Re}\varphi(t)$, $v(t) = \operatorname{Im}\varphi(t)$). Since $\varphi(t)$ is a solution of (6.1),

$$(u(t) + iv(t))' = A(u(t) + iv(t)),$$

or, using the distributive law for matrices, and the fact that differentiation is linear,

$$u'(t) + iv'(t) = Au(t) + iAv(t).$$

$Au(t)$, $Av(t)$, $u'(t)$, and $v'(t)$ are real-valued vectors, and two complex vectors can be equal if and only if the real parts and the imaginary parts are equal. Hence, it must be the case that

$$u'(t) = Au(t), \quad \text{all } t,$$

and

$$v'(t) = Av(t), \quad \text{all } t.$$

Thus, $u(t)$ and $v(t)$ solve (6.1).
Returning to our original discussion, if $\lambda_i$ is a complex eigenvalue of $A$ and $c_i$ is the corresponding complex eigenvector, then $\operatorname{Re}(e^{\lambda_i t}c_i)$ and $\operatorname{Im}(e^{\lambda_i t}c_i)$ are solutions. Making use of Euler's formula,

$$e^{i\theta} = \cos\theta + i\sin\theta, \qquad (6.2)$$

we can be more explicit; let $\lambda_i = \alpha + i\beta$ and $c = a + ib$, with $\alpha$, $\beta$ real numbers and $a$, $b$ real vectors. Then

$$\operatorname{Re}\left[e^{(\alpha+i\beta)t}(a + ib)\right] = e^{\alpha t}\left[(\cos\beta t)a - (\sin\beta t)b\right] \qquad (6.3)$$

and

$$\operatorname{Im}\left[e^{(\alpha+i\beta)t}(a + ib)\right] = e^{\alpha t}\left[(\sin\beta t)a + (\cos\beta t)b\right]. \qquad (6.4)$$
It is not difficult to show that these two vectors are linearly independent.
At first glance it would seem that from one solution, two linearly independent solutions have been created. This is not the case, and we explore this point in somewhat more detail. First of all, since the matrix $A$ has real coefficients, $\det(A - \lambda I) = p(\lambda)$ is a polynomial with real coefficients. Let

$$p(\lambda) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_n.$$

Let $\lambda$ be any complex number. Then (denoting the complex conjugate of a complex number $z$ by $\bar z$),

$$\overline{p(\lambda)} = \overline{\lambda^n + a_1\lambda^{n-1} + \cdots + a_n} = \bar\lambda^n + a_1\bar\lambda^{n-1} + \cdots + a_n,$$

since $\bar a_i = a_i$ and $\overline{\lambda^n} = \bar\lambda^n$. Thus, $p(\bar\lambda) = \overline{p(\lambda)}$. If $\lambda_i$ is a complex eigenvalue, then $p(\lambda_i) = 0$, which implies that $p(\bar\lambda_i) = 0$, or that $\bar\lambda_i$ is an eigenvalue. Thus, complex eigenvalues occur as complex-conjugate pairs.

Now let $c_i$ be an eigenvector corresponding to the complex eigenvalue $\lambda_i$. Then, since $(A - \lambda_i I)c_i = 0$,

$$\overline{(A - \lambda_i I)c_i} = 0 \quad\text{or}\quad (A - \bar\lambda_i I)\bar c_i = 0.$$

Since $\bar\lambda_i$ is an eigenvalue of $A$, $\bar c_i$ is an eigenvector. Thus, knowing a complex eigenvalue $\lambda$ and its corresponding eigenvector $c$ lets us determine a second eigenvalue-eigenvector pair, $\bar\lambda$, $\bar c$. In effect, taking the real and imaginary parts of a complex solution $e^{\lambda_i t}c_i$ amounts to using both $\lambda_i$ and $\bar\lambda_i$, and $c_i$ and $\bar c_i$, to find two real (linearly independent) solutions.
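The decomposition (6.3)-(6.4) is easy to check numerically. The following sketch is an added illustration, not part of the text; it takes one complex eigenpair of a sample matrix and verifies by finite differences that the two real vectors built from it solve $x' = Ax$.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])         # sample matrix; eigenvalues +-i

lam, C = np.linalg.eig(A)
k = np.argmax(lam.imag)             # eigenvalue with positive imaginary part
alpha, beta = lam[k].real, lam[k].imag
a, b = C[:, k].real, C[:, k].imag   # eigenvector c = a + ib

def real_solutions(t):
    # Equations (6.3) and (6.4): real and imaginary parts of e^{(alpha+i beta)t}(a+ib).
    u = np.exp(alpha * t) * (np.cos(beta * t) * a - np.sin(beta * t) * b)
    v = np.exp(alpha * t) * (np.sin(beta * t) * a + np.cos(beta * t) * b)
    return u, v

# Check x' = Ax with a centered finite difference at t = 0.5.
t, h = 0.5, 1e-6
u, v = real_solutions(t)
du = (real_solutions(t + h)[0] - real_solutions(t - h)[0]) / (2 * h)
dv = (real_solutions(t + h)[1] - real_solutions(t - h)[1]) / (2 * h)
np.testing.assert_allclose(du, A @ u, atol=1e-5)
np.testing.assert_allclose(dv, A @ v, atol=1e-5)
```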
We illustrate the procedure with some examples. Consider the system

$$x' = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}x.$$

Then $\det(A - \lambda I) = \det\begin{pmatrix} -\lambda & -1 \\ 1 & -\lambda \end{pmatrix} = 0$ yields $\lambda^2 + 1 = 0$ or $\lambda = \pm i$. Fix $\lambda = i$. To find an eigenvector, it is necessary to solve

$$\begin{pmatrix} -i & -1 \\ 1 & -i \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = 0,$$

or

$$-ic_1 - c_2 = 0, \qquad c_1 - ic_2 = 0.$$

Then $c_1 = 1$, $c_2 = -i$ is a nontrivial solution to this linear system of equations, so a solution vector is given by

$$\varphi(t) = e^{it}\begin{pmatrix} 1 \\ -i \end{pmatrix}.$$

Making use of Euler's formula, $e^{i\theta} = \cos\theta + i\sin\theta$,

$$\varphi(t) = (\cos t + i\sin t)\begin{pmatrix} 1 \\ -i \end{pmatrix} = \begin{pmatrix} \cos t \\ \sin t \end{pmatrix} + i\begin{pmatrix} \sin t \\ -\cos t \end{pmatrix}.$$

Thus, two solutions are given by

$$\operatorname{Re}\varphi(t) = \begin{pmatrix} \cos t \\ \sin t \end{pmatrix} \quad\text{and}\quad \operatorname{Im}\varphi(t) = \begin{pmatrix} \sin t \\ -\cos t \end{pmatrix},$$

and

$$\Phi(t) = \begin{pmatrix} \cos t & \sin t \\ \sin t & -\cos t \end{pmatrix}$$

is a fundamental matrix with real entries.
Consider now the system

$$x' = \begin{pmatrix} 1 & 1 & 0 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}x.$$

Then $\det(A - \lambda I) = (1-\lambda)\left((1-\lambda)^2 + 1\right)$, or the eigenvalues are $\lambda = 1$ and $\lambda = 1 \pm i$. Fixing $\lambda = 1$, we seek a nontrivial solution of

$$\begin{pmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = 0.$$

Thus, $c_2 = 0$, $c_1 = 0$, and $c_3$ is arbitrary, or $(0, 0, 1)^T$ is an eigenvector and $e^t(0, 0, 1)^T$ is a solution. Fixing $\lambda = 1 + i$, it is necessary to solve

$$\begin{pmatrix} -i & 1 & 0 \\ -1 & -i & 0 \\ 1 & 0 & -i \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \end{pmatrix} = 0.$$

This requires

$$-c_1 i + c_2 = 0, \qquad -c_1 - c_2 i = 0, \qquad c_1 - c_3 i = 0.$$

Setting $c_1 = 1$ (arbitrarily) produces $c_2 = i$ and $c_3 = -i$; or, we can conclude that

$$\varphi(t) = e^t(\cos t + i\sin t)\begin{pmatrix} 1 \\ i \\ -i \end{pmatrix}$$

is a solution. To find two real solutions it is only necessary to decompose this solution into its real and imaginary parts. A straightforward computation shows that

$$\varphi(t) = e^t\begin{pmatrix} \cos t \\ -\sin t \\ \sin t \end{pmatrix} + ie^t\begin{pmatrix} \sin t \\ \cos t \\ -\cos t \end{pmatrix}.$$

The two desired real solutions are

$$e^t\begin{pmatrix} \cos t \\ -\sin t \\ \sin t \end{pmatrix} \quad\text{and}\quad e^t\begin{pmatrix} \sin t \\ \cos t \\ -\cos t \end{pmatrix},$$

and

$$\Phi(t) = \begin{pmatrix} 0 & e^t\cos t & e^t\sin t \\ 0 & -e^t\sin t & e^t\cos t \\ e^t & e^t\sin t & -e^t\cos t \end{pmatrix}$$

is a real fundamental matrix.
We conclude with one additional example with all complex eigenvalues. Consider the system

$$x' = \begin{pmatrix} 1 & 1 & 0 & 1 \\ -1 & 1 & 0 & 1 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & -1 & 2 \end{pmatrix}x.$$

The eigenvalues are the roots of

$$\det\begin{pmatrix} 1-\lambda & 1 & 0 & 1 \\ -1 & 1-\lambda & 0 & 1 \\ 0 & 0 & 2-\lambda & 1 \\ 0 & 0 & -1 & 2-\lambda \end{pmatrix} = 0.$$

Expanding this determinant yields the characteristic polynomial in the form

$$p(\lambda) = \left[(1-\lambda)^2 + 1\right]\left[(2-\lambda)^2 + 1\right] = 0.$$

Thus, the eigenvalues are $1 \pm i$ and $2 \pm i$. To find an eigenvector corresponding to $1 - i$, we must solve the system

$$\begin{pmatrix} i & 1 & 0 & 1 \\ -1 & i & 0 & 1 \\ 0 & 0 & 1+i & 1 \\ 0 & 0 & -1 & 1+i \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{pmatrix} = 0.$$

This is the same as

$$ic_1 + c_2 + c_4 = 0, \qquad -c_1 + ic_2 + c_4 = 0, \qquad (1+i)c_3 + c_4 = 0, \qquad -c_3 + (1+i)c_4 = 0.$$

A solution of this linear algebraic system is given by the vector $(1, -i, 0, 0)^T$, and a solution of the differential equation is

$$e^t\left[\cos t - i\sin t\right]\begin{pmatrix} 1 \\ -i \\ 0 \\ 0 \end{pmatrix}.$$

Taking the real and imaginary parts, we find two real solutions of the system,

$$e^t\begin{pmatrix} \sin t \\ \cos t \\ 0 \\ 0 \end{pmatrix} \quad\text{and}\quad e^t\begin{pmatrix} \cos t \\ -\sin t \\ 0 \\ 0 \end{pmatrix}.$$

Now let $\lambda = 2 - i$ and seek an eigenvector by solving the system

$$\begin{pmatrix} -1+i & 1 & 0 & 1 \\ -1 & -1+i & 0 & 1 \\ 0 & 0 & i & 1 \\ 0 & 0 & -1 & i \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{pmatrix} = 0.$$

This is the same as

$$(-1+i)c_1 + c_2 + c_4 = 0, \qquad -c_1 + (-1+i)c_2 + c_4 = 0, \qquad ic_3 + c_4 = 0, \qquad -c_3 + ic_4 = 0.$$

Set $c_4 = 1$, which makes $c_3 = i$. Thus, it is necessary to solve

$$(-1+i)c_1 + c_2 + 1 = 0, \qquad -c_1 + (-1+i)c_2 + 1 = 0$$

to obtain

$$c_1 = \frac{i-2}{2i-1} = \frac{4+3i}{5}, \qquad c_2 = \frac{2-i}{5}.$$

Thus, an eigenvector is given by

$$\begin{pmatrix} \tfrac{4+3i}{5} \\[2pt] \tfrac{2-i}{5} \\[2pt] i \\ 1 \end{pmatrix},$$

and a solution takes the form

$$e^{2t}\left[\cos t - i\sin t\right]\begin{pmatrix} \tfrac{4+3i}{5} \\[2pt] \tfrac{2-i}{5} \\[2pt] i \\ 1 \end{pmatrix}.$$

Collecting real and imaginary parts produces two solution vectors,

$$e^{2t}\begin{pmatrix} \tfrac15(4\cos t + 3\sin t) \\[2pt] \tfrac15(2\cos t - \sin t) \\[2pt] \sin t \\ \cos t \end{pmatrix} \quad\text{and}\quad e^{2t}\begin{pmatrix} \tfrac15(3\cos t - 4\sin t) \\[2pt] \tfrac15(-\cos t - 2\sin t) \\[2pt] \cos t \\ -\sin t \end{pmatrix}.$$

These four real solutions then form the columns of the fundamental matrix

$$\begin{pmatrix} e^t\sin t & e^t\cos t & e^{2t}\tfrac15(4\cos t + 3\sin t) & e^{2t}\tfrac15(3\cos t - 4\sin t) \\ e^t\cos t & -e^t\sin t & e^{2t}\tfrac15(2\cos t - \sin t) & e^{2t}\tfrac15(-\cos t - 2\sin t) \\ 0 & 0 & e^{2t}\sin t & e^{2t}\cos t \\ 0 & 0 & e^{2t}\cos t & -e^{2t}\sin t \end{pmatrix}.$$
EXERCISES

1. Apply the definition of the derivative to the function

$$f(t) = u(t) + iv(t)$$

to show that

$$f'(t) = u'(t) + iv'(t).$$

2. Given Euler's formula, $e^{i\theta} = \cos\theta + i\sin\theta$, use Exercise 1 to show that $(e^{it})' = ie^{it}$.

3. If $\lambda$ is a complex number and $n$ is an integer, show that $\overline{\lambda^n} = \bar\lambda^n$. If $\omega$ is also a complex number, show that $\overline{\omega\lambda} = \bar\omega\bar\lambda$.

4. Find the eigenvalues of the following matrices:

   (a)–(c) [matrices illegible in the source scan]

5. Find a fundamental matrix for $x' = Ax$ where $A$ is as given in Exercise 4.

6. Show that the vectors (6.3) and (6.4) are linearly independent.

7. Find the eigenvalues of the following matrices:

   (a)–(d) [matrices illegible in the source scan]

8. Find eigenvectors corresponding to the eigenvalues found in Exercise 7.

9. Find a fundamental matrix for $x' = Ax$ for each $A$ given in Exercise 7.

10. (a) Derive the Taylor series expansion for $f(\theta) = e^{i\theta}$. (Proceed exactly as you would for real functions, using $\frac{d}{d\theta}(e^{i\theta}) = ie^{i\theta}$.)
    (b) Rearrange the series in (a) into real and purely imaginary parts (each part will be a series).
    (c) Identify the series in (b) and deduce Euler's formula, $e^{i\theta} = \cos\theta + i\sin\theta$.
    (d) What do you need to know about the convergence of a series to perform the rearrangement in (b)?
7. The Constant Coefficient Case: The Putzer Algorithm

The analysis of the preceding section depended on finding either $n$ distinct eigenvalues or sufficient linearly independent eigenvectors when an eigenvalue corresponded to a repeated root of the characteristic polynomial. This is the case whenever the coefficient matrix $A$ is similar to a diagonal matrix, that is, whenever $A$ is "diagonalizable." When the matrix $A$ does not have this property, the computation of $e^{At}$ becomes difficult. To continue the approach that we have begun would require the introduction of more sophisticated linear algebra, not covered in the usual elementary course (the Jordan canonical form). For this reason we abandon the present approach and turn instead to the Putzer algorithm, a method for computing $e^{At}$ based on a relatively simple theorem, the Cayley-Hamilton theorem, which is traditionally a part of elementary courses in linear algebra.
Let $p(\lambda)$ be a polynomial, $p(\lambda) = a_0\lambda^n + a_1\lambda^{n-1} + \cdots + a_n$. Since powers of square matrices make sense, we can write a corresponding matrix polynomial,

$$p(A) = a_0A^n + a_1A^{n-1} + \cdots + a_nI$$

(where, as above, the $a_i$'s are scalars). The partial sums used in defining $e^A$ were such polynomials. For every choice of a matrix $A$, $p(A)$ is a well-defined matrix.
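Matrix polynomials are easy to evaluate numerically. The sketch below is an added illustration (the $2 \times 2$ matrix is a sample of my choosing); it uses Horner's rule to avoid forming high powers of $A$ explicitly, and evaluating the characteristic polynomial at $A$ itself previews the theorem that follows.

```python
import numpy as np

def matrix_poly(coeffs, A):
    """Evaluate p(A) = a0*A^n + a1*A^(n-1) + ... + an*I by Horner's rule.
    coeffs = [a0, a1, ..., an]."""
    n = A.shape[0]
    P = coeffs[0] * np.eye(n)
    for a in coeffs[1:]:
        P = P @ A + a * np.eye(n)
    return P

A = np.array([[3.0, -1.0],
              [1.0,  1.0]])
# Characteristic polynomial: det(A - lambda I) = lambda^2 - 4 lambda + 4.
print(matrix_poly([1.0, -4.0, 4.0], A))   # prints the 2 x 2 null matrix
```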
THEOREM (Cayley-Hamilton)

Let $A$ be an $n \times n$ matrix and let $p(\lambda) = \det(A - \lambda I)$. Then $p(A) = 0$.

The zero, of course, is the $n \times n$ null matrix. Armed with only this theorem, we can establish the following:
THEOREM 7.1 (Putzer)

Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. Then

$$e^{At} = \sum_{j=0}^{n-1} r_{j+1}(t)P_j \qquad (7.1)$$

where $P_0 = I$,

$$P_j = \prod_{k=1}^{j}(A - \lambda_k I), \quad j = 1, \ldots, n, \qquad (7.2)$$

and $r_1(t), \ldots, r_n(t)$ is the solution of the triangular system

$$r_1' = \lambda_1 r_1, \qquad r_j' = r_{j-1} + \lambda_j r_j, \quad j = 2, \ldots, n, \qquad (7.3)$$
$$r_1(0) = 1, \qquad r_j(0) = 0, \quad j = 2, \ldots, n.$$

Note first that each eigenvalue appears in the list repeated according to its multiplicity. Further, note that the order of the matrices is not crucial (the factors $A - \lambda_i I$ commute with one another), so for convenience in the computation we adopt the convention that $(A - \lambda_i I)$ precedes $(A - \lambda_j I)$ in the product if $i > j$. The system (7.3) can be solved recursively; if $r_1(t)$ is found, the equation for $r_2(t)$ is a forced first-order linear equation, the "forcing" being $r_1(t)$. This process can be continued until $r_1(t), \ldots, r_n(t)$ are found merely by solving first-order linear differential equations.
Proof. Let $\Phi(t) = \sum_{j=0}^{n-1} r_{j+1}(t)P_j$. The idea of the proof is to show that $\Phi$ satisfies $\Phi' = A\Phi$, $\Phi(0) = I$, so that $\Phi(t) = e^{At}$ by the uniqueness of solutions. For convenience, define $r_0(t) = 0$. Then

$$\Phi'(t) = \sum_{j=0}^{n-1}\left[\lambda_{j+1}r_{j+1}(t) + r_j(t)\right]P_j,$$

so that

$$\Phi'(t) - \lambda_n\Phi(t) = \sum_{j=0}^{n-1}\left(\lambda_{j+1}r_{j+1}(t) + r_j(t)\right)P_j - \lambda_n\sum_{j=0}^{n-1}r_{j+1}(t)P_j = \sum_{j=0}^{n-2}(\lambda_{j+1} - \lambda_n)r_{j+1}(t)P_j + \sum_{j=0}^{n-2}r_{j+1}(t)P_{j+1},$$

since the $j = n-1$ term of the first sum vanishes and the second sum may be reindexed. Since $P_{j+1} = (A - \lambda_{j+1}I)P_j$ by (7.2), the last line may be rewritten as

$$\Phi'(t) - \lambda_n\Phi(t) = \sum_{j=0}^{n-2}\left[(A - \lambda_{j+1}I)P_j + (\lambda_{j+1} - \lambda_n)P_j\right]r_{j+1}(t) = (A - \lambda_nI)\sum_{j=0}^{n-2}P_jr_{j+1}(t).$$

We manipulate this right-hand side so as to obtain the appropriate equation for $\Phi$. Since

$$\sum_{j=0}^{n-2}P_jr_{j+1}(t) = \Phi(t) - r_n(t)P_{n-1},$$

then

$$\Phi'(t) - \lambda_n\Phi(t) = (A - \lambda_nI)\Phi(t) - r_n(t)(A - \lambda_nI)P_{n-1} = (A - \lambda_nI)\Phi(t) - r_n(t)P_n.$$

The characteristic equation for $A$ may be written in factored form as

$$p(\lambda) = (\lambda - \lambda_n)(\lambda - \lambda_{n-1})\cdots(\lambda - \lambda_2)(\lambda - \lambda_1).$$

Since

$$P_n = (A - \lambda_nI)(A - \lambda_{n-1}I)\cdots(A - \lambda_1I) = p(A),$$

it follows by the Cayley-Hamilton theorem that $P_n = 0$ (the null matrix). Therefore, $\Phi(t)$ satisfies the differential equation

$$\Phi'(t) = A\Phi(t) \qquad (7.4)$$

and the initial condition

$$\Phi(0) = \sum_{j=0}^{n-1}r_{j+1}(0)P_j = r_1(0)I = I.$$

Hence, it follows by the uniqueness of solutions of (7.4) that $\Phi(t) = e^{At}$.
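Because each $r_j$ solves a first-order linear equation, the algorithm is easily mechanized. The sketch below is an added illustration using sympy (the function name and interface are my own); it computes $e^{At}$ from (7.1)-(7.3), with the eigenvalues supplied in the chosen order, repeated according to multiplicity.

```python
import sympy as sp

def putzer_expm(A, eigenvalues, t):
    """Compute e^{At} by the Putzer algorithm.
    `eigenvalues` lists each eigenvalue according to its multiplicity."""
    n = A.shape[0]
    I = sp.eye(n)
    # P_0 = I and P_j = (A - lambda_j I) P_{j-1}.
    P = [I]
    for lam in eigenvalues[:-1]:
        P.append((A - lam * I) * P[-1])
    # r_1' = lambda_1 r_1, r_1(0) = 1; r_j' = lambda_j r_j + r_{j-1}, r_j(0) = 0.
    s = sp.Symbol('s')
    r = [sp.exp(eigenvalues[0] * t)]
    for lam in eigenvalues[1:]:
        prev = r[-1].subs(t, s)
        r.append(sp.exp(lam * t) * sp.integrate(sp.exp(-lam * s) * prev, (s, 0, t)))
    return sp.simplify(sum((ri * Pi for ri, Pi in zip(r, P)), sp.zeros(n, n)))

t = sp.Symbol('t', real=True)
A = sp.Matrix([[3, -1], [1, 1]])     # double eigenvalue 2
print(putzer_expm(A, [2, 2], t))     # e^{2t} * [[1+t, -t], [t, 1-t]]
```

The same call, with the appropriate eigenvalue list, reproduces the matrices computed by hand in the examples that follow.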
We illustrate the theorem first for a simple two-dimensional system. Consider

$$x' = \begin{pmatrix} 3 & -1 \\ 1 & 1 \end{pmatrix}x.$$

First, solve

$$\det(A - \lambda I) = \det\begin{pmatrix} 3-\lambda & -1 \\ 1 & 1-\lambda \end{pmatrix} = 3 - 4\lambda + \lambda^2 + 1 = 0$$

and find that $\lambda = 2$ is a double root. Following the algorithm, let $\lambda_1 = 2$, $\lambda_2 = 2$, $P_0 = I$, and

$$P_1 = (A - 2I) = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}.$$

Further,

$$r_1' = 2r_1, \qquad r_1(0) = 1,$$

so that $r_1(t) = e^{2t}$. Since

$$r_2' = e^{2t} + 2r_2, \qquad r_2(0) = 0,$$

we have $r_2(t) = te^{2t}$. Therefore,

$$e^{At} = e^{2t}I + te^{2t}\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} = e^{2t}\begin{pmatrix} 1+t & -t \\ t & 1-t \end{pmatrix}.$$
Consider now the system

$$x' = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ -1 & 2 & 2 \end{pmatrix}x.$$

The characteristic polynomial, $\det(A - \lambda I) = 0$, takes the form

$$(1 - \lambda)^2(2 - \lambda) = 0$$

and we label the roots $\lambda_1 = \lambda_2 = 1$, $\lambda_3 = 2$. Then $P_0 = I$,

$$P_1 = A - I = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ -1 & 2 & 1 \end{pmatrix},$$

and

$$P_2 = P_1^2 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix}.$$

Now solve the system (7.3) recursively. From

$$r_1' = r_1, \qquad r_1(0) = 1,$$

it follows that $r_1(t) = e^t$. From

$$r_2' = e^t + r_2, \qquad r_2(0) = 0,$$

it follows that $r_2(t) = te^t$. Finally, from

$$r_3' = te^t + 2r_3, \qquad r_3(0) = 0,$$

it follows that

$$r_3(t) = e^{2t} - te^t - e^t.$$

Thus, from the Putzer algorithm we have that

$$e^{At} = e^t\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} + te^t\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ -1 & 2 & 1 \end{pmatrix} + (e^{2t} - te^t - e^t)\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 1 \end{pmatrix}$$

or

$$e^{At} = \begin{pmatrix} e^t & te^t & 0 \\ 0 & e^t & 0 \\ -e^{2t} + e^t & e^{2t} + te^t - e^t & e^{2t} \end{pmatrix}.$$
As a final example, consider

$$x' = \begin{pmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{pmatrix}x.$$

Then $\det(A - \lambda I) = 0$ yields $(2 - \lambda)^3(3 - \lambda) = 0$ and we label the roots $\lambda_1 = \lambda_2 = \lambda_3 = 2$, $\lambda_4 = 3$. Then $P_0 = I$,

$$P_1 = A - 2I = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad P_2 = P_1^2 = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

and

$$P_3 = P_1^3 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

We proceed to find the functions $r_i(t)$, $i = 1, 2, 3, 4$. First of all,

$$r_1' = 2r_1, \qquad r_1(0) = 1,$$

so $r_1(t) = e^{2t}$. Then $r_2(t)$ satisfies

$$r_2' = e^{2t} + 2r_2, \qquad r_2(0) = 0,$$

or $r_2(t) = te^{2t}$, and $r_3(t)$ satisfies

$$r_3' = te^{2t} + 2r_3, \qquad r_3(0) = 0,$$

or

$$r_3(t) = \frac{t^2}{2}e^{2t}.$$

Finally, $r_4(t)$ satisfies

$$r_4' = \frac{t^2}{2}e^{2t} + 3r_4, \qquad r_4(0) = 0,$$

or

$$e^{-3t}r_4(t) = -\frac{t^2}{2}e^{-t} - te^{-t} - e^{-t} + 1,$$

or

$$r_4(t) = e^{3t} - e^{2t} - te^{2t} - \frac{t^2}{2}e^{2t}.$$

Thus

$$e^{At} = e^{2t}I + te^{2t}P_1 + \frac{t^2}{2}e^{2t}P_2 + r_4(t)P_3,$$

or

$$e^{At} = \begin{pmatrix} e^{2t} & te^{2t} & \frac{t^2}{2}e^{2t} & 0 \\ 0 & e^{2t} & te^{2t} & 0 \\ 0 & 0 & e^{2t} & 0 \\ 0 & 0 & 0 & e^{3t} \end{pmatrix}.$$
All of the illustrations above were for the case of a repeated root, since this is the case where the previous method could fail. However, the method works equally well in the case of distinct roots. It finds $e^{At}$ directly and avoids finding the inverse of an arbitrary fundamental matrix. (Recall that $e^{At} = \Phi(t)\Phi^{-1}(0)$ where $\Phi(t)$ is an arbitrary fundamental matrix.) The computations with the Putzer algorithm are usually more involved than computing the required eigenvectors for a fundamental matrix.

In the illustrations, the eigenvalues were all real. However, the method works equally well if they are complex, since the differential equations for the functions $r_i$ can be solved in just the same manner. For example, the equation

$$y' + iy = 0$$

has the general solution $y(t) = ce^{-it}$, where $c$ is constant. Solutions with real initial conditions are no longer necessarily real, but otherwise everything is as before. For example, if we add the initial condition $y(0) = 1$, then the solution is $y(t) = e^{-it}$. We illustrate the Putzer algorithm with a simple example.
Consider the system

$$x' = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix}x.$$

The eigenvalues are roots of the polynomial

$$p(\lambda) = \det\begin{pmatrix} -\lambda & 1 & 0 & 0 \\ -1 & -\lambda & 0 & 0 \\ 0 & 0 & -\lambda & 1 \\ 0 & 0 & -1 & -\lambda \end{pmatrix} = 0.$$

Expanding the determinant yields

$$p(\lambda) = \lambda^2(\lambda^2 + 1) + (\lambda^2 + 1) = (\lambda^2 + 1)^2 = 0;$$

that is, $\lambda = \pm i$ are double roots. To apply the algorithm, label

$$\lambda_1 = i, \quad \lambda_2 = -i, \quad \lambda_3 = i, \quad \lambda_4 = -i.$$

First of all,

$$P_1 = A - iI = \begin{pmatrix} -i & 1 & 0 & 0 \\ -1 & -i & 0 & 0 \\ 0 & 0 & -i & 1 \\ 0 & 0 & -1 & -i \end{pmatrix},$$

$$P_2 = (A + iI)P_1 = 0$$

(the null matrix), and hence $P_3 = 0$. Also, $r_1(t)$ satisfies the equation

$$r_1' = ir_1, \qquad r_1(0) = 1,$$

so $r_1(t) = e^{it}$, and $r_2(t)$ satisfies

$$r_2' = -ir_2 + e^{it}.$$

A straightforward computation shows that

$$r_2(t) = \frac{1}{2i}\left[e^{it} - e^{-it}\right] = \sin t.$$

Since $P_2$ and $P_3$ are null, there is no need to compute $r_3(t)$ and $r_4(t)$. The solution $r_2(t) = \sin t$ is real even if the differential equation is complex! Since $e^{At} = e^{it}I + \sin(t)P_1$, we have

$$e^{At} = \begin{pmatrix} e^{it} - i\sin t & \sin t & 0 & 0 \\ -\sin t & e^{it} - i\sin t & 0 & 0 \\ 0 & 0 & e^{it} - i\sin t & \sin t \\ 0 & 0 & -\sin t & e^{it} - i\sin t \end{pmatrix}.$$

However, $e^{it} - i\sin t = \cos t + i\sin t - i\sin t = \cos t$. Thus, $e^{At}$ is equal to

$$\begin{pmatrix} \cos t & \sin t & 0 & 0 \\ -\sin t & \cos t & 0 & 0 \\ 0 & 0 & \cos t & \sin t \\ 0 & 0 & -\sin t & \cos t \end{pmatrix}.$$

Of course, $e^{At}$ had to turn out to be real, since $A$ was real. The algorithm simply took us through an excursion in the complex domain. The result is so simple because we were, in effect, dealing with two uncoupled systems. A more interesting computation occurs if we change the system slightly to
$$z' = \begin{pmatrix} 0 & 1 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix}z.$$

The eigenvalues are roots of the polynomial

$$p(\lambda) = \det\begin{pmatrix} -\lambda & 1 & 0 & 1 \\ -1 & -\lambda & 0 & 0 \\ 0 & 0 & -\lambda & 1 \\ 0 & 0 & -1 & -\lambda \end{pmatrix} = 0.$$

Expanding the determinant yields the same polynomial as in the previous example,

$$p(\lambda) = (\lambda^2 + 1)^2 = 0,$$

so that $\lambda = \pm i$ are again double roots. Take the same definition of the order of the eigenvalues, $\lambda_1 = i$, $\lambda_2 = -i$, $\lambda_3 = i$, $\lambda_4 = -i$, and proceed with the computation. As always, $P_0 = I$, and

$$P_1 = A - iI = \begin{pmatrix} -i & 1 & 0 & 1 \\ -1 & -i & 0 & 0 \\ 0 & 0 & -i & 1 \\ 0 & 0 & -1 & -i \end{pmatrix}, \qquad P_2 = (A + iI)P_1 = \begin{pmatrix} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix},$$

and

$$P_3 = (A - iI)P_2 = \begin{pmatrix} 0 & 0 & i & -1 \\ 0 & 0 & 1 & i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}.$$

We find $r_1(t)$ and $r_2(t)$ as before. To find $r_3(t)$, we must solve

$$r_3' = ir_3 + \sin t.$$

If this equation is rewritten as

$$\left(e^{-it}r_3(t)\right)' = e^{-it}\sin t,$$

we see that the solution with $r_3(0) = 0$ is given by

$$r_3(t) = \tfrac12\left[t\sin t - it\cos t + i\sin t\right].$$

Finally, $r_4$ is the solution of

$$r_4' = -ir_4 + r_3$$

and can be obtained by integrating both sides of

$$\left(r_4e^{it}\right)' = e^{it}r_3.$$

An integration and some manipulation yields that

$$r_4(t) = \tfrac12\sin t - \tfrac12 t\cos t.$$

Thus, the matrix $e^{At}$ takes the form

$$e^{At} = \begin{pmatrix} \cos t & \sin t & -\tfrac{t}{2}\sin t & \tfrac12(t\cos t + \sin t) \\ -\sin t & \cos t & \tfrac12(\sin t - t\cos t) & -\tfrac{t}{2}\sin t \\ 0 & 0 & \cos t & \sin t \\ 0 & 0 & -\sin t & \cos t \end{pmatrix}.$$
As an aside, for those interested in computing, this algorithm can easily be
set up to be performed by a symbolic manipulator, such as Reduce.
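In that spirit, the matrix just computed can also be confirmed numerically; the check below is an addition to the text, with scipy's `expm` standing in for the symbolic manipulators of the era, and the `expected` entries taken from the display above.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0., 1., 0., 1.],
              [-1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., -1., 0.]])

t = 0.9
c, s = np.cos(t), np.sin(t)
expected = np.array([
    [c,  s, -t * s / 2,       (t * c + s) / 2],
    [-s, c, (s - t * c) / 2,  -t * s / 2     ],
    [0., 0., c,               s              ],
    [0., 0., -s,              c              ]])
np.testing.assert_allclose(expm(A * t), expected, atol=1e-10)
print("Putzer result confirmed at t =", t)
```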
EXERCISES

1. Find $e^{At}$ where $A = $ [matrix illegible in the source scan].

2. Find $e^{At}$ where
   (a), (b) [matrices illegible in the source scan]

3. Find the solution of $x' = Ax$, $x(0) = $ [vector illegible in the source scan], for each $A$ given in Exercise 2.

4. Verify the Cayley-Hamilton theorem for each of the matrices in Exercise 2.

5. Let $A = \begin{pmatrix} B & 0 \\ 0 & C \end{pmatrix}$ where $B$ is an $r \times r$ matrix, $C$ is a $p \times p$ matrix, and $A$ is an $(r+p) \times (r+p)$ matrix. Show that

$$e^{At} = \begin{pmatrix} e^{Bt} & 0 \\ 0 & e^{Ct} \end{pmatrix},$$

and if $A$ is invertible, $A^{-1} = \begin{pmatrix} B^{-1} & 0 \\ 0 & C^{-1} \end{pmatrix}$.

6. Use Exercise 5 to find $e^{At}$ where $A = $ [matrix illegible in the source scan].

7. Show that if the real part of each eigenvalue of $A$ is negative, then every solution of $y' = Ay$ satisfies $\lim_{t\to\infty} y(t) = 0$. (Hint: Show that $\lim_{t\to\infty} r_i(t) = 0$.)

8. The following matrices have complex eigenvalues. Use the Putzer algorithm to find $e^{At}$:
   (a)–(c) [matrices illegible in the source scan]
8. General Linear Systems
We now consider a linear system with a forcing term,

$$y' = A(t)y + e(t), \qquad (8.1)$$

where $A(t)$ is an $n \times n$ continuous matrix and $e(t)$ is a continuous $n$-vector. For notational purposes, let $L[y]$ denote $y' - Ay$. Then, as noted before, (8.1) can be written

$$L[y] = e. \qquad (8.2)$$

The principal theoretical result is given in the following theorem. Note first that $L$ is a linear operator.
THEOREM 8.1

If $\chi(t)$ is a given solution of (8.2), then any solution $\psi(t)$ of (8.2) can be written

$$\psi(t) = \Phi(t)c + \chi(t)$$

where $\Phi$ is a fundamental matrix for

$$L[y] = 0 \qquad (8.3)$$

and $c$ is an appropriate constant vector.
Proof. Let $\Phi$ be a fundamental matrix for $L[y] = 0$, and let $\psi(t)$ be an arbitrary solution. Then

$$L[\psi - \chi] = L[\psi] - L[\chi] = e - e = 0,$$

or $\psi(t) - \chi(t)$ is a solution of (8.3). By Theorem 3.4,

$$\psi(t) - \chi(t) = \Phi(t)c,$$

or

$$\psi(t) = \Phi(t)c + \chi(t),$$

which is the conclusion of the theorem.
The importance of Theorem 8.1 is that it reduces the problem of finding all solutions of equation (8.1) to the problem of finding a fundamental matrix for (8.3) and finding any solution whatever of (8.1). We deal now with a way to find $\chi(t)$ given the fundamental matrix $\Phi(t)$.
THEOREM 8.2

The vector function

$$\chi(t) = \Phi(t)\int_{t_0}^{t}\Phi^{-1}(\tau)e(\tau)\,d\tau \qquad (8.4)$$

is a solution of (8.1).

Proof. To prove the theorem it is necessary only to verify by differentiation that (8.4) is a solution. To check first, however, that the above makes sense, note that $\Phi$ and $\Phi^{-1}$ are $n \times n$ matrices, so $\Phi^{-1}(\tau)e(\tau)$ is an $n$-vector, as is its integral, and $\Phi(t)$ operates on the $n$-vector $\int_{t_0}^{t}\Phi^{-1}(\tau)e(\tau)\,d\tau$. Of course, $\Phi(t)$ is differentiable, and since $e(t)$ is continuous, so is the integral in (8.4). Differentiating,

$$\chi'(t) = \Phi'(t)\int_{t_0}^{t}\Phi^{-1}(\tau)e(\tau)\,d\tau + \Phi(t)\Phi^{-1}(t)e(t).$$

Since $\Phi'(t) = A(t)\Phi(t)$, and since $\Phi(t)\Phi^{-1}(t) = I$, this becomes

$$\chi'(t) = A(t)\Phi(t)\int_{t_0}^{t}\Phi^{-1}(\tau)e(\tau)\,d\tau + e(t).$$
By the definition of $\chi(t)$, (8.4), this is

$$\chi'(t) = A(t)\chi(t) + e(t),$$

or $L[\chi] = e$.
Note that $\chi(t_0) = 0$; that is, $\chi(t)$ is the solution of (8.1) that takes the zero vector as the initial condition at $t = t_0$. Suppose we desire to solve (8.1) with initial condition $y(t_0) = \eta$. Let $\Phi(t)$ be a fundamental matrix for (8.3). If $\chi(t)$ is given by (8.4), then

$$\psi(t) = \Phi(t)\Phi^{-1}(t_0)\eta + \chi(t)$$

is a solution of the equation, for

$$\psi'(t) = \Phi'(t)\Phi^{-1}(t_0)\eta + \chi'(t) = A(t)\Phi(t)\Phi^{-1}(t_0)\eta + A(t)\chi(t) + e(t) = A(t)\left(\Phi(t)\Phi^{-1}(t_0)\eta + \chi(t)\right) + e(t) = A(t)\psi(t) + e(t).$$

Further,

$$\psi(t_0) = \Phi(t_0)\Phi^{-1}(t_0)\eta + \chi(t_0) = I\eta = \eta,$$

so $\psi(t)$ satisfies the initial condition. This computation can be combined with Theorems 8.1 and 8.2 to yield a solution for any linear initial value problem.
THEOREM 8.3 (Variation of constants formula)

Let $\Phi(t)$ be a fundamental matrix for $x' = A(t)x$. Then the unique solution of

$$y' = A(t)y + e(t), \qquad y(t_0) = \eta,$$

is given by

$$y(t) = \Phi(t)\Phi^{-1}(t_0)\eta + \int_{t_0}^{t}\Phi(t)\Phi^{-1}(s)e(s)\,ds. \qquad (8.5)$$
In the representation of the solution, (8.5), the matrix $\Phi(t)$ has been moved inside the integral sign. This causes no difficulties, since the integration is with respect to the dummy variable $s$. However, it does make for a nice formula if the fundamental matrix $\Phi(t)$ happens to be $e^{At}$. Then with $t_0 = 0$ the representation given in Theorem 8.3 is

$$y(t) = e^{At}\eta + \int_{0}^{t}e^{A(t-s)}e(s)\,ds. \qquad (8.6)$$

Equation (8.6) follows from (8.5), since $e^{A\cdot 0} = I$ and $(e^{At})^{-1} = e^{-At}$.
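Formula (8.6) can also be evaluated numerically when an explicit antiderivative is inconvenient. The sketch below is an added illustration (the function name, step count, and tolerance are arbitrary choices of mine); it approximates the integral in (8.6) with a composite trapezoidal rule and checks the result against the worked example that follows.

```python
import numpy as np
from scipy.linalg import expm

def solve_forced(A, e, eta, t, steps=2000):
    """Approximate y(t) = e^{At} eta + int_0^t e^{A(t-s)} e(s) ds  (equation (8.6))."""
    s = np.linspace(0.0, t, steps + 1)
    h = t / steps
    w = np.full(steps + 1, h)
    w[0] = w[-1] = h / 2.0      # trapezoid weights
    integral = sum(wi * (expm(A * (t - si)) @ e(si)) for wi, si in zip(w, s))
    return expm(A * t) @ eta + integral

# The example treated next: y1' = y2, y2' = -y1 + t, y(0) = (0, 2).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
e = lambda s: np.array([0.0, s])
t = 1.3
y = solve_forced(A, e, np.array([0.0, 2.0]), t)
np.testing.assert_allclose(y, [t + np.sin(t), 1.0 + np.cos(t)], atol=1e-5)
```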
As an example, consider the system

$$y_1' = y_2, \qquad y_2' = -y_1 + t, \qquad (8.7)$$

with the initial conditions

$$y_1(0) = 0, \qquad y_2(0) = 2, \qquad (8.8)$$

or $y' = Ay + e(t)$ where

$$A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \quad\text{and}\quad e(t) = \begin{pmatrix} 0 \\ t \end{pmatrix}.$$

The unforced equation is

$$y_1' = y_2, \qquad y_2' = -y_1. \qquad (8.9)$$

A fundamental matrix for (8.9) is

$$\Phi(t) = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix} \qquad (8.10)$$
because the columns of (8.10) are solutions of (8.9) and $\det\Phi(t) = \cos^2 t + \sin^2 t = 1$. Then

$$\Phi^{-1}(t) = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}.$$

Hence, $\chi(t)$ is given by

$$\chi(t) = \Phi(t)\int_0^t \Phi^{-1}(\tau)e(\tau)\,d\tau = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}\begin{pmatrix} -\int_0^t \tau\sin\tau\,d\tau \\[2pt] \int_0^t \tau\cos\tau\,d\tau \end{pmatrix}$$

$$= \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}\begin{pmatrix} -\sin t + t\cos t \\ \cos t + t\sin t - 1 \end{pmatrix}$$

$$= \begin{pmatrix} -\sin t\cos t + t\cos^2 t + \sin t\cos t + t\sin^2 t - \sin t \\ \sin^2 t - t\sin t\cos t + \cos^2 t + t\sin t\cos t - \cos t \end{pmatrix} = \begin{pmatrix} t - \sin t \\ 1 - \cos t \end{pmatrix}.$$

To satisfy the initial conditions, (8.8), we have

$$c = \Phi^{-1}(0)\begin{pmatrix} 0 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 2 \end{pmatrix}.$$

Finally,

$$\psi(t) = \Phi(t)c + \chi(t) = \begin{pmatrix} 2\sin t \\ 2\cos t \end{pmatrix} + \begin{pmatrix} t - \sin t \\ 1 - \cos t \end{pmatrix} = \begin{pmatrix} t + \sin t \\ 1 + \cos t \end{pmatrix}.$$
Consider the $3 \times 3$ system

$$y' = \begin{pmatrix} 2 & 1 & 1 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix}y + \begin{pmatrix} 1 \\ 0 \\ t \end{pmatrix} \qquad (8.11)$$

with the initial condition

$$y(0) = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}. \qquad (8.12)$$

Let $A$ denote the above $3 \times 3$ matrix. The eigenvalues of $A$ are given by roots of

$$p(\lambda) = \det\begin{pmatrix} 2-\lambda & 1 & 1 \\ 0 & 2-\lambda & 0 \\ 0 & 0 & 3-\lambda \end{pmatrix} = 0.$$

Expanding the determinant, we see that

$$p(\lambda) = (2-\lambda)^2(3-\lambda) = 0,$$

or that $\lambda_1 = 2$, $\lambda_2 = 2$, and $\lambda_3 = 3$. We apply the Putzer algorithm to find $e^{At}$. The relevant matrices are $P_0 = I$,

$$P_1 = A - 2I = \begin{pmatrix} 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad\text{and}\quad P_2 = (A - 2I)^2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

Clearly, $r_1(t) = e^{2t}$, and $r_2(t)$ solves

$$r_2' = 2r_2 + e^{2t}, \qquad r_2(0) = 0,$$

or $r_2(t) = te^{2t}$. Then $r_3(t)$ satisfies

$$r_3' = 3r_3 + te^{2t}, \qquad r_3(0) = 0.$$

Rewriting the differential equation as

$$\left(r_3e^{-3t}\right)' = te^{-t}$$

gives, after an integration, $r_3(t) = e^{2t}(e^t - t - 1)$. Thus, $e^{At}$ takes the form

$$e^{At} = \begin{pmatrix} e^{2t} & te^{2t} & e^{3t} - e^{2t} \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{3t} \end{pmatrix}. \qquad (8.13)$$

Rather than directly invert this matrix, we can use the fact that $(e^{At})^{-1} = e^{-At}$. This amounts to substituting $-t$ for $t$ in (8.13), which yields that $\Phi^{-1}$, in (8.5), is of the form

$$e^{-At} = \begin{pmatrix} e^{-2t} & -te^{-2t} & e^{-3t} - e^{-2t} \\ 0 & e^{-2t} & 0 \\ 0 & 0 & e^{-3t} \end{pmatrix}. \qquad (8.14)$$

Then $e^{-At}e(t)$ is

$$\begin{pmatrix} e^{-2t}(te^{-t} - t + 1) \\ 0 \\ te^{-3t} \end{pmatrix}.$$

We can now compute the particular solution, (8.4). An integration of the vector above from $0$ to $t$ produces

$$\begin{pmatrix} \tfrac{1}{36}\left[e^{-3t}(18te^{t} - 9e^{t} - 12t - 4) + 13\right] \\[2pt] 0 \\[2pt] \tfrac19 e^{-3t}\left(e^{3t} - 3t - 1\right) \end{pmatrix}.$$

Multiplying $e^{At}$ by this vector produces the $\chi$ of (8.4). Recall that this is a solution of the system that is the null vector at $t = 0$. The result of this multiplication is

$$\chi(t) = \begin{pmatrix} \tfrac{1}{36}\left[4e^{3t} + 9e^{2t} + 6t - 13\right] \\[2pt] 0 \\[2pt] \tfrac19\left[e^{3t} - 3t - 1\right] \end{pmatrix}.$$

Finally, the variation of constants formula may be applied; it requires us to compute $e^{At}y(0) + \chi(t)$. This yields the solution of (8.11) in the form

$$y(t) = \begin{pmatrix} \tfrac{1}{36}\left[40e^{3t} + 36te^{2t} + 9e^{2t} + 6t - 13\right] \\[2pt] e^{2t} \\[2pt] \tfrac19\left[10e^{3t} - 3t - 1\right] \end{pmatrix}.$$
While computations such as this are not intrinsically difficult, it is clear that considerable, careful manipulation is necessary to carry out the very elegant representation of the solution given by Theorem 8.3. For this reason, the variation of constants formula is of more theoretical than practical importance for all but the simplest systems. Again, if it is necessary to find the explicit representation of a large system, a computer with a symbolic manipulator is a useful tool.
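To illustrate that remark, the $3 \times 3$ example above can be reproduced in a few lines; this sketch is an addition to the text, with sympy standing in for the symbolic manipulators of the era.

```python
import sympy as sp

t, s = sp.symbols('t s', real=True)
A = sp.Matrix([[2, 1, 1], [0, 2, 0], [0, 0, 3]])
e = sp.Matrix([1, 0, s])            # forcing term, as a function of the dummy variable s
eta = sp.Matrix([1, 1, 1])          # initial condition y(0)

eAt = (A * t).exp()                                         # matrix exponential e^{At}
chi = eAt * sp.integrate((-A * s).exp() * e, (s, 0, t))     # particular solution (8.4)
y = sp.simplify(eAt * eta + chi)                            # variation of constants (8.6)
print(y)
```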
EXERCISES

1. Find all solutions of
   (a)–(d) [the coefficient matrices and forcing terms are illegible in the source scan]

2. Find the solution for each system given in Exercise 1 that satisfies $x(0) = $ [vector illegible in the source scan].

3. Determine whether the limit as $t \to \infty$ or as $t \to -\infty$ exists for any of the solutions found in Exercise 1.

4. Find all solutions of
   [system illegible in the source scan]

5. Find all solutions of
   [system illegible in the source scan]
9. Some Elementary Stability Considerations

The theory developed in the previous sections makes it possible to introduce some of the basic ideas of stability analysis for systems of differential equations. These ideas are important in many physical systems to give a robustness to theoretical conclusions. More sophisticated tools and concepts will appear in the next chapter. Although, at this point, the discussion will be restricted to linear systems (indeed, to those with constant coefficients), the concepts carry over to nonlinear systems as well. For those who have met the idea of stability in a physics course, we note that what is presented here is a mathematician's way of describing those very same ideas. The properties of the norm, introduced at the beginning of Section 4, are important in this endeavor, and the reader may wish to review them before proceeding with this section.

The basic, intuitive idea is that of instability. "Something" is unstable if a small deviation from the present "state" produces a major change in the state. The familiar physical example is a cone balanced on its pointed end (see Figure 9.1). A small change in position produces a major change: the cone falls. To make this, and related ideas, precise in the context of solutions of systems of linear differential equations is the goal of this section.

Figure 9.1 An example of instability. A cone balanced on its point will fall if slightly disturbed.

Consider the linear system of ordinary differential equations

$$x' = Ax \qquad (9.1)$$

where $A$ is an $n \times n$ constant matrix and $x$ is a vector in $R^n$. Equation (9.1) always has the trivial solution, the function $x(t) \equiv 0$, and this solution will play
the role of "present state" in the intuitive description above. The trivial solution is said to be stable if for every $\varepsilon > 0$ there is a $\delta > 0$ such that if $x(t)$ is any solution of (9.1) with $\|x(0)\| < \delta$, then $\|x(t)\| < \varepsilon$ for all $t > 0$. We are using the norm, $\|\cdot\|$, to measure how close a solution is to the trivial solution. Think of the trivial solution as the present state of the system and $x(t)$ as a solution that represents a deviation from the present state. The above definition says that, if the trivial solution is stable, $x(t)$ will remain arbitrarily close (this is the $\varepsilon$) to the present state (the trivial solution) for all future time if the initial condition $x(0)$ is sufficiently close (this is the $\delta$) to zero. The trivial solution is said to be unstable if it is not stable. Given the definition of stability, being unstable meets the above intuitive criterion of a major (not small) change of state at a future time from a small, initial disturbance (a small change of initial condition). To set the basic idea, let us redefine instability by formally contradicting the notion of stability, by stating what must happen for stability to fail. Roughly, we must state that no matter how close the initial conditions are to zero, at some future time some solution will not be close to zero. Thus it is sufficient to show that for some $\varepsilon > 0$, there is a sequence of real numbers (initial conditions) $\rho_n$, with $\lim_{n\to\infty}\rho_n = 0$, and a corresponding sequence of real numbers $t_n$ (times) such that the solution of (9.1), call it $x_n(t)$, that satisfies $\|x_n(0)\| = \rho_n$ also satisfies $\|x_n(t_n)\| \ge \varepsilon$. Thus, not all solutions that start arbitrarily close to the trivial solution remain close to it for all future time. Note that it was important to take an entire sequence of initial conditions tending to zero. If the conclusion was satisfied for one, or a finite number of, initial conditions, there could be a smaller $\delta$ that would make the definition of stability "work" if solutions were this ($\delta$) close. The infinite sequence of initial conditions guards against this possibility.
For example, the linear system

$$x' = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}x \qquad (9.2)$$

has a fundamental matrix (actually $e^{At}$) of the form

$$\Phi(t) = \begin{pmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t \end{pmatrix}.$$

Using the theory we developed in Section 3, every solution of (9.2) can be written as $\Phi(t)c$, for some constant vector $c$. Choose $\rho_n = 1/n$, $n = 1, 2, \ldots$, and take $c$ to be the vector

$$c = \begin{pmatrix} \tfrac{1}{2n} \\[2pt] \tfrac{1}{2n} \end{pmatrix}.$$

This corresponds to choosing the family of solutions

$$x_n(t) = \frac{1}{2n}\begin{pmatrix} \cosh t + \sinh t \\ \sinh t + \cosh t \end{pmatrix} = \frac{1}{2n}\begin{pmatrix} e^t \\ e^t \end{pmatrix}.$$

Note that $\|x_n(0)\| = \rho_n = 1/n$. Take $\varepsilon = 1$ and $t_n = \ln(2n)$. After these elaborate preparations, we are ready to check the definition. Clearly, $\rho_n \to 0$ as $n \to \infty$. Yet $\|x_n(t_n)\| = 2 > 1 = \varepsilon$. Thus, a small change in initial condition at $t = 0$ produces a large ($\ge\varepsilon$) change at a future time, $t_n$. The formal definition of instability is satisfied (or, the definition of stability is violated).
A simple change in the system of equations (9.2) can make a dramatic change in the behavior of solutions. Consider the system

$$x' = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}x. \qquad (9.3)$$

A fundamental matrix ($e^{At}$) is given by

$$\Phi(t) = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix},$$

and every solution may be written as $\Phi(t)c$, or

$$x(t) = \begin{pmatrix} c_1\cos t + c_2\sin t \\ -c_1\sin t + c_2\cos t \end{pmatrix}$$

for appropriate constants $c_1$ and $c_2$. Let $\varepsilon > 0$ be given and choose $\delta = \varepsilon/2$. If $|c_1| + |c_2| < \delta$ (this is the norm of the vector $x(0)$; see Section 4 of this chapter), then $\|x(t)\| \le 2(|c_1| + |c_2|) < 2\delta = \varepsilon$ for all $t > 0$ (in fact, for all $t$). Thus, the trivial solution of (9.3) is stable: solutions that begin sufficiently close to the trivial solution remain close to it in the future.
Although it is the simplest concept, it turns out that stability of the trivial solution is not the most important concept (in both mathematics and physics). The important concept is stronger and is called asymptotic stability. Its importance stems from the fact that it is preserved under slight changes ("perturbations") of $A$. If the trivial solution of (9.1) is asymptotically stable and if $A$ is "changed" slightly, the trivial solution of the new (9.1) is also asymptotically stable. Since the entries of $A$ often represent measured quantities, it is important that the stability property be retained under slight changes, corresponding, perhaps, to a measurement error. These ideas will be explored in detail in the next chapter for the two-dimensional case. Here we present the basic idea and a
criterion for determining when the property holds. The trivial solution of (9.1) is said to be asymptotically stable if (a) it is stable, and (b) there is an $\varepsilon > 0$ such that if $\|x(0)\| < \varepsilon$, then $\lim_{t\to\infty}\|x(t)\| = 0$. Obviously, requirement (b) strengthens the conclusion. Neither system (9.2) nor (9.3) satisfies this condition. System (9.2), of course, satisfies neither (a) nor (b). However, the system

$$x' = \begin{pmatrix} -1 & 1 \\ 0 & -1 \end{pmatrix}x \qquad (9.4)$$

has a fundamental matrix (again $e^{At}$)

$$\Phi(t) = \begin{pmatrix} e^{-t} & te^{-t} \\ 0 & e^{-t} \end{pmatrix}.$$

Hence, every solution takes the form

$$x(t) = \begin{pmatrix} c_1e^{-t} + c_2te^{-t} \\ c_2e^{-t} \end{pmatrix}, \qquad (9.5)$$

and every solution satisfies $\lim_{t\to\infty}\|x(t)\| = 0$. This is strong enough to show that both (a) and (b) in the definition are satisfied, since the second component of the vector $x(t)$ is decreasing and the first component is eventually decreasing. The technical details are left as an exercise.
Fortunately, it is not necessary to solve each system to determine its stability. There is a simple theorem that provides a criterion for asymptotic stability for linear systems with constant coefficients.

THEOREM 9.1

The trivial solution of (9.1) is asymptotically stable if and only if all of the eigenvalues of $A$ have negative real parts.

When we say "negative real part" we intend that either the number is real and negative or it is complex and the real part is negative. A similar statement applies for "positive real part." There is a corresponding statement for instability.

THEOREM 9.2

If one eigenvalue of $A$ has a positive real part, then the trivial solution of (9.1) is unstable.

If the real parts of the eigenvalues are nonpositive, the middle ground between asymptotic stability and instability is where the real part of at least one eigenvalue is zero. This case is more delicate and depends on the multiplicity of the eigenvalue with a zero real part. We will not give a detailed analysis, but note one simple result.

THEOREM 9.3

If the eigenvalues of $A$ with zero real parts are simple and all other eigenvalues have negative real parts, then the trivial solution of (9.1) is stable.

In (9.3), both eigenvalues have zero real parts and the trivial solution is stable, but not asymptotically stable.
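Theorems 9.1-9.3 suggest a mechanical test. The function below is an added sketch (the tolerance, and the detection of repeated eigenvalues by numerical clustering, are rough choices of mine; a zero real part cannot be decided exactly in floating point).

```python
import numpy as np

def classify_trivial_solution(A, tol=1e-9):
    """Classify x' = Ax by the real parts of the eigenvalues (Theorems 9.1-9.3)."""
    lam = np.linalg.eigvals(A)
    re = lam.real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    # Real parts nonpositive, some (numerically) zero: stable if those are simple.
    zero = lam[np.abs(re) <= tol]
    simple = all(np.sum(np.isclose(lam, z, atol=tol)) == 1 for z in zero)
    return "stable (Theorem 9.3)" if simple else "indeterminate by these theorems"

print(classify_trivial_solution(np.array([[0., 1.], [1., 0.]])))     # (9.2): unstable
print(classify_trivial_solution(np.array([[0., 1.], [-1., 0.]])))    # (9.3): stable
print(classify_trivial_solution(np.array([[-1., 1.], [0., -1.]])))   # (9.4): asymptotically stable
```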
The key element in the proof of Theorem 9.1 is the establishment of the following basic lemma.

LEMMA 9.4

Let $\lambda_j = \xi_j + i\eta_j$, $j = 1, 2, \ldots, n$, be the eigenvalues of the matrix $A$ (repetitions allowed). Let $\sigma > \max_j(\xi_j)$. Then there is a constant $K > 0$ such that if $x(t)$ is a solution of (9.1), then

$$\|x(t)\| \le K e^{\sigma t}\|x(0)\|.$$

The proof of this lemma is easy if $A$ is a diagonal matrix and is not very difficult, using the properties of norms listed in Section 4, if $A$ is similar to a diagonal matrix. For the general case we need to use the Putzer algorithm. We state the basic fact as another lemma.

LEMMA 9.5

Let $A$, $\sigma$, and $\lambda_i$ be as in Lemma 9.4 and let $r_i(t)$ be the elements in the decomposition (7.1) of $e^{At}$. Then

$$|r_i(t)| \le c_ie^{\sigma t},$$

where $c_i$ is a positive constant.

Proof. The proof is by induction. Let $\lambda_i$, $i = 1, 2, \ldots, n$, be given and let $e^{At}$ be expressed by (7.1). Clearly, $|r_1(t)| = |e^{\lambda_1 t}| \le e^{\sigma t}$. Suppose that $|r_j(t)| \le c_je^{\sigma t}$, $j = 1, 2, \ldots, i-1$. Then, solving the scalar differential equation (or using the variation of constants formula for the scalar, $1 \times 1$ matrix, equation)

$$r_i' = \lambda_ir_i + r_{i-1}$$
yields (since $r_i(0) = 0$) that

$$r_i(t) = \int_0^t e^{\lambda_i(t-s)}r_{i-1}(s)\,ds.$$

It follows that

$$|r_i(t)| \le \int_0^t \left|e^{\lambda_i(t-s)}\right||r_{i-1}(s)|\,ds \le e^{\xi_it}\int_0^t e^{-\xi_is}|r_{i-1}(s)|\,ds \le c_{i-1}e^{\xi_it}\int_0^t e^{(\sigma - \xi_i)s}\,ds,$$

since the real part of $\lambda_i$ is $\xi_i$. After an integration,

$$|r_i(t)| \le c_{i-1}e^{\xi_it}\,\frac{e^{(\sigma-\xi_i)t} - 1}{\sigma - \xi_i} \le \frac{c_{i-1}}{\sigma - \xi_i}e^{\sigma t}.$$

This completes the induction.
Proof of Lemma 9.4. Since the representation of the exponential $e^{At}$ given by (7.1) has only a finite number of matrices, we can find a number, $M$, larger than the norm of each matrix. This number multiplied by the sum of the numbers $c_i$, $i = 1, 2, \ldots, n$, given by Lemma 9.5, provides the constant $K$ in the statement of the lemma. To complete the proof, we use (7.1) and the inequalities for norms from Section 4 to find that

$$\left\|e^{At}\right\| \le \sum_{j=0}^{n-1}|r_{j+1}(t)|\,\|P_j\| \le Me^{\sigma t}\sum_{i=1}^{n}c_i \le Ke^{\sigma t}.$$

This completes the proof of Lemma 9.4.
With the aid of Lemma 9.4, the proof of Theorem 9.1 follows easily. Let $\sigma < 0$ be greater than the largest real part of any eigenvalue of $A$. This choice is possible because the largest eigenvalue has a negative real part. Since any solution $x(t)$ of (9.1) has the form $x(t) = e^{At}x(0)$, it follows from Lemma 9.4 that

$$\|x(t)\| \le \left\|e^{At}\right\|\,\|x(0)\| \le \|x(0)\|Ke^{\sigma t}, \qquad \sigma < 0.$$

Thus, $\lim_{t\to\infty}\|x(t)\| = 0$. This shows that (b) in the definition of asymptotic stability is satisfied for any solution. To get (a), we have only to choose $\|x(0)\|$ small enough ($< \varepsilon/K$ in the definition of stability).
For the two-dimensional case, the fact that the largest eigenvalue has a negative real part can often be determined without actually computing the eigenvalues. For example, consider a $2 \times 2$ system [the coefficient matrix is illegible in the source scan]. The trace of the matrix gives the sum of the eigenvalues, while the determinant gives their product (quadratic formula). Here the trace is negative, so the sum of the eigenvalues is negative, while the product (the determinant) is positive. Thus, the eigenvalues have negative real parts. (They are, in fact, complex in this case.) For larger systems, criteria are known that guarantee that all of the eigenvalues of a matrix are negative or have negative real parts. Principal among these are the Routh-Hurwitz criteria, which the interested reader may find in more advanced textbooks.
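For $2 \times 2$ systems the criterion just described is one line of arithmetic. The small sketch below is an added illustration (the sample matrix is hypothetical, and strict inequalities are used, so borderline cases fall outside the test).

```python
import numpy as np

def negative_real_parts_2x2(A):
    # Both eigenvalues of a real 2 x 2 matrix have negative real parts
    # exactly when trace(A) < 0 and det(A) > 0.
    return np.trace(A) < 0 and np.linalg.det(A) > 0

A = np.array([[-1., 2.], [-2., -1.]])   # hypothetical example; eigenvalues -1 +- 2i
print(negative_real_parts_2x2(A))       # True
```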
It is important to note that things are somewhat more general than we have
presented them to be. Stability of the trivial solution has been defined for (9.1).
For other systems, and particularly for nonlinear ones, the stability of other types
of solutions is important. However, for (9.1), if the matrix A is nonsingular, the
only constant solution is the trivial one. In applied literature, these constant
solutions are called steady states or equilibrium solutions. If A is singular, then
there will be a "continuum" of solutions (a line, if A is two-dimensional). In this
case, asymptotic stability of these constant solutions is not possible. Hence, the
focus is on the zero solution. The definition was applied at the point in time t = 0.
The definition could have been given for any other time t0, but for equations of
the form (9.1) a redefinition of initial time—moving t0 to 0—is trivial. For other
systems, the initial time t0 may be crucial, and the definition is usually given with
an arbitrary time t0. The theorems are actually stronger than stated. For
example, in the case of asymptotic stability the theorems are global in the sense
that all solutions tend to zero as t tends to infinity, not just those that are initially
close. Moreover, in view of Lemma 9.4, the rate of convergence to zero is
exponential; that is, solutions tend to zero faster than does an exponential
function. These properties are typical of linear systems but represent properties
that are stronger than can be expected for other systems.
EXERCISES

1. Determine the stability of the trivial solution of $x' = 0$. Is it asymptotically stable?

2. Determine the asymptotic stability of the system $x' = Ax$ where $A$ is
   (a)–(f) [matrices illegible in the source scan]

3. Let

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$

Show that the sum of the eigenvalues is $a + d$ (called the trace of $A$) and the product is $ad - bc$ (the determinant of $A$). (Hint: Use the quadratic formula.)

4. Give a direct (i.e., without Lemma 9.5) proof of Lemma 9.4 when $A$ is a diagonal matrix. (Use the form of $e^{At}$ and the definition of norm from Section 4.)

5. Give a direct proof of Lemma 9.4 when $A = T^{-1}DT$ and $D$ is a diagonal matrix.

6. Let $f(x)$ be a continuous function on the real line and $c$ a real number such that $f(c) = 0$. Formulate a definition of stability for the constant solution $x(t) = c$ of $x' = f(x)$.

7. Repeat Exercise 6 for asymptotic stability.

8. Consider system (*) $x' = Ax + g(t)$, where $A$ is an $n \times n$ constant matrix and $g(t)$ is a continuous $n$-dimensional vector. Use Theorem 8.3 and Lemma 9.4 to obtain the following estimate on the solution of (*) that satisfies $x(0) = x_0$:

$$\|x(t)\| \le K\|x_0\|e^{\sigma t} + Ke^{\sigma t}\int_0^t e^{-\sigma s}\|g(s)\|\,ds.$$

9. Suppose that $A$ is a diagonal matrix and that the eigenvalues of $A$ have negative real parts, except for one that has a zero real part. Use the form of $e^{At}$ to show that all solutions of (9.1) are bounded. (By bounded we mean that there is a constant $M$, depending on the initial condition, such that $\|x(t)\| \le M$.)

10. Prove Theorem 9.3 in the special case that $A$ is similar to a diagonal matrix.

11. Give a simple example to show that the statement in Exercise 10 is false if $A$ has a double eigenvalue with zero real part and all others with negative real parts.

12. Consider (*) in Exercise 8 and suppose that the eigenvalues of $A$ have negative real parts and that $\int_0^\infty \|g(t)\|\,dt$ exists. Show that all solutions of (*) are bounded. (Hint: Use the estimate in Exercise 8.)
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf
[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf

More Related Content

Similar to [W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf

Cohen-Tannoudji, Diu and Laloë - Quantum Mechanics (vol. I, II and III, 2nd e...
Cohen-Tannoudji, Diu and Laloë - Quantum Mechanics (vol. I, II and III, 2nd e...Cohen-Tannoudji, Diu and Laloë - Quantum Mechanics (vol. I, II and III, 2nd e...
Cohen-Tannoudji, Diu and Laloë - Quantum Mechanics (vol. I, II and III, 2nd e...BEATRIZJAIMESGARCIA
 
A Course In LINEAR ALGEBRA With Applications
A Course In LINEAR ALGEBRA With ApplicationsA Course In LINEAR ALGEBRA With Applications
A Course In LINEAR ALGEBRA With ApplicationsNathan Mathis
 
Linear Algebra_ Theory_Jim Hefferon
Linear Algebra_ Theory_Jim HefferonLinear Algebra_ Theory_Jim Hefferon
Linear Algebra_ Theory_Jim HefferonBui Loi
 
2000_Book_MechanicsOfDeformableSolids.pdf
2000_Book_MechanicsOfDeformableSolids.pdf2000_Book_MechanicsOfDeformableSolids.pdf
2000_Book_MechanicsOfDeformableSolids.pdfMeyli Valin Fernández
 
[Ronald p. morash] bridge to abstract mathematics
[Ronald p. morash] bridge to abstract mathematics[Ronald p. morash] bridge to abstract mathematics
[Ronald p. morash] bridge to abstract mathematicsASRI ROMADLONI
 
Bridge-To-Abstract-Mathematics-by-Ronald-P-Morash-pdf-free-download.pdf
Bridge-To-Abstract-Mathematics-by-Ronald-P-Morash-pdf-free-download.pdfBridge-To-Abstract-Mathematics-by-Ronald-P-Morash-pdf-free-download.pdf
Bridge-To-Abstract-Mathematics-by-Ronald-P-Morash-pdf-free-download.pdfwaqasahamad422
 
Modeling Instruction in the Humanities
Modeling Instruction in the HumanitiesModeling Instruction in the Humanities
Modeling Instruction in the HumanitiescaroleHamilton
 
Modeling Instruction in the Humanities
Modeling Instruction in the HumanitiesModeling Instruction in the Humanities
Modeling Instruction in the HumanitiesCarole Hamilton
 
Assessing Critical Thinking In Mechanics In Engineering Education
Assessing Critical Thinking In Mechanics In Engineering EducationAssessing Critical Thinking In Mechanics In Engineering Education
Assessing Critical Thinking In Mechanics In Engineering EducationScott Faria
 
Hassani_Mathematical_Physics_A_Modem_Int.pdf
Hassani_Mathematical_Physics_A_Modem_Int.pdfHassani_Mathematical_Physics_A_Modem_Int.pdf
Hassani_Mathematical_Physics_A_Modem_Int.pdfNATALYALMANZAAVILA
 
John Stillwell-Numerele reale-An Introduction to Set Theory and Analysis ( PD...
John Stillwell-Numerele reale-An Introduction to Set Theory and Analysis ( PD...John Stillwell-Numerele reale-An Introduction to Set Theory and Analysis ( PD...
John Stillwell-Numerele reale-An Introduction to Set Theory and Analysis ( PD...LobontGheorghe
 
A Technique for Partially Solving a Family of Diffusion Problems
A Technique for Partially Solving a Family of Diffusion ProblemsA Technique for Partially Solving a Family of Diffusion Problems
A Technique for Partially Solving a Family of Diffusion Problemsijtsrd
 
Application of calculas in mdc
Application of calculas in mdcApplication of calculas in mdc
Application of calculas in mdcMohamed Sameer
 
Cognitive process dimension in rbt explanatory notepages
Cognitive process dimension in rbt  explanatory notepagesCognitive process dimension in rbt  explanatory notepages
Cognitive process dimension in rbt explanatory notepagesArputharaj Bridget
 
Discrete Mathematics Lecture Notes
Discrete Mathematics Lecture NotesDiscrete Mathematics Lecture Notes
Discrete Mathematics Lecture NotesFellowBuddy.com
 
1993_Book_Probability.pdf
1993_Book_Probability.pdf1993_Book_Probability.pdf
1993_Book_Probability.pdfPabloMedrano14
 

Similar to [W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf (20)

Cohen-Tannoudji, Diu and Laloë - Quantum Mechanics (vol. I, II and III, 2nd e...
Cohen-Tannoudji, Diu and Laloë - Quantum Mechanics (vol. I, II and III, 2nd e...Cohen-Tannoudji, Diu and Laloë - Quantum Mechanics (vol. I, II and III, 2nd e...
Cohen-Tannoudji, Diu and Laloë - Quantum Mechanics (vol. I, II and III, 2nd e...
 
A Course In LINEAR ALGEBRA With Applications
A Course In LINEAR ALGEBRA With ApplicationsA Course In LINEAR ALGEBRA With Applications
A Course In LINEAR ALGEBRA With Applications
 
Linear Algebra_ Theory_Jim Hefferon
Linear Algebra_ Theory_Jim HefferonLinear Algebra_ Theory_Jim Hefferon
Linear Algebra_ Theory_Jim Hefferon
 
Sierpinska
Sierpinska Sierpinska
Sierpinska
 
2000_Book_MechanicsOfDeformableSolids.pdf
2000_Book_MechanicsOfDeformableSolids.pdf2000_Book_MechanicsOfDeformableSolids.pdf
2000_Book_MechanicsOfDeformableSolids.pdf
 
[Ronald p. morash] bridge to abstract mathematics
[Ronald p. morash] bridge to abstract mathematics[Ronald p. morash] bridge to abstract mathematics
[Ronald p. morash] bridge to abstract mathematics
 
Brownian Motion and Martingales
Brownian Motion and MartingalesBrownian Motion and Martingales
Brownian Motion and Martingales
 
Bridge-To-Abstract-Mathematics-by-Ronald-P-Morash-pdf-free-download.pdf
Bridge-To-Abstract-Mathematics-by-Ronald-P-Morash-pdf-free-download.pdfBridge-To-Abstract-Mathematics-by-Ronald-P-Morash-pdf-free-download.pdf
Bridge-To-Abstract-Mathematics-by-Ronald-P-Morash-pdf-free-download.pdf
 
Modeling Instruction in the Humanities
Modeling Instruction in the HumanitiesModeling Instruction in the Humanities
Modeling Instruction in the Humanities
 
Modeling Instruction in the Humanities
Modeling Instruction in the HumanitiesModeling Instruction in the Humanities
Modeling Instruction in the Humanities
 
Assessing Critical Thinking In Mechanics In Engineering Education
Assessing Critical Thinking In Mechanics In Engineering EducationAssessing Critical Thinking In Mechanics In Engineering Education
Assessing Critical Thinking In Mechanics In Engineering Education
 
Hassani_Mathematical_Physics_A_Modem_Int.pdf
Hassani_Mathematical_Physics_A_Modem_Int.pdfHassani_Mathematical_Physics_A_Modem_Int.pdf
Hassani_Mathematical_Physics_A_Modem_Int.pdf
 
John Stillwell-Numerele reale-An Introduction to Set Theory and Analysis ( PD...
John Stillwell-Numerele reale-An Introduction to Set Theory and Analysis ( PD...John Stillwell-Numerele reale-An Introduction to Set Theory and Analysis ( PD...
John Stillwell-Numerele reale-An Introduction to Set Theory and Analysis ( PD...
 
A Technique for Partially Solving a Family of Diffusion Problems
A Technique for Partially Solving a Family of Diffusion ProblemsA Technique for Partially Solving a Family of Diffusion Problems
A Technique for Partially Solving a Family of Diffusion Problems
 
Application of calculas in mdc
Application of calculas in mdcApplication of calculas in mdc
Application of calculas in mdc
 
Cognitive process dimension in rbt explanatory notepages
Cognitive process dimension in rbt  explanatory notepagesCognitive process dimension in rbt  explanatory notepages
Cognitive process dimension in rbt explanatory notepages
 
Discrete Mathematics Lecture Notes
Discrete Mathematics Lecture NotesDiscrete Mathematics Lecture Notes
Discrete Mathematics Lecture Notes
 
Project
ProjectProject
Project
 
1993_Book_Probability.pdf
1993_Book_Probability.pdf1993_Book_Probability.pdf
1993_Book_Probability.pdf
 
0824719875 inverse
0824719875 inverse0824719875 inverse
0824719875 inverse
 

Recently uploaded

Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Jack DiGiovanna
 
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptdokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptSonatrach
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdfHuman37
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Callshivangimorya083
 
Customer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxCustomer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxEmmanuel Dauda
 
Brighton SEO | April 2024 | Data Storytelling
Brighton SEO | April 2024 | Data StorytellingBrighton SEO | April 2024 | Data Storytelling
Brighton SEO | April 2024 | Data StorytellingNeil Barnes
 
VIP High Class Call Girls Jamshedpur Anushka 8250192130 Independent Escort Se...
VIP High Class Call Girls Jamshedpur Anushka 8250192130 Independent Escort Se...VIP High Class Call Girls Jamshedpur Anushka 8250192130 Independent Escort Se...
VIP High Class Call Girls Jamshedpur Anushka 8250192130 Independent Escort Se...Suhani Kapoor
 
Data Warehouse , Data Cube Computation
Data Warehouse   , Data Cube ComputationData Warehouse   , Data Cube Computation
Data Warehouse , Data Cube Computationsit20ad004
 
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Sapana Sha
 
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改atducpo
 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationshipsccctableauusergroup
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...dajasot375
 
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service BhilaiLow Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service BhilaiSuhani Kapoor
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...soniya singh
 
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptxEMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptxthyngster
 
Spark3's new memory model/management
Spark3's new memory model/managementSpark3's new memory model/management
Spark3's new memory model/managementakshesh doshi
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPramod Kumar Srivastava
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfLars Albertsson
 

Recently uploaded (20)

Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
 
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.pptdokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
dokumen.tips_chapter-4-transient-heat-conduction-mehmet-kanoglu.ppt
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf20240419 - Measurecamp Amsterdam - SAM.pdf
20240419 - Measurecamp Amsterdam - SAM.pdf
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
 
Customer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptxCustomer Service Analytics - Make Sense of All Your Data.pptx
Customer Service Analytics - Make Sense of All Your Data.pptx
 
Brighton SEO | April 2024 | Data Storytelling
Brighton SEO | April 2024 | Data StorytellingBrighton SEO | April 2024 | Data Storytelling
Brighton SEO | April 2024 | Data Storytelling
 
VIP High Class Call Girls Jamshedpur Anushka 8250192130 Independent Escort Se...
VIP High Class Call Girls Jamshedpur Anushka 8250192130 Independent Escort Se...VIP High Class Call Girls Jamshedpur Anushka 8250192130 Independent Escort Se...
VIP High Class Call Girls Jamshedpur Anushka 8250192130 Independent Escort Se...
 
Data Warehouse , Data Cube Computation
Data Warehouse   , Data Cube ComputationData Warehouse   , Data Cube Computation
Data Warehouse , Data Cube Computation
 
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
 
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改
 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships
 
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
Indian Call Girls in Abu Dhabi O5286O24O8 Call Girls in Abu Dhabi By Independ...
 
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service BhilaiLow Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
Low Rate Call Girls Bhilai Anika 8250192130 Independent Escort Service Bhilai
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
 
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptxEMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
 
Spark3's new memory model/management
Spark3's new memory model/managementSpark3's new memory model/management
Spark3's new memory model/management
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdf
 

[W]-REFERENCIA-Paul Waltman (Auth.) - A Second Course in Elementary Differential Equations-Elsevier Inc, Academic Press (1986).pdf

  • 1. A Second Gourse in Elementary Differential Equations Paul Waltman Emory University, Atlanta Academic Press, Inc. (Harcourt Brace Jovanovich, Publishers) Orlando San Diego San Francisco New York London Toronto Montreal Sydney Tokyo Säo Paulo
  • 2. Copyright © 1986 by Academic Press, Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher. Academic Press, Inc. Orlando, Florida 32887 United Kingdom Edition Published by Academic Press, Inc. (London) Ltd., 24/28 Oval Road, London NW1 7DX ISBN: 0-12-733910-8 Library of Congress Catalog Card Number: 85-70251 Printed in the United States of America
  • 3. TO RUTH For her patience and understanding
  • 4. Preface The once-standard course in elementary differential equations has undergone a considerable transition in the past two decades. In an effort to bring the subject to a wider audience, a gentle introduction to differential equations is frequently incorpo­ rated into the basic calculus course—particularly in the last semester of a calculus- for-engineers course-or given separately, but strictly as a problem-solving course. With this approach, students frequently learn how to solve constant coefficient scalar differential equations and little else. If systems of differential equations are treated, the treatment is usually incomplete. Laplace transform techniques, series solutions, or some existence theorems are sometimes included in such a course, but seldom is any of theflavorof modern differential equations imparted to the student. Graduates of such a course are often ill-equipped to take the next step in their education, which all too frequently is a graduate-level differential equations course with considerable analytical prerequisites. Even when a "good" elementary course in ordinary differential equations is offered, the student who needs to know more sophisticated topics mayfindhis or her way to further study blocked by the need to first study real and complex variables, functional analysis, and so on. Yet many of the more modern topics can be taught along with a marginal amount of the necessary analysis. This book is for a course directed toward students who need to know more about ordinary differential equa­ tions; who, perhaps as mathematics or physics students, have not yet had the time to study sufficient analysis to be able to master an honest course, or who, perhaps as biologists, engineers, economists, and so on, cannot take the necessary time to master the prerequisites for a graduate course in mathematics but who need to know more of the subject. This book, then, is a second course in (elementary) ordinary differential equations, a course that may be taken by those with minimal—but not zero— preparation in ordinary differential equations, and yet which treats some topics from a ix
they will see in their mathematical education. Thus, whenever possible, basic real analysis, as well as differential equations, is taught. The concepts of analysis are brought into play wherever possible; ideas such as norms, metric spaces, completeness, inner products, asymptotic behavior, and so on, are introduced in the natural setting of a need to solve, or to set, a problem in differential equations. For example, metric spaces could be avoided in the proof of the existence theorem, but they are deliberately used, because the idea of an abstract space is important in much of applied mathematics and it can be introduced easily and naturally in the context of very simple operators.

The book has applications as well. However, rather than tossing in trivial applications of dubious practical use, few, but detailed, applications are treated, with some attention given to the mathematical modeling that leads to the equation. By and large, however, the book is about applicable, rather than truly applied, mathematics.

Chapter 1 gives a thorough treatment of linear systems of differential equations. Necessary concepts from linear algebra are reviewed and the basic theory is presented. The constant coefficient case is presented in detail, and all cases are treated, even that of repeated eigenvalues. The novelty here is the treatment of the case of the nondiagonalizable coefficient matrix without the use of the Jordan form. I have had good results substituting the Putzer algorithm; it gives a computational procedure that students can master. This part of the course, which goes rather quickly, is computational and helps pull students with different backgrounds to the same level. Topics in stability of systems and the case of periodic coefficients are included for a more able class.

Chapter 2 is the heart of the course, where the ideas of stability and qualitative behavior are developed. Two-dimensional linear systems form the starting point, with the phase plane concepts. Polar coordinate techniques play a role here. Liapunov stability and elementary ideas from dynamical systems are treated. Limit cycles appear here as an example of a truly nonlinear phenomenon. In a real sense, this is "applied topology," and some topological ideas are gently introduced. The Poincaré-Bendixson theorem is stated and its significance discussed. Of course, proofs at this stage are too difficult to present; so, if the first section can be described as computational, then this section is geometrical and intuitive.

Chapter 3 presents existence and uniqueness theorems in a rigorous way. Not all students will profit from this, but many can. The ideas of metric spaces and operators defined on them are important in applied mathematics and appear here in an elementary and natural way. Moreover, the contraction mapping theorem finds application in
many parts of mathematics, and this seems to be a good place for the student to learn about it. To contrast this chapter with the previous ones, the approach here is analytical. Although everything up to this point pertained to initial value problems, a simple boundary value problem appears in this chapter as an application of the contraction mapping technique.

Chapter 4 treats linear boundary value problems, particularly the Sturm-Liouville problem, in one of the traditional ways: polar coordinate transformations. Ideas of inner products and orthogonality appear here in developing the rudiments of eigenfunction expansions. A nonlinear eigenvalue problem, a bifurcation problem, also appears, just to emphasize the effect of nonlinearities.

The book contains more material than can be covered in a semester. The instructor can pick and choose among the topics. Students in a course at this level will differ in ability, and the material can be adjusted for this. I have usually taught Chapter 1 through the Putzer algorithm, skipped ahead and taught all of Chapter 2, presented the scalar existence theory in Chapter 3, and spent the remaining time in Chapter 4 (never completing it). Other routes through the material are possible, however, and the chapters are relatively independent. For example, Chapter 1 can be skipped entirely if students have a good background in systems (although a brief discussion of norms in R^n would help). I have made an effort to alert the reader when understanding of previous material is critical.

Professor John Baxley of Wake Forest University, Professor Juan Gatica of the University of Iowa, and Dr. Gail Wolkowicz of Emory University read the entire manuscript and made detailed comments. The presentation has benefited considerably from their many suggestions, and I wish to acknowledge their contributions and express my gratitude for their efforts. Several others (Gerald Armstrong of Brigham Young University, N. Cac of the University of Iowa, T. Y. Chow of California State University, Sacramento, Donald Smith of the University of California, San Diego, Joseph So of Emory University, and Monty Straus of Texas Tech University) read portions of the manuscript and made constructive comments. I gratefully acknowledge their assistance and express my appreciation for their efforts.
Chapter 1 / Systems of Linear Differential Equations

1. Introduction

Many problems in physics, biology, and engineering involve rates of change dependent on the interaction of the basic elements (particles, populations, charges, etc.) on each other. This interaction is frequently expressed as a system of ordinary differential equations, a system of the form

    y_1' = f_1(t, y_1, y_2, ..., y_n)
    y_2' = f_2(t, y_1, y_2, ..., y_n)
    ...
    y_n' = f_n(t, y_1, y_2, ..., y_n).     (1.1)

Here the functions f_i(t, y_1, ..., y_n) take values in R (the real numbers) and are defined on a set in R^{n+1} (R x R x ... x R, n + 1 times). We seek a set of n unknown functions (y_1(t), y_2(t), ..., y_n(t)) defined on a real interval I such that, when these functions are inserted into the equations above, an identity results for every t in I. In addition, certain other constraints (initial conditions or boundary conditions) may need to be satisfied. In this chapter we will be concerned with the special case that the functions f_i are linear in the variables y_i, i = 1, ..., n.
The problem takes the form

    y_1' = a_11(t) y_1 + a_12(t) y_2 + ... + a_1n(t) y_n + e_1(t)
    y_2' = a_21(t) y_1 + a_22(t) y_2 + ... + a_2n(t) y_n + e_2(t)     (1.2)
    ...
    y_n' = a_n1(t) y_1 + a_n2(t) y_2 + ... + a_nn(t) y_n + e_n(t).

In many applications the equations occur naturally in this form, or (1.2) may be an approximation to the nonlinear system (1.1). Moreover, some problems already familiar to the reader may be put into the form (1.2). For example, solving the second-order linear differential equation

    y'' + a(t) y' + b(t) y = e(t)     (1.3)

is equivalent to solving a system of the form

    y_1' = y_2     (1.4)
    y_2' = -b(t) y_1 - a(t) y_2 + e(t).

To see the equivalence, suppose (y_1(t), y_2(t)) is a solution of (1.4). Then y_1(t) is a solution of (1.3), since

    y_1'' = (y_1')' = y_2' = -b(t) y_1 - a(t) y_1' + e(t),

which is (1.3). On the other hand, if y(t) is a solution of (1.3), then define y_1(t) = y(t) and y_2(t) = y'(t). This yields a solution of (1.4). Equation (1.3) is called a scalar equation; (1.4) is called a system.

The study of systems of the form (1.2) is made simpler by the use of matrix algebra. In the next section the basic notation, conventions, and theorems from linear algebra that are needed for the study of differential equations are collected for the convenience of the reader. Few proofs are given, and the reader meeting these concepts for the first time may wish to consult a textbook on linear algebra for an expanded development.
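The scalar-to-system reduction above is also how such equations are handled numerically. The following is a minimal sketch, assuming NumPy and SciPy are available; the coefficient functions a, b, and e are arbitrary illustrative choices, not taken from the text:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative coefficients for y'' + a(t) y' + b(t) y = e(t)
    a = lambda t: 0.1
    b = lambda t: 1.0
    e = lambda t: np.sin(t)

    def rhs(t, y):
        # y[0] = y_1 = y and y[1] = y_2 = y', the substitution used in (1.4)
        return [y[1], -b(t) * y[0] - a(t) * y[1] + e(t)]

    # Integrate the first-order system; y(t) is recovered as the first component.
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])
    print(sol.y[0, -1])   # approximate value of y(10)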
2. Some Elementary Matrix Algebra

If m and n are positive integers, an m x n matrix A is defined to be a set of mn numbers a_ij, 1 <= i <= m, 1 <= j <= n. (This is properly written a_{i,j}, but the comma is omitted.) For notational purposes we write

    A = [ a_11  a_12  ...  a_1n ]
        [ a_21  a_22  ...  a_2n ]
        [  ...                  ]
        [ a_m1  a_m2  ...  a_mn ];

that is, a_ij occupies a position in the i-th row and the j-th column of A. It is convenient to write A = [a_ij] to save space when the specific entries are not important or when they share a common property that can be illustrated by the bracket. For example,

    A = [i + j],   i = 1, 2,   j = 1, 2,

denotes the matrix

    A = [ 2  3 ]
        [ 3  4 ].

First, we will develop an algebra for these matrices. Then we will consider matrices with functions as entries and define continuous, differentiable, and integrable matrices.

Two m x n matrices A = [a_ij], B = [b_ij] are defined to be equal, written A = B, if a_ij = b_ij for every i and j. Given m x n matrices A and B, we define their sum, A + B, by

    A + B = [a_ij + b_ij].

For example, if

    A = [ 1  0  1 ]    B = [ 4  -2  -1 ]
        [ 2  2  7 ],       [ 6   2  -4 ],

then

    A + B = [ 1 + 4   0 - 2   1 - 1 ] = [ 5  -2  0 ]
            [ 2 + 6   2 + 2   7 - 4 ]   [ 8   4  3 ].
From this definition of addition, it is obvious that if A, B, C are m x n matrices, then

    A + B = B + A   and   A + (B + C) = (A + B) + C,

since numbers have these properties. We define multiplication of a matrix A by the number λ (called scalar multiplication) by

    λA = [λ a_ij].

Thus, for example, -A = (-1)A = [-a_ij]; or, if A is as in the example above and λ = 2, then

    2A = [ 2  0   2 ]
         [ 4  4  14 ].

If 0 denotes the matrix with all zero entries, that is, a_ij = 0 for all i and j, then

    A - A = 0   and   A + 0 = A = 0 + A.

If A is an m x p matrix and B is a p x n matrix, the product of A and B, written AB, is defined to be the m x n matrix whose entries are given by

    AB = [c_ij],   c_ij = Σ_{k=1}^{p} a_ik b_kj,   1 <= i <= m,   1 <= j <= n.

The ij-th element of the product is the sum of the products of the elements in the i-th row of A with the corresponding elements in the j-th column of B. A simple example illustrates this definition.
If

    A = [ 0  1 ]    B = [ 4  5  6 ]
        [ 2  3 ],       [ 7  8  9 ],

then

    AB = [ 0·4 + 1·7   0·5 + 1·8   0·6 + 1·9 ] = [  7   8   9 ]
         [ 2·4 + 3·7   2·5 + 3·8   2·6 + 3·9 ]   [ 29  34  39 ].

Note first that BA is not defined, since A is 2 x 2 and B is 2 x 3. The product is defined only when the number of columns of A is equal to the number of rows of B. If n = m, the matrix is said to be a square matrix. If A and B are square matrices of the same size, then both AB and BA are defined, but these need not be the same matrix. For example, if

    A = [ 0  1 ]    B = [ 2  3 ]
        [ 2  3 ],       [ 4  5 ],

then

    AB = [ 0·2 + 1·4   0·3 + 1·5 ] = [  4   5 ]
         [ 2·2 + 3·4   2·3 + 3·5 ]   [ 16  21 ],

while

    BA = [ 2·0 + 3·2   2·1 + 3·3 ] = [  6  11 ]
         [ 4·0 + 5·2   4·1 + 5·3 ]   [ 10  19 ],

and AB ≠ BA.
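The defining formula c_ij = Σ_k a_ik b_kj translates directly into code. Here is a minimal sketch in plain Python (no libraries assumed) that reproduces the 2 x 2 example above:

    def matmul(A, B):
        # c_ij = sum over k of a_ik * b_kj; requires columns(A) == rows(B)
        m, p, n = len(A), len(B), len(B[0])
        assert all(len(row) == p for row in A), "columns of A must equal rows of B"
        return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
                for i in range(m)]

    A = [[0, 1], [2, 3]]
    B = [[2, 3], [4, 5]]
    print(matmul(A, B))   # [[4, 5], [16, 21]]
    print(matmul(B, A))   # [[6, 11], [10, 19]], so AB != BA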
The matrix B = [b_ij], where b_ij = a_ji, is called the transpose of A = [a_ij] and is denoted by A^T. If

    A = [  1   0  1 ]
        [  4   2  5 ]
        [ -1  -2  3 ],

then

    A^T = [ 1   4  -1 ]
          [ 0   2  -2 ]
          [ 1   5   3 ].

The rows and columns have been interchanged.

THEOREM 2.1  The matrix product has the properties

    i.   A(BC) = (AB)C
    ii.  α(AB) = (αA)B = A(αB)
    iii. (A + B)C = AC + BC
    iv.  C(A + B) = CA + CB
    v.   (AB)^T = B^T A^T,

where A, B, C are matrices, α is a real or complex number, and the above products are defined.

The proofs of these properties are exercises in manipulating subscripts and are omitted.

A matrix with n = 1 (i.e., an n x 1 matrix)

    [ a_11 ]
    [ a_21 ]
    [  ... ]
    [ a_n1 ]

is called an n-dimensional (column) vector. The matrix I (denoted I_p if the dimension p x p is important), called the identity matrix, is defined by

    I = [ 1  0  ...  0 ]
        [ 0  1  ...  0 ]
        [ ...          ]
        [ 0  0  ...  1 ];
that is, a_ii = 1 and a_ij = 0, i ≠ j. If A is an m x n matrix and I_n is the n x n identity matrix, then, from the definition of multiplication, it follows that A I_n = A. If I_m is the m x m identity matrix, I_m A = A. Our interest is principally in vectors and square matrices.

Let A be an n x n matrix. A real number called the determinant of A is associated with each square matrix. The definition of this real number is inductive on n. If n = 1, det A = a_11. Suppose that the determinant has been defined for n = k >= 1. Given an element a_ij of a matrix A, M_ij, the minor of a_ij, is the matrix obtained from A by deleting the i-th row and the j-th column. A_ij, the cofactor of a_ij, is defined by

    A_ij = (-1)^{i+j} det M_ij.

For n = k + 1, we define

    det A = a_11 A_11 + a_21 A_21 + ... + a_n1 A_n1.

The following examples clarify this definition. If

    A = [ a_11  a_12 ]
        [ a_21  a_22 ],

then det A = a_11 a_22 - a_21 a_12. If

    A = [ a_11  a_12  a_13 ]
        [ a_21  a_22  a_23 ]
        [ a_31  a_32  a_33 ],

then

    det A = a_11 det [ a_22  a_23 ] - a_21 det [ a_12  a_13 ] + a_31 det [ a_12  a_13 ]
                     [ a_32  a_33 ]            [ a_32  a_33 ]            [ a_22  a_23 ]

          = a_11 a_22 a_33 - a_11 a_23 a_32 - a_21 a_12 a_33 + a_21 a_13 a_32 + a_31 a_12 a_23 - a_31 a_13 a_22.
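The inductive definition can be implemented verbatim: expand along the first column, with minors obtained by deleting one row and the first column. A short sketch (plain Python; exponential running time, so purely for illustration):

    def minor(A, i, j):
        # the matrix obtained from A by deleting row i and column j (0-indexed)
        return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

    def det(A):
        n = len(A)
        if n == 1:
            return A[0][0]
        # expansion along the first column: det A = sum_i a_i1 A_i1,
        # with cofactor sign (-1)^(i+j); j = 0 here in 0-indexed terms
        return sum(A[i][0] * (-1) ** i * det(minor(A, i, 0)) for i in range(n))

    print(det([[1, 2], [3, 4]]))                      # 1*4 - 3*2 = -2
    print(det([[1, 0, 1], [4, 2, 5], [-1, -2, 3]]))   # 10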
By definition, a determinant, for n > 1, is the sum of all of the elements of the first column multiplied by their cofactors. Actually, the first column need not necessarily be used to find a determinant; in fact, the following is also true.

THEOREM 2.2

    det A = Σ_{i=1}^{n} a_ij A_ij = Σ_{j=1}^{n} a_ij A_ij.

The content of this theorem is that in the preceding inductive definition of a determinant, the first column can be replaced by an arbitrary column or an arbitrary row. We will accept this theorem without proof. An important property of expansions of the type in the theorem is that

    Σ_{i=1}^{n} a_ij A_ik = 0   if j ≠ k,   and   Σ_{j=1}^{n} a_ij A_kj = 0   if i ≠ k.

That is, if the cofactors are taken from a different column or a different row, the resulting sum is zero.

An important property of determinants (which also will not be proved here) is given in the following.

THEOREM 2.3  If A and B are n x n matrices,

    det(AB) = (det A)(det B);

that is, the determinant of the product of two matrices is the product of the determinants.

A square matrix is said to be singular if det A = 0 and nonsingular if det A ≠ 0. A matrix B is called the inverse of the square matrix A if AB = I. Suppose det A ≠ 0. We define B by

    B = (1 / det A) [A_ij]^T;     (2.1)

that is, B is the transpose of the matrix obtained by replacing each element by its cofactor and then dividing by the scalar det A. Then

    AB = (1 / det A) [ Σ_k a_ik A_jk ].

If i = j, then the entry is det A; if i ≠ j, the product is zero, as noted previously.
Hence, the matrix B = (1 / det A) [ Σ_k a_ik A_jk ] looks like

    (1 / det A) [ det A    0     ...    0   ]
                [   0    det A   ...    0   ]
                [  ...                 ...  ]
                [   0      0     ...  det A ].

Therefore, AB = I, so B is the inverse of A. This matrix B is usually written A^{-1}. Since A A^{-1} = I, then

    1 = det(A A^{-1}) = (det A)(det A^{-1}),

and it cannot be the case that A has an inverse if det A = 0. The arguments above can be used to show the following.

THEOREM 2.4  A necessary and sufficient condition that A^{-1} exists is that det A ≠ 0.

It can also be shown that A^{-1} A = I, that is, that A and A^{-1} commute. Further, A^{-1} is unique. To see this, suppose there exists a matrix X such that AX = I. Then, multiplying both sides of this equation on the left by A^{-1} yields

    A^{-1} A X = A^{-1} I = A^{-1},   or   X = A^{-1}.

Similarly, if XA = I, then X A A^{-1} = A^{-1}, or X = A^{-1}. If A_1 and A_2 are nonsingular, then (A_1 A_2)^{-1} = A_2^{-1} A_1^{-1}, since A_1 A_2 A_2^{-1} A_1^{-1} = I.
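Formula (2.1) can be turned into code directly. The sketch below uses NumPy only for the determinants; the helper name is invented for illustration, and this is not an efficient way to invert a matrix:

    import numpy as np

    def inverse_by_cofactors(A):
        # B = (1/det A) [A_ij]^T, with A_ij = (-1)^(i+j) det M_ij, as in (2.1)
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        d = np.linalg.det(A)
        cof = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                M = np.delete(np.delete(A, i, axis=0), j, axis=1)  # minor M_ij
                cof[i, j] = (-1) ** (i + j) * np.linalg.det(M)
        return cof.T / d

    A = [[0, 1], [2, 3]]
    print(inverse_by_cofactors(A))       # [[-1.5  0.5], [ 1.   0. ]]
    print(inverse_by_cofactors(A) @ A)   # the identity, up to rounding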
To fit initial conditions for solutions of systems of differential equations, it will be necessary to consider systems of linear algebraic equations in the variables x_1, ..., x_n of the form

    a_11 x_1 + a_12 x_2 + ... + a_1n x_n = c_1
    a_21 x_1 + a_22 x_2 + ... + a_2n x_n = c_2
    ...
    a_n1 x_1 + a_n2 x_2 + ... + a_nn x_n = c_n.

This can be written

    Ax = c,     (2.2)

where A = [a_ij] is an n x n matrix and x = [x_i] and c = [c_i] are vectors. If det A ≠ 0, then x = A^{-1} c is a solution, since

    Ax = A(A^{-1} c) = A A^{-1} c = c.

It is not difficult to see that this is the only solution. Suppose x and y are both solution vectors of (2.2). Since Ax = c and Ay = c,

    Ax - Ay = c - c = 0,

the null vector (all entries zero). The distributive law (x and y are n x 1 matrices) says that

    A(x - y) = 0.

Since A is invertible and A^{-1} 0 = 0, we have

    A^{-1} A (x - y) = A^{-1} 0 = 0;

hence x - y = 0, or x = y. Thus there is only one solution. The converse of this result is also true.

THEOREM 2.5  A necessary and sufficient condition for the system (2.2) to have a unique solution is that A be nonsingular.

In particular, note that if the vector c is null, that is, has all its components zero, then x = 0 is the only solution vector if A is nonsingular. (If A is singular, then x = 0 is one solution and there must be another solution, since solutions are not unique.)
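In computations one solves Ax = c directly rather than forming A^{-1} first. A minimal NumPy sketch of the unique-solution case of Theorem 2.5 (the matrix and right-hand side are arbitrary illustrative choices):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])        # nonsingular: det A = 5
    c = np.array([3.0, 5.0])

    x = np.linalg.solve(A, c)         # the unique solution, since det A != 0
    print(x)                          # [0.8 1.4]
    print(np.allclose(A @ x, c))      # True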
A finite set of n-dimensional vectors x_1, ..., x_k, that is, n x 1 matrices, is said to be linearly dependent if there exist constants c_1, ..., c_k, not all zero, such that

    c_1 x_1 + c_2 x_2 + ... + c_k x_k = 0.

A set of vectors that is not linearly dependent is said to be linearly independent. An expression of the form c_1 x_1 + c_2 x_2 + ... + c_k x_k is said to be a linear combination of the vectors x_1, ..., x_k. The following theorem offers a way to check whether a matrix is singular.

THEOREM 2.6  A necessary and sufficient condition for a matrix to be nonsingular is that its columns are linearly independent vectors.

Proof. Let A be a matrix and let x_1, ..., x_n denote the n-dimensional column vectors of A. We inquire whether there exist real numbers c_1, ..., c_n such that

    c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0.     (2.3)

If we let C be the vector

    C = [ c_1 ]
        [ c_2 ]
        [ ... ]
        [ c_n ],

then (2.3) may be written

    AC = 0,     (2.4)

since A = [x_1, ..., x_n]. The equation AC = 0 has a nontrivial solution (Theorem 2.5) if and only if A is singular. Thus, if A is nonsingular, the only solution of (2.3) is c_1 = c_2 = ... = c_n = 0 and the vectors x_1, ..., x_n are linearly independent. If A is singular, there exists a nontrivial solution C of (2.4) and x_1, ..., x_n are linearly dependent, using the components of C as the constants in the definition of linear dependence.
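Theorem 2.6 gives a practical test: stack the vectors as the columns of a matrix and check whether it is singular. A small sketch (NumPy; a determinant test is used to mirror the theorem, though in floating point a rank test is more robust):

    import numpy as np

    def independent(*vectors):
        # Columns of A are the given vectors; by Theorem 2.6 they are
        # linearly independent exactly when A is nonsingular.
        A = np.column_stack(vectors)
        return not np.isclose(np.linalg.det(A), 0.0)

    print(independent([1, 0, 0], [0, 1, 0], [0, 0, 1]))   # True
    print(independent([1, 2, 3], [2, 4, 6], [0, 1, 0]))   # False: 2nd = 2 * 1st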
Finally, it will be necessary to consider matrices whose elements are functions. We can think of a matrix A(t) as a mapping from the set of real numbers into the set of n x n matrices. It is simpler, however, to think of such a matrix as n² functions labeled a_ij(t) and make our definitions in terms of the entries rather than in terms of the mapping. Proceeding this way, a matrix of functions A(t) = [a_ij(t)] is said to be

1. continuous at a point t_0 if each a_ij(t) is continuous at t_0,
2. differentiable at a point t_0 if each a_ij(t) is differentiable at t_0,
3. integrable over [a, b] if each a_ij(t) is integrable over [a, b].

If A(t) is differentiable, define A'(t) = [a_ij'(t)]. From the definition of the product of two matrices, we have at once that if A(t) and B(t) are differentiable and the product A(t)B(t) is defined, then A(t)B(t) is differentiable. Further,

    (AB)' = A'B + AB'.     (2.5)

To see this, note that the ij-th entry of AB is Σ_k a_ik(t) b_kj(t), so

    (Σ_k a_ik b_kj)' = Σ_k a_ik' b_kj + Σ_k a_ik b_kj',

which is the ij-th entry of A'B + AB'. This fact will be very important in our development of a theory for systems of differential equations. Note that the order of multiplication is important. Similarly, we define

    ∫_a^b A(s) ds = [ ∫_a^b a_ij(s) ds ],

and the usual rules for integration apply. For example,

    ∫_0^t [A(s) + B(s)] ds = ∫_0^t A(s) ds + ∫_0^t B(s) ds.
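The order-preserving product rule (2.5) is easy to check symbolically. A SymPy sketch (the two matrices are arbitrary illustrative choices; the first is the rotation-type matrix that also appears in the exercises below):

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[sp.cos(t), sp.sin(t)],
                   [-sp.sin(t), sp.cos(t)]])
    B = sp.Matrix([[t, t**2],
                   [1, sp.exp(t)]])

    lhs = (A * B).diff(t)
    rhs = A.diff(t) * B + A * B.diff(t)   # (AB)' = A'B + AB'; order matters
    print(sp.simplify(lhs - rhs))         # the zero matrix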
EXERCISES

1. Find A + B, AB, and BA when

    (a) A = [ 1  1 ]    B = [ -1  2 ]
            [ 2  1 ],       [  1  0 ],

    (b) A = [ 1  0 ]    B = [ 2  -1 ]
            [ 1  1 ],       [ 0   1 ],

    (c) A = [ 1  2  3 ]    B = [ -1  -2  -3 ]
            [ 0  1  0 ]        [  1   2   3 ]
            [ 4  5  6 ],       [  0   3  -4 ],

    (d) A = [ 2  0  1 ]    B = [  2  1  3 ]
            [ 0 -1  4 ]        [ -1  4  2 ]
            [ 2  2  0 ],       [  6  1  0 ],

    (e) A = [ 1  1  0  1 ]    B = [ 2  0  0  0 ]
            [ 0  0  0  0 ]        [ 3  1  0  0 ]
            [ 0  0  1  0 ]        [ 3  0  0  0 ]
            [ 1  0  0  2 ],       [ 2  0  0  0 ].

2. A matrix is said to be diagonal if a_ij = 0 when i ≠ j. Show that the product of two diagonal matrices is diagonal.

3. Show that A(BC) = (AB)C by using the definition of the product for matrices.

4. If α is a scalar and A and B are matrices, show that α(AB) = (αA)B = A(αB), that is, scalars (numbers) "factor through" matrix multiplication.

5. Establish the distributive laws for matrix multiplication:

    (A + B)C = AC + BC,   C(A + B) = CA + CB.

6. Prove that (AB)^T = B^T A^T.

7. Construct A^{-1}, if

    (a) A = [ 2  1 ]    (b) A = [ 2  0  0 ]
            [ 0  2 ],           [ 0  3  0 ]
                                [ 0  0  4 ],

    (c) A = [ 2  0  0 ]    (d) A = [ 1  0  1 ]
            [ a  1  0 ]            [ 0  1  1 ]
            [ 0  3  1 ],           [ 1  0  0 ],

    (e) A = [ 2  1  0  0 ]
            [ 0  2  1  0 ]
            [ 0  0  2  0 ]
            [ 0  0  0  2 ].

8. Find solutions of the system when A is as in Exercise 7(b), (c), and (d).
9. Determine whether the following sets of vectors are linearly independent or linearly dependent.

    (a) [ 1 ]   [ 0 ]   [ 0 ]        (b) [ 1 ]   [ 1 ]   [ 0 ]
        [ 0 ],  [ 1 ],  [ 0 ]            [ 1 ],  [ 0 ],  [ 1 ]
        [ 0 ]   [ 0 ]   [ 1 ]            [ 0 ]   [ 1 ]   [ 1 ]

10. Find A'(t) and ∫_0^t A(s) ds if

    (a) A(t) = [  cos t   sin t ]       (b) A(t) = [ t   t² ]
               [ -sin t   cos t ],                 [ 1  e^t ].

11. Verify equation (2.5) where A(t) is given by Exercise 10(a) and B(t) by Exercise 10(b).

3. The Structure of Solutions of Homogeneous Linear Systems

Let x represent an n-dimensional vector, let A be an n x n matrix of continuous functions defined on an interval I, and let e(t) be an n-vector of continuous functions defined on I. The system (1.2), using matrix notation, can be written

    x' = A(t) x + e(t).     (3.1)

A solution of (3.1) on I is a differentiable vector function φ(t) such that

    φ'(t) = A(t) φ(t) + e(t)

for every t in I. If x_0 is a constant vector and t_0 ∈ I, then the initial value problem for (3.1) is to find a solution of (3.1) that satisfies, in addition,

    φ(t_0) = x_0.     (3.2)

We state the basic existence theorem, which is a special case of a theorem to be proved in Chapter 3.
THEOREM 3.1  Let A(t) be a continuous n x n matrix defined on an interval I and let e(t) be a continuous n-vector defined on I. For every constant n-vector x_0 and every t_0 ∈ I, there exists a unique differentiable vector φ(t) defined on I such that

    φ'(t) = A(t) φ(t) + e(t),   t ∈ I,
    φ(t_0) = x_0.

For the remainder of this section we consider the case e(t) = 0, called the homogeneous case. (The quantity e(t) sometimes represents an external force, so this case is also called the unforced case.) Here we attempt to develop a structure similar to that for scalar equations.

It will be convenient to think of Equation (3.1) in a slightly different way. Let 𝒜 be one set of functions and ℬ another. Suppose that for each element x in the set 𝒜 we associate a unique element of the set ℬ, called Tx. T is a mapping from the set 𝒜 into the set ℬ, and symbolically we write T: 𝒜 → ℬ. The mapping T is also called an operator; 𝒜 is called the domain of T, and the set of all y such that y = Tx is called the range of T. (Sometimes it is convenient to indicate a larger set, a set containing the range, in the symbolic definition.) For example, let 𝒜 be the set of all continuous n-vectors on [0, 1]. Define Tx = y, x ∈ 𝒜, by

    y(t) = ∫_0^t x(s) ds.

The set ℬ, in this case, could be the set of continuous vectors defined on [0, 1] that have a continuous derivative on (0, 1). As another example, let 𝒜 be as above and let Ω be an n x n constant matrix; then an operator can be defined by y = Ωx. All of our sets will have the property that if α is a number and x and y are in the set, then αx and x + y are in the set.

Now let x(t) be a continuously differentiable vector. Define an operator L on the set of all such functions by

    L[x] = x' - Ax,

where A is an n x n continuous matrix. L maps continuously differentiable functions into continuous ones, and the solutions of (3.1) are exactly the functions that are mapped by L onto the function e(t), or, in the homogeneous case, onto the constant function that is everywhere zero. An operator T is said to be linear if for any two elements x, y in its domain, and any two numbers (scalars) α and β,

    T(αx + βy) = α T(x) + β T(y).
THEOREM 3.2  The operator L is a linear operator.

Proof. Let x_1(t), x_2(t) be differentiable vector functions and let c_1, c_2 be real constants. Then

    L[c_1 x_1 + c_2 x_2](t) = (c_1 x_1(t) + c_2 x_2(t))' - A(t)(c_1 x_1(t) + c_2 x_2(t))
                            = c_1 x_1'(t) + c_2 x_2'(t) - c_1 A(t) x_1(t) - c_2 A(t) x_2(t)
                            = c_1 (x_1'(t) - A(t) x_1(t)) + c_2 (x_2'(t) - A(t) x_2(t))
                            = c_1 L[x_1](t) + c_2 L[x_2](t).

THEOREM 3.3  Every linear combination of solutions of

    L[x] = 0     (3.3)

is a solution of (3.3).

Proof. If x_1(t), x_2(t) are solutions of (3.3) and c_1 and c_2 are constants,

    L[c_1 x_1 + c_2 x_2] = c_1 L[x_1] + c_2 L[x_2] = 0,

since L[x_1] = 0 and L[x_2] = 0.

Suppose now that we have n solution vectors, x_i(t), of (3.3) defined on an interval I. Then we can form a matrix Φ whose columns are these solutions. We write

    Φ = [x_1, x_2, ..., x_n].

Since the elements of Φ are differentiable, we can compute Φ'. Now the i-th column of Φ' is x_i'(t) = A(t) x_i(t), so that

    Φ'(t) = [A(t) x_1(t), A(t) x_2(t), ..., A(t) x_n(t)] = A(t) [x_1(t), x_2(t), ..., x_n(t)] = A(t) Φ(t).

That is, Φ satisfies

    Φ'(t) = A(t) Φ(t).     (3.4)
Equation (3.4) is a shorthand method for writing n vector differential equations (n² scalar differential equations). For this reason, Theorem 3.1, the existence and uniqueness theorem, applies if we specify an initial condition Φ(t_0) = C, where C is a constant matrix. Using (3.4), we have the following useful fact.

If Φ is a matrix whose columns are solutions of (3.3) and c is a constant vector, then Φc is a solution of (3.3).

Proof. The proof is a straightforward computation:

    L[Φc] = (Φc)' - AΦc = Φ'c - AΦc = (Φ' - AΦ)c = 0,

since Φ satisfies (3.4).

If Φ(t) is a matrix that is nonsingular for each t and that satisfies the matrix differential equation (3.4), then Φ is said to be a fundamental matrix for the differential equation x' - Ax = 0. We are now equipped to prove the principal theorem of this section. The content of this theorem is that finding any fundamental matrix for (3.3) allows us to find all of the solutions of (3.3).

THEOREM 3.4  If Φ is a fundamental matrix for (3.3) on an interval I where A(t) is continuous, then every solution of (3.3) can be written Φc for an appropriate constant vector c.

Proof. Let x(t) be an arbitrary solution defined on an interval I, let t_0 ∈ I, and let Φ be a given fundamental matrix for (3.3). Now Φ(t_0) is a nonsingular constant matrix, so Φ^{-1}(t_0) exists (Theorem 2.4). Let c = Φ^{-1}(t_0) x(t_0). Since c is a vector, y(t) = Φ(t) c is a solution of (3.3). Furthermore,

    y(t_0) = Φ(t_0) c = Φ(t_0) (Φ^{-1}(t_0) x(t_0)) = x(t_0).

Since solutions of the initial value problem are unique (Theorem 3.1), it follows that x(t) = y(t) for t ∈ I, or that x(t) = Φ(t) c, as claimed.
The reader will recognize that Theorem 3.4 is a wholesale way of doing what was done in particular for scalar equations. For example, an arbitrary solution of a second-order scalar equation can be represented as a linear combination of two linearly independent solutions. The quantity y(t) = Φ(t) c represents the i-th component of y(t) as a linear combination of the i-th components of n given solutions; each row is in fact the same linear combination, and the components of c are the coefficients.

We illustrate Theorem 3.4 by comparing it with the theory for second-order scalar equations. Consider

    y_1' = y_2     (3.5)
    y_2' = -y_1.

This system is the system we obtain from

    y'' + y = 0.     (3.6)

By substitution in (3.5) it may be verified that the vectors

    [ sin(t) ]        [  cos(t) ]
    [ cos(t) ]  and   [ -sin(t) ]

are two solutions of (3.5). Let

    Φ = [ sin(t)   cos(t) ]
        [ cos(t)  -sin(t) ].

Since det Φ = -sin²(t) - cos²(t) = -1, Φ is nonsingular and hence is a fundamental matrix. By Theorem 3.4, any solution y(t) of (3.5) can be written

    y(t) = Φc = [ sin(t)   cos(t) ] [ c_1 ]
                [ cos(t)  -sin(t) ] [ c_2 ]

for appropriate c_1 and c_2. Note that two linearly independent solutions of (3.6) are φ_1 = sin(t), φ_2 = cos(t). The Wronskian of φ_1 and φ_2, W(φ_1, φ_2)(t), is the determinant of the fundamental matrix Φ. That is,

    det Φ(t_0) = W(φ_1, φ_2)(t_0).

This is the connecting link between the theory for second-order scalar equations and the theory for the equivalent systems.
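Checks like these are easy to automate. A SymPy sketch verifying that the matrix above satisfies Φ' = AΦ for the system (3.5), and that Φc is then a solution for any constant vector c:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[0, 1], [-1, 0]])           # system (3.5): y1' = y2, y2' = -y1
    Phi = sp.Matrix([[sp.sin(t), sp.cos(t)],
                     [sp.cos(t), -sp.sin(t)]])

    print(sp.simplify(Phi.diff(t) - A * Phi))  # zero matrix, so Phi' = A Phi
    print(sp.simplify(Phi.det()))              # -1, so Phi is a fundamental matrix

    c = sp.Matrix([1, 2])                      # an arbitrary constant vector
    y = Phi * c
    print(sp.simplify(y.diff(t) - A * y))      # zero vector: y = Phi c solves (3.5)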
Consider now the system (n = 3)

    y_1' = y_2 + 4 y_3
    y_2' = -y_1 - 2 y_3     (3.7)
    y_3' = y_3,

or

    y' = [  0   1   4 ]
         [ -1   0  -2 ] y.
         [  0   0   1 ]

It can be verified by substitution that three solutions are

    [ sin(t) ]    [  cos(t) ]          [    e^t ]
    [ cos(t) ],   [ -sin(t) ],   and   [ -3 e^t ].
    [   0    ]    [    0    ]          [    e^t ]

Then Φ is given by

    Φ(t) = [ sin(t)   cos(t)     e^t ]
           [ cos(t)  -sin(t)  -3 e^t ]     (3.8)
           [   0        0        e^t ].

A computation shows that

    det Φ(t) = e^t (-sin²(t) - cos²(t)) = -e^t ≠ 0,

so Φ is nonsingular and hence is a fundamental matrix. Every solution of (3.7) can be written Φ(t) c for an appropriate constant vector c. Suppose, for example, we desire to find the solution of (3.7) that satisfies the initial conditions

    y_1(0) = 1,   y_2(0) = 1,   y_3(0) = 1.

It is necessary to choose c = (c_1, c_2, c_3)^T such that
    Φ(0) c = [ 0  1   1 ] [ c_1 ]   [ 1 ]
             [ 1  0  -3 ] [ c_2 ] = [ 1 ]
             [ 0  0   1 ] [ c_3 ]   [ 1 ].

In equation form, this is

    c_2 + c_3 = 1
    c_1 - 3 c_3 = 1
    c_3 = 1.

(We could, of course, invert the matrix, but that would involve more labor.) Thus

    c = [ 4 ]
        [ 0 ]
        [ 1 ],

and

    Φ(t) c = [ 4 sin(t) + e^t   ]
             [ 4 cos(t) - 3 e^t ]
             [       e^t        ]

is the desired solution.

The principal issue, however, the question of the existence of a fundamental matrix (and how to find it), has been sidestepped. In the examples above the matrix was exhibited explicitly, but the question as to whether a fundamental matrix can always be found thus far remains open.
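Fitting the initial conditions is just the linear solve c = Φ(0)^{-1} x_0 of Section 2. Numerically, for the example above (a NumPy sketch):

    import numpy as np

    # Phi(0) for the fundamental matrix (3.8)
    Phi0 = np.array([[0.0, 1.0,  1.0],
                     [1.0, 0.0, -3.0],
                     [0.0, 0.0,  1.0]])
    x0 = np.array([1.0, 1.0, 1.0])    # the initial vector (y1(0), y2(0), y3(0))

    c = np.linalg.solve(Phi0, x0)
    print(c)   # [4. 0. 1.], matching the computation above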
The principal issue, however, the question of the existence of a fundamental matrix (and how to find it), has been sidestepped. In the examples above the matrix was exhibited explicitly, but the question as to whether a fundamental matrix can always be found thus far remains open. Let
$$\varphi^i(t) = \begin{bmatrix} \varphi_1^i(t) \\ \vdots \\ \varphi_n^i(t) \end{bmatrix}$$
be the solution of (3.3) satisfying $\varphi_j^i(t_0) = 0$, $j \neq i$, and $\varphi_i^i(t_0) = 1$; that is, $\varphi^i(t_0)$ is the vector with $1$ in the $i$th place and $0$ elsewhere. The set of solutions $\varphi^i(t)$, $i = 1, 2, \ldots, n$, which exists by Theorem 3.1, can be used to form a solution matrix
$$\Phi(t) = [\varphi^1(t), \varphi^2(t), \ldots, \varphi^n(t)].$$
Further, for $t = t_0$, $\det\Phi(t_0) = 1$. If the determinant should remain nonzero on an interval $I$, then $\Phi$ would be a fundamental matrix on this interval. This is indeed the case, as given by Theorem 3.5.

The trace of a matrix $A(t) = [a_{ij}(t)]$ (written $\operatorname{tr} A(t)$) is defined to be the sum of the diagonal elements, that is, $\operatorname{tr} A(t) = \sum_{i=1}^n a_{ii}(t)$. Note that $\operatorname{tr} A(t)$ is a scalar function.

THEOREM 3.5 (Abel's formula) Let $A(t)$ be an $n \times n$ matrix of continuous functions on $I = [a, b]$, and let $\Phi(t)$ be a matrix of differentiable functions such that $\Phi'(t) = A(t)\Phi(t)$. Then for $t, t_0 \in I$,
$$\det\Phi(t) = \det\Phi(t_0)\, e^{\int_{t_0}^t \operatorname{tr} A(s)\,ds}.$$

Since an exponential is never zero (note that this exponential is a real-valued function, not a matrix), Theorem 3.5 says that if $\Phi$ is a matrix whose columns are solutions of (3.3), then $\det\Phi(t)$ is either identically zero or never zero. Thus, to find a fundamental matrix, we need only find $n$ solutions that at some point $t_0$ are linearly independent vectors. We omit the proof of Theorem 3.5, although the exercises following this section indicate how it can be done. Combining the preceding arguments, we have

THEOREM 3.6 If $A(t)$ is continuous, there exists a fundamental matrix for the system (3.3).
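Abel's formula can be illustrated numerically on the example (3.7), where $\operatorname{tr} A = 1$ and we computed $\det\Phi(t) = -e^t$ by hand. A short Python sketch (again our own addition, using numpy):

```python
import numpy as np

# Abel's formula for system (3.7): tr A = 1, so det Phi(t) = det Phi(0) * e^t.
def Phi(t):
    return np.array([[np.sin(t),  np.cos(t),        np.exp(t)],
                     [np.cos(t), -np.sin(t), -3.0 * np.exp(t)],
                     [0.0,        0.0,              np.exp(t)]])

for t in [0.0, 0.5, 1.0, 2.0]:
    lhs = np.linalg.det(Phi(t))
    rhs = np.linalg.det(Phi(0.0)) * np.exp(t)   # exponential of the integral of the trace
    assert np.isclose(lhs, rhs)
print("Abel's formula holds at the test points")
```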
EXERCISES

1. If $\Phi(t)$ is a fundamental matrix for $x' = Ax$ and $C$ is a nonsingular constant matrix of the same dimension, show that $\Phi(t)C$ is a fundamental matrix. (Recall that $C' = 0$ if $C$ is a constant matrix.)
2. Show that if $\Phi(t)$ and $\Psi(t)$ are fundamental matrices for $x' = Ax$, then there is a constant, nonsingular matrix $C$ such that $\Phi(t)C = \Psi(t)$.
3. Verify that the matrix (3.8) is a fundamental matrix for (3.7). Illustrate Theorem 3.5 with the matrix (3.8).
4. Define an operator $T$ on the set of continuous functions on $[0, 1]$ by $(Tx)(t) = \int_0^t x(s)\,ds$, $t \in [0, 1]$. Show that $T$ is a linear operator. Can you define $T$ on a larger domain?
5. Show that multiplication of vectors in $R^n$ by a (fixed) matrix $A$ defines a linear operator.
6. Let $B(t) = [b_{ij}(t)]$ be a $2 \times 2$ matrix of differentiable functions. Compute $(\det B(t))'$ by first expanding $\det B(t)$ and then differentiating. Then show that
$$(\det B(t))' = \det\begin{bmatrix} b_{11}'(t) & b_{12}'(t) \\ b_{21}(t) & b_{22}(t) \end{bmatrix} + \det\begin{bmatrix} b_{11}(t) & b_{12}(t) \\ b_{21}'(t) & b_{22}'(t) \end{bmatrix}.$$
7. Let $\Phi(t) = [\varphi_{ij}(t)]$ be a $2 \times 2$ fundamental matrix for $x' = A(t)x$, where $A = [a_{ij}(t)]$. Show that
$$(\det\Phi)' = \det\begin{bmatrix} \sum_k a_{1k}\varphi_{k1} & \sum_k a_{1k}\varphi_{k2} \\ \varphi_{21} & \varphi_{22} \end{bmatrix} + \det\begin{bmatrix} \varphi_{11} & \varphi_{12} \\ \sum_k a_{2k}\varphi_{k1} & \sum_k a_{2k}\varphi_{k2} \end{bmatrix} = \left(\sum_i a_{ii}\right)\det\Phi.$$
8. Let $z(t) = \det\Phi(t)$, where $\Phi(t)$ is as given in Exercise 7. Use Exercise 7 to conclude that $z(t) = z(t_0)e^{\int_{t_0}^t \operatorname{tr} A(s)\,ds}$ and establish Theorem 3.5 for this special case.

4. Matrix Analysis and the Matrix Exponential

In Section 2 some of the basic ideas of matrices and their algebra were developed. Matrices were used in Section 3, but mostly for notational convenience; they were used to give a simple representation to complicated expressions. We now need to take a further step, to learn how to take a limit of a
sequence of matrices. The power of the simple idea of limit is familiar to every calculus student and lies at the heart of all of analysis. It is possible to carry over to matrices (and to more general settings) many of the ideas from elementary calculus. We limit our scope to one simple idea, convergence. We will use this notion to define a very useful matrix, which is the sum of an infinite series of matrices. This matrix, called the exponential of another matrix, has many (and fails to have many other) properties of the real exponential function. It is an example of one of the fundamental themes of mathematics, taking an idea in one setting and developing it in another, the process called generalization. As we shall see in the material that follows, the concept of the exponential of a matrix is a very useful generalization.

The reader is assumed to be familiar with limits of sequences and sums of series from calculus. The notion of making the absolute value (or modulus, in the complex case) of a quantity small is crucial. The first step on the way to the definition we need is to replace absolute value with a notion applicable to vectors and matrices. This concept is called the norm of a matrix or a vector. There are many ways to do this, and for some applications the clever choice of a norm is very important. For what we need here, any of the usual notions of norm would be satisfactory, so we choose one that makes the proofs easy. Define, for an $r \times r$ matrix $A$, a real number, called the norm of $A$ and written $\|A\|$, by
$$\|A\| = \sum_{i,j} |a_{ij}|.$$
If $A$ is a vector $(a_1, \ldots, a_r)^T$, define $\|A\| = \sum_{i=1}^r |a_i|$. The norm of a matrix has the following properties:

1. $\|A\| > 0$ if $A \neq 0$, and $\|0\| = 0$;
2. $\|cA\| = |c|\,\|A\|$;
3. $\|A + B\| \leq \|A\| + \|B\|$; and
4. $\|AB\| \leq \|A\|\,\|B\|$,

where $A$ and $B$ are $r \times r$ matrices and $c$ is a real or complex number. If $x$ is an $n$-vector, the definitions for norms of matrices and vectors are so related that $\|Ax\| \leq \|A\|\,\|x\|$. Note that the norm of a vector satisfies properties (1), (2), and (3). We also note that there are other "norms" for vectors and matrices.

The definitions that follow distinguish between the concept of convergence and the concept of converging to a limit. The distinction is not really needed at this point, but it will be important in Chapter 3, where these ideas must be extended further.
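The norm chosen here is simply the sum of the absolute values of the entries. A brief sketch in Python (our own illustration; the function name is ours) shows the properties on a concrete pair of matrices:

```python
import numpy as np

def norm(A):
    # The entrywise norm used in the text: the sum of |a_ij| over all entries.
    return np.sum(np.abs(A))

A = np.array([[1.0, -2.0], [0.5, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([1.0, -1.0])

assert norm(A + B) <= norm(A) + norm(B)   # property (3)
assert norm(A @ B) <= norm(A) * norm(B)   # property (4)
assert norm(A @ x) <= norm(A) * norm(x)   # compatibility of matrix and vector norms
print(norm(A))                            # -> 6.5
```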
A sequence of $r \times r$ matrices, $A_n$, is convergent (or is a Cauchy sequence) if for each $\varepsilon > 0$ there exists a positive integer $N$ such that if $m, n > N$, then $\|A_n - A_m\| < \varepsilon$. The definition of convergence of a sequence of matrices is exactly the same as the definition of convergence of a sequence of real numbers except that norm, $\|\cdot\|$, has replaced absolute value, $|\cdot|$. This is true of the definition of limit of a sequence of matrices as well. A matrix $A$ is said to be the limit of a sequence of matrices $A_n$ if for each $\varepsilon > 0$ there exists a positive integer $N$ such that if $n > N$, then $\|A - A_n\| < \varepsilon$.

THEOREM 4.1 Every convergent sequence of matrices $A_n$ has a limit.

Proof. The proof follows from the fact that every Cauchy sequence of real numbers has a limit and the fact that $|a_{ij}^n - a_{ij}^m| \leq \|A_n - A_m\|$, where $a_{ij}^p$ is the element in the $i$th row, $j$th column of the $p$th matrix in the sequence. Hence, if $A_n$ converges, so does $a_{ij}^n$. Let $\lim_{n\to\infty} a_{ij}^n = a_{ij}$, $1 \leq i \leq r$, $1 \leq j \leq r$, and let $A = [a_{ij}]$. For $\varepsilon > 0$, choose $N_{ij}$ such that $|a_{ij} - a_{ij}^n| < \varepsilon/r^2$ for $n > N_{ij}$, and let $N = \max_{i,j} N_{ij}$. Then
$$\|A - A_n\| = \sum_{i,j} |a_{ij} - a_{ij}^n| < r^2\,\frac{\varepsilon}{r^2} = \varepsilon \quad \text{for } n > N.$$

Given a sequence of matrices $A_n$, we can form another sequence (called the sequence of partial sums) by defining $S_n = A_1 + \cdots + A_n$. We denote the sequence $\{S_n\}$ by $\sum_{i=1}^\infty A_i$ and call $\sum_{i=1}^\infty A_i$ an infinite series. If $\lim_{n\to\infty} S_n = S$, then the series is said to converge and its sum is defined to be $S$. If $\{S_n\}$ does not converge, the series is said to diverge, and the sum is not defined. It is important to note that we can often show that a series converges without being able to find the limit. For example, we could investigate the (real) infinite series
$$\sum_{n=0}^\infty \frac{x^n}{n!},$$
or the sequence of partial sums
$$S_n = 1 + x + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!},$$
and deduce that it converges. Then we could define a new function by
$$e^x = \sum_{n=0}^\infty \frac{x^n}{n!}.$$
All of the common properties of the exponential can be deduced from this series. While this approach is not commonly used in calculus for the real exponential, it is often used to define the exponential of a complex number. We will take this approach to define the exponential of a matrix and then deduce some of its properties through the limiting process. The series of interest is
$$I + A + \frac{A^2}{2!} + \cdots. \tag{4.1}$$

THEOREM 4.2 The series (4.1) is convergent.

Proof. The $n$th partial sum is $S_n = \sum_{k=0}^n A^k/k!$, so
$$\|S_n - S_m\| = \left\|\sum_{k=m+1}^n \frac{A^k}{k!}\right\|,$$
where we have chosen the labeling so that $n > m$. By properties (3) and (4) of the norm, the last quantity is
$$\leq \sum_{k=m+1}^n \frac{\|A^k\|}{k!} \leq \sum_{k=m+1}^n \frac{\|A\|^k}{k!}.$$
Since $\|A\|$ is a real number, the right-hand side is a part of the convergent series of real numbers
$$\sum_{k=0}^\infty \frac{\|A\|^k}{k!}. \tag{4.2}$$
Hence, since (4.2) is convergent, if $\varepsilon > 0$, there is an $N$ such that for $n > m > N$,
$$\sum_{k=m+1}^n \frac{\|A\|^k}{k!} < \varepsilon.$$
This is sufficient to prove that $\{S_n\}$ is convergent. It has a limit, by Theorem 4.1. The sum of this series is denoted $e^A$; that is,
$$e^A = \sum_{k=0}^\infty \frac{A^k}{k!}. \tag{4.3}$$
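The partial sums of (4.1) can be computed directly, and for a modest number of terms they already agree with a library implementation of the matrix exponential. A sketch (our own, using numpy and scipy):

```python
import numpy as np
from scipy.linalg import expm

def exp_series(A, terms=30):
    # Partial sum S_n = sum_{k=0}^{n} A^k / k!  of the series (4.1).
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k    # builds A^k / k! incrementally
        S = S + term
    return S

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.max(np.abs(exp_series(A) - expm(A))))   # agreement to machine precision
```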
Similarly, for a real number $t$,
$$e^{At} = \sum_{k=0}^\infty \frac{A^k t^k}{k!}.$$
Since each entry of $e^{At}$ is defined as a convergent power series, it is differentiable (and hence continuous and integrable) and may be differentiated term by term. The $p$th term of the series is $A^p t^p/p!$. Hence, for $p \geq 1$ (for $p = 0$, $(d/dt)I = 0$; the derivative of a constant matrix is the null matrix), making use of (2.5),
$$\frac{d}{dt}\left(\frac{A^p t^p}{p!}\right) = \frac{A^p\,p\,t^{p-1}}{p!} = A\,\frac{A^{p-1}t^{p-1}}{(p-1)!}.$$
Hence,
$$(e^{At})' = \sum_{k=1}^\infty \frac{A^k t^{k-1}}{(k-1)!} = A\sum_{k=1}^\infty \frac{A^{k-1}t^{k-1}}{(k-1)!} = Ae^{At}.$$

THEOREM 4.3 $(e^{At})' = Ae^{At} = e^{At}A$.

A matrix $A$ is said to be similar to a matrix $B$ if there exists a nonsingular matrix $T$ such that $T^{-1}AT = B$. Similar matrices are related in ways that are useful to us in the study of differential equations. Here we note one such property, involving the exponential of a matrix of the form $TAT^{-1}$. Since
$$(TAT^{-1})^n = \underbrace{(TAT^{-1})(TAT^{-1})\cdots(TAT^{-1})}_{n \text{ times}} = TA^nT^{-1},$$
if $S_n$ is the $n$th partial sum of $e^{TAT^{-1}}$, that is,
$$S_n = \sum_{j=0}^n \frac{(TAT^{-1})^j}{j!},$$
then
$$S_n = \sum_{j=0}^n \frac{TA^jT^{-1}}{j!} = T\left(\sum_{j=0}^n \frac{A^j}{j!}\right)T^{-1}.$$
From this it follows that $\lim_{n\to\infty} S_n = Te^AT^{-1}$; that is,
$$e^{TAT^{-1}} = Te^AT^{-1}. \tag{4.4}$$
We need an additional fact about $e^A$, whose proof we defer to the exercises.

THEOREM 4.4 $\det e^M \neq 0$ for any matrix $M$; that is, $e^M$ is always nonsingular.

EXERCISES

1. Let $x = (x_1, x_2)^T$ be a vector in $R^2$. Describe (geometrically) the set of points $x$ such that $\|x\| = 1$ and such that $\|x\| \leq 1$.
2. If $A$ is an $n \times n$ matrix and $x$ is an $n$-vector, show that $\|Ax\| \leq \|A\|\,\|x\|$. (Hint: Use the definition of multiplication.)
3. Compute $e^A$ by summing (4.3) when $A$ is
$$\text{(a)}\ \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \qquad \text{(b)}\ \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{bmatrix}, \qquad \text{(c)}\ \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}.$$
4. Show that if
$$A = \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}, \quad \text{then} \quad e^A = \begin{bmatrix} e^{\lambda_1} & 0 & 0 \\ 0 & e^{\lambda_2} & 0 \\ 0 & 0 & e^{\lambda_3} \end{bmatrix}.$$
5. Let $A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}$. Compute $A^2$, $A^3$, and $A^4$.
6. Find $e^A$ where $A$ is as in Exercise 5.
7. If $A$ and $B$ are $n \times n$ matrices such that $AB = BA$, show that $e^Ae^B = e^{A+B}$.
8. Combine Exercises 4 and 7 and the definition (4.3) to find $e^A$ where $A$ is
$$\text{(a)}\ [\cdots], \qquad \text{(b)}\ \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \text{(c)}\ \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
9. Show that $(e^A)^{-1} = e^{-A}$. (Hint: See Exercise 7.)
10. Show that $\det e^M \neq 0$ for any $M$. (Hint: $e^Me^{-M} = I$, by Exercise 9.)
11. Let $A(t)$ be a continuous matrix defined on an interval $I$. Show that the scalar function $\|A(t)\|$ is a continuous function.
12. Let $v = (v_1, \ldots, v_n)^T$ be a vector in $R^n$. Let $N(v) = \max_i [|v_1|, |v_2|, \ldots, |v_n|]$. Show that $N(v)$ satisfies properties (1), (2), and (3) of the listed properties for the norm of a matrix.
13. Let $N(v)$ be as in Exercise 12. Show that $N(v) \leq \|v\| \leq nN(v)$.
14. Let $A(t)$ be a continuous matrix defined on an interval $[a, b]$. Show that
$$\left\|\int_a^b A(t)\,dt\right\| \leq \int_a^b \|A(t)\|\,dt.$$
(Hint: Use the definition of the norm and a similar property of absolute value for scalar functions.)

5. The Constant Coefficient Case: Real and Distinct Eigenvalues

The theory developed in Section 3 exhibited the rich structure of linear systems of differential equations. Unfortunately, that theory gave no clues as to how to construct the fundamental matrix on which it depends. This is not unexpected since, even for the simple scalar equation $y'' + a(t)y = 0$, there is no "formula" for the general solution. The class of equations that we can actually solve is far smaller than that to which the theory of Section 3 applies. However, using the material developed in Section 4, it is possible to construct solutions to those systems where the coefficient matrix is constant, that is, systems of the form
$$y' = Ay, \tag{5.1}$$
where $A$ is a constant matrix. The amount of work required to do this is somewhat more than in the case of scalar equations that the reader may have encountered previously, but the added difficulties are those of linear algebra rather than of differential equations. Several new concepts from linear algebra appear here, particularly the important concepts of eigenvalue and eigenvector. The first theorem shows the importance of the concept of the exponential of a matrix, developed in Section 4. The remainder of this section is devoted to making this idea constructive.

THEOREM 5.1 Let $A$ be a constant matrix. A fundamental matrix $\Phi$ for (5.1) is given by
$$\Phi(t) = e^{At}. \tag{5.2}$$

Proof. From Theorem 4.3, $(e^{At})' = Ae^{At}$, so $\Phi'(t) = A\Phi(t)$. Furthermore, $\det e^{At} \neq 0$ (Theorem 4.4), and thus $\Phi$ is a fundamental matrix.

The matrix $e^{At}$ is, however, not readily accessible, for since
$$e^{At} = I + At + \frac{A^2}{2!}t^2 + \cdots, \tag{5.3}$$
it is necessary to sum the series. There is one tractable case (see Exercise 3 of Section 4), the case where the matrix is diagonal, that is, $a_{ii} = \lambda_i$, $a_{ij} = 0$ for $i \neq j$, or
$$A = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}, \tag{5.4}$$
for then $A^2$ has $\lambda_i^2$ on the diagonal, $A^3$ has $\lambda_i^3$ on the diagonal, and so on. The series (5.3) may be summed to obtain
$$e^{At} = \begin{bmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{bmatrix}.$$
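A two-line numerical check of the diagonal case (a sketch of ours, using scipy's `expm`):

```python
import numpy as np
from scipy.linalg import expm

# For diagonal A the series sums entry by entry: e^{At} = diag(e^{lambda_i t}).
lam = np.array([1.0, -2.0, 3.0])
A = np.diag(lam)
t = 0.7

assert np.allclose(expm(A * t), np.diag(np.exp(lam * t)))
print(np.diag(np.exp(lam * t)))
```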
This is not surprising, of course, since the system (5.1) in this case is
$$x_1' = \lambda_1 x_1, \qquad x_2' = \lambda_2 x_2, \qquad \ldots, \qquad x_n' = \lambda_n x_n.$$
The system is uncoupled (each equation does not involve any other), and each equation may be solved directly to yield $x_i(t) = c_ie^{\lambda_i t}$.

If $A$ is not diagonal, it still may be the case that there is a matrix $T$ such that $TAT^{-1}$ has the form (5.4). (Recall that this means that $A$ is similar to a diagonal matrix.) From (4.4) it follows that
$$Te^{At}T^{-1} = e^{TAT^{-1}t}. \tag{5.5}$$
Thus, if we can find the matrix $T$ and the $\lambda$'s resulting after the transformation, we can, of course, recover $e^{At}$. Saying this another way, let $B = TAT^{-1}$ have the form (5.4). Then the solution of $y' = By$ can be found, since $e^{Bt}$ can be computed. From $e^{At} = T^{-1}e^{Bt}T$, we have $e^{At}$. Rather than attempt to find the $T$, we reason as follows: $e^{Bt}$ has entries $e^{\lambda_i t}$ on its diagonal, and premultiplication (multiplication on the left) by $T^{-1}$ and postmultiplication (multiplication on the right) by $T$ rearranges and combines these. Suppose we disregard $B$ and simply look for a solution of (5.1) of the form
$$y = \begin{bmatrix} c_1e^{\lambda_i t} \\ c_2e^{\lambda_i t} \\ \vdots \\ c_ne^{\lambda_i t} \end{bmatrix} = e^{\lambda_i t}\begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = e^{\lambda_i t}c,$$
where $\lambda_i$ is one of the diagonal elements of $B = TAT^{-1}$. Then
$$y' = \begin{bmatrix} \lambda_ic_1e^{\lambda_i t} \\ \vdots \\ \lambda_ic_ne^{\lambda_i t} \end{bmatrix} = \lambda_iy,$$
or $\lambda_iy = Ay$, which we write as
$$(A - \lambda_iI)y = 0. \tag{5.6}$$
If $y$ is not to be identically zero, then $A - \lambda_iI$ must be singular (Theorem 2.5). The (complex) numbers $\lambda$ such that
$$\det(A - \lambda I) = 0$$
are called the eigenvalues of the matrix $A$. Vectors $c$, not identically zero, such that $(A - \lambda I)c = 0$ are called eigenvectors. Equation (5.6) says that $\lambda_i$ must be an eigenvalue of $A$, and substitution of $y = e^{\lambda_i t}c$ gives
$$(A - \lambda_iI)e^{\lambda_i t}c = 0,$$
or
$$(A - \lambda_iI)c = 0, \tag{5.7}$$
since $e^{\lambda_i t} \neq 0$. Equation (5.7) says that $c$ must be an eigenvector of $A$. Since the $\lambda_i$ are fixed (they are the diagonal elements of $B$), it would seem that we have no hope of satisfying (5.6). (Equation (5.7) can be satisfied, since the constant vector $c$ has been arbitrary up to now.) This matter is resolved in the following.

THEOREM 5.2 If $TAT^{-1} = B$, then $A$ and $B$ have the same eigenvalues.
Proof.
$$B - \lambda I = TAT^{-1} - \lambda TT^{-1} = T(A - \lambda I)T^{-1}.$$
Thus, using Theorem 2.3,
$$\det(B - \lambda I) = (\det T)(\det(A - \lambda I))(\det T^{-1}).$$
Now both $\det T$ and $\det T^{-1}$ are nonzero, since $T^{-1}$ exists. Thus, $\det(B - \lambda I) = 0$ if and only if $\det(A - \lambda I) = 0$, or $A$ and $B$ have the same eigenvalues.

Note that for a diagonal matrix $B$, the eigenvalues are the $n$ diagonal entries. Clearly, $\det(B - \lambda I)$ is a polynomial of degree $n$ in $\lambda$; hence, so is $\det(A - \lambda I)$. This polynomial is called the characteristic polynomial. Thus, if $\lambda$ is an eigenvalue of $A$ and if $c$ is chosen as a corresponding eigenvector, $e^{\lambda t}c$ is a solution. The following theorem summarizes this argument.

THEOREM 5.3 If $A$ is a constant matrix, $\lambda$ an eigenvalue of $A$, and $c$ a corresponding eigenvector, then $y = e^{\lambda t}c$ is a solution of (5.1).

Since $A$ has $n$ eigenvalues, we can find $n$ such solutions, and it would seem then that we have found the columns for a fundamental matrix. The difficulty, however, is that the eigenvalues are not necessarily distinct, and the eigenvectors corresponding to a repeated eigenvalue may not be linearly independent. (Eigenvectors corresponding to distinct eigenvalues are always linearly independent.) If this occurs, we have not found $n$ linearly independent column vectors to make a fundamental matrix. The analysis in this case is a good bit more complicated, and we defer it for the moment. However, it is the case that if all of the eigenvalues of $A$ are distinct, then $A$ is similar to a diagonal matrix, so the $n$ solutions obtained actually are linearly independent, and a fundamental matrix has been found.

THEOREM 5.4 Let $A$ be a constant $n \times n$ matrix with distinct eigenvalues $\lambda_1, \ldots, \lambda_n$, and let $c^1, \ldots, c^n$ be corresponding eigenvectors. Then a fundamental matrix for (5.1) is given by
$$\Phi(t) = [e^{\lambda_1 t}c^1, e^{\lambda_2 t}c^2, \ldots, e^{\lambda_n t}c^n].$$
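Theorem 5.4 translates directly into a few lines of code. The sketch below (our addition; it assumes, as the theorem does, that the eigenvalues are distinct, so that numpy's eigenvectors are linearly independent):

```python
import numpy as np

def fundamental_matrix(A, t):
    # Columns e^{lambda_i t} c^i from Theorem 5.4; assumes the eigenvalues
    # of A are distinct so the eigenvectors are linearly independent.
    lam, C = np.linalg.eig(A)             # eigenvalues and eigenvector columns
    return C @ np.diag(np.exp(lam * t))   # scales column i by e^{lambda_i t}

A = np.array([[2.0, -1.0, 0.0],
              [0.0,  4.0, 0.0],
              [2.0,  5.0, 3.0]])          # the matrix of system (5.8) below
Phi = fundamental_matrix(A, 0.5)
print(np.linalg.det(Phi))                 # nonzero, as Theorem 5.4 guarantees
```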
We illustrate the foregoing analysis with some examples where the $\lambda_i$'s are real numbers. First, consider the system
$$x_1' = 2x_1 - x_2, \qquad x_2' = 4x_2, \qquad x_3' = 2x_1 + 5x_2 + 3x_3,$$
or, in matrix form,
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}' = \begin{bmatrix} 2 & -1 & 0 \\ 0 & 4 & 0 \\ 2 & 5 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}. \tag{5.8}$$
The eigenvalues are the solutions of
$$\det\left(\begin{bmatrix} 2 & -1 & 0 \\ 0 & 4 & 0 \\ 2 & 5 & 3 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}\right) = 0,$$
or
$$\det\begin{bmatrix} 2-\lambda & -1 & 0 \\ 0 & 4-\lambda & 0 \\ 2 & 5 & 3-\lambda \end{bmatrix} = 0.$$
Expansion gives
$$(2-\lambda)\det\begin{bmatrix} 4-\lambda & 0 \\ 5 & 3-\lambda \end{bmatrix} + 2\det\begin{bmatrix} -1 & 0 \\ 4-\lambda & 0 \end{bmatrix} = 0,$$
or $(2-\lambda)(4-\lambda)(3-\lambda) = 0$. Thus, the eigenvalues are $\lambda_1 = 2$, $\lambda_2 = 3$, and $\lambda_3 = 4$, all distinct. Hence, if we can find the eigenvectors, we can find a fundamental matrix. An eigenvector can be determined by solving
$$\begin{bmatrix} 2-\lambda & -1 & 0 \\ 0 & 4-\lambda & 0 \\ 2 & 5 & 3-\lambda \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0$$
for $\lambda = 2, 3, 4$. If $\lambda = 2$, this becomes
$$\begin{bmatrix} 0 & -1 & 0 \\ 0 & 2 & 0 \\ 2 & 5 & 1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0.$$
This yields $c_2 = 0$ and $2c_1 + c_3 = 0$. Thus, an eigenvector (clearly there are infinitely many, since a constant multiple of an eigenvector satisfies the defining equation) is
$$c = \begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix}.$$
For $\lambda = 3$, the system becomes
$$\begin{bmatrix} -1 & -1 & 0 \\ 0 & 1 & 0 \\ 2 & 5 & 0 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0,$$
or
$$-c_1 - c_2 = 0, \qquad c_2 = 0, \qquad 2c_1 + 5c_2 = 0.$$
Here $c_1$ and $c_2$ are zero, and since $c_3$ does not appear in the equations, it may be chosen arbitrarily. Hence, an eigenvector is
$$c = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$$
Finally, for $\lambda = 4$, the equations are
$$2c_1 + c_2 = 0, \qquad 2c_1 + 5c_2 - c_3 = 0.$$
Setting $c_2 = 2$ (arbitrarily), $c_1 = -1$ and $c_3 = 2c_1 + 5c_2 = -2 + 10 = 8$. Thus, a final eigenvector is
$$c = \begin{bmatrix} -1 \\ 2 \\ 8 \end{bmatrix}.$$
Three linearly independent solutions are
$$\begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix}e^{2t}, \qquad \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}e^{3t}, \qquad \begin{bmatrix} -1 \\ 2 \\ 8 \end{bmatrix}e^{4t},$$
and a fundamental matrix corresponding to (5.8) is
$$\Phi(t) = \begin{bmatrix} e^{2t} & 0 & -e^{4t} \\ 0 & 0 & 2e^{4t} \\ -2e^{2t} & e^{3t} & 8e^{4t} \end{bmatrix}. \tag{5.9}$$
We now have $\det\Phi(t) = -e^{3t}(e^{2t})(2e^{4t}) = -2e^{9t}$, which, as it must be, is nonzero.
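The result can be confirmed symbolically. A short sketch with the sympy library (our addition) verifies that (5.9) satisfies the matrix equation and reproduces the determinant just computed:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[2, -1, 0], [0, 4, 0], [2, 5, 3]])

# The fundamental matrix (5.9) assembled from the three eigensolutions.
Phi = sp.Matrix([[ sp.exp(2*t),           0,   -sp.exp(4*t)],
                 [           0,           0,  2*sp.exp(4*t)],
                 [-2*sp.exp(2*t), sp.exp(3*t), 8*sp.exp(4*t)]])

assert sp.simplify(Phi.diff(t) - A * Phi) == sp.zeros(3, 3)
print(sp.simplify(Phi.det()))   # -> -2*exp(9*t), never zero
```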
As another example, consider the system
$$x_1' = x_1 + x_3, \qquad x_2' = x_1 + 2x_2 + 3x_3, \qquad x_3' = 3x_3,$$
which, in matrix form, is $x' = Ax$ with $x = (x_1, x_2, x_3)^T$. The eigenvalues are the solutions of
$$\det(A - \lambda I) = \det\begin{bmatrix} 1-\lambda & 0 & 1 \\ 1 & 2-\lambda & 3 \\ 0 & 0 & 3-\lambda \end{bmatrix} = (1-\lambda)(2-\lambda)(3-\lambda) = 0,$$
or $\lambda_1 = 1$, $\lambda_2 = 2$, $\lambda_3 = 3$. For $\lambda = \lambda_1 = 1$ an eigenvector is the solution of
$$(A - I)c = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 1 & 3 \\ 0 & 0 & 2 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0.$$
In equation form, this is
$$c_3 = 0, \qquad c_1 + c_2 + 3c_3 = 0, \qquad 2c_3 = 0.$$
A solution of this system of linear equations is given by $c_1 = 1$, $c_2 = -1$, $c_3 = 0$, and one solution of the system of differential equations takes the form
$$x^1(t) = e^t\begin{bmatrix} 1 \\ -1 \\ 0 \end{bmatrix}.$$
For $\lambda = \lambda_2 = 2$, an eigenvector is the solution of
$$(A - 2I)c = \begin{bmatrix} -1 & 0 & 1 \\ 1 & 0 & 3 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0,$$
or
$$-c_1 + c_3 = 0, \qquad c_1 + 3c_3 = 0, \qquad c_3 = 0.$$
One solution of these equations is $c_1 = c_3 = 0$, $c_2 = 1$, yielding a solution to the differential equation
$$x^2(t) = e^{2t}\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}.$$
Finally, for $\lambda = \lambda_3 = 3$, an eigenvector is a solution of
$$(A - 3I)c = \begin{bmatrix} -2 & 0 & 1 \\ 1 & -1 & 3 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0,$$
or
$$-2c_1 + c_3 = 0, \qquad c_1 - c_2 + 3c_3 = 0.$$
Since $c_3 = 2c_1$, then $7c_1 = c_2$. Choosing (arbitrarily) $c_1 = 1$ yields $c_2 = 7$ and $c_3 = 2$. Hence, a third solution of the system of differential equations is given by
$$x^3(t) = e^{3t}\begin{bmatrix} 1 \\ 7 \\ 2 \end{bmatrix}.$$
Using these three (linearly independent) solutions, a fundamental matrix $\Phi$ takes the form
$$\Phi(t) = \begin{bmatrix} e^t & 0 & e^{3t} \\ -e^t & e^{2t} & 7e^{3t} \\ 0 & 0 & 2e^{3t} \end{bmatrix}.$$
Suppose that, in addition to solving the system of differential equations, we want the solution through the vector $(1, 1, 0)^T$ at time $t = 0$. Since $\Phi(t)$ is a fundamental matrix, any solution takes the form $\Phi(t)c$ for some vector $c$. To fit the initial condition it is then necessary to solve
$$\Phi(0)c = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}$$
for $c = (c_1, c_2, c_3)^T$. Thus, we must solve
$$\begin{bmatrix} 1 & 0 & 1 \\ -1 & 1 & 7 \\ 0 & 0 & 2 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}.$$
In equation form this is
$$c_1 + c_3 = 1, \qquad -c_1 + c_2 + 7c_3 = 1, \qquad 2c_3 = 0,$$
or $c_3 = 0$, $c_1 = 1$, $c_2 = 2$. The desired solution, $y(t)$, is then given by
$$y(t) = \begin{bmatrix} e^t & 0 & e^{3t} \\ -e^t & e^{2t} & 7e^{3t} \\ 0 & 0 & 2e^{3t} \end{bmatrix}\begin{bmatrix} 1 \\ 2 \\ 0 \end{bmatrix} = \begin{bmatrix} e^t \\ -e^t + 2e^{2t} \\ 0 \end{bmatrix}.$$
Finally, consider the system
$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}' = \begin{bmatrix} 1 & 3 & -2 \\ 0 & 1 & 4 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}.$$
The eigenvalues are the roots of
$$\det\begin{bmatrix} 1-\lambda & 3 & -2 \\ 0 & 1-\lambda & 4 \\ 0 & 0 & 1-\lambda \end{bmatrix} = 0,$$
or $(1-\lambda)(1-\lambda)(1-\lambda) = 0$;
that is, $\lambda = 1$ is a triple root. An eigenvector can be found by solving
$$\begin{bmatrix} 0 & 3 & -2 \\ 0 & 0 & 4 \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0.$$
An eigenvector is
$$c = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix},$$
and one solution is
$$x = e^t\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.$$
To find two additional linearly independent solutions requires additional analysis, which is presented in Section 7. It should be noted here that a theorem from linear algebra states that if the matrix $A$ is similar to a diagonal matrix, there will always be enough (that is, $n$) linearly independent eigenvectors to successfully carry out the procedure for finding a fundamental matrix. Two particular cases are worthy of note. For each eigenvalue there is always one nontrivial eigenvector, so the procedure above will work (as noted above) if there are $n$ distinct eigenvalues. If the matrix $A$ satisfies the additional property that $a_{ij} = a_{ji}$ (such matrices are said to be symmetric), then $A$ is similar to a diagonal matrix and there will be "enough" eigenvectors to find a fundamental matrix.

EXERCISES

1. Compute the eigenvalues of the following matrices:
(a) $[\cdots]$  (b) $[\cdots]$  (c) $[\cdots]$  (d) $[\cdots]$  (e) $[\cdots]$  (f) $[\cdots]$
2. Find eigenvectors corresponding to the eigenvalues found in Exercise 1.
3. Find a fundamental matrix for $x' = Ax$ for each $A$ given in Exercise 1.
4. Find the solution of $x' = Ax$, $x(0) = [\cdots]$, where $A$ is given in Exercise 1(a)-(d).
5. Find the solution of $x' = Ax$, $x(0) = [\cdots]$, where $A$ is given in Exercise 1(f).
6. Show that the matrices $[\cdots]$ and $[\cdots]$ are similar. (Hint: Let $T = [t_{ij}]$ and try to solve $AT = TB$.)
7. Let $A = [\cdots]$. Let $t^1$ and $t^2$ be the eigenvectors found in Exercise 2(b). Define $T = [t^1, t^2]$, a $2 \times 2$ matrix. Compute $T^{-1}AT$ and $TAT^{-1}$.
8. Let $A = [\cdots]$. Find two linearly independent eigenvectors and repeat Exercise 7.
9. For $A$ given in equation (5.8), (5.9) is not $e^{At}$. (Why?) Find a matrix $C$ such that $\Phi(t)C = e^{At}$. (Hint: It is sufficient that $\Phi(0)C = I$.)
10. Show that eigenvectors corresponding to distinct eigenvalues of a matrix $A$ are linearly independent. (Hint: Suppose eigenvectors $u_1$ and $u_2$ correspond to eigenvalues $\lambda_1$, $\lambda_2$, $\lambda_1 \neq \lambda_2$, and that $c_1u_1 + c_2u_2 = 0$. Apply $A$ to both sides.)

6. The Constant Coefficient Case: Complex and Distinct Eigenvalues

Nowhere in the development of the theory in Section 5 was any explicit use made of the assumption that the eigenvalues of the matrix $A$ were real numbers. If some of the eigenvalues $\lambda_i$ turn out to be complex numbers, then the corresponding eigenvectors, $c_i$, will contain complex entries, but $e^{\lambda_i t}c_i$ will still be a solution. For most problems with real coefficients, we are interested in having real-valued solutions. Since all initial conditions can be satisfied, given a fundamental matrix $\Phi$, real solutions are of the form $\Phi(t)c$, where $\Phi(t)$ and $c$ may have complex entries. Representing a real vector as the product of a matrix with complex entries and a constant vector with complex entries is, at least, inelegant and frequently may be awkward. For this reason we seek a way to find a real fundamental matrix. That this can always be done is a consequence of the following theorem.
THEOREM 6.1 If $\varphi(t)$ is a solution of
$$x' = Ax, \tag{6.1}$$
where $A$ is a constant matrix with real-valued entries, then the real part of $\varphi(t)$ (written $\operatorname{Re}\varphi(t)$) and the imaginary part of $\varphi(t)$ (written $\operatorname{Im}\varphi(t)$) are both solutions of (6.1).

Proof. The complex-valued function $\varphi(t)$ can be written as $\varphi(t) = u(t) + iv(t)$, where $u(t)$ and $v(t)$ are real-valued functions ($u(t) = \operatorname{Re}\varphi(t)$, $v(t) = \operatorname{Im}\varphi(t)$). Since $\varphi(t)$ is a solution of (6.1),
$$(u(t) + iv(t))' = A(u(t) + iv(t)),$$
or, using the distributive law for matrices and the fact that differentiation is linear,
$$u'(t) + iv'(t) = Au(t) + iAv(t).$$
$Au(t)$, $Av(t)$, $u'(t)$, and $v'(t)$ are real-valued vectors, and two complex vectors can be equal if and only if the real parts and the imaginary parts are equal. Hence, it must be the case that $u'(t) = Au(t)$, all $t$, and $v'(t) = Av(t)$, all $t$. Thus, $u(t)$ and $v(t)$ solve (6.1).

Returning to our original discussion, if $\lambda_i$ is a complex eigenvalue of $A$ and $c_i$ is the corresponding complex eigenvector, then $\operatorname{Re}(e^{\lambda_i t}c_i)$ and $\operatorname{Im}(e^{\lambda_i t}c_i)$ are solutions. Making use of Euler's formula,
$$e^{i\theta} = \cos\theta + i\sin\theta, \tag{6.2}$$
we can be more explicit; let $\lambda_i = \alpha + i\beta$ and $c = a + ib$, with $\alpha$, $\beta$ real numbers and $a$, $b$ real vectors. Then
$$\operatorname{Re}[e^{(\alpha+i\beta)t}(a + ib)] = e^{\alpha t}[(\cos(\beta t))a - (\sin(\beta t))b] \tag{6.3}$$
and
$$\operatorname{Im}[e^{(\alpha+i\beta)t}(a + ib)] = e^{\alpha t}[(\sin(\beta t))a + (\cos(\beta t))b]. \tag{6.4}$$
It is not difficult to show that these two vectors are linearly independent.

At first glance it would seem that from one solution, two linearly independent vectors have been created. This is not the case, and we explore this point in somewhat more detail. First of all, since the matrix $A$ has real coefficients, $\det(A - \lambda I) = p(\lambda)$ is a polynomial with real coefficients. Let
$$p(\lambda) = \lambda^n + a_1\lambda^{n-1} + \cdots + a_n.$$
Let $\lambda$ be any complex number. Then (denoting the complex conjugate of a complex number $z$ by $\bar z$)
$$\overline{p(\lambda)} = \overline{\lambda^n + a_1\lambda^{n-1} + \cdots + a_n} = \bar\lambda^n + a_1\bar\lambda^{n-1} + \cdots + a_n,$$
since $\bar a_i = a_i$ and $\overline{\lambda^n} = \bar\lambda^n$. Thus $\overline{p(\lambda)} = p(\bar\lambda)$. If $\lambda_i$ is a complex eigenvalue, then $p(\lambda_i) = 0$, which implies that $p(\bar\lambda_i) = 0$, or that $\bar\lambda_i$ is an eigenvalue. Thus, complex eigenvalues occur as complex-conjugate pairs. Now let $c_i$ be an eigenvector corresponding to the complex eigenvalue $\lambda_i$. Then, since $(A - \lambda_iI)c_i = 0$,
$$\overline{(A - \lambda_iI)c_i} = (A - \bar\lambda_iI)\bar c_i = 0.$$
Since $\bar\lambda_i$ is an eigenvalue of $A$, $\bar c_i$ is an eigenvector. Thus, knowing a complex eigenvalue $\lambda$ and its corresponding eigenvector $c$ lets us determine a second eigenvalue-eigenvector pair, $\bar\lambda$, $\bar c$. In effect, taking the real and imaginary parts of a complex solution $e^{\lambda_i t}c_i$ amounts to using both $\lambda_i$ and $\bar\lambda_i$, and $c_i$ and $\bar c_i$, to find two real (linearly independent) solutions.

We illustrate the procedure with some examples. Consider the system
$$x' = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}x.$$
Then
$$\det(A - \lambda I) = \det\begin{bmatrix} -\lambda & -1 \\ 1 & -\lambda \end{bmatrix} = 0$$
yields $\lambda^2 + 1 = 0$, or $\lambda = \pm i$. Fix $\lambda = i$. To find an eigenvector, it is necessary to solve
$$\begin{bmatrix} -i & -1 \\ 1 & -i \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = 0,$$
or
$$-ic_1 - c_2 = 0, \qquad c_1 - ic_2 = 0.$$
Then $c_1 = 1$, $c_2 = -i$ is a nontrivial solution to this linear system of equations, so a solution vector is given by
$$\varphi(t) = e^{it}\begin{bmatrix} 1 \\ -i \end{bmatrix}.$$
Making use of Euler's formula, $e^{it} = \cos(t) + i\sin(t)$,
$$\varphi(t) = (\cos(t) + i\sin(t))\begin{bmatrix} 1 \\ -i \end{bmatrix} = \begin{bmatrix} \cos(t) \\ \sin(t) \end{bmatrix} + i\begin{bmatrix} \sin(t) \\ -\cos(t) \end{bmatrix}.$$
Thus, two solutions are given by
$$\operatorname{Re}\varphi(t) = \begin{bmatrix} \cos(t) \\ \sin(t) \end{bmatrix} \quad \text{and} \quad \operatorname{Im}\varphi(t) = \begin{bmatrix} \sin(t) \\ -\cos(t) \end{bmatrix},$$
and
$$\Phi(t) = \begin{bmatrix} \cos(t) & \sin(t) \\ \sin(t) & -\cos(t) \end{bmatrix}$$
is a fundamental matrix with real entries.

Consider now the system
$$x' = \begin{bmatrix} 1 & 1 & 0 \\ -1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}x.$$
Then $\det(A - \lambda I) = (1-\lambda)((1-\lambda)^2 + 1)$, or the eigenvalues are $\lambda = 1$ and
$\lambda = 1 \pm i$. Fixing $\lambda = 1$, we seek a nontrivial solution of
$$(A - I)c = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0.$$
Thus, $c_2 = 0$, $c_1 = 0$, and $c_3$ is arbitrary, so $(0, 0, 1)^T$ is an eigenvector and $e^t(0, 0, 1)^T$ is a solution. Fixing $\lambda = 1 + i$, it is necessary to solve
$$(A - (1+i)I)c = \begin{bmatrix} -i & 1 & 0 \\ -1 & -i & 0 \\ 1 & 0 & -i \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix} = 0.$$
This requires
$$-ic_1 + c_2 = 0, \qquad -c_1 - ic_2 = 0, \qquad c_1 - ic_3 = 0.$$
Setting $c_1 = 1$ (arbitrarily) produces $c_2 = i$ and $c_3 = -i$; so
$$\varphi(t) = e^t(\cos(t) + i\sin(t))\begin{bmatrix} 1 \\ i \\ -i \end{bmatrix}$$
is a solution. To find two real solutions it is only necessary to decompose this solution into its real and imaginary parts. A straightforward computation shows that
$$\varphi(t) = e^t\begin{bmatrix} \cos(t) \\ -\sin(t) \\ \sin(t) \end{bmatrix} + ie^t\begin{bmatrix} \sin(t) \\ \cos(t) \\ -\cos(t) \end{bmatrix}.$$
The two desired real solutions are
$$e^t\begin{bmatrix} \cos(t) \\ -\sin(t) \\ \sin(t) \end{bmatrix} \quad \text{and} \quad e^t\begin{bmatrix} \sin(t) \\ \cos(t) \\ -\cos(t) \end{bmatrix},$$
and
$$\Phi(t) = \begin{bmatrix} 0 & e^t\cos(t) & e^t\sin(t) \\ 0 & -e^t\sin(t) & e^t\cos(t) \\ e^t & e^t\sin(t) & -e^t\cos(t) \end{bmatrix}$$
is a real fundamental matrix.

We conclude with one additional example with all complex eigenvalues. Consider the system
$$x' = \begin{bmatrix} 1 & 1 & 0 & 1 \\ -1 & 1 & 0 & 1 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & -1 & 2 \end{bmatrix}x.$$
The eigenvalues are the roots of
$$\det\begin{bmatrix} 1-\lambda & 1 & 0 & 1 \\ -1 & 1-\lambda & 0 & 1 \\ 0 & 0 & 2-\lambda & 1 \\ 0 & 0 & -1 & 2-\lambda \end{bmatrix} = 0.$$
Expanding this determinant yields the characteristic polynomial in the form
$$p(\lambda) = [(1-\lambda)^2 + 1][(2-\lambda)^2 + 1] = 0.$$
Thus, the eigenvalues are $1 \pm i$ and $2 \pm i$. To find an eigenvector corresponding to $1 - i$, we must solve the system
$$\begin{bmatrix} i & 1 & 0 & 1 \\ -1 & i & 0 & 1 \\ 0 & 0 & 1+i & 1 \\ 0 & 0 & -1 & 1+i \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = 0.$$
This is the same as
$$ic_1 + c_2 + c_4 = 0, \qquad -c_1 + ic_2 + c_4 = 0, \qquad (1+i)c_3 + c_4 = 0, \qquad -c_3 + (1+i)c_4 = 0.$$
A solution of this linear algebraic system is given by the vector
$$c = \begin{bmatrix} i \\ 1 \\ 0 \\ 0 \end{bmatrix},$$
and a solution of the differential equation is
$$\varphi(t) = e^t[\cos(t) - i\sin(t)]\begin{bmatrix} i \\ 1 \\ 0 \\ 0 \end{bmatrix}.$$
Taking the real and imaginary parts, we find two real solutions of the system:
$$e^t\begin{bmatrix} \sin(t) \\ \cos(t) \\ 0 \\ 0 \end{bmatrix} \quad \text{and} \quad e^t\begin{bmatrix} \cos(t) \\ -\sin(t) \\ 0 \\ 0 \end{bmatrix}.$$
Now let $\lambda = 2 - i$ and seek an eigenvector by solving the system
$$\begin{bmatrix} -1+i & 1 & 0 & 1 \\ -1 & -1+i & 0 & 1 \\ 0 & 0 & i & 1 \\ 0 & 0 & -1 & i \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = 0.$$
This is the same as
$$(-1+i)c_1 + c_2 + c_4 = 0, \qquad -c_1 + (-1+i)c_2 + c_4 = 0, \qquad ic_3 + c_4 = 0, \qquad -c_3 + ic_4 = 0.$$
Set $c_4 = 1$, which makes $c_3 = i$. Thus, it is necessary to solve
$$(-1+i)c_1 + c_2 + 1 = 0, \qquad -c_1 + (-1+i)c_2 + 1 = 0$$
to obtain
$$c_1 = \frac{4+3i}{5}, \qquad c_2 = \frac{2-i}{5}.$$
Thus, an eigenvector is given by
$$c = \begin{bmatrix} \frac{4+3i}{5} \\[2pt] \frac{2-i}{5} \\[2pt] i \\ 1 \end{bmatrix},$$
and a solution takes the form
$$\varphi(t) = e^{2t}[\cos(t) - i\sin(t)]\begin{bmatrix} \frac{4+3i}{5} \\[2pt] \frac{2-i}{5} \\[2pt] i \\ 1 \end{bmatrix}.$$
Collecting real and imaginary parts produces two solution vectors,
$$e^{2t}\begin{bmatrix} \tfrac{1}{5}(4\cos(t) + 3\sin(t)) \\[2pt] \tfrac{1}{5}(2\cos(t) - \sin(t)) \\[2pt] \sin(t) \\ \cos(t) \end{bmatrix} \quad \text{and} \quad e^{2t}\begin{bmatrix} \tfrac{1}{5}(3\cos(t) - 4\sin(t)) \\[2pt] \tfrac{1}{5}(-\cos(t) - 2\sin(t)) \\[2pt] \cos(t) \\ -\sin(t) \end{bmatrix}.$$
These four real solutions then form the columns of the fundamental matrix
$$\Phi(t) = \begin{bmatrix} e^t\sin(t) & e^t\cos(t) & \tfrac{1}{5}e^{2t}(4\cos(t)+3\sin(t)) & \tfrac{1}{5}e^{2t}(3\cos(t)-4\sin(t)) \\ e^t\cos(t) & -e^t\sin(t) & \tfrac{1}{5}e^{2t}(2\cos(t)-\sin(t)) & \tfrac{1}{5}e^{2t}(-\cos(t)-2\sin(t)) \\ 0 & 0 & e^{2t}\sin(t) & e^{2t}\cos(t) \\ 0 & 0 & e^{2t}\cos(t) & -e^{2t}\sin(t) \end{bmatrix}.$$
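The bookkeeping of pairing conjugate eigenvalues and splitting solutions into real and imaginary parts can be automated. The following Python sketch (our own; the function name and the pairing logic are ours, and it assumes the eigenvalues are distinct) builds a real fundamental matrix in the spirit of (6.3) and (6.4):

```python
import numpy as np

def real_fundamental_matrix(A, t):
    # For a real eigenvalue: one real column e^{lambda t} c.  For a complex
    # pair lambda = alpha + i beta with eigenvector c = a + i b, the real and
    # imaginary parts of e^{lambda t} c give the two real columns (6.3), (6.4).
    lam, C = np.linalg.eig(A)
    cols, used = [], set()
    for j in range(len(lam)):
        if j in used:
            continue
        sol = np.exp(lam[j] * t) * C[:, j]
        if abs(lam[j].imag) < 1e-12:
            cols.append(sol.real)
        else:
            cols.extend([sol.real, sol.imag])
            # mark the conjugate eigenvalue as used, so each pair is taken once
            k = next(i for i in range(len(lam))
                     if i != j and i not in used and np.isclose(lam[i], lam[j].conj()))
            used.add(k)
        used.add(j)
    return np.column_stack(cols)

A = np.array([[0.0, -1.0], [1.0, 0.0]])
Phi = real_fundamental_matrix(A, 0.3)
print(Phi, np.linalg.det(Phi))   # real entries, nonzero determinant
```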
EXERCISES

1. Apply the definition of the derivative to the function $f(t) = u(t) + iv(t)$ to show that $f'(t) = u'(t) + iv'(t)$.
2. Given Euler's formula, $e^{i\theta} = \cos(\theta) + i\sin(\theta)$, use Exercise 1 to show that $(e^{it})' = ie^{it}$.
3. If $\lambda$ is a complex number and $n$ is an integer, show that $\overline{\lambda^n} = \bar\lambda^n$. If $\omega$ is also a complex number, show that $\overline{\omega\lambda} = \bar\omega\bar\lambda$.
4. Find the eigenvalues of the following matrices:
(a) $[\cdots]$  (b) $[\cdots]$  (c) $[\cdots]$
5. Find a fundamental matrix for $x' = Ax$ where $A$ is as given in Exercise 4.
6. Show that the vectors (6.3) and (6.4) are linearly independent.
7. Find the eigenvalues of the following matrices:
(a) $[\cdots]$  (b) $[\cdots]$  (c) $[\cdots]$  (d) $[\cdots]$
8. Find eigenvectors corresponding to the eigenvalues found in Exercise 7.
9. Find a fundamental matrix for $x' = Ax$ for each $A$ given in Exercise 7.
10. (a) Derive the Taylor series expansion for $f(\theta) = e^{i\theta}$. (Proceed exactly as you would for real functions, using $d/d\theta\,(e^{i\theta}) = ie^{i\theta}$.)
(b) Rearrange the series in (a) into real and purely imaginary parts (each part will be a series).
(c) Identify the series in (b) and deduce Euler's formula, $e^{i\theta} = \cos(\theta) + i\sin(\theta)$.
(d) What do you need to know about the convergence of a series to perform the rearrangement in (b)?
7. The Constant Coefficient Case: The Putzer Algorithm

The analysis of the preceding section depended on finding either $n$ distinct eigenvalues or sufficiently many linearly independent eigenvectors when an eigenvalue corresponded to a repeated root of the characteristic polynomial. This is the case whenever the coefficient matrix $A$ is similar to a diagonal matrix, that is, whenever $A$ is "diagonalizable." When the matrix $A$ does not have this property, the computation of $e^{At}$ becomes difficult. To continue the approach that we have begun would require the introduction of more sophisticated linear algebra, not covered in the usual elementary course (the Jordan canonical form). For this reason we abandon the present approach and turn instead to the Putzer algorithm, a method for computing $e^{At}$ based on a relatively simple theorem, the Cayley-Hamilton theorem, which is traditionally a part of elementary courses in linear algebra.

Let $p(\lambda)$ be a polynomial, $p(\lambda) = a_0\lambda^n + a_1\lambda^{n-1} + \cdots + a_n$. Since powers of square matrices make sense, we can write a corresponding matrix polynomial, $p(A) = a_0A^n + a_1A^{n-1} + \cdots + a_nI$ (where, as above, the $a_i$'s are scalars). The partial sums used in defining $e^A$ were such polynomials. For every choice of a matrix $A$, $p(A)$ is a well-defined matrix.

THEOREM (Cayley-Hamilton) Let $A$ be an $n \times n$ matrix and let $p(\lambda) = \det(A - \lambda I)$. Then $p(A) = 0$.

The zero, of course, is the $n \times n$ null matrix. Armed with only this theorem, we can establish the following:

THEOREM 7.1 (Putzer) Let $A$ be an $n \times n$ matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$. Then
$$e^{At} = \sum_{j=0}^{n-1} r_{j+1}(t)P_j, \tag{7.1}$$
where
$$P_0 = I, \qquad P_j = \prod_{k=1}^j (A - \lambda_kI), \quad j = 1, \ldots, n, \tag{7.2}$$
and $r_1(t), \ldots, r_n(t)$ is the solution of the triangular system
$$r_1' = \lambda_1r_1, \qquad r_j' = r_{j-1} + \lambda_jr_j, \quad j = 2, \ldots, n, \qquad r_1(0) = 1, \quad r_j(0) = 0, \quad j = 2, \ldots, n. \tag{7.3}$$

Note first that each eigenvalue appears in the list repeated according to its multiplicity. Further, note that the order of the matrices is not crucial ($A - \lambda_kI$ and $A - \lambda_jI$ commute), so for convenience in the computation we adopt the convention that $(A - \lambda_iI)$ precedes $(A - \lambda_jI)$ in the product if $i > j$. The system (7.3) can be solved recursively; once $r_1(t)$ is found, the equation for $r_2(t)$ is a forced first-order linear equation, the "forcing" being $r_1(t)$. This process can be continued until $r_1(t), \ldots, r_n(t)$ are found merely by solving first-order linear differential equations.

Proof. Let $\Phi(t) = \sum_{j=0}^{n-1} r_{j+1}(t)P_j$. The idea of the proof is to show that $\Phi(t)$ satisfies $\Phi' = A\Phi$, $\Phi(0) = I$, so that $\Phi(t) = e^{At}$ by the uniqueness of solutions. For convenience, define $r_0(t) = 0$. Then
$$\Phi'(t) = \sum_{j=0}^{n-1} [\lambda_{j+1}r_{j+1}(t) + r_j(t)]P_j,$$
so that
$$\Phi'(t) - \lambda_n\Phi(t) = \sum_{j=0}^{n-1} (\lambda_{j+1} - \lambda_n)r_{j+1}(t)P_j + \sum_{j=0}^{n-2} r_{j+1}(t)P_{j+1}.$$
Since $P_{j+1} = (A - \lambda_{j+1}I)P_j$ by (7.2), and the $j = n-1$ term of the first sum vanishes, the last line may be rewritten as
$$\Phi'(t) - \lambda_n\Phi(t) = \sum_{j=0}^{n-2} [(A - \lambda_{j+1}I) + (\lambda_{j+1} - \lambda_n)I]P_jr_{j+1}(t) = (A - \lambda_nI)\sum_{j=0}^{n-2} P_jr_{j+1}(t).$$
We manipulate this right-hand side so as to obtain the appropriate equation for $\Phi$. Since
$$\sum_{j=0}^{n-2} P_jr_{j+1}(t) = \Phi(t) - r_n(t)P_{n-1},$$
then
$$\Phi'(t) - \lambda_n\Phi(t) = (A - \lambda_nI)\Phi(t) - r_n(t)(A - \lambda_nI)P_{n-1} = (A - \lambda_nI)\Phi(t) - r_n(t)P_n.$$
The characteristic equation for $A$ may be written in factored form as
$$p(\lambda) = (\lambda - \lambda_n)(\lambda - \lambda_{n-1})\cdots(\lambda - \lambda_2)(\lambda - \lambda_1).$$
Since
$$P_n = (A - \lambda_nI)(A - \lambda_{n-1}I)\cdots(A - \lambda_1I) = p(A),$$
it follows by the Cayley-Hamilton theorem that $P_n = 0$ (the null matrix). Therefore, $\Phi(t)$ satisfies the differential equation
$$\Phi'(t) = A\Phi(t) \tag{7.4}$$
and the initial condition
$$\Phi(0) = \sum_{j=0}^{n-1} r_{j+1}(0)P_j = r_1(0)I = I.$$
Hence, it follows by the uniqueness of solutions of (7.4) that $\Phi(t) = e^{At}$.

We illustrate the theorem first for a simple two-dimensional system. Consider
$$y' = \begin{bmatrix} 3 & -1 \\ 1 & 1 \end{bmatrix}y.$$
First, solve
$$\det(A - \lambda I) = \det\begin{bmatrix} 3-\lambda & -1 \\ 1 & 1-\lambda \end{bmatrix} = 3 - 4\lambda + \lambda^2 + 1 = 0$$
and find that $\lambda = 2$ is a double root. Following the algorithm, let $\lambda_1 = 2$, $\lambda_2 = 2$,
$P_0 = I$, and
$$P_1 = A - 2I = \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix}.$$
Further,
$$r_1' = 2r_1, \quad r_1(0) = 1,$$
so that $r_1(t) = e^{2t}$. Since
$$r_2' = e^{2t} + 2r_2, \quad r_2(0) = 0,$$
we have $r_2(t) = te^{2t}$. Therefore,
$$e^{At} = e^{2t}I + te^{2t}\begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} = e^{2t}\begin{bmatrix} 1+t & -t \\ t & 1-t \end{bmatrix}.$$
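The algorithm is entirely mechanical, and a symbolic implementation is only a few lines long. The sketch below (our own, using the sympy library; the function name is ours) follows (7.1)-(7.3) literally, solving each forced first-order equation by an integrating factor:

```python
import sympy as sp

def putzer_expAt(A):
    # The Putzer algorithm: e^{At} = sum_j r_{j+1}(t) P_j, as in (7.1).
    t, s = sp.symbols('t s')
    n = A.rows
    lam = []
    for val, mult in A.eigenvals().items():
        lam += [val] * mult                  # each eigenvalue, by multiplicity
    P = [sp.eye(n)]
    for j in range(n - 1):
        P.append((A - lam[j] * sp.eye(n)) * P[-1])   # P_{j+1} = (A - lam_{j+1} I) P_j
    r = [sp.exp(lam[0] * t)]                 # r_1' = lam_1 r_1, r_1(0) = 1
    for j in range(1, n):
        # r_j' = lam_j r_j + r_{j-1}, r_j(0) = 0, solved with an integrating factor
        r.append(sp.exp(lam[j] * t) *
                 sp.integrate(sp.exp(-lam[j] * s) * r[-1].subs(t, s), (s, 0, t)))
    E = sp.zeros(n, n)
    for j in range(n):
        E += r[j] * P[j]
    return sp.simplify(E)

A = sp.Matrix([[3, -1], [1, 1]])             # the example above: double root lambda = 2
print(putzer_expAt(A))                       # -> exp(2*t) * [[1+t, -t], [t, 1-t]]
```

Because the $r_j$ are found exactly, the output agrees with the hand computation above; the same function handles the repeated-root and complex-eigenvalue examples that follow.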
Consider now the system
$$y' = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ -1 & 2 & 2 \end{bmatrix}y.$$
The characteristic polynomial, $\det(A - \lambda I) = 0$, takes the form $(1-\lambda)^2(2-\lambda) = 0$, and we label the roots $\lambda_1 = \lambda_2 = 1$, $\lambda_3 = 2$. Then $P_0 = I$,
$$P_1 = A - I = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ -1 & 2 & 1 \end{bmatrix},$$
and
$$P_2 = P_1^2 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 1 \end{bmatrix}.$$
Now solve the system (7.3) recursively. From
$$r_1' = r_1, \quad r_1(0) = 1,$$
it follows that $r_1(t) = e^t$. From
$$r_2' = e^t + r_2, \quad r_2(0) = 0,$$
it follows that $r_2(t) = te^t$. Finally, from
$$r_3' = te^t + 2r_3, \quad r_3(0) = 0,$$
it follows that $r_3(t) = e^{2t} - te^t - e^t$. Thus, from the Putzer algorithm we have
$$e^{At} = e^t\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} + te^t\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ -1 & 2 & 1 \end{bmatrix} + (e^{2t} - te^t - e^t)\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 1 \end{bmatrix},$$
or
$$e^{At} = \begin{bmatrix} e^t & te^t & 0 \\ 0 & e^t & 0 \\ -e^{2t} + e^t & e^{2t} + te^t - e^t & e^{2t} \end{bmatrix}.$$
As a final example, consider
$$y' = \begin{bmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 1 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 3 \end{bmatrix}y.$$
Then $\det(A - \lambda I) = 0$ yields $(2-\lambda)^3(3-\lambda) = 0$, and we label the roots $\lambda_1 = \lambda_2 = \lambda_3 = 2$, $\lambda_4 = 3$. Then $P_0 = I$,
$$P_1 = A - 2I = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad P_2 = P_1^2 = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$
and
$$P_3 = P_1^3 = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
We proceed to find the functions $r_i(t)$, $i = 1, 2, 3, 4$. First of all,
$$r_1' = 2r_1, \quad r_1(0) = 1,$$
so $r_1(t) = e^{2t}$. Then $r_2(t)$ satisfies
$$r_2' = e^{2t} + 2r_2, \quad r_2(0) = 0,$$
or $r_2(t) = te^{2t}$, and $r_3(t)$ satisfies
$$r_3' = te^{2t} + 2r_3, \quad r_3(0) = 0,$$
or
$$r_3(t) = \frac{t^2}{2}e^{2t}.$$
Finally, $r_4(t)$ satisfies
$$r_4' = \frac{t^2}{2}e^{2t} + 3r_4, \quad r_4(0) = 0,$$
or
$$(e^{-3t}r_4(t))' = \frac{t^2}{2}e^{-t},$$
so that, after an integration,
$$e^{-3t}r_4(t) = -\frac{t^2}{2}e^{-t} - te^{-t} - e^{-t} + 1,$$
or
$$r_4(t) = e^{3t} - e^{2t} - te^{2t} - \frac{t^2}{2}e^{2t}.$$
Thus
$$e^{At} = e^{2t}P_0 + te^{2t}P_1 + \frac{t^2}{2}e^{2t}P_2 + \left(e^{3t} - e^{2t} - te^{2t} - \frac{t^2}{2}e^{2t}\right)P_3,$$
or
$$e^{At} = \begin{bmatrix} e^{2t} & te^{2t} & \frac{t^2}{2}e^{2t} & 0 \\ 0 & e^{2t} & te^{2t} & 0 \\ 0 & 0 & e^{2t} & 0 \\ 0 & 0 & 0 & e^{3t} \end{bmatrix}.$$
All of the illustrations above were for the case of a repeated root, since this is the case where the previous method could fail. However, the method works equally well in the case of distinct roots. It finds $e^{At}$ directly and avoids finding the inverse of an arbitrary fundamental matrix. (Recall that $e^{At} = \Phi(t)\Phi^{-1}(0)$, where $\Phi(t)$ is an arbitrary fundamental matrix.) The computations with the Putzer algorithm are usually more involved than computing the required eigenvectors for a fundamental matrix.

In the illustrations, the eigenvalues were all real. However, the method works equally well if they are complex, since the differential equations for the functions $r_i$ can be solved in just the same manner. For example, the equation $y' + iy = 0$ has the general solution $y(t) = ce^{-it}$, where $c$ is constant. Solutions with real initial conditions are no longer necessarily real, but otherwise everything is as before. For example, if we add the initial condition $y(0) = 1$, then the solution is $y(t) = e^{-it}$.

We illustrate the Putzer algorithm with a simple example. Consider the system
$$x' = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix}x.$$
The eigenvalues are roots of the polynomial
$$p(\lambda) = \det\begin{bmatrix} -\lambda & 1 & 0 & 0 \\ -1 & -\lambda & 0 & 0 \\ 0 & 0 & -\lambda & 1 \\ 0 & 0 & -1 & -\lambda \end{bmatrix} = 0.$$
Expanding the determinant yields
$$p(\lambda) = \lambda^2(\lambda^2 + 1) + (\lambda^2 + 1) = (\lambda^2 + 1)^2 = 0,$$
so that $\lambda = \pm i$ are double roots. To apply the algorithm, label
$$\lambda_1 = i, \quad \lambda_2 = -i, \quad \lambda_3 = i, \quad \lambda_4 = -i.$$
First of all,
$$P_1 = A - iI = \begin{bmatrix} -i & 1 & 0 & 0 \\ -1 & -i & 0 & 0 \\ 0 & 0 & -i & 1 \\ 0 & 0 & -1 & -i \end{bmatrix},$$
$$P_2 = (A + iI)P_1 = A^2 + I = 0$$
(the null matrix), and hence $P_3 = 0$. Also, $r_1(t)$ satisfies the equation
$$r_1' = ir_1, \quad r_1(0) = 1,$$
so $r_1(t) = e^{it}$, and $r_2(t)$ satisfies
$$r_2' = -ir_2 + e^{it}, \quad r_2(0) = 0.$$
A straightforward computation shows that
$$r_2(t) = \frac{1}{2i}[e^{it} - e^{-it}] = \sin(t).$$
Since $P_2$ and $P_3$ are null, there is no need to compute $r_3(t)$ and $r_4(t)$. The solution
is real even if the differential equation is complex! Since $e^{At} = e^{it}I + \sin(t)P_1$, we have
$$e^{At} = \begin{bmatrix} e^{it} - i\sin(t) & \sin(t) & 0 & 0 \\ -\sin(t) & e^{it} - i\sin(t) & 0 & 0 \\ 0 & 0 & e^{it} - i\sin(t) & \sin(t) \\ 0 & 0 & -\sin(t) & e^{it} - i\sin(t) \end{bmatrix}.$$
However, $e^{it} - i\sin(t) = \cos(t) + i\sin(t) - i\sin(t) = \cos(t)$. Thus, $e^{At}$ is equal to
$$\begin{bmatrix} \cos(t) & \sin(t) & 0 & 0 \\ -\sin(t) & \cos(t) & 0 & 0 \\ 0 & 0 & \cos(t) & \sin(t) \\ 0 & 0 & -\sin(t) & \cos(t) \end{bmatrix}.$$
Of course, $e^{At}$ had to turn out to be real, since $A$ was real. The algorithm simply took us through an excursion in the complex domain. The result is so simple because we were, in effect, dealing with two uncoupled systems. A more interesting computation occurs if we change the system slightly to
$$x' = \begin{bmatrix} 0 & 1 & 0 & 1 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{bmatrix}x.$$
The eigenvalues are roots of the polynomial
$$p(\lambda) = \det\begin{bmatrix} -\lambda & 1 & 0 & 1 \\ -1 & -\lambda & 0 & 0 \\ 0 & 0 & -\lambda & 1 \\ 0 & 0 & -1 & -\lambda \end{bmatrix} = 0.$$
Expanding the determinant yields the same polynomial as in the previous example, $p(\lambda) = (\lambda^2 + 1)^2 = 0$, or $\lambda = \pm i$
are again double roots. Take the same definition of the order of the eigenvalues, $\lambda_1 = i$, $\lambda_2 = -i$, $\lambda_3 = i$, $\lambda_4 = -i$, and proceed with the computation. As always, $P_0 = I$; now $P_1 = A - iI$, and
$$P_2 = (A + iI)P_1 = A^2 + I = \begin{bmatrix} 0 & 0 & -1 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \qquad P_3 = (A - iI)P_2 = \begin{bmatrix} 0 & 0 & i & -1 \\ 0 & 0 & 1 & i \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$$
We find $r_1(t) = e^{it}$ and $r_2(t) = \sin(t)$ as before. To find $r_3(t)$, we must solve
$$r_3' = ir_3 + \sin(t).$$
If this equation is rewritten as
$$(e^{-it}r_3(t))' = e^{-it}\sin(t),$$
we see that the solution with $r_3(0) = 0$ is given by
$$r_3(t) = \tfrac{1}{2}[t\sin(t) - it\cos(t) + i\sin(t)].$$
Finally, $r_4$ is the solution of
$$r_4' = -ir_4 + r_3$$
and can be obtained by integrating both sides of
$$(r_4e^{it})' = e^{it}r_3.$$
An integration and some manipulation yields
$$r_4(t) = \tfrac{1}{2}\sin(t) - \tfrac{t}{2}\cos(t).$$
Thus, the matrix $e^{At}$ takes the form
$$e^{At} = \begin{bmatrix} \cos(t) & \sin(t) & -\tfrac{t}{2}\sin(t) & \tfrac{1}{2}(\sin(t) + t\cos(t)) \\ -\sin(t) & \cos(t) & \tfrac{1}{2}(\sin(t) - t\cos(t)) & -\tfrac{t}{2}\sin(t) \\ 0 & 0 & \cos(t) & \sin(t) \\ 0 & 0 & -\sin(t) & \cos(t) \end{bmatrix}.$$
As an aside, for those interested in computing, this algorithm can easily be set up to be performed by a symbolic manipulator, such as Reduce.

EXERCISES

1. Find $e^{At}$ where $A = [\cdots]$.
2. Find $e^{At}$ where $A$ is
(a) $[\cdots]$  (b) $[\cdots]$  (c) $[\cdots]$
3. Find the solution of $x' = Ax$, $x(0) = [\cdots]$, for each $A$ given in Exercise 2.
4. Verify the Cayley-Hamilton theorem for each of the matrices in Exercise 2.
5. Let $A = \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix}$, where $B$ is an $n \times n$ matrix, $C$ is a $p \times p$ matrix, and $A$ is an $(n+p) \times (n+p)$ matrix. Show that
$$e^{At} = \begin{bmatrix} e^{Bt} & 0 \\ 0 & e^{Ct} \end{bmatrix}, \qquad \text{and, if } A \text{ is invertible,} \qquad A^{-1} = \begin{bmatrix} B^{-1} & 0 \\ 0 & C^{-1} \end{bmatrix}.$$
6. Use Exercise 5 to find $e^{At}$ where $A = [\cdots]$.
7. Show that if the real part of each eigenvalue of $A$ is negative, then every solution of $y' = Ay$ satisfies $\lim_{t\to\infty} y(t) = 0$. (Hint: Show that $\lim_{t\to\infty} r_i(t) = 0$.)
8. The following matrices have complex eigenvalues. Use the Putzer algorithm to find $e^{At}$:
(a) $[\cdots]$  (b) $[\cdots]$  (c) $[\cdots]$

8. General Linear Systems

We now consider a linear system with a forcing term,
$$y' = A(t)y + e(t), \tag{8.1}$$
where $A(t)$ is an $n \times n$ continuous matrix and $e(t)$ is a continuous $n$-vector. For notational purposes, let $L[y]$ denote $y' - Ay$. Then, as noted before, (8.1) can be written
$$L[y] = e. \tag{8.2}$$
The principal theoretical result is given in the following theorem. Note first that $L$ is a linear operator.

THEOREM 8.1 If $\chi(t)$ is a given solution of (8.2), then any solution $\psi(t)$ of (8.2) can be written
$$\psi(t) = \Phi(t)c + \chi(t),$$
where $\Phi$ is a fundamental matrix for
$$L[y] = 0 \tag{8.3}$$
and $c$ is an appropriate constant vector.

Proof. Let $\Phi$ be a fundamental matrix for $L[y] = 0$, and let $\psi(t)$ be an arbitrary solution of (8.2). Then
$$L[\psi - \chi] = L[\psi] - L[\chi] = e - e = 0,$$
or $\psi(t) - \chi(t)$ is a solution of (8.3). By Theorem 3.4,
$$\psi(t) - \chi(t) = \Phi(t)c,$$
or $\psi(t) = \Phi(t)c + \chi(t)$, which is the conclusion of the theorem.

The importance of Theorem 8.1 is that it reduces the problem of finding all solutions of equation (8.1) to the problem of finding a fundamental matrix for (8.3) and finding any one solution of (8.1). We deal now with a way to find $\chi(t)$, given the fundamental matrix $\Phi(t)$.

THEOREM 8.2 The vector function
$$\chi(t) = \Phi(t)\int_{t_0}^t \Phi^{-1}(\tau)e(\tau)\,d\tau \tag{8.4}$$
is a solution of (8.1).

Proof. To prove the theorem it is necessary only to verify by differentiation that (8.4) is a solution. To check first, however, that the above makes sense, note that $\Phi$ and $\Phi^{-1}$ are $n \times n$ matrices, so $\Phi^{-1}(\tau)e(\tau)$ is an $n$-vector, as is its integral, and $\Phi(t)$ operates on the $n$-vector $\int_{t_0}^t \Phi^{-1}(\tau)e(\tau)\,d\tau$. Of course, $\Phi(t)$ is differentiable, and since $e(t)$ is continuous, so is the integral in (8.4). Differentiating,
$$\chi'(t) = \Phi'(t)\int_{t_0}^t \Phi^{-1}(\tau)e(\tau)\,d\tau + \Phi(t)\Phi^{-1}(t)e(t).$$
Since $\Phi'(t) = A(t)\Phi(t)$, and since $\Phi(t)\Phi^{-1}(t) = I$, this becomes
$$\chi'(t) = A(t)\Phi(t)\int_{t_0}^t \Phi^{-1}(\tau)e(\tau)\,d\tau + e(t).$$
By the definition of $\chi(t)$, (8.4), this is
$$\chi'(t) = A(t)\chi(t) + e(t),$$
or $L[\chi] = e$.

Note that $\chi(t_0) = 0$; that is, $\chi(t)$ is the solution of (8.1) that takes the zero vector as the initial condition at $t = t_0$. Suppose we desire to solve (8.1) with initial condition $y(t_0) = \eta$. Let $\Phi(t)$ be a fundamental matrix for (8.3). If $\chi(t)$ is given by (8.4), then
$$\psi(t) = \Phi(t)\Phi^{-1}(t_0)\eta + \chi(t)$$
is a solution of the equation, for
$$\psi'(t) = \Phi'(t)\Phi^{-1}(t_0)\eta + \chi'(t) = A(t)\Phi(t)\Phi^{-1}(t_0)\eta + A(t)\chi(t) + e(t) = A(t)\left(\Phi(t)\Phi^{-1}(t_0)\eta + \chi(t)\right) + e(t) = A(t)\psi(t) + e(t).$$
Further,
$$\psi(t_0) = \Phi(t_0)\Phi^{-1}(t_0)\eta + \chi(t_0) = \eta,$$
so $\psi(t)$ satisfies the initial condition. This computation can be combined with Theorems 8.1 and 8.2 to yield a solution for any linear initial value problem.

THEOREM 8.3 (Variation of constants formula) Let $\Phi(t)$ be a fundamental matrix for $x' = A(t)x$. Then the unique solution of
$$y' = A(t)y + e(t), \qquad y(t_0) = \eta,$$
is given by
$$y(t) = \Phi(t)\Phi^{-1}(t_0)\eta + \int_{t_0}^t \Phi(t)\Phi^{-1}(s)e(s)\,ds. \tag{8.5}$$
In the representation (8.5) of the solution, the matrix $\Phi(t)$ has been moved inside the integral sign. This causes no difficulties, since the integration is with respect to the dummy variable $s$. However, it does make for a nice formula if the fundamental matrix $\Phi(t)$ happens to be $e^{At}$. Then with $t_0 = 0$ the representation given in Theorem 8.3 is
$$y(t) = e^{At}\eta + \int_0^t e^{A(t-s)}e(s)\,ds. \tag{8.6}$$
Equation (8.6) follows from (8.5), since $e^{A\cdot 0} = I$ and $(e^{At})^{-1} = e^{-At}$.

As an example, consider the system
$$y_1' = y_2, \qquad y_2' = -y_1 + t, \tag{8.7}$$
and the initial conditions
$$y_1(0) = 0, \qquad y_2(0) = 2, \tag{8.8}$$
or $y' = Ay + e(t)$, where
$$A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \quad \text{and} \quad e(t) = \begin{bmatrix} 0 \\ t \end{bmatrix}.$$
The unforced equation is
$$y_1' = y_2, \qquad y_2' = -y_1. \tag{8.9}$$
A fundamental matrix for (8.9) is
$$\Phi(t) = \begin{bmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{bmatrix} \tag{8.10}$$
because the columns of (8.10) are solutions of (8.9) and $\det\Phi(t) = \cos^2(t) + \sin^2(t) = 1$. Then
$$\Phi^{-1}(t) = \begin{bmatrix} \cos(t) & -\sin(t) \\ \sin(t) & \cos(t) \end{bmatrix}.$$
Hence, $\chi(t)$ is given by
$$\chi(t) = \Phi(t)\int_0^t \Phi^{-1}(\tau)e(\tau)\,d\tau = \begin{bmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{bmatrix}\begin{bmatrix} -\int_0^t \tau\sin(\tau)\,d\tau \\ \int_0^t \tau\cos(\tau)\,d\tau \end{bmatrix}$$
$$= \begin{bmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{bmatrix}\begin{bmatrix} t\cos(t) - \sin(t) \\ \cos(t) + t\sin(t) - 1 \end{bmatrix} = \begin{bmatrix} t - \sin(t) \\ 1 - \cos(t) \end{bmatrix}.$$
To satisfy the initial conditions (8.8), we have
$$c = \Phi^{-1}(0)\begin{bmatrix} 0 \\ 2 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}.$$
Finally,
$$\psi(t) = \Phi(t)c + \chi(t) = \begin{bmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{bmatrix}\begin{bmatrix} 0 \\ 2 \end{bmatrix} + \begin{bmatrix} t - \sin(t) \\ 1 - \cos(t) \end{bmatrix} = \begin{bmatrix} 2\sin(t) \\ 2\cos(t) \end{bmatrix} + \begin{bmatrix} t - \sin(t) \\ 1 - \cos(t) \end{bmatrix} = \begin{bmatrix} t + \sin(t) \\ 1 + \cos(t) \end{bmatrix}.$$
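Formula (8.6) also lends itself to direct numerical evaluation when the coefficient matrix is constant. The following sketch (our own; it uses scipy's `expm` and `quad` and evaluates the integral in (8.6) entry by entry) reproduces the closed-form answer just obtained:

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import expm

def voc_solution(A, e, eta, t):
    # Variation of constants (8.6) with Phi(t) = e^{At}:
    #   y(t) = e^{At} eta + int_0^t e^{A(t-s)} e(s) ds
    y = expm(A * t) @ eta
    for i in range(len(eta)):
        y[i] += quad(lambda s, i=i: (expm(A * (t - s)) @ e(s))[i], 0.0, t)[0]
    return y

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
e = lambda s: np.array([0.0, s])        # the forcing term of (8.7)
eta = np.array([0.0, 2.0])              # the initial condition (8.8)

t = 1.2
print(voc_solution(A, e, eta, t))                 # numerical solution
print(np.array([t + np.sin(t), 1 + np.cos(t)]))   # the closed form found above
```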
Consider the $3 \times 3$ system
$$y' = \begin{bmatrix} 2 & 1 & 1 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{bmatrix}y + \begin{bmatrix} 1 \\ 0 \\ t \end{bmatrix} \tag{8.11}$$
with the initial condition
$$y(0) = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}. \tag{8.12}$$
Let $A$ denote the above $3 \times 3$ matrix. The eigenvalues of $A$ are given by the roots of
$$p(\lambda) = \det\begin{bmatrix} 2-\lambda & 1 & 1 \\ 0 & 2-\lambda & 0 \\ 0 & 0 & 3-\lambda \end{bmatrix} = 0.$$
Expanding the determinant, we see that $p(\lambda) = (2-\lambda)^2(3-\lambda) = 0$, or $\lambda_1 = 2$, $\lambda_2 = 2$, $\lambda_3 = 3$. We apply the Putzer algorithm to find $e^{At}$. The relevant matrices are $P_0 = I$,
$$P_1 = \begin{bmatrix} 0 & 1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \text{and} \quad P_2 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
Clearly, $r_1(t) = e^{2t}$, and $r_2(t)$ solves $r_2' = 2r_2 + e^{2t}$, $r_2(0) = 0$, or $r_2(t) = te^{2t}$. Then $r_3(t)$ satisfies
$$r_3' = 3r_3 + te^{2t}, \quad r_3(0) = 0.$$
Rewriting the differential equation as $(r_3e^{-3t})' = te^{-t}$ gives, after an integration, $r_3(t) = e^{2t}(e^t - t - 1)$. Thus, $e^{At}$ takes the form
$$e^{At} = \begin{bmatrix} e^{2t} & te^{2t} & e^{3t} - e^{2t} \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{3t} \end{bmatrix}. \tag{8.13}$$
Rather than directly invert this matrix, we can use the fact that $(e^{At})^{-1} = e^{-At}$. This amounts to substituting $-t$ for $t$ in (8.13), which yields that $\Phi^{-1}$, in (8.5), is of the form
$$e^{-At} = \begin{bmatrix} e^{-2t} & -te^{-2t} & e^{-3t} - e^{-2t} \\ 0 & e^{-2t} & 0 \\ 0 & 0 & e^{-3t} \end{bmatrix}. \tag{8.14}$$
Then $e^{-At}e(t)$ is
$$\begin{bmatrix} e^{-2t}(te^{-t} - t + 1) \\ 0 \\ te^{-3t} \end{bmatrix}.$$
We can now compute the particular solution (8.4). An integration of the vector above from $0$ to $t$ produces
$$\begin{bmatrix} \frac{1}{36}[e^{-3t}(18te^t - 9e^t - 12t - 4) + 13] \\ 0 \\ \frac{1}{9}e^{-3t}(e^{3t} - 3t - 1) \end{bmatrix}.$$
Multiplying $e^{At}$ by this vector produces the $\chi$ of (8.4). Recall that this is the solution of the system that is the null vector at $t = 0$. The result of this multiplication is
$$\chi(t) = \begin{bmatrix} \frac{1}{36}(4e^{3t} + 9e^{2t} + 6t - 13) \\ 0 \\ \frac{1}{9}(e^{3t} - 3t - 1) \end{bmatrix}.$$
Finally, the variation of constants formula may be applied; it requires us to compute $e^{At}y(0) + \chi(t)$. This yields the solution of (8.11) in the form
$$y(t) = \begin{bmatrix} \frac{1}{36}(40e^{3t} + 36te^{2t} + 9e^{2t} + 6t - 13) \\ e^{2t} \\ \frac{1}{9}(10e^{3t} - 3t - 1) \end{bmatrix}.$$
While computations such as this are not intrinsically difficult, it is clear that considerable, careful manipulation is necessary to carry out the very elegant representation of the solution given by Theorem 8.3. For this reason, the variation of constants formula is of more theoretical than practical importance for all but the simplest systems. Again, if it is necessary to find the explicit representation of the solution of a large system, a computer with a symbolic manipulator is a useful tool.

EXERCISES

1. Find all solutions of
(a) $[\cdots]$  (b) $[\cdots]$  (c) $[\cdots]$  (d) $[\cdots]$
2. Find the solution for each system given in Exercise 1 that satisfies $x(0) = [\cdots]$.
3. Determine whether the limit as $t \to \infty$ or as $t \to -\infty$ exists for any of the solutions found in Exercise 1.
4. Find all solutions of $x' = [\cdots]x + [\cdots]$.
5. Find all solutions of $x' = [\cdots]x + [\cdots]$.
9. Some Elementary Stability Considerations

The theory developed in the previous sections makes it possible to introduce some of the basic ideas of stability analysis for systems of differential equations. These ideas are important in many physical systems because they lend robustness to theoretical conclusions. More sophisticated tools and concepts will appear in the next chapter. Although, at this point, the discussion will be restricted to linear systems (indeed, to those with constant coefficients), the concepts carry over to nonlinear systems as well. For those who have met the idea of stability in a physics course, we note that what is presented here is a mathematician's way of describing those very same ideas. The properties of the norm, introduced at the beginning of Section 4, are important in this endeavor, and the reader may wish to review them before proceeding with this section.

The basic, intuitive idea is that of instability. "Something" is unstable if a small deviation from the present "state" produces a major change in the state. The familiar physical example is a cone balanced on its pointed end (see Figure 9.1). A small change in position produces a major change: the cone falls. To make this, and related ideas, precise in the context of solutions of systems of linear differential equations is the goal of this section.

Figure 9.1 An example of instability. A cone balanced on its point will fall if slightly disturbed.

Consider the linear system of ordinary differential equations
$$x' = Ax, \tag{9.1}$$
where $A$ is an $n \times n$ constant matrix and $x$ is a vector in $R^n$. Equation (9.1) always has the trivial solution, the function $x(t) \equiv 0$, and this solution will play
the role of "present state" in the intuitive description above. The trivial solution is said to be stable if for every $\varepsilon > 0$ there is a $\delta > 0$ such that if $x(t)$ is any solution of (9.1) with $\|x(0)\| < \delta$, then $\|x(t)\| < \varepsilon$ for all $t > 0$. We are using the norm, $\|\cdot\|$, to measure how close a solution is to the trivial solution. Think of the trivial solution as the present state of the system and $x(t)$ as a solution that represents a deviation from the present state. The above definition says that, if the trivial solution is stable, $x(t)$ will remain arbitrarily close (this is the $\varepsilon$) to the present state (the trivial solution) for all future time if the initial condition $x(0)$ is sufficiently close (this is the $\delta$) to zero. The trivial solution is said to be unstable if it is not stable. Given the definition of stability, being unstable meets the above intuitive criterion of a major (not small) change of state at a future time from a small initial disturbance (a small change of initial condition).

To set the basic idea, let us redefine instability by formally contradicting the notion of stability, by stating what must happen for stability to fail. Roughly, we must state that no matter how close the initial conditions are to zero, at some future time some solution will not be close to zero. Thus it is sufficient to show that for some $\varepsilon > 0$ there is a sequence of real numbers (initial conditions) $\rho_n$, with $\lim_{n\to\infty}\rho_n = 0$, and a corresponding sequence of real numbers $t_n$ (times) such that the solution of (9.1), call it $x_n(t)$, that satisfies $\|x_n(0)\| = \rho_n$ also satisfies $\|x_n(t_n)\| > \varepsilon$. Thus, not all solutions that start arbitrarily close to the trivial solution remain close to it for all future time. Note that it was important to take an entire sequence of initial conditions tending to zero. If the conclusion were satisfied for only one, or a finite number of, initial conditions, there could be a smaller $\delta$ that would make the definition of stability "work" if solutions were this ($\delta$) close. The infinite sequence of initial conditions guards against this possibility.

For example, the linear system
$$x' = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}x \tag{9.2}$$
has a fundamental matrix (actually $e^{At}$) of the form
$$\Phi(t) = \begin{bmatrix} \cosh(t) & \sinh(t) \\ \sinh(t) & \cosh(t) \end{bmatrix}.$$
Using the theory we developed in Section 3, every solution of (9.2) can be written as $\Phi(t)c$ for some constant vector $c$. Choose $\rho_n = 1/n$, $n = 1, 2, \ldots$, and take $c$ to be the vector
$$c = \begin{bmatrix} \frac{1}{2n} \\[2pt] \frac{1}{2n} \end{bmatrix}.$$
This corresponds to choosing the family of solutions
$$x_n(t) = \frac{e^t}{2n}\begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Note that $\|x_n(0)\| = \rho_n = 1/n$. Take $\varepsilon = 1$ and $t_n = \ln(2n)$. After these elaborate preparations, we are ready to check the definition. Clearly, $\rho_n \to 0$ as $n \to \infty$. Yet
$$\|x_n(t_n)\| = \frac{e^{t_n}}{n} = 2 > 1 = \varepsilon.$$
Thus, a small change in initial condition at $t = 0$ produces a large ($>\varepsilon$) change at a future time, $t_n$. The formal definition of instability is satisfied (or, the definition of stability is violated).

A simple change in the system of equations (9.2) can make a dramatic change in the behavior of solutions. Consider the system
$$x' = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}x. \tag{9.3}$$
A fundamental matrix ($e^{At}$) is given by
$$\Phi(t) = \begin{bmatrix} \cos(t) & \sin(t) \\ -\sin(t) & \cos(t) \end{bmatrix},$$
and every solution may be written as $\Phi(t)c$, or
$$x(t) = \begin{bmatrix} c_1\cos(t) + c_2\sin(t) \\ -c_1\sin(t) + c_2\cos(t) \end{bmatrix},$$
for appropriate constants $c_1$ and $c_2$. Let $\varepsilon > 0$ be given and choose $\delta = \varepsilon/2$. If $|c_1| + |c_2| < \delta$ (this is the norm of the vector $x(0)$; see Section 4 of this chapter), then
$$\|x(t)\| \leq 2(|c_1| + |c_2|) < 2\delta = \varepsilon$$
for all $t > 0$ (in fact, for all $t$). Thus, the trivial solution of (9.3) is stable: solutions that begin sufficiently close to the trivial solution remain close to it in the future.

Although it is the simplest concept, it turns out that stability of the trivial solution is not the most important concept (in both mathematics and physics). The important concept is stronger and is called asymptotic stability. Its importance stems from the fact that it is preserved under slight changes ("perturbations") of $A$. If the trivial solution of (9.1) is asymptotically stable and if $A$ is "changed" slightly, the trivial solution of the new (9.1) is also asymptotically stable. Since the entries of $A$ often represent measured quantities, it is important that the stability property be retained under slight changes, corresponding, perhaps, to a measurement error. These ideas will be explored in detail in the next chapter for the two-dimensional case. Here we present the basic idea and a
criterion for determining when the property holds. The trivial solution of (9.1) is said to be asymptotically stable if (a) it is stable, and (b) there is an $\varepsilon > 0$ such that if $\|x(0)\| < \varepsilon$, then $\lim_{t\to\infty}\|x(t)\| = 0$.

Obviously, requirement (b) strengthens the conclusion. Neither system (9.2) nor (9.3) satisfies this condition. System (9.2), of course, satisfies neither (a) nor (b). However, the system
$$x' = \begin{bmatrix} -1 & 1 \\ 0 & -1 \end{bmatrix}x \tag{9.4}$$
has a fundamental matrix (again $e^{At}$)
$$\Phi(t) = \begin{bmatrix} e^{-t} & te^{-t} \\ 0 & e^{-t} \end{bmatrix}.$$
Hence, every solution takes the form
$$x(t) = \begin{bmatrix} c_1e^{-t} + c_2te^{-t} \\ c_2e^{-t} \end{bmatrix}, \tag{9.5}$$
and every solution satisfies $\lim_{t\to\infty}\|x(t)\| = 0$. This is strong enough to show that both (a) and (b) in the definition are satisfied, since the second component of the vector $x(t)$ is decreasing and the first component is eventually decreasing. The technical details are left as an exercise.

Fortunately, it is not necessary to solve each system to determine its stability. There is a simple theorem that provides a criterion for asymptotic stability for linear systems with constant coefficients.

THEOREM 9.1 The trivial solution of (9.1) is asymptotically stable if and only if all of the eigenvalues of $A$ have negative real parts.

When we say "negative real part" we intend that either the number is real and negative or it is complex and the real part is negative. A similar statement applies for "positive real part." There is a corresponding statement for instability.

THEOREM 9.2 If one eigenvalue of $A$ has a positive real part, then the trivial solution of (9.1) is unstable.
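Theorems 9.1 and 9.2 reduce the stability question to inspecting the real parts of the eigenvalues, which a few lines of code can do. A sketch of ours (the function name and the tolerance are our choices):

```python
import numpy as np

def classify(A, tol=1e-12):
    # Theorems 9.1 and 9.2: asymptotic stability iff every eigenvalue has a
    # negative real part; one positive real part gives instability.
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable"
    if np.any(re > tol):
        return "unstable"
    return "borderline: some eigenvalue has zero real part"

print(classify(np.array([[0.0, 1.0], [1.0, 0.0]])))     # (9.2): unstable
print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))    # (9.3): borderline (stable)
print(classify(np.array([[-1.0, 1.0], [0.0, -1.0]])))   # (9.4): asymptotically stable
```

The "borderline" case is exactly the delicate middle ground discussed next.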
If no eigenvalue has a positive real part, the middle ground between asymptotic stability and instability occurs when the real part of at least one eigenvalue is zero. This case is more delicate and depends on the multiplicity of the eigenvalues with zero real part. We will not give a detailed analysis, but we note one simple result.

THEOREM 9.3  If the eigenvalues of A with zero real parts are simple and all other eigenvalues have negative real parts, then the trivial solution of (9.1) is stable.

In (9.3), both eigenvalues have zero real parts and the trivial solution is stable, but not asymptotically stable.
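The role of multiplicity in Theorem 9.3 can be seen numerically by sampling ||e^{At}||. The sketch below, in Python with NumPy and SciPy, compares system (9.3), whose purely imaginary eigenvalues are simple, with a matrix of our own choosing that has a double eigenvalue of zero; the helper name, the norm, and the grid are likewise our own choices.

    import numpy as np
    from scipy.linalg import expm

    def sup_exp_norm(A, T=100.0, steps=2001):
        # Sample ||e^{At}|| on [0, T]; bounded samples indicate stability.
        return max(np.linalg.norm(expm(A * t), 1)
                   for t in np.linspace(0.0, T, steps))

    simple = np.array([[0.0, 1.0], [-1.0, 0.0]])  # system (9.3): simple eigenvalues +-i
    double = np.array([[0.0, 1.0], [0.0, 0.0]])   # illustrative: double eigenvalue 0

    print(sup_exp_norm(simple))   # stays near sqrt(2): stable (Theorem 9.3)
    print(sup_exp_norm(double))   # grows like 1 + t: the trivial solution is unstable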
The key element in the proof of Theorem 9.1 is the establishment of the following basic lemma.

LEMMA 9.4  Let λ_j = ξ_j + iη_j, j = 1, 2, ..., n, be the eigenvalues of the matrix A (repetitions allowed). Let σ > max_j ξ_j. Then there is a constant K > 0 such that if x(t) is a solution of (9.1), then

    ||x(t)|| ≤ K e^{σt} ||x(0)||.

The proof of this lemma is easy if A is a diagonal matrix, and it is not very difficult, using the properties of norms listed in Section 4, if A is similar to a diagonal matrix. For the general case we need to use the Putzer algorithm. We state the basic fact as another lemma.

LEMMA 9.5  Let A, σ, and λ_i be as in Lemma 9.4, and let r_i(t) be the elements in the decomposition (7.1) of e^{At}. Then

    |r_i(t)| ≤ c_i e^{σt},

where c_i is a positive constant.

Proof. The proof is by induction. Let λ_i, i = 1, 2, ..., n, be given and let e^{At} be expressed by (7.1). Clearly, r_1(t) = e^{λ_1 t}, so |r_1(t)| = e^{ξ_1 t} ≤ e^{σt}. Suppose that |r_j(t)| ≤ c_j e^{σt}, j = 1, 2, ..., i - 1. Then, solving the scalar differential equation

    r_i' = λ_i r_i + r_{i-1}

(or using the variation of constants formula for this scalar, 1 x 1, equation) yields, since r_i(0) = 0,

    r_i(t) = ∫_0^t e^{λ_i(t-s)} r_{i-1}(s) ds.

It follows that

    |r_i(t)| ≤ ∫_0^t e^{ξ_i(t-s)} |r_{i-1}(s)| ds ≤ e^{ξ_i t} ∫_0^t e^{-ξ_i s} c_{i-1} e^{σs} ds = c_{i-1} e^{ξ_i t} ∫_0^t e^{(σ-ξ_i)s} ds,

since the real part of λ_i is ξ_i. After an integration,

    |r_i(t)| ≤ c_{i-1} e^{ξ_i t} (e^{(σ-ξ_i)t} - 1)/(σ - ξ_i) ≤ (c_{i-1}/(σ - ξ_i)) e^{σt} = c_i e^{σt}.

This completes the induction.

Proof of Lemma 9.4. Since the representation (7.1) of the exponential e^{At} involves only a finite number of matrices, we can find a number M larger than the norm of each of them. This number, multiplied by the sum of the numbers c_i, i = 1, 2, ..., n, given by Lemma 9.5, provides the constant K in the statement of the lemma. To complete the proof, we use (7.1) and the inequalities for norms from Section 4 to find that

    ||e^{At}|| ≤ Σ_{i=0}^{n-1} |r_{i+1}(t)| ||P_i|| ≤ M e^{σt} Σ_{i=1}^{n} c_i ≤ K e^{σt},

so that ||x(t)|| = ||e^{At} x(0)|| ≤ K e^{σt} ||x(0)||. This completes the proof of Lemma 9.4.
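Lemma 9.4 can be checked numerically for a particular matrix by comparing ||e^{At}|| with e^{σt} on a time grid. The sketch below, in Python with NumPy and SciPy, estimates the smallest workable K for the matrix of system (9.4); the value of σ and the grid are our own illustrative choices, and the repeated eigenvalue of this matrix is exactly the case in which the Putzer-based argument is needed.

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-1.0, 1.0], [0.0, -1.0]])   # system (9.4): eigenvalues -1, -1
    sigma = -0.5                                # any sigma above the max real part works

    # The ratio ||e^{At}|| / e^{sigma t} stays bounded; its supremum over the
    # grid estimates the constant K of Lemma 9.4.
    ts = np.linspace(0.0, 30.0, 601)
    K = max(np.linalg.norm(expm(A * t), 1) / np.exp(sigma * t) for t in ts)
    print("K is approximately", K)              # about 1.21 for these choices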
With the aid of Lemma 9.4, the proof of Theorem 9.1 follows easily. Let σ < 0 be greater than the largest real part of any eigenvalue of A; this choice is possible because every eigenvalue has a negative real part. Since any solution x(t) of (9.1) has the form x(t) = e^{At} x(0), it follows from Lemma 9.4 that

    ||x(t)|| ≤ ||e^{At}|| ||x(0)|| ≤ K e^{σt} ||x(0)||,   σ < 0.

Thus, lim_{t->∞} ||x(t)|| = 0. This shows that (b) in the definition of asymptotic stability is satisfied for every solution. To get (a), we have only to choose ||x(0)|| small enough (less than ε/K, with ε as in the definition of stability).

For the two-dimensional case, the fact that every eigenvalue has a negative real part can often be determined without actually computing the eigenvalues. The trace of the matrix gives the sum of the eigenvalues, while the determinant gives their product (quadratic formula). If the trace is negative and the determinant is positive, then the eigenvalues have negative real parts: if they are real, a positive product means they have the same sign, and a negative sum then makes both negative; if they are complex conjugates, a negative sum means their common real part is negative. For larger systems, criteria are known that guarantee that all of the eigenvalues of a matrix are negative or have negative real parts. Principal among these are the Routh-Hurwitz criteria, which the interested reader may find in more advanced textbooks.

It is important to note that matters are somewhat more general than we have presented them. Stability of the trivial solution has been defined for (9.1). For other systems, and particularly for nonlinear ones, the stability of other types of solutions is important. However, for (9.1), if the matrix A is nonsingular, the only constant solution is the trivial one. (In the applied literature, constant solutions are called steady states or equilibrium solutions.) If A is singular, then there is a "continuum" of constant solutions (a line, if A is two-dimensional), and in this case asymptotic stability of these constant solutions is not possible. Hence the focus here is on the zero solution.

The definition was applied at the initial time t = 0. It could have been given for any other time t_0, but for equations of the form (9.1) a redefinition of the initial time, moving t_0 to 0, is trivial. For other systems the initial time t_0 may be crucial, and the definition is then usually given with an arbitrary time t_0.

The theorems are actually stronger than stated. For example, in the case of asymptotic stability they are global, in the sense that all solutions tend to zero as t tends to infinity, not just those that start near zero. Moreover, in view of Lemma 9.4, the rate of convergence to zero is exponential; that is, solutions tend to zero at least as fast as a decaying exponential function. These properties are typical of linear systems but are stronger than can be expected of other systems.
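As a concrete illustration of the trace and determinant test (the matrix below is our own choice, not an example from the text), the following Python sketch with NumPy exhibits a matrix with negative trace and positive determinant whose eigenvalues are complex with negative real parts.

    import numpy as np

    A = np.array([[-1.0, 2.0], [-2.0, -1.0]])    # trace -2 < 0, determinant 5 > 0

    print("trace:", np.trace(A))                 # -2.0: the eigenvalue sum is negative
    print("determinant:", np.linalg.det(A))      # 5.0: the eigenvalue product is positive
    print("eigenvalues:", np.linalg.eigvals(A))  # -1+2j and -1-2j: negative real parts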
EXERCISES

1. Determine the stability of the trivial solution of x' = 0. Is it asymptotically stable?

2. Determine the asymptotic stability of the system x' = Ax, where A is

   (a) [ 1   1 ]          (b) [ -3   1 ]
       [ 0  -2 ]              [  1  -1 ]

   (c) [ -1   0   1 ]     (d) [  1   0   1 ]
       [  0   1  -1 ]         [  0   0  -2 ]
       [  0   0  -2 ]         [ -1   0  -1 ]

   (e) [ 0   0 ]          (f) [ 0  -2 ]
       [ 0  -1 ]              [ 1   0 ]

3. Let

       A = [ a   b ]
           [ c   d ].

   Show that the sum of the eigenvalues is a + d (called the trace of A) and the product is ad - bc (the determinant of A). (Hint: Use the quadratic formula.)

4. Give a direct (i.e., without Lemma 9.5) proof of Lemma 9.4 when A is a diagonal matrix. (Use the form of e^{At} and the definition of norm from Section 4.)

5. Give a direct proof of Lemma 9.4 when A = T^{-1}DT and D is a diagonal matrix.

6. Let f(x) be a continuous function on the real line and c a real number such that f(c) = 0. Formulate a definition of stability for the constant solution x(t) = c of x' = f(x).

7. Repeat Exercise 6 for asymptotic stability.

8. Consider the system (*) x' = Ax + g(t), where A is an n x n constant matrix and g(t) is a continuous n-dimensional vector. Use Theorem 8.3 and Lemma 9.4 to obtain the following estimate on the solution of (*) that satisfies x(0) = x_0:

       ||x(t)|| ≤ K ||x_0|| e^{σt} + K e^{σt} ∫_0^t e^{-σs} ||g(s)|| ds.

9. Suppose that A is a diagonal matrix and that the eigenvalues of A have negative real parts, except for one that has a zero real part. Use the form of e^{At} to show that all solutions of (9.1) are bounded. (By bounded we mean that there is a constant M, depending on the initial condition, such that ||x(t)|| ≤ M.)

10. Prove Theorem 9.3 in the special case that A is similar to a diagonal matrix.

11. Give a simple example to show that the statement in Exercise 10 is false if A has a double eigenvalue with zero real part and all other eigenvalues have negative real parts.

12. Consider (*) in Exercise 8 and suppose that the eigenvalues of A have negative real parts and that ∫_0^∞ ||g(t)|| dt exists. Show that all solutions of (*) are bounded. (Hint: Use the estimate in Exercise 8.)