Senior Seminar:  Systems of Differential Equations
The research and paper behind the focus of my senior project: Systems of Differential Equations.


Systems of Differential Equations
Joshua Dagenais
12-04-09
Mentor: Dr. Arunas Dagys
Table of Contents

Introduction
Section 1: Solving Systems of Differential Equations with Distinct Real Eigenvalues
Section 2: Solving Systems of Differential Equations with Complex Eigenvalues
Section 3: Solving Systems of Differential Equations with Repeated Eigenvalues
Section 4: Solving Systems of Nonhomogeneous Differential Equations
Section 5: Application of Systems of Differential Equations – Arms Races
Section 6: Application of Systems of Differential Equations – Predator-Prey Model
Conclusion
References
Introduction

Many laws and principles that help explain the behavior of the natural world are statements or relations involving the rates at which things change. When expressed in mathematical terms, the relations become equations and the rates become derivatives. Equations that contain these rates or derivatives are called differential equations. Systems of ordinary differential equations therefore arise naturally from laws and principles that involve several dependent variables, each of which is a function of a single independent variable. The mathematical problem then consists of a system of two or more differential equations, and the systems of differential equations that describe these laws or principles are called mathematical models of the process (Boyce & DiPrima, 2001).

A system of first order ordinary differential equations is an interesting mathematical concept because it combines two different areas of mathematics. Dissecting the phrase "system of first order differential equations" into two parts reveals the two areas used to solve these equations. The "system" part of the phrase involves linear algebra for solving systems of equations, and in this case the system consists of first order differential equations. The latter part of the phrase, "first order differential equations," indicates that solution strategies for single first order equations will also be involved when solving systems of differential equations.

With linear algebra for systems and differential equations in mind, what other underlying concepts and skills must be learned and explained in order to solve systems of differential equations? For the linear algebra aspect, topics mentioned briefly in this paper include matrices, characteristic equations, roots of the characteristic equation (eigenvalues), eigenvectors, and the diagonalization of a matrix. For the differential equation aspect, topics discussed include solving first order differential equations, solving simple diagonal systems ($y' = Dy$), and solutions of the original system ($x = Cy$, where $x(t) = k e^{at}$). When everything mentioned is put together, solutions of different types are found for systems of differential equations, and with the help of mathematical software such as Maple, graphs (slope fields) can visually represent the answers of these systems and show that there is actually more than one solution, called a family of solutions. Also, depending on the types of eigenvalues found for a system of differential equations, different solution methods will be used for eigenvalues that are distinct and real, eigenvalues that are complex, and eigenvalues that are repeated, each of which is represented graphically in a different manner. Along with these different methods, methods for solving homogeneous and nonhomogeneous systems will be explained to further the scope of the subject.

Why do we care about solving systems of differential equations? There are many physical problems that involve a number of separate elements linked together in some manner, such as spring-mass systems, electrical circuits, and interconnected tanks, that need solutions of systems of differential equations to be understood and solved. Other, more advanced applications of the theory include the Predator-Prey Model (Lotka-Volterra Model) and Richardson's Arms Race Model, which connect mathematics with concepts that could not have been explained without such elegant mathematical equations. The Predator-Prey Model is a system of nonlinear differential equations (even though it is considered an almost linear system), and the Arms Race Model uses systems of differential equations that are nonhomogeneous. Both models are very interesting applications that will be discussed and explained later in this paper. Hopefully, this paper will give the reader insight into what systems of linear differential equations are, how to solve them, how to apply them, and how to understand and interpret the answers that are derived from problems.
Section 1: Solving Systems of Differential Equations with Distinct Real Eigenvalues

In this section, we will solve systems of differential equations where the eigenvalues found from the characteristic equation are all real and distinct. In order to do this, we first take the system

$$x_1' = p_{11}(t)x_1 + \dots + p_{1n}(t)x_n + g_1(t)$$
$$\vdots$$
$$x_n' = p_{n1}(t)x_1 + \dots + p_{nn}(t)x_n + g_n(t)$$

and write it in matrix notation. To do this, we write $x_1', \dots, x_n'$ in vector form as $x' = (x_1', \dots, x_n')^T$, put the coefficients $p_{11}(t), \dots, p_{nn}(t)$ in an n x n matrix

$$P(t) = \begin{pmatrix} p_{11}(t) & \cdots & p_{1n}(t) \\ \vdots & & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) \end{pmatrix},$$

write $x_1, \dots, x_n$ in vector form as $x = (x_1, \dots, x_n)^T$, and write $g_1(t), \dots, g_n(t)$ in vector form as $g(t) = (g_1(t), \dots, g_n(t))^T$. Therefore, the resulting equation using the above vector and matrix notation is

$$x' = P(t)x + g(t).$$

We will first consider homogeneous systems where $g(t) = 0$, thus

$$x' = P(t)x.$$

To find the general solution of the above system when $P(t)$ is a 1 x 1 matrix, the system reduces to the single first order equation

$$\frac{dx}{dt} = px,$$

whose solution is $x = ce^{pt}$. Therefore, to solve systems of dimension two or higher, we will look for solutions of the form

$$x = \xi e^{rt},$$

where $\xi$ is a column vector instead of a constant $c$ (because we are dealing with solutions of more than one differential equation, giving us multiple constants that form a vector) and $r$ is an exponent to be solved for. Substituting $x = \xi e^{rt}$ into both sides of $x' = P(t)x$ gives

$$r\xi e^{rt} = P(t)\xi e^{rt}.$$

Upon canceling $e^{rt}$, we obtain $r\xi = P(t)\xi$, or

$$(P(t) - rI)\xi = 0,$$

where $I$ is the n x n identity matrix. In order to solve $(P(t) - rI)\xi = 0$, we will use Theorem 1.

Theorem 1: Let A be an n x n matrix of constant real numbers and let X be an n-dimensional column vector. The system of equations $AX = 0$ has nontrivial solutions, that is, $X \neq 0$, if and only if the determinant of A is zero.

In our case, $P(t) - rI$ is the n x n matrix represented by A, and $\xi$ is the n-dimensional column vector represented by X. Therefore, in order to find the nontrivial solutions of $(P(t) - rI)\xi = 0$, we must set the determinant of $P(t) - rI$ equal to zero:

$$\det(P(t) - rI) = \begin{vmatrix} p_{11}(t) - r & \cdots & p_{1n}(t) \\ \vdots & & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) - r \end{vmatrix} = 0.$$

Computing the determinant yields a characteristic equation, which resembles a polynomial of degree n, whose roots, the eigenvalues denoted by $r$, can then be computed. After the eigenvalues have been computed, each value of $r$ is substituted back into $(P(t) - rI)\xi = 0$, which is solved for the nonzero vector $\xi$, called the eigenvector of the matrix $P(t)$ corresponding to that eigenvalue. The eigenvector is an n x 1 column vector with as many entries as there are equations. After finding the eigenvalues and the corresponding eigenvectors, they are substituted back into the equation $x = \xi e^{rt}$,
which is represented as the specific solutions

$$x^{(1)}(t) = \begin{pmatrix} x_{11}(t) \\ \vdots \\ x_{n1}(t) \end{pmatrix}, \quad \dots, \quad x^{(k)}(t) = \begin{pmatrix} x_{1k}(t) \\ \vdots \\ x_{nk}(t) \end{pmatrix}, \dots$$

of the initial system. If the Wronskian of $x^{(1)}, \dots, x^{(n)}$ (written $W[x^{(1)}, \dots, x^{(n)}]$) does not equal zero, then the general solution can be represented as a linear combination of the specific solutions,

$$x = c_1 x^{(1)}(t) + \dots + c_k x^{(k)}(t).$$

The following examples will help illustrate how to solve n x n systems of differential equations with distinct real eigenvalues. The general solution of the given system of equations will be solved for, along with a graph that shows the direction field of the answer.

Example 1: Solve the following 2 x 2 system for $x$:

$$x_1' = 3x_1 - 2x_2$$
$$x_2' = 2x_1 - 2x_2$$

To solve the problem, we rewrite the equations in matrix form,

$$x' = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix} x,$$

which is of the form $x' = P(t)x$ where
$$P(t) = \begin{pmatrix} 3 & -2 \\ 2 & -2 \end{pmatrix}.$$

We then find the eigenvalues of $P(t)$ by finding the characteristic equation and solving for $r$. Therefore,

$$\det(P(t) - rI) = \begin{vmatrix} 3 - r & -2 \\ 2 & -2 - r \end{vmatrix} = r^2 - r - 2 = (r + 1)(r - 2) = 0,$$

and the eigenvalues of $P(t)$ are $r_1 = -1$ and $r_2 = 2$. Now we compute the eigenvector for each respective eigenvalue by finding the nontrivial solutions of

$$\begin{pmatrix} 3 - r & -2 \\ 2 & -2 - r \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

For $r_1 = -1$:

$$\begin{pmatrix} 3 - (-1) & -2 \\ 2 & -2 - (-1) \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 4 & -2 \\ 2 & -1 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies \begin{matrix} 4c_1 - 2c_2 = 0 \\ 2c_1 - c_2 = 0 \end{matrix} \implies c_2 = 2c_1.$$

(Note that both of the resulting equations in $c_1$ and $c_2$ are the same.) One such solution is found by choosing $c_1 = 1$, thus making $c_2 = 2$, to give the eigenvector

$$\xi^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$

Knowing that $x^{(n)}(t) = \xi^{(n)} e^{r_n t}$, it follows that $x^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t}$ is a solution of the initial system.

For $r_2 = 2$:

$$\begin{pmatrix} 3 - 2 & -2 \\ 2 & -2 - 2 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 & -2 \\ 2 & -4 \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies \begin{matrix} c_1 - 2c_2 = 0 \\ 2c_1 - 4c_2 = 0 \end{matrix} \implies c_1 = 2c_2.$$
By choosing $c_1 = 2$ to solve the equation, $c_2 = 1$. Proper notation of eigenvectors, where possible, insists that fractions be avoided when writing the entries of the eigenvector. Therefore, for $r_2 = 2$,

$$\xi^{(2)} = \begin{pmatrix} 2 \\ 1 \end{pmatrix}$$

and a second solution is

$$x^{(2)}(t) = \begin{pmatrix} 2 \\ 1 \end{pmatrix} e^{2t}.$$

Now we check whether $x^{(1)}$ and $x^{(2)}$ can be combined into a general solution by taking the Wronskian of the two specific solutions. The Wronskian of $x^{(1)}(t)$ and $x^{(2)}(t)$ is

$$W[x^{(1)}, x^{(2)}] = \begin{vmatrix} e^{-t} & 2e^{2t} \\ 2e^{-t} & e^{2t} \end{vmatrix} = -3e^{t},$$

which is never equal to zero. It follows that the solutions $x^{(1)}(t)$ and $x^{(2)}(t)$ are linearly independent. Therefore, the general solution of the system $x' = P(t)x$ is

$$x(t) = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t} + c_2 \begin{pmatrix} 2 \\ 1 \end{pmatrix} e^{2t}.$$
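Although the paper carries out these computations by hand and with Maple, the same eigenvalue check can be sketched in a few lines of Python with NumPy; this is an illustrative substitute, not the author's worksheet, and it assumes NumPy is available.

```python
import numpy as np

# Coefficient matrix from Example 1 (a Python sketch; the paper itself uses Maple).
P = np.array([[3.0, -2.0],
              [2.0, -2.0]])

# numpy.linalg.eig returns the eigenvalues and a matrix whose columns are eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(P)
print(eigenvalues)        # expected: -1 and 2 (possibly in a different order)
print(eigenvectors)       # columns proportional to (1, 2) and (2, 1)

# Check the defining relation P @ v = r * v for each eigenpair.
for r, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(P @ v, r * v)
```

Because NumPy normalizes eigenvectors to unit length, the returned columns are scalar multiples of $(1, 2)^T$ and $(2, 1)^T$ rather than those exact integer vectors.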
All the general solutions (represented in the direction field by the family of red lines), each a combination of $x^{(1)}(t)$ and $x^{(2)}(t)$ with $c_1 \neq 0$ and $c_2 \neq 0$, are asymptotic to the line $x_2 = 2x_1$. The blue trajectories represent specific solutions to the system, each trajectory having a different initial value ($x_1(0) = a$ and $x_2(0) = b$, where $a$ and $b$ are any real numbers).

For the remaining examples in this section, the derivation of the final solution will be shown without every step. The purpose of these examples is to show the variety of systems of differential equations that have distinct real eigenvalues, such as a 3 x 3 system and a 2 x 2 system with initial conditions given.

Example 2: Solve the following 3 x 3 system for $x$:

$$x_1' = x_1 + x_2 + x_3$$
$$x_2' = 2x_1 + x_2 - x_3$$
$$x_3' = -8x_1 - 5x_2 - 3x_3$$

that is, $x' = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 1 & -1 \\ -8 & -5 & -3 \end{pmatrix} x$.

First, we find the eigenvalues of the coefficient matrix from the equation

$$\det(P(t) - rI) = \begin{vmatrix} 1 - r & 1 & 1 \\ 2 & 1 - r & -1 \\ -8 & -5 & -3 - r \end{vmatrix} = 0$$

and solve the resulting characteristic equation with Maple.
Using Maple yields the eigenvalues $r_1 = -2$, $r_2 = 2$, and $r_3 = -1$ and the eigenvectors

$$\xi^{(1)} = \begin{pmatrix} 4 \\ -5 \\ -7 \end{pmatrix}, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix}, \quad \xi^{(3)} = \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix}.$$

The eigenvectors above are the same as the Maple output but manipulated into a cleaner format in which all entries of each eigenvector are integers. After the eigenvalues and eigenvectors are computed, we find the Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}] = 12e^{-t} \neq 0$; therefore we can substitute all the eigenvalues and eigenvectors found into $x^{(n)} = \xi^{(n)} e^{r_n t}$ and express the solution as the linear combination

$$x(t) = x^{(1)}(t) + x^{(2)}(t) + x^{(3)}(t) = c_1 \begin{pmatrix} 4 \\ -5 \\ -7 \end{pmatrix} e^{-2t} + c_2 \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} e^{2t} + c_3 \begin{pmatrix} -3 \\ 4 \\ 2 \end{pmatrix} e^{-t}.$$

Example 3: Solve the following 2 x 2 system with initial conditions for $x$:

$$x_1' = 5x_1 - x_2, \qquad x_2' = 3x_1 + x_2, \qquad x_1(0) = 2, \quad x_2(0) = -1,$$

that is, $x' = \begin{pmatrix} 5 & -1 \\ 3 & 1 \end{pmatrix} x$ where $x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$.
We start the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Therefore, $r_1 = 4$ with $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $r_2 = 2$ with $\xi^{(2)} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = 2e^{6t} \neq 0$; therefore the specific solutions $x^{(1)}$ and $x^{(2)}$ can be expressed as the general solution

$$x(t) = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{4t} + c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} e^{2t}.$$

After the general solution has been found, we substitute $x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$ into $x(t)$ to get

$$x(0) = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{4 \cdot 0} + c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} e^{2 \cdot 0} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ 3 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \end{pmatrix}.$$

After the equation has been simplified, we multiply $c_1$ and $c_2$ by their respective vectors to obtain the system of equations

$$c_1 + c_2 = 2 \qquad \text{and} \qquad c_1 + 3c_2 = -1.$$
We then solve the system of equations for $c_1$ and $c_2$ to get $c_1 = \frac{7}{2}$ and $c_2 = -\frac{3}{2}$. Substituting back into the general solution gives the specific solution of the system as

$$x(t) = \frac{7}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{4t} - \frac{3}{2} \begin{pmatrix} 1 \\ 3 \end{pmatrix} e^{2t}.$$

The direction field of the general solution, together with a trajectory of the specific solution, shows the different families of solutions for the general solution (denoted by the red arrows), while the blue trajectory represents the specific solution to the system for the initial value $x(0) = \begin{pmatrix} 2 \\ -1 \end{pmatrix}$. Now, after establishing the basis for solving systems of differential equations, we will delve into different cases of solving systems where the eigenvalues are not real and/or distinct.
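For readers who want to confirm a specific solution like the one above numerically, a short SciPy sketch can integrate the initial value problem and compare it with the closed form; the tolerances and comparison point below are illustrative choices of mine, not anything prescribed in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Section 1, Example 3: x' = [[5, -1], [3, 1]] x with x(0) = (2, -1).
A = np.array([[5.0, -1.0],
              [3.0, 1.0]])

def rhs(t, x):
    return A @ x

def exact(t):
    # Closed form found above: x(t) = (7/2)(1,1)^T e^{4t} - (3/2)(1,3)^T e^{2t}
    return 3.5 * np.array([1.0, 1.0]) * np.exp(4 * t) - 1.5 * np.array([1.0, 3.0]) * np.exp(2 * t)

sol = solve_ivp(rhs, (0.0, 1.0), [2.0, -1.0], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])   # numerical value at t = 1
print(exact(1.0))     # closed-form value at t = 1; the two should agree closely
```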
Section 2: Solving Systems of Differential Equations with Complex Eigenvalues

In this section, we use what was discussed in the previous section about generating eigenvalues for an n x n system of linear homogeneous equations with constant coefficients, denoted

$$x' = P(t)x.$$

If $P(t)$ is real, then the coefficients of the characteristic equation for $r$ are real, and any complex eigenvalues must occur in conjugate pairs (Boyce & DiPrima, 2001, p. 384). Therefore, for a 2 x 2 system, $r_1 = \lambda + i\mu$ and $r_2 = \lambda - i\mu$ would be eigenvalues, where $\lambda$ and $\mu$ are real. It also follows that the corresponding eigenvectors are complex conjugates of each other, so $r_2 = \bar{r}_1$ and $\xi^{(2)} = \bar{\xi}^{(1)}$. To see this, take the equation formed in the previous section,

$$(P(t) - rI)\xi = 0,$$

and substitute $r_1$ and $\xi^{(1)}$ to get

$$(P(t) - r_1 I)\xi^{(1)} = 0,$$

which produces one solution of the system. Taking the complex conjugate of the entire equation gives

$$(P(t) - \bar{r}_1 I)\bar{\xi}^{(1)} = 0,$$

where $P(t)$ and $I$ are not affected by the conjugation because they contain only real values. The conjugated equation then produces another solution, with $r_2 = \bar{r}_1$ and $\xi^{(2)} = \bar{\xi}^{(1)}$.

Now, with the eigenvalues and eigenvectors solved for, we can use Euler's formula to express a solution with real and imaginary parts in terms of real solutions of the system. Euler's formula states

$$e^{it} = \cos t + i\sin t,$$

but for use with general complex solutions to a system of differential equations we will use the modified version

$$e^{(\lambda + i\mu)t} = e^{\lambda t}(\cos\mu t + i\sin\mu t) = e^{\lambda t}\cos\mu t + i e^{\lambda t}\sin\mu t$$

to find the real-valued solutions of the system. We can choose either $x^{(1)}(t)$ or $x^{(2)}(t)$ to find the two real-valued solutions, because they are conjugates of each other and both yield the same real-valued solutions. Using $x^{(1)}(t)$ with $\xi^{(1)} = a + bi$, where $a$ and $b$ are real vectors, we have

$$x^{(1)}(t) = (a + bi)e^{(\lambda + i\mu)t} = (a + bi)e^{\lambda t}(\cos\mu t + i\sin\mu t).$$

Expanding the above equation results in

$$x^{(1)}(t) = e^{\lambda t}(a\cos\mu t + bi\cos\mu t + ai\sin\mu t - b\sin\mu t),$$

and separating $x^{(1)}(t)$ into its real and imaginary parts yields

$$x^{(1)}(t) = e^{\lambda t}(a\cos\mu t - b\sin\mu t) + i e^{\lambda t}(a\sin\mu t + b\cos\mu t).$$

If $x^{(1)}(t)$ is written as the sum of two vectors, $x^{(1)}(t) = u(t) + iv(t)$, then the vectors obtained are

$$u(t) = e^{\lambda t}(a\cos\mu t - b\sin\mu t) \qquad \text{and} \qquad v(t) = e^{\lambda t}(a\sin\mu t + b\cos\mu t).$$

We can disregard the $i$ in front of $v(t)$ because it is simply a multiplier of the vector, and we are interested only in real-valued vector solutions. If we had chosen to work with $x^{(2)}(t)$ instead, we would have gotten the same vectors with $x^{(2)}(t) = u(t) - iv(t)$; the answers for $u(t)$ and $v(t)$ would be the same as those above. $u(t)$ and $v(t)$ are the resulting real-valued vector solutions of the system.

It is worth mentioning that $u(t)$ and $v(t)$ are linearly independent and can be combined into a single general solution. Suppose $r_1 = \lambda + i\mu$ and $r_2 = \lambda - i\mu$, and that $r_3, \dots, r_n$ are all real and distinct. Let the corresponding eigenvectors be $\xi^{(1)} = a + bi$, $\xi^{(2)} = a - bi$, $\xi^{(3)}, \dots, \xi^{(n)}$ (Boyce & DiPrima, 2001, p. 385). Then the general solution of a system of differential equations with complex eigenvalues is

$$x(t) = c_1 u(t) + c_2 v(t) + c_3 \xi^{(3)} e^{r_3 t} + \dots + c_n \xi^{(n)} e^{r_n t},$$

where $u(t) = e^{\lambda t}(a\cos\mu t - b\sin\mu t)$, $v(t) = e^{\lambda t}(a\sin\mu t + b\cos\mu t)$, and $P(t)$ consists of all real coefficients. It is only when $P(t)$ consists of all real coefficients that complex eigenvalues and eigenvectors occur in conjugate pairs (Boyce & DiPrima, 2001, p. 385). The following examples will help illustrate how to solve n x n systems of differential equations with complex eigenvalues. Both the complex and real-valued solutions will be given for each example, and some direction fields will be shown to demonstrate the nature of systems with complex eigenvalues.
Example 1: Solve the following 2 x 2 system for $x$:

$$x_1' = 3x_1 - 2x_2$$
$$x_2' = 4x_1 - x_2$$

that is, $x' = \begin{pmatrix} 3 & -2 \\ 4 & -1 \end{pmatrix} x$.

We begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Therefore, $r_1 = 1 + 2i$ and $r_2 = 1 - 2i$, with

$$\xi^{(1)} = \begin{pmatrix} 1 \\ 1 - i \end{pmatrix} \qquad \text{and} \qquad \xi^{(2)} = \begin{pmatrix} 1 \\ 1 + i \end{pmatrix}.$$

To get the eigenvectors into proper form from the Maple output, each Maple eigenvector was rescaled so that its first entry is real and the remaining entries contain no fractions. The Wronskian $W[x^{(1)}, x^{(2)}] = 2ie^{2t} \neq 0$; therefore the specific solutions $x^{(1)}$ and $x^{(2)}$ can be expressed as the general solution in complex form,

$$x(t) = c_1 \begin{pmatrix} 1 \\ 1 - i \end{pmatrix} e^{(1 + 2i)t} + c_2 \begin{pmatrix} 1 \\ 1 + i \end{pmatrix} e^{(1 - 2i)t}.$$
But we want to find the real-valued solutions of the complex general solution, so we will use $x^{(1)}$ to find the real-valued vectors. Therefore,

$$x^{(1)}(t) = \begin{pmatrix} 1 \\ 1 - i \end{pmatrix} e^{(1 + 2i)t}.$$

Using Euler's formula, $x^{(1)}$ becomes

$$\begin{pmatrix} 1 \\ 1 - i \end{pmatrix} e^{t}(\cos 2t + i\sin 2t).$$

After Euler's formula has been applied, we expand the above expression,

$$e^{t} \begin{pmatrix} \cos 2t + i\sin 2t \\ \cos 2t + \sin 2t + i(\sin 2t - \cos 2t) \end{pmatrix},$$

and separate the real and imaginary parts into

$$e^{t} \begin{pmatrix} \cos 2t \\ \cos 2t + \sin 2t \end{pmatrix} + i e^{t} \begin{pmatrix} \sin 2t \\ \sin 2t - \cos 2t \end{pmatrix}.$$

The result is the two real-valued solutions of the form $u(t) + iv(t)$, where

$$u(t) = e^{t} \begin{pmatrix} \cos 2t \\ \cos 2t + \sin 2t \end{pmatrix} \qquad \text{and} \qquad v(t) = e^{t} \begin{pmatrix} \sin 2t \\ \sin 2t - \cos 2t \end{pmatrix}.$$

Therefore, the general solution of the system with real-valued solutions is

$$x(t) = c_1 u(t) + c_2 v(t) = c_1 e^{t} \begin{pmatrix} \cos 2t \\ \cos 2t + \sin 2t \end{pmatrix} + c_2 e^{t} \begin{pmatrix} \sin 2t \\ \sin 2t - \cos 2t \end{pmatrix}.$$
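The same extraction of $u(t)$ and $v(t)$ from a complex eigenpair can be done numerically. The following Python sketch is an illustration under the assumption that NumPy is an acceptable stand-in for the paper's Maple; it builds $u$ and $v$ directly from a computed eigenvector and checks that $u$ satisfies the system.

```python
import numpy as np

# Section 2, Example 1: x' = [[3, -2], [4, -1]] x.
P = np.array([[3.0, -2.0],
              [4.0, -1.0]])
eigenvalues, eigenvectors = np.linalg.eig(P)

# Pick the eigenvalue with positive imaginary part, r = lambda + i*mu.
k = np.argmax(eigenvalues.imag)
r, xi = eigenvalues[k], eigenvectors[:, k]
lam, mu = r.real, r.imag
a, b = xi.real, xi.imag          # xi = a + i*b

def u(t):
    return np.exp(lam * t) * (a * np.cos(mu * t) - b * np.sin(mu * t))

def v(t):
    return np.exp(lam * t) * (a * np.sin(mu * t) + b * np.cos(mu * t))

# Sanity check: u should satisfy x' = P x; compare a finite-difference derivative
# of u against P @ u at a sample time.
t, h = 0.7, 1e-6
print((u(t + h) - u(t - h)) / (2 * h))
print(P @ u(t))
```

The eigenvector returned by NumPy differs from the hand-scaled $(1, 1 - i)^T$ by a complex constant, but $u$ and $v$ built from it still solve the system.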
The resulting direction field shows families of solutions of the general solution of the system. The blue trajectories show specific solutions when initial conditions are given. The direction field consists of spiraled solutions with the origin at the center of the spirals, called a spiral point. The direction of motion is away from the spiral point, and the trajectories become unbounded; the spiral point, for this particular solution, is unstable. There are also systems with complex eigenvalues where the general solution has a spiral point that is stable, because all trajectories approach it as $t$ increases.

Example 2: Solve the following 3 x 3 system for $x$:

$$x_1' = x_1$$
$$x_2' = 2x_1 + x_2 - 2x_3$$
$$x_3' = 3x_1 + 2x_2 + x_3$$

that is, $x' = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 1 & -2 \\ 3 & 2 & 1 \end{pmatrix} x$.
Again, we begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Thus, the eigenvalues are $r_1 = 1$, $r_2 = 1 + 2i$, and $r_3 = 1 - 2i$. The simplified eigenvectors are

$$\xi^{(1)} = \begin{pmatrix} 2 \\ -3 \\ 2 \end{pmatrix}, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix}, \quad \xi^{(3)} = \begin{pmatrix} 0 \\ -i \\ 1 \end{pmatrix}.$$

Notice that $r_1$ and $\xi^{(1)}$ are already real-valued, so no computations are needed to turn them into real-valued solutions like the other, complex eigenvalues and eigenvectors. The Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}] = 4ie^{3t} \neq 0$; therefore the specific solutions $x^{(1)}$, $x^{(2)}$, $x^{(3)}$ can be expressed as the general solution in complex form,

$$x(t) = c_1 \begin{pmatrix} 2 \\ -3 \\ 2 \end{pmatrix} e^{t} + c_2 \begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix} e^{(1 + 2i)t} + c_3 \begin{pmatrix} 0 \\ -i \\ 1 \end{pmatrix} e^{(1 - 2i)t}.$$

To find the real-valued solutions of the general solution, we will use $x^{(2)}(t)$ and Euler's formula in the following equations:
$$x^{(2)}(t) = \begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix} e^{(1 + 2i)t} = \begin{pmatrix} 0 \\ i \\ 1 \end{pmatrix} e^{t}(\cos 2t + i\sin 2t) = e^{t} \begin{pmatrix} 0 \\ -\sin 2t \\ \cos 2t \end{pmatrix} + i e^{t} \begin{pmatrix} 0 \\ \cos 2t \\ \sin 2t \end{pmatrix}.$$

Therefore,

$$u(t) = e^{t} \begin{pmatrix} 0 \\ -\sin 2t \\ \cos 2t \end{pmatrix} \qquad \text{and} \qquad v(t) = e^{t} \begin{pmatrix} 0 \\ \cos 2t \\ \sin 2t \end{pmatrix},$$

and the general solution of the system with real-valued solutions is

$$x(t) = c_1 \xi^{(1)} e^{t} + c_2 u(t) + c_3 v(t) = c_1 \begin{pmatrix} 2 \\ -3 \\ 2 \end{pmatrix} e^{t} + c_2 e^{t} \begin{pmatrix} 0 \\ -\sin 2t \\ \cos 2t \end{pmatrix} + c_3 e^{t} \begin{pmatrix} 0 \\ \cos 2t \\ \sin 2t \end{pmatrix}.$$

Now that we know how to solve systems that yield real and/or imaginary eigenvalues and eigenvectors, we will focus our attention on the next case: when an eigenvalue found from the characteristic equation is repeated.
Section 3: Solving Systems of Differential Equations with Repeated Eigenvalues

In this section, we will solve systems of differential equations where an eigenvalue found from the characteristic equation is repeated. We will still be finding solutions of the equation

$$x' = P(t)x$$

and will still find at least one eigenvalue/eigenvector pair in the way we previously solved systems with distinct eigenvalues. But when solving for the other, repeated eigenvalue, we will see that the second solution takes the form

$$x = \xi te^{rt} + \eta e^{rt},$$

where $\xi$ and $\eta$ are constant vectors. After finding the first solution of the form $x^{(1)}(t) = \xi^{(1)} e^{rt}$, it may seem intuitive to look for a second solution of the form

$$x^{(2)}(t) = \xi^{(1)} te^{rt},$$

because of how repeated roots are handled when solving a second order differential equation. Substituting this back into $x' = P(t)x$ yields

$$r\xi^{(1)} te^{rt} + \xi^{(1)} e^{rt} = P(t)\xi^{(1)} te^{rt},$$

that is,

$$\xi^{(1)} e^{rt} + \big(r\xi^{(1)} - P(t)\xi^{(1)}\big)te^{rt} = 0.$$

For this equation to be satisfied for all $t$, the coefficients of $te^{rt}$ and $e^{rt}$ must each be zero (Boyce & DiPrima, 2001, p. 403). Therefore, we find that in this case $\xi^{(1)} = 0$, and thus $x^{(2)} = \xi^{(1)} te^{rt}$ by itself is not a solution for the second, repeated eigenvalue. But from the substituted equation we see a term of the form $\xi te^{rt}$ along with another term of the form $\xi e^{rt}$. Therefore, we assume instead that

$$x^{(2)} = \xi^{(1)} te^{rt} + \eta e^{rt},$$

where $\xi^{(1)}$ and $\eta$ are constant vectors. Substituting this expression into $x' = P(t)x$ gives

$$r\xi^{(1)} te^{rt} + \big(\xi^{(1)} + r\eta\big)e^{rt} = P(t)\big(\xi^{(1)} te^{rt} + \eta e^{rt}\big).$$

Equating the coefficients of $te^{rt}$ and $e^{rt}$ gives the conditions

$$(P(t) - rI)\xi^{(1)} = 0 \qquad \text{and} \qquad (P(t) - rI)\eta = \xi^{(1)}$$

for the determination of $\xi^{(1)}$ and $\eta$. To solve $(P(t) - rI)\xi^{(1)} = 0$, we simply solve for the repeated eigenvalue and its eigenvector just as in previous sections. For $\eta$, we solve the matrix equation

$$\begin{pmatrix} p_{11}(t) - r & \cdots & p_{1n}(t) \\ \vdots & & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) - r \end{pmatrix} \begin{pmatrix} \eta_1 \\ \vdots \\ \eta_n \end{pmatrix} = \begin{pmatrix} \xi_1 \\ \vdots \\ \xi_n \end{pmatrix}.$$

Solving for $\eta_1, \dots, \eta_n$ in the above equation results in the vector

$$\eta = \begin{pmatrix} \eta_1 \\ \vdots \\ \eta_n \end{pmatrix}.$$

After computing $\xi^{(1)}$ and $\eta$, we substitute them into $x^{(2)}(t)$ to get the second specific solution,

$$x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt}.$$

(Any multiple of the first specific solution $x^{(1)}(t) = \xi^{(1)} e^{rt}$ contained in the $\eta e^{rt}$ term can be disregarded, since it only adjusts the constant in front of $x^{(1)}$; the remaining terms form the genuinely new solution.) Showing that $W[x^{(1)}, x^{(2)}](t) \neq 0$ proves that $x^{(1)}$ and $x^{(2)}$ are linearly independent, allowing us to write the general solution of the system in the form

$$x = c_1 x^{(1)}(t) + c_2 x^{(2)}(t) + \dots + c_k x^{(k)}(t) = c_1 \xi^{(1)} e^{r_1 t} + c_2 \big[\xi^{(1)} te^{r_1 t} + \eta e^{r_1 t}\big] + \dots + c_k \xi^{(k-1)} e^{r_{k-1} t},$$

where $x^{(1)}$ and $x^{(2)}$ correspond to the repeated eigenvalue of multiplicity 2.

For the sake of simplicity, we will focus our examples on solving systems that have repeated eigenvalues of multiplicity 2 only. Also included in one of the examples is a case where a repeated eigenvalue gives rise to linearly independent eigenvectors of the matrix $P(t)$ (which is easily identifiable using Maple), thus avoiding the complications of solving systems with repeated eigenvalues.
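Before turning to the examples, it may help to see the $\eta$-equation handled by software. The sketch below uses SymPy rather than the Maple the paper relies on, and it borrows the matrix of Example 1 below as a test case; it is an illustration of the condition $(P - rI)\eta = \xi$, not the author's procedure.

```python
import sympy as sp

# Given a repeated eigenvalue r with eigenvector xi, solve (P - r*I) * eta = xi.
# The matrix below is the one from Example 1 that follows.
P = sp.Matrix([[4, 1],
               [-4, 8]])
r = 6                                   # repeated eigenvalue of P
xi = sp.Matrix([1, 2])                  # eigenvector for r, i.e., (P - 6I) xi = 0

eta1, eta2 = sp.symbols('eta1 eta2')
eta = sp.Matrix([eta1, eta2])

# (P - rI) eta = xi is underdetermined; solve() returns the family of solutions.
solution = sp.solve(list((P - r * sp.eye(2)) * eta - xi), [eta1, eta2], dict=True)
print(solution)    # e.g. [{eta2: 2*eta1 + 1}], so choosing eta1 = 0 gives eta = (0, 1)
```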
Example 1: Solve the following 2 x 2 system for $x$:

$$x_1' = 4x_1 + x_2$$
$$x_2' = -4x_1 + 8x_2$$

that is, $x' = \begin{pmatrix} 4 & 1 \\ -4 & 8 \end{pmatrix} x$.

We begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Notice that the second eigenvector returned is $\xi^{(2)} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$, which is the zero vector and is of no help in finding the second specific solution of the system. But the results from Maple do give us $r_1 = r_2 = 6$, $\xi^{(1)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$, and $x^{(1)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{6t}$. We need to use the equation

$$x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt}$$

to solve for $\eta$ and thus obtain a second specific solution of the system. To find the second specific solution, we substitute $x^{(2)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} te^{6t} + \eta e^{6t}$ into $x' = \begin{pmatrix} 4 & 1 \\ -4 & 8 \end{pmatrix} x$ to get

$$\begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{6t} + 6\begin{pmatrix} 1 \\ 2 \end{pmatrix} te^{6t} + 6\eta e^{6t} = \begin{pmatrix} 4 & 1 \\ -4 & 8 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} te^{6t} + \begin{pmatrix} 4 & 1 \\ -4 & 8 \end{pmatrix}\eta e^{6t}.$$

Multiplying out $\begin{pmatrix} 4 & 1 \\ -4 & 8 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \end{pmatrix} te^{6t} = 6\begin{pmatrix} 1 \\ 2 \end{pmatrix} te^{6t}$, canceling $6\begin{pmatrix} 1 \\ 2 \end{pmatrix} te^{6t}$ from each side, and rearranging yields

$$\left[\begin{pmatrix} 4 & 1 \\ -4 & 8 \end{pmatrix} - 6I\right]\eta e^{6t} = \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{6t},$$

which, after dividing out $e^{6t}$, is of the form $(P(t) - rI)\eta = \xi^{(1)}$. In this case,

$$(P(t) - rI)\eta = \begin{pmatrix} -2 & 1 \\ -4 & 2 \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$

Thus, to solve for $\eta$, we solve

$$-2\eta_1 + \eta_2 = 1 \qquad \text{and} \qquad -4\eta_1 + 2\eta_2 = 2.$$

(Note that both of the resulting equations in $\eta_1$ and $\eta_2$ are the same.) Choosing $\eta_1 = 0$ gives $\eta_2 = 1$. After solving for $\eta$, we substitute it into $x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt}$ to find the second solution of the system:

$$x^{(2)}(t) = \begin{pmatrix} 1 \\ 2 \end{pmatrix} te^{6t} + \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{6t}.$$

The Wronskian $W[x^{(1)}, x^{(2)}] = e^{12t} \neq 0$. Therefore the specific solutions $x^{(1)}$ and $x^{(2)}$ can be expressed as the general solution

$$x(t) = c_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{6t} + c_2\left[\begin{pmatrix} 1 \\ 2 \end{pmatrix} t + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\right] e^{6t}.$$

The resulting direction field shows families of solutions of the general solution of the system.
The blue trajectories show specific solutions when initial conditions are given. The origin is called an improper node. If the eigenvalues are negative, then the trajectories are similar but traversed in the inward direction. An improper node is asymptotically stable or unstable, depending on whether the eigenvalues are negative or positive (Boyce & DiPrima, 2001, p. 404).

Example 2: Solve the following 3 x 3 system for $x$:

$$x_1' = x_1 + 2x_2 - x_3$$
$$x_2' = x_2 + x_3$$
$$x_3' = 2x_3$$

that is, $x' = \begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix} x$.

We begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix.
From the Maple results: $r_1 = r_2 = 1$, $r_3 = 2$, $x^{(1)}(t) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} e^{t}$, and $x^{(3)}(t) = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2t}$. What we need to find is the specific solution $x^{(2)}(t)$. In this example, we will use the equation $(P(t) - rI)\eta = \xi^{(1)}$ to solve for $\eta$ and substitute it into $x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt}$ to find the remaining specific solution of the system. Therefore,

$$(P(t) - rI)\eta = \left[\begin{pmatrix} 1 & 2 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 2 \end{pmatrix} - 1 \cdot I\right]\begin{pmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{pmatrix} = \begin{pmatrix} 0 & 2 & -1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \\ \eta_3 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix},$$

and

$$2\eta_2 - \eta_3 = 1, \quad \eta_3 = 0 \implies \eta_2 = \tfrac{1}{2}, \; \eta_3 = 0, \text{ with } \eta_1 \text{ free; choose } \eta_1 = 0.$$
Substituting what we found for $\eta$ into $x^{(2)}(t) = \xi^{(1)} te^{rt} + \eta e^{rt}$ yields

$$x^{(2)}(t) = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} te^{t} + \begin{pmatrix} 0 \\ \tfrac{1}{2} \\ 0 \end{pmatrix} e^{t}.$$

The Wronskian $W[x^{(1)}, x^{(2)}, x^{(3)}]$ is a nonzero multiple of $e^{4t}$ and hence never zero. Therefore, the specific solutions $x^{(1)}$, $x^{(2)}$, and $x^{(3)}$ can be expressed as the general solution

$$x(t) = c_1 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} e^{t} + c_3\left[\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} t + \begin{pmatrix} 0 \\ \tfrac{1}{2} \\ 0 \end{pmatrix}\right] e^{t}.$$

Example 3: Solve the following 3 x 3 system for $x$:

$$x_1' = x_2 - 3x_3$$
$$x_2' = -2x_1 + 3x_2 - 3x_3$$
$$x_3' = -2x_1 + x_2 - x_3$$

that is, $x' = \begin{pmatrix} 0 & 1 & -3 \\ -2 & 3 & -3 \\ -2 & 1 & -1 \end{pmatrix} x$.

For this example, using Maple reveals a potential shortcut in solving for the general solution of the system. Again, we begin by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix.
Unlike the other two examples, the Maple output displays two linearly independent eigenvectors for the repeated eigenvalue $r_2 = r_3 = 2$. This is the shortcut for repeated eigenvalues that a math program such as Maple can uncover when it is used to solve systems of differential equations. Therefore,

$$x^{(1)}(t) = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{-2t}, \quad x^{(2)}(t) = \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} e^{2t}, \quad x^{(3)}(t) = \begin{pmatrix} -3 \\ 0 \\ 2 \end{pmatrix} e^{2t},$$

and the general solution of the system is

$$x = c_1 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} e^{-2t} + \left[c_2 \begin{pmatrix} 1 \\ 2 \\ 0 \end{pmatrix} + c_3 \begin{pmatrix} -3 \\ 0 \\ 2 \end{pmatrix}\right] e^{2t}.$$

A more advanced look at systems with repeated eigenvalues would include repeated eigenvalues with multiplicities higher than 2. The equations for such higher multiplicities become more detailed and difficult to solve, but to handle them we would follow the same thought process used for repeated eigenvalues of multiplicity 2. For the next section, we will return to our original form
of a differential equation, $x' = p_1(t)x_1 + \dots + p_n(t)x_n + g(t)$, and solve nonhomogeneous systems where $g(t) \neq 0$.
Section 4: Solving Systems of Nonhomogeneous Differential Equations

Unlike the previous sections, where we solved different types of systems of homogeneous differential equations with constant coefficients, this section focuses on solving systems of nonhomogeneous differential equations of the form

$$x' = P(t)x + g(t).$$

The following theorem related to nonhomogeneous systems helps us figure out where to start the solution process.

Theorem 2: If $x^{(1)}(t), \dots, x^{(n)}(t)$ are linearly independent solutions of the n-dimensional homogeneous system $x' = P(t)x$ on the interval $a < t < b$, and if $x_p(t)$ is any solution of the nonhomogeneous system $x' = P(t)x + g(t)$ on the interval $a < t < b$, then any solution of the nonhomogeneous system can be written

$$x = c_1 x^{(1)}(t) + \dots + c_n x^{(n)}(t) + x_p(t)$$

for a unique choice of the constants $c_1, \dots, c_n$ (Rainville, Bedient, & Bedient, 1997, p. 199).

The theorem states that we will need to find a particular solution $x_p(t)$ and add it to the general solution of the homogeneous system that is part of the nonhomogeneous system. To do that, we will use a variation of parameters technique to find $x_p(t)$ and solve the equation $x' = P(t)x + g(t)$.

Solutions of the homogeneous part of the nonhomogeneous system take the form

$$x = c_1 \xi^{(1)} e^{r_1 t} + \dots + c_n \xi^{(n)} e^{r_n t},$$

and the variation of parameters technique suggests we seek a solution of the nonhomogeneous system of the form

$$x_p(t) = c_1(t)\xi^{(1)} e^{r_1 t} + \dots + c_n(t)\xi^{(n)} e^{r_n t}.$$

Direct substitution back into $x' = P(t)x + g(t)$ yields

$$\big(r_1 c_1(t)\xi^{(1)} e^{r_1 t} + \dots + r_n c_n(t)\xi^{(n)} e^{r_n t}\big) + \big(c_1'(t)\xi^{(1)} e^{r_1 t} + \dots + c_n'(t)\xi^{(n)} e^{r_n t}\big) = \big(P(t)c_1(t)\xi^{(1)} e^{r_1 t} + \dots + P(t)c_n(t)\xi^{(n)} e^{r_n t}\big) + g(t).$$

$P(t)$ multiplied by any eigenvector appearing in the particular solution simply reproduces that eigenvector multiplied by its eigenvalue, because each $\xi^{(k)} e^{r_k t}$ is already a solution of the homogeneous system. Therefore,

$$\big(r_1 c_1(t)\xi^{(1)} e^{r_1 t} + \dots + r_n c_n(t)\xi^{(n)} e^{r_n t}\big) + \big(c_1'(t)\xi^{(1)} e^{r_1 t} + \dots + c_n'(t)\xi^{(n)} e^{r_n t}\big) = \big(r_1 c_1(t)\xi^{(1)} e^{r_1 t} + \dots + r_n c_n(t)\xi^{(n)} e^{r_n t}\big) + g(t),$$

so

$$c_1'(t)\xi^{(1)} e^{r_1 t} + \dots + c_n'(t)\xi^{(n)} e^{r_n t} = g(t).$$

The resulting equation can be rewritten in matrix form as

$$\begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix} \begin{pmatrix} c_1'(t)e^{r_1 t} \\ \vdots \\ c_n'(t)e^{r_n t} \end{pmatrix} = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix}.$$

To solve for $c_1'(t), \dots, c_n'(t)$, we use Cramer's Rule to solve $Ax = b$ for $x$, where

$$A = \begin{pmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{pmatrix}, \qquad x = \begin{pmatrix} c_1'(t)e^{r_1 t} \\ \vdots \\ c_n'(t)e^{r_n t} \end{pmatrix}, \qquad b = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix}.$$

Cramer's Rule states that the system has a unique solution given by

$$x_k = \frac{\det(B_k)}{\det(A)} \qquad \text{for } k = 1, \dots, n,$$

where $B_k$ is the matrix $A$ with its k-th column replaced by $b$.
Therefore,

$$c_1'(t)e^{r_1 t} = \frac{\begin{vmatrix} g_1(t) & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ g_n(t) & \cdots & \xi_n^{(n)} \end{vmatrix}}{\begin{vmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{vmatrix}}, \qquad \dots, \qquad c_n'(t)e^{r_n t} = \frac{\begin{vmatrix} \xi_1^{(1)} & \cdots & g_1(t) \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & g_n(t) \end{vmatrix}}{\begin{vmatrix} \xi_1^{(1)} & \cdots & \xi_1^{(n)} \\ \vdots & & \vdots \\ \xi_n^{(1)} & \cdots & \xi_n^{(n)} \end{vmatrix}}.$$

Thus,

$$c_1'(t)e^{r_1 t} = a_1 g_1(t) + \dots + a_n g_n(t) \implies c_1'(t) = \big[a_1 g_1(t) + \dots + a_n g_n(t)\big]e^{-r_1 t},$$
$$\vdots$$
$$c_n'(t)e^{r_n t} = b_1 g_1(t) + \dots + b_n g_n(t) \implies c_n'(t) = \big[b_1 g_1(t) + \dots + b_n g_n(t)\big]e^{-r_n t}$$

for some constants $a_1, \dots, a_n$ and $b_1, \dots, b_n$. To find the general solution, integrate both sides of the above equations to get $c_1(t), \dots, c_n(t)$, substitute them into $x_p(t) = c_1(t)\xi^{(1)} e^{r_1 t} + \dots + c_n(t)\xi^{(n)} e^{r_n t}$ to find the particular solution, and substitute $x_p(t)$ into $x = c_1 x^{(1)}(t) + \dots + c_n x^{(n)}(t) + x_p(t)$ to find the general solution of the system. The following examples will help demonstrate how to solve systems of nonhomogeneous differential equations and lead into an application of nonhomogeneous systems.

Example 1: Solve the following 2 x 2 system for $x$:

$$x_1' = x_2$$
$$x_2' = -2x_1 + 3x_2 + 3e^{t}$$

that is, $x' = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix} x + \begin{pmatrix} 0 \\ 3e^{t} \end{pmatrix}$.

We begin the example by using Maple to find the eigenvalues and eigenvectors of the homogeneous part of the system,
$$x' = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix} x.$$

The resulting eigenvalues and eigenvectors are $r_1 = 1$, $r_2 = 2$, $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $\xi^{(2)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = e^{3t} \neq 0$; thus the general solution of the homogeneous part of the system is

$$x_h = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t} + c_2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{2t}.$$

To find the particular solution of the nonhomogeneous part of the system, we use the variation of parameters technique to seek a solution of the form

$$x_p = c_1(t) \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t} + c_2(t) \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{2t}.$$
We first substitute $x_p$ for $x$ and $x_p'$ for $x'$ directly into $x' = \begin{pmatrix} 0 & 1 \\ -2 & 3 \end{pmatrix} x + \begin{pmatrix} 0 \\ 3e^{t} \end{pmatrix}$. As in the derivation above, the terms coming from differentiating the exponentials cancel against $P(t)x_p$, leaving

$$c_1'(t) \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t} + c_2'(t) \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{2t} = \begin{pmatrix} 0 \\ 3e^{t} \end{pmatrix},$$

which can be written in matrix notation as

$$\begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} c_1'(t)e^{t} \\ c_2'(t)e^{2t} \end{pmatrix} = \begin{pmatrix} 0 \\ 3e^{t} \end{pmatrix}.$$

To solve for $c_1'(t)$ and $c_2'(t)$, we apply Cramer's Rule to find

$$c_1'(t)e^{t} = \frac{\begin{vmatrix} 0 & 1 \\ 3e^{t} & 2 \end{vmatrix}}{\begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix}} = -3e^{t} \qquad \text{and} \qquad c_2'(t)e^{2t} = \frac{\begin{vmatrix} 1 & 0 \\ 1 & 3e^{t} \end{vmatrix}}{\begin{vmatrix} 1 & 1 \\ 1 & 2 \end{vmatrix}} = 3e^{t}.$$

Thus

$$c_1'(t) = -3 \qquad \text{and} \qquad c_2'(t) = 3e^{-t}.$$
To find $c_1(t)$ and $c_2(t)$, integrate both sides of both equations, so that

$$c_1(t) = -3t \qquad \text{and} \qquad c_2(t) = -3e^{-t},$$

and substituting into the particular solution $x_p = c_1(t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t} + c_2(t)\begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{2t}$ yields

$$x_p = -3t\begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t} - 3e^{-t}\begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{2t} = -3te^{t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} - 3e^{t}\begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$

Therefore, the general solution of the nonhomogeneous system is

$$x = x_h + x_p = c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{t} + c_2\begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{2t} - 3te^{t}\begin{pmatrix} 1 \\ 1 \end{pmatrix} - 3e^{t}\begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$

Example 2: Solve the following 2 x 2 system for $x$:

$$x_1' = -2x_1 + x_2 + e^{-t}$$
$$x_2' = x_1 - 2x_2 + 3t$$

that is, $x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x + \begin{pmatrix} e^{-t} \\ 3t \end{pmatrix}$.

We begin the example by using Maple to find the eigenvalues and eigenvectors of the homogeneous part of the system,
$$x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x.$$

The resulting eigenvalues and eigenvectors are $r_1 = -1$, $r_2 = -3$, $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $\xi^{(2)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = -2e^{-4t} \neq 0$; thus the general solution of the homogeneous part of the system is

$$x_h = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t}.$$

To find the particular solution of the nonhomogeneous part of the system, we use the variation of parameters technique to seek a solution of the form

$$x_p = c_1(t) \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + c_2(t) \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t}.$$

Substituting $x_p$ for $x$ and $x_p'$ for $x'$ directly into $x' = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x + \begin{pmatrix} e^{-t} \\ 3t \end{pmatrix}$ leaves, after the homogeneous terms cancel,

$$c_1'(t) \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + c_2'(t) \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t} = \begin{pmatrix} e^{-t} \\ 3t \end{pmatrix},$$

which can be written in matrix notation as

$$\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} c_1'(t)e^{-t} \\ c_2'(t)e^{-3t} \end{pmatrix} = \begin{pmatrix} e^{-t} \\ 3t \end{pmatrix}.$$

To solve for $c_1'(t)$ and $c_2'(t)$, we apply Cramer's Rule to find
$$c_1'(t)e^{-t} = \frac{\begin{vmatrix} e^{-t} & 1 \\ 3t & -1 \end{vmatrix}}{\begin{vmatrix} 1 & 1 \\ 1 & -1 \end{vmatrix}} = \frac{e^{-t} + 3t}{2} \qquad \text{and} \qquad c_2'(t)e^{-3t} = \frac{\begin{vmatrix} 1 & e^{-t} \\ 1 & 3t \end{vmatrix}}{\begin{vmatrix} 1 & 1 \\ 1 & -1 \end{vmatrix}} = \frac{e^{-t} - 3t}{2}.$$

Thus

$$c_1'(t) = \frac{1}{2} + \frac{3}{2}te^{t} \qquad \text{and} \qquad c_2'(t) = \frac{1}{2}e^{2t} - \frac{3}{2}te^{3t}.$$

To find $c_1(t)$ and $c_2(t)$, integrate both sides of both equations (using integration by parts on the $te^{t}$ and $te^{3t}$ terms), so that

$$c_1(t) = \frac{t}{2} + \frac{3}{2}te^{t} - \frac{3}{2}e^{t} \qquad \text{and} \qquad c_2(t) = \frac{1}{4}e^{2t} - \frac{1}{2}te^{3t} + \frac{1}{6}e^{3t},$$

and substituting into the particular solution $x_p = c_1(t)\begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + c_2(t)\begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t}$ yields

$$x_p = \left(\frac{t}{2} + \frac{3}{2}te^{t} - \frac{3}{2}e^{t}\right)\begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + \left(\frac{1}{4}e^{2t} - \frac{1}{2}te^{3t} + \frac{1}{6}e^{3t}\right)\begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t} = \frac{1}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix} te^{-t} + \frac{1}{4}\begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix} t - \begin{pmatrix} \tfrac{4}{3} \\ \tfrac{5}{3} \end{pmatrix}.$$

Therefore, the general solution of the nonhomogeneous system is

$$x = x_h + x_p = c_1\begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{-t} + c_2\begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-3t} + \frac{1}{2}\begin{pmatrix} 1 \\ 1 \end{pmatrix} te^{-t} + \frac{1}{4}\begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix} t - \begin{pmatrix} \tfrac{4}{3} \\ \tfrac{5}{3} \end{pmatrix}.$$
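Because the variation of parameters computations above involve a fair amount of bookkeeping, a symbolic residual check is a cheap safeguard. The SymPy sketch below is my own illustration (the paper itself uses Maple); it verifies that the particular solutions found in Examples 1 and 2 satisfy their respective nonhomogeneous systems.

```python
import sympy as sp

t = sp.symbols('t')

# Example 1: x' = [[0,1],[-2,3]] x + (0, 3e^t)^T with
# x_p = -3t e^t (1,1)^T - 3 e^t (1,2)^T.
P1 = sp.Matrix([[0, 1], [-2, 3]])
g1 = sp.Matrix([0, 3 * sp.exp(t)])
xp1 = -3 * t * sp.exp(t) * sp.Matrix([1, 1]) - 3 * sp.exp(t) * sp.Matrix([1, 2])
print((xp1.diff(t) - P1 * xp1 - g1).applyfunc(sp.simplify))   # expect the zero vector

# Example 2: x' = [[-2,1],[1,-2]] x + (e^{-t}, 3t)^T with
# x_p = (1/2)(1,1)^T t e^{-t} + (1/4)(1,-1)^T e^{-t} + (1,2)^T t - (4/3, 5/3)^T.
P2 = sp.Matrix([[-2, 1], [1, -2]])
g2 = sp.Matrix([sp.exp(-t), 3 * t])
xp2 = (sp.Rational(1, 2) * t * sp.exp(-t) * sp.Matrix([1, 1])
       + sp.Rational(1, 4) * sp.exp(-t) * sp.Matrix([1, -1])
       + t * sp.Matrix([1, 2]) - sp.Matrix([sp.Rational(4, 3), sp.Rational(5, 3)]))
print((xp2.diff(t) - P2 * xp2 - g2).applyfunc(sp.simplify))   # expect the zero vector
```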
Section 5: Application of Systems of Differential Equations – Arms Races (Nonhomogeneous Systems of Equations)

In the previous section, we discussed how to solve systems of differential equations that are nonhomogeneous using a variation of parameters technique. Now we can apply that knowledge to a model that illustrates an arms race between two competing nations. L. F. Richardson, an English meteorologist, first proposed this model (also known as the Richardson Model), which tried to explain an arms race between two rival nations mathematically. Richardson himself seems to have believed that his perceptions relating to the way nations compete militarily might have been useful in preventing the outbreak of hostilities in World War II (Brown, 2007, p. 60). Both nations are self-defensive, both fight back to protect their nation, both maintain armies and stockpile weapons, and when one nation expands its army the other nation finds it offensive. Therefore, both nations will spend money (in billions of dollars) on armaments x and y that are functions of time t measured in years; x(t) and y(t) will represent the yearly rate of armament expenditures of the two nations in some standard unit. Richardson then made the following assumptions about his model:

- The expenditure for armaments of each country will increase at a rate that is proportional to the other country's expenditure (each nation's mutual fear is directly proportional to the expenditure of the other nation) (Rainville, Bedient, & Bedient, 1997, p. 228).
- The expenditure for armaments of each country will decrease at a rate that is proportional to its own expenditure (extensive armament expenditures create a drag on the nation's economy) (Rainville, Bedient, & Bedient, 1997, p. 228).
- The rate of change of arms expenditure for a country has a constant component that measures the level of antagonism of that country toward the other (Rainville, Bedient, & Bedient, 1997, p. 228).
- The effects of the three previous assumptions are additive (Rainville, Bedient, & Bedient, 1997, p. 228).

These assumptions make up the differential equations of the arms race system, denoted by

$$\frac{dx}{dt} = ay - mx + r, \qquad \frac{dy}{dt} = bx - ny + s, \qquad x(0) = x_0, \quad y(0) = y_0,$$

where a, m, b, and n are all positive constants. The positive terms $ay$ and $bx$ represent the drive to spend more money on arms due to the level of spending of the other nation, and the negative terms $-mx$ and $-ny$ reflect a nation's desire to inhibit future military spending because of the economic burden of its own spending. But r and s can take any value, because they represent the attitudes of the nations toward each other (negative values represent feelings of good will, while positive values represent feelings of distrust). The initial values x(0) and y(0) represent the initial amount of money (in billions of dollars) each nation spends on armaments. The system can be written as

$$x'(t) = -mx + ay + r, \qquad y'(t) = bx - ny + s,$$

and expressed in matrix notation as

$$\begin{pmatrix} x'(t) \\ y'(t) \end{pmatrix} = \begin{pmatrix} -m & a \\ b & -n \end{pmatrix}\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} + \begin{pmatrix} r \\ s \end{pmatrix}, \qquad \text{that is,} \quad X' = P(t)X + B.$$
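Since the model is simply a two-dimensional linear system with constant forcing, it is easy to encode for numerical experiments. The Python sketch below is an illustration of the model exactly as defined above (the function names and use of NumPy are my choices, not the paper's); it packages the right-hand side and the constant equilibrium solution so that particular arms races can be explored numerically.

```python
import numpy as np

def richardson_rhs(t, state, a, m, b, n, r, s):
    """Right-hand side of Richardson's arms race model:
    x' = a*y - m*x + r,  y' = b*x - n*y + s."""
    x, y = state
    return np.array([a * y - m * x + r,
                     b * x - n * y + s])

def equilibrium(a, m, b, n, r, s):
    """Constant solution obtained by setting x' = y' = 0, i.e. solving
    [[-m, a], [b, -n]] (x, y)^T = -(r, s)^T."""
    A = np.array([[-m, a], [b, -n]])
    return np.linalg.solve(A, -np.array([r, s]))

# For Example 1 below (m=2, a=4, b=4, n=2, r=8, s=2) this returns (-2, -3).
print(equilibrium(a=4, m=2, b=4, n=2, r=8, s=2))
```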
To solve the system, we use the knowledge from the previous section to develop the general solution of the homogeneous system. For the nonhomogeneous part of the system, the particular solution will be a constant solution of the form $\begin{pmatrix} f \\ g \end{pmatrix}$, because the vector B is made up of constants; this makes the process of solving by variation of parameters much easier. Lastly, the initial values (trajectories) for the solution represent the starting amount of money each country spends on armaments.

General solutions of the arms race system will represent one of a few types of races: a stable arms race, a runaway arms race, a disarmament, or a disarmament/runaway/stable arms race depending on the initial values. The following examples will help demonstrate each of these races, along with slope fields to represent them graphically.

Example 1: A Runaway Arms Race

The following system will result in a runaway arms race:

$$x'(t) = -2x + 4y + 8, \qquad y'(t) = 4x - 2y + 2, \qquad \text{that is,} \quad X' = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} X + \begin{pmatrix} 8 \\ 2 \end{pmatrix}.$$

To find the solution to this arms race, we first find the general solution of the homogeneous part of the system using Maple.
Therefore, $r_1 = 2$ with $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $r_2 = -6$ with $\xi^{(2)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = -2e^{-4t} \neq 0$; thus the general solution of the homogeneous part of the arms race is

$$x_h = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t}.$$

As mentioned at the beginning of this section, the nonhomogeneous system $X' = P(t)X + B$ has a constant solution of the form $\begin{pmatrix} f \\ g \end{pmatrix}$, because B is a vector of constants and thus the particular solution should also be a vector of constants. Therefore, in the equation $X' = P(t)X + B$, the constant vector $X(t) = \begin{pmatrix} f \\ g \end{pmatrix}$ can be substituted for X and X', where

$$P(t) = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} \qquad \text{and} \qquad B = \begin{pmatrix} 8 \\ 2 \end{pmatrix},$$

to get the following expression:

$$\begin{pmatrix} 0 \\ 0 \end{pmatrix} = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} 8 \\ 2 \end{pmatrix} \implies \begin{matrix} -2f + 4g + 8 = 0 \\ 4f - 2g + 2 = 0 \end{matrix} \implies x_n = \begin{pmatrix} f \\ g \end{pmatrix} = \begin{pmatrix} -2 \\ -3 \end{pmatrix}.$$

Therefore the general solution of the nonhomogeneous system is

$$x = x_h + x_n = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t} + \begin{pmatrix} -2 \\ -3 \end{pmatrix}.$$
Note that $\lim_{t\to\infty} x(t) = \infty$ and $\lim_{t\to\infty} y(t) = \infty$. Thus we would predict that the rate at which each nation spends money on armaments increases without bound, resulting in a runaway arms race.

The direction field for the nonhomogeneous system, with the initial conditions $x_0 = 5, y_0 = 2$ and $x_0 = 2, y_0 = 5$ shown, indicates that for any initial value the solution goes to $\infty$ as $t \to \infty$. Thus, we have a runaway arms race.

If we wanted to solve the system with an initial condition given, such as $x_0 = 5, y_0 = 2$, we would set up the general solution as

$$\begin{pmatrix} 5 \\ 2 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2\cdot 0} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6\cdot 0} + \begin{pmatrix} -2 \\ -3 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} + \begin{pmatrix} -2 \\ -3 \end{pmatrix}$$
and solve for $c_1$ and $c_2$. Therefore,

$$\begin{matrix} c_1 + c_2 - 2 = 5 \\ c_1 - c_2 - 3 = 2 \end{matrix} \implies \begin{matrix} c_1 + c_2 = 7 \\ c_1 - c_2 = 5 \end{matrix} \implies c_1 = 6 \text{ and } c_2 = 1.$$

Thus, the final solution with the initial conditions given is

$$x = 6\begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t} + \begin{pmatrix} -2 \\ -3 \end{pmatrix},$$

or

$$x(t) = 6e^{2t} + e^{-6t} - 2, \qquad y(t) = 6e^{2t} - e^{-6t} - 3.$$

The role of the initial value is how much each nation initially spends on armaments, in billions of dollars. Using initial values when solving an arms race system leads to a specific solution describing the race, instead of families of general solutions describing all cases of the system.

Example 2: A Stable Arms Race

The following system will result in a stable arms race:

$$x'(t) = -5x + 2y + 1, \qquad y'(t) = 4x - 3y + 2, \qquad \text{that is,} \quad X' = \begin{pmatrix} -5 & 2 \\ 4 & -3 \end{pmatrix} X + \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$

To find the solution to this arms race, we first find the general solution of the homogeneous part of the system using Maple.
Therefore, $r_1 = -7$ with $\xi^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$, and $r_2 = -1$ with $\xi^{(2)} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = 3e^{-8t} \neq 0$; thus the general solution of the homogeneous part of the arms race is

$$x_h = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-7t} + c_2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t}.$$

The constant solution of the nonhomogeneous part of the arms race is found by substituting $X(t) = \begin{pmatrix} f \\ g \end{pmatrix}$ into $X' = \begin{pmatrix} -5 & 2 \\ 4 & -3 \end{pmatrix} X + \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. Therefore,

$$\begin{pmatrix} -5 & 2 \\ 4 & -3 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} 1 \\ 2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies \begin{matrix} -5f + 2g + 1 = 0 \\ 4f - 3g + 2 = 0 \end{matrix} \implies f = 1 \text{ and } g = 2,$$

and the constant solution of the system is $x_n = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. Thus the general solution of the nonhomogeneous system is

$$x = x_h + x_n = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-7t} + c_2 \begin{pmatrix} 1 \\ 2 \end{pmatrix} e^{-t} + \begin{pmatrix} 1 \\ 2 \end{pmatrix}.$$
Note that $\lim_{t\to\infty} x(t) = 1$ and $\lim_{t\to\infty} y(t) = 2$, because in both equations the terms containing $e^{-7t}$ and $e^{-t}$ go to 0 as $t \to \infty$. All that is left from the differential equations are the constant terms, which are what the solutions converge to for any initial values.

The direction field, with a few trajectories denoting different initial values for the nonhomogeneous system, shows that for any initial value the solution approaches the point (1, 2) as $t \to \infty$. Thus, we have a stable arms race.
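A quick numerical experiment supports this conclusion. The SciPy sketch below is an illustrative substitute for the paper's Maple direction fields, with arbitrarily chosen starting points; it integrates the system from several initial values and shows each one settling near (1, 2).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Section 5, Example 2 (stable arms race): x' = -5x + 2y + 1, y' = 4x - 3y + 2.
def rhs(t, state):
    x, y = state
    return [-5 * x + 2 * y + 1, 4 * x - 3 * y + 2]

for x0, y0 in [(0.0, 0.0), (5.0, 2.0), (2.0, 5.0), (10.0, 10.0)]:
    sol = solve_ivp(rhs, (0.0, 10.0), [x0, y0], rtol=1e-9)
    print((x0, y0), "->", np.round(sol.y[:, -1], 4))   # approximately (1, 2) in every case
```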
Example 3: Disarmament

The following system will result in disarmament between the competing nations for all initial values:

$$x'(t) = -4x + y - 1, \qquad y'(t) = x - y - 2, \qquad \text{that is,} \quad X' = \begin{pmatrix} -4 & 1 \\ 1 & -1 \end{pmatrix} X + \begin{pmatrix} -1 \\ -2 \end{pmatrix}.$$

For the sake of simplicity, only the graph of this system and the solution produced by Maple are considered, because the eigenvalues and eigenvectors generated from P(t) involve complicated radicals that would be difficult to manipulate by hand.

As can be seen from the Maple output, the general solution of the system becomes very complicated, but $\lim_{t\to\infty} x(t) = -1$ and $\lim_{t\to\infty} y(t) = -3$, showing that the nations eventually reach a point in time where they are decreasing the rate at which they spend money on armaments until they are spending no money on the arms race. The graph of the system is much more beneficial in demonstrating an arms race that ends in disarmament.

The direction field, with a few trajectories denoting different initial values for the nonhomogeneous system, shows the following behavior.
For the initial values represented by the blue trajectories in the direction field, the trajectories approach the point (-1, -3) as $t \to \infty$, resulting in disarmament for any initial value chosen for the system.

Example 4: Disarmament/Runaway Arms Race/Stable Arms Race

The following system,

$$x'(t) = -2x + 4y - 2, \qquad y'(t) = 4x - 2y - 2, \qquad \text{that is,} \quad X' = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} X + \begin{pmatrix} -2 \\ -2 \end{pmatrix},$$

will result in disarmament if $x_0 + y_0 < 2$, a runaway arms race if $x_0 + y_0 > 2$, or a stable arms race if $x_0 + y_0 = 2$.

To find the solution to this arms race, we first find the general solution of the homogeneous part of the system using Maple. Therefore, $r_1 = 2$ with $\xi^{(1)} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$, and $r_2 = -6$ with $\xi^{(2)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$. The Wronskian $W[x^{(1)}, x^{(2)}] = -2e^{-4t} \neq 0$; thus the general solution of the homogeneous part of the arms race is

$$x_h = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t}.$$

The constant solution of the nonhomogeneous part of the arms race is found by substituting $X(t) = \begin{pmatrix} f \\ g \end{pmatrix}$ into $X' = \begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix} X + \begin{pmatrix} -2 \\ -2 \end{pmatrix}$. Therefore,

$$\begin{pmatrix} -2 & 4 \\ 4 & -2 \end{pmatrix}\begin{pmatrix} f \\ g \end{pmatrix} + \begin{pmatrix} -2 \\ -2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \implies f = 1 \text{ and } g = 1,$$

and the constant solution of the system is $x_n = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$. Thus the general solution of the nonhomogeneous system is

$$x = x_h + x_n = c_1 \begin{pmatrix} 1 \\ 1 \end{pmatrix} e^{2t} + c_2 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{-6t} + \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$

The direction field, with a few trajectories denoting different initial values for the nonhomogeneous system, illustrates the three possible outcomes.
For the initial values $x_0 = 1.7, y_0 = 0$; $x_0 = 0, y_0 = 1.7$; and other points with $x_0 + y_0 < 2$ in the direction field, the trajectories go to $-\infty$ as $t \to \infty$, resulting in disarmament. For the initial values $x_0 = 2, y_0 = 0$; $x_0 = 0, y_0 = 2$; and other points with $x_0 + y_0 = 2$, the trajectories approach the point (1, 1) as $t \to \infty$, resulting in a stable arms race. For the initial values $x_0 = 3, y_0 = 0$; $x_0 = 0, y_0 = 3$; and other points with $x_0 + y_0 > 2$, the trajectories go to $\infty$ as $t \to \infty$, resulting in a runaway arms race.

Conjectures of Richardson's Arms Race Model

After solving and graphing results of the Arms Race Model to show the different situations describing the model, the following conjectures can be made to summarize what is theoretically happening in the model for certain values of the coefficients:

- If mn - ab < 0, r > 0, and s > 0, there will be a runaway arms race.
- If mn - ab > 0, r > 0, and s > 0, there will be a stable arms race.
- If mn - ab > 0, r < 0, and s < 0, there will be disarmament.
- If mn - ab < 0, r < 0, and s < 0, there will be disarmament if $x_0 + y_0 < -\frac{r + s}{2}$, a runaway arms race if $x_0 + y_0 > -\frac{r + s}{2}$, and a stable arms race if $x_0 + y_0 = -\frac{r + s}{2}$.

Real World Examples of Richardson's Model

"At the level of strategic weapons during the Cold War, Russia and America were in a two-way arms race. The structure of this is that both sides increased until the Russians achieved equality, at which point the two sides began signing initial SALT agreements as predicted by the model" (Hunter, 1980, p. 252).

Both nations realized that once they were at equality, the risk of spending more money on armaments would have a worse impact on their economies; thus, as the model predicted, the arms race started to stabilize (even though both nations still had negative feelings toward each other).

Another example reveals a sharp limitation of the model. "In 1939, Russia and Germany signed a nonaggression pact which left Germany and Russia in a two-sided alliance against England, France, and other weaker countries. Yet the arms race in both Russia and Germany accelerated at the maximum economic rate. Why? Because both sides knew that Hitler intended to invade Russia at the earliest feasible time" (Hunter, 1980, p. 252).

Both nations secretly had negative feelings toward one another and were willing to spend more money on armaments without much regard for the negative impact that the spending would have on their economies. Thus, as Richardson's Model could have predicted, the Germans and the Russians were (at the time) in a runaway arms race with each other.

Richardson's Arms Race Model Extended

What was described in this section is the most generic version of the arms race model. The model has been expanded to include more than two nations and other variables that would affect an arms race. What is interesting is that the arms race model could be, and was, used to model World War II, with data included for 10 countries. At the most basic level of the arms race model, a number of problems and difficulties arise that are addressed in more detailed versions of the model, but Richardson's Model laid a good foundation for understanding a social science topic mathematically.
    • Section 6: Application of Systems of Differential Equations – Predator-Prey Model(Nonlinear System of Equations) In the study of the dynamics of a single population, ecologists typically take intoconsideration such factors as the "natural" growth rate and the "carrying capacity" of theenvironment. Mathematical ecology requires the study of populations that interact, therebyaffecting each others growth rates. In this section we study a very special case of such aninteraction, in which there are exactly two species, one of which, the predators, eats the other, theprey. Such pairs exist throughout nature: Lions and gazelles, birds and insects, pandas andeucalyptus trees, and Venus fly traps and flies (Moore & Smith, 2003). Vito Volterra (1860-1940) was a famous Italian mathematician who retired from adistinguished career in pure mathematics in the early 1920s. His son-in-law, HumbertoDAncona, was a biologist who studied the populations of various species of fish in the AdriaticSea. In 1926, DAncona completed a statistical study of the numbers of each species sold on thefish markets of three ports from 1914-1923: Fiume, Trieste, and Venice (Moore & Smith, 2003). DAncona observed that the highest percentages of predators occurred during and just after World War I, when fishing was drastically curtailed. He concluded that the predator-prey balance was at its natural state during the war and that intense fishing before and after the war disturbed this natural balance. Having no biological or ecological explanation for this phenomenon, DAncona asked Volterra if he could come up with a mathematical model that might explain what was going on. In a matter of months, Volterra developed a series of models for interactions of two or more species 57
    (Moore & Smith, 2003).

The first and simplest of these models is the subject of this section.

Alfred J. Lotka (1880-1949) was an American mathematical biologist who formulated many of the same models as Volterra, independently and at about the same time. His primary example of a predator-prey system comprised a plant population and an herbivorous animal dependent on that plant for food (Moore & Smith, 2003). Thus, the Predator-Prey Model is also known as the Lotka-Volterra Model, named after the two mathematicians who helped develop the equations.

To keep the model simple for easier understanding, we will make the following assumptions, which would be unrealistic in most predator-prey interactions:

• The predator species is totally dependent on a single prey species as its only food supply.
• The prey species has an unlimited food supply.
• There is no threat to the prey other than the specific predator.

In constructing the model of the interaction between the two species, we will make the following additional assumptions, which relate to the ones above:

• x(t) will represent the number of prey at a time given by t, and y(t) will represent the number of predators at a time also given by t.
• In the absence of the predator, the prey grow at a rate proportional to the current population; thus dx/dt = ax, a > 0, when y = 0 (Boyce & DiPrima, 2001, p. 503).
    • dy  In the absence of the prey, the predator dies out; thus  cy, c  0, when x  0 (Boyce dt & DiPrima, 2001, p. 503).  The number of encounters between predator and prey is proportional to the product of their populations. Each such encounter tends to promote the growth of the predator and to inhibit the growth of the prey. Thus the growth rate of the predator is increased by a term of the form pxy , while the growth rate of the prey is decreased by a term bxy , where p and b are positive constants (Boyce & DiPrima, 2001, p. 503).As a result of these assumptions, the following equations were formulated to represent thechange of predators and prey over time dx dy  ax  bxy  x(a  by ) and  cy  pxy  y (c  px) dt dtwhere a, c, b, and p are all positive. The growth rate of the prey and the death rate of thepredator is represented by a and c respectively while b and p are measures of the effect of theinteraction between the two species (Boyce & DiPrima, 2001, p. 504). Even though thedifferential equations of the Predator-Prey Model are nonlinear, we can use the linear part of theequation to find a general solution to help explain the behavior of the system. Also, finding thecritical points, the directional field, and trajectories of the directional field will be key in furtherexplaining the behavior of the system. The critical points of the system are the solutions of x(a  by)  0 and y(c  px)  0that is the points (0,0) and (c / p, a / b) . We first examine the solutions of the correspondinglinear system near each critical point. The origin is a saddle point and hence unstable. Entrance 59
Entrance to the saddle point is along the y-axis, and departure is along the x-axis; all other trajectories depart from the neighborhood of the critical point.

From the critical point at the origin, the corresponding linear system is

$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & -c \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} \qquad (1)$$

The eigenvalues and eigenvectors are

$$r_1 = a, \quad \xi^{(1)} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad \text{and} \qquad r_2 = -c, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

Therefore, the general solution is

$$\begin{pmatrix} x \\ y \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{at} + c_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{-ct}$$

and the resulting direction field near the origin is shown in the figure below.
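As a quick check of the linearization at the origin, the eigenvalues and eigenvectors of the coefficient matrix in equation (1) can be computed numerically. The short sketch below is illustrative only (the computations in this paper were carried out in Maple), and it borrows the values a = 1.5 and c = 0.5 from Example 1 later in this section.

```python
# Eigenvalues and eigenvectors of the linearization at the origin, equation (1).
# The values a = 1.5 and c = 0.5 are borrowed from Example 1 below.
import numpy as np

a, c = 1.5, 0.5
A = np.array([[a, 0.0],
              [0.0, -c]])                 # coefficient matrix of equation (1)

eigvals, eigvecs = np.linalg.eig(A)
print("eigenvalues :", eigvals)           # expect  a = 1.5  and  -c = -0.5
print("eigenvectors:\n", eigvecs)         # expect (1, 0) and (0, 1)

# One positive and one negative eigenvalue confirms that the origin is a saddle
# point: solutions approach along the y-axis (the -c mode) and depart along the
# x-axis (the +a mode).
```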
Next, consider the critical point (c/p, a/b). To examine the behavior and generate a general solution near that critical point, we translate it to the origin by setting x = c/p + u and y = a/b + v, that is, u = x − c/p and v = y − a/b. Substituting these into the right-hand sides x(a − by) and y(−c + px) gives

$$\left(\frac{c}{p} + u\right)\left(a - b\left(\frac{a}{b} + v\right)\right) = \left(\frac{c}{p} + u\right)(a - a - bv) = -\left(\frac{bc}{p} + bu\right)v$$

and

$$\left(\frac{a}{b} + v\right)\left(-c + p\left(\frac{c}{p} + u\right)\right) = \left(\frac{a}{b} + v\right)(-c + c + pu) = \left(\frac{ap}{b} + pv\right)u$$

Dropping the nonlinear terms buv and puv, the corresponding linear system is

$$\frac{d}{dt}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 & -bc/p \\ ap/b & 0 \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} \qquad (2)$$

The eigenvalues of that system are r = ±i√(ac), so this critical point is a (stable) center of the linear system: nearby trajectories neither approach nor depart from the critical point but circle around it (Boyce & DiPrima, 2001, p. 507).
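The same linearization at (c/p, a/b) can be checked symbolically. The sketch below uses SymPy to form the Jacobian of the vector field and evaluate it at the interior critical point; it is only a verification aid, since the paper carries out these steps by hand.

```python
# Symbolic check that the linearization at (c/p, a/b) matches equation (2)
# and has purely imaginary eigenvalues +/- i*sqrt(ac).
import sympy as sp

x, y, a, b, c, p = sp.symbols('x y a b c p', positive=True)

F = sp.Matrix([x * (a - b * y),       # dx/dt
               y * (-c + p * x)])     # dy/dt

J = F.jacobian([x, y])                          # Jacobian of the vector field
J_cp = J.subs({x: c / p, y: a / b})             # evaluate at (c/p, a/b)

print(J_cp)              # [[0, -b*c/p], [a*p/b, 0]], matching equation (2)
print(J_cp.eigenvals())  # eigenvalues +/- I*sqrt(a*c), each with multiplicity 1
```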
Returning to the nonlinear system, it can be reduced to a single equation:

$$\frac{dy}{dx} = \frac{dy/dt}{dx/dt} = \frac{y(-c + px)}{x(a - by)}$$

The above equation can be separated and integrated to get the following solution:

$$a \ln y - by + c \ln x - px = C \qquad (3)$$

where C is a constant of integration.
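Equation (3) says that the quantity H(x, y) = a ln y − by + c ln x − px is constant along every trajectory of the nonlinear system. A small numerical sketch can confirm this; the coefficient values below are those of Example 1 and are used only for illustration.

```python
# Numerical check that H(x, y) = a*ln(y) - b*y + c*ln(x) - p*x, the left side of
# equation (3), stays essentially constant along a trajectory of the system.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, p = 1.5, 0.5, 0.5, 1.0            # coefficients of Example 1 below

def lotka_volterra(t, z):
    x, y = z
    return [x * (a - b * y), y * (-c + p * x)]

def H(x, y):
    return a * np.log(y) - b * y + c * np.log(x) - p * x

sol = solve_ivp(lotka_volterra, (0.0, 30.0), [1.0, 1.0], rtol=1e-10, atol=1e-10)

values = H(sol.y[0], sol.y[1])
print("H at t = 0        :", values[0])
print("max |H(t) - H(0)| :", np.max(np.abs(values - values[0])))   # near zero
```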
    Even though we cannot solve the solution for x or y, the solution does show that the graph of the equation for a fixed value of C is a closed curve surrounding the nonzero critical point. Thus the critical point is also a center of the initial nonlinear system and the predator/prey populations exhibit a cyclic variation (Boyce & DiPrima, 2001, p. 506).

The following examples will help demonstrate the Predator-Prey Model and the aspects of it mentioned above.

Example 1: Determine the Behavior of x and y as t → ∞

$$\frac{dx}{dt} = x(1.5 - 0.5y) \qquad \text{and} \qquad \frac{dy}{dt} = y(-0.5 + x)$$

First, we find the critical points of the system by using x(a − by) = 0 and y(−c + px) = 0 to find the points

(0, 0) and (1/2, 3)

Next, we examine the behavior at the critical point (0, 0) by using equation (1):

$$\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & -c \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} \quad \Longrightarrow \quad \frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 1.5 & 0 \\ 0 & -0.5 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}$$

Therefore, the eigenvalues and eigenvectors of the above equation are

$$r_1 = 1.5, \quad \xi^{(1)} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \qquad \text{and} \qquad r_2 = -0.5, \quad \xi^{(2)} = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

and its general solution is
$$\begin{pmatrix} x \\ y \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} e^{1.5t} + c_2 \begin{pmatrix} 0 \\ 1 \end{pmatrix} e^{-0.5t}$$

After examining the first critical point, we examine the second critical point (1/2, 3) by using equation (2):

$$\frac{d}{dt}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 & -bc/p \\ ap/b & 0 \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} \quad \Longrightarrow \quad \frac{d}{dt}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 & -\dfrac{(0.5)(0.5)}{1} \\[4pt] \dfrac{(1.5)(1)}{0.5} & 0 \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} 0 & -0.25 \\ 3 & 0 \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix}$$

The resulting eigenvalues are

$$r_1 = \frac{\sqrt{3}}{2}\, i \qquad \text{and} \qquad r_2 = -\frac{\sqrt{3}}{2}\, i$$

These eigenvalues are purely imaginary and involve a radical, so they cannot be expressed in simpler terms. Finally, from equation (3), the original system has the implicit solution

$$a \ln y - by + c \ln x - px = C \quad \Longrightarrow \quad 1.5 \ln y - 0.5y + 0.5 \ln x - x = C$$
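The eigenvalue computations of Example 1 can also be verified numerically. The sketch below simply rebuilds the two coefficient matrices used above and asks NumPy for their eigenvalues; it is a check on the hand computation, nothing more.

```python
# Numerical verification of the Example 1 eigenvalues.
import numpy as np

a, b, c, p = 1.5, 0.5, 0.5, 1.0              # coefficients of Example 1

# Linearization at the origin, from equation (1)
A0 = np.array([[a, 0.0],
               [0.0, -c]])
print("eigenvalues at (0, 0)  :", np.linalg.eigvals(A0))   # 1.5 and -0.5

# Linearization at (c/p, a/b) = (1/2, 3), from equation (2)
A1 = np.array([[0.0, -b * c / p],
               [a * p / b, 0.0]])
print("eigenvalues at (1/2, 3):", np.linalg.eigvals(A1))   # +/-(sqrt(3)/2)i ~ +/-0.866i
```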
The resulting directional field, with trajectories for different initial populations of the predator and prey, is shown below.

The directional field and trajectories show the populations of predator and prey over time. The ellipses around the critical point (1/2, 3) show that over a certain period of time, the way in which the predator and prey interact goes through a cycle of change and ends up back at the initial populations of both species. Because the predator and prey start from relatively small populations, the prey increase first, since there is little predation. Then the predators, with abundant food, also increase in population. This causes heavier predation, and the prey tend to decrease. Finally, with a diminished food supply, the predator population also decreases, and the system returns to its original state. The following graph demonstrates the variations of the prey (blue) and predator (green) populations with time for this system.
The above graph shows that the relationship between predator and prey repeats sinusoidally with a period of about t = 10 and that the predator population lags behind the prey population. The next example will examine the manipulation of the coefficients a, c, b, and p in the differential equations and what it means in terms of the solutions. To best describe this, three-dimensional graphs will be utilized to show the change of predator and prey over time.

Example 2: Determine the Behavior of x and y as t → ∞

For this example, we will use the coefficient values a = 1, b = 0.02, c = 0.4, and p = 0.01 to model the following Predator-Prey Model:

$$\frac{dx}{dt} = x - 0.03xy \qquad \text{and} \qquad \frac{dy}{dt} = -0.4y + 0.01xy$$
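The graphs that follow were produced with Maple, but similar pictures can be sketched with standard Python tooling. The fragment below is one possible way to generate the phase-plane trajectory and the population-versus-time curves for this example; note that it uses b = 0.02 from the coefficient list above, which is what places the critical point at (c/p, a/b) = (40, 50) as referred to in the discussion below.

```python
# A sketch of the Example 2 trajectory and time series (the plots in this paper
# were generated with Maple).  Uses b = 0.02 from the coefficient list above,
# which places the interior critical point at (c/p, a/b) = (40, 50).
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

a, b, c, p = 1.0, 0.02, 0.4, 0.01

def lotka_volterra(t, z):
    x, y = z
    return [x * (a - b * y), y * (-c + p * x)]

t_eval = np.linspace(0.0, 40.0, 2000)
sol = solve_ivp(lotka_volterra, (0.0, 40.0), [15.0, 15.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-8)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(sol.y[0], sol.y[1])                  # closed orbit in the phase plane
ax1.plot(c / p, a / b, 'ko')                  # interior critical point (40, 50)
ax1.set_xlabel('prey x'); ax1.set_ylabel('predator y')

ax2.plot(sol.t, sol.y[0], label='prey')       # oscillating populations vs. time
ax2.plot(sol.t, sol.y[1], label='predator')
ax2.set_xlabel('t'); ax2.legend()
plt.show()
```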
To get a better understanding of how the solutions of this system behave, we will analyze the following graphs rather than the numerical solutions used in the previous example. The graphs are shown below.
The first graph shows multiple initial values of the predator and prey, including x(0) = 15 and y(0) = 15, while the other graphs model that same initial value exclusively. The first graph shows general solutions of the Predator-Prey Model for a few initial populations, given by the blue trajectories on the graph. The trajectories are in the form of ellipses surrounding the critical point (c/p, a/b) = (40, 50).

What is hard to see in the first graph, and is modeled much more effectively in the third graph, is the path of the trajectory as t → ∞. Because the solutions of the system are periodic (as demonstrated by the second graph, with a period of about t = 12), the trajectories overlap each other in the first graph as t → ∞. In the third graph, which shows the change of both the predator and the prey populations over time, the trajectory does not overlap itself, because the graph is three-dimensional; this makes it easier to see that the trajectory's path is periodic as t → ∞. The graphed systems are perfectly periodic because the model only takes into account the most basic factors of the predator-prey interaction. In reality, the graphs would not be as perfect as the ones above, but there are a few cases where actual predator and prey populations have been sampled that reflect this idea of periodicity within the interaction.
The graph above shows a classic set of data on a pair of interacting populations that comes close to this idealized behavior: the Canadian lynx and snowshoe hare pelt-trading records of the Hudson Bay Company over almost a century (Moore & Smith, 2003).

To a first approximation, there was apparently nothing keeping the hare population in check other than predation by lynx, and the lynx depended entirely on hares for food. To be sure, trapping for pelts removed large numbers of both species from the populations, but these numbers were quite small in comparison to the total populations. So trapping was not a significant factor in determining the size of either population. On the other hand, it is reasonable to assume that the success of trapping each species was roughly proportional to the number of that species in the wild at any given time. Thus, the Hudson Bay data give us a reasonable picture of predator-prey interaction over an extended period of time. The dominant feature of this graph is the oscillating behavior of both populations, as shown in the second graph of Example 2.

Focusing back on the example, an interesting observation would be to see what the trajectory with its initial value at the critical point, (c/p, a/b) = (40, 50), looks like. Using the graphing technology of Maple, we obtain the trajectory shown below.
The trajectory shows no change between the predator and prey populations as t → ∞. In this case, where the initial value is the critical point itself, we see the "perfect balance" of the predator-prey interaction illustrated by the model. Of course, in reality this type of interaction would never exist, but in the idealized situation described by the Predator-Prey Model it can.
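The remainder of this example varies each coefficient by ±50% and compares the resulting orbits through x(0) = y(0) = 15. A minimal sketch of how such a sweep could be scripted is given below, before the graphs themselves are discussed; the baseline values are the ones restated in the next paragraph, and the plotting details are an illustrative choice rather than the Maple code actually used for the figures.

```python
# A sketch of the coefficient sweep discussed next: each coefficient is varied
# by -50% and +50% around the baseline (a = 1, b = 0.03, c = 0.4, p = 0.01) and
# the orbit through x(0) = y(0) = 15 is redrawn.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

baseline = {'a': 1.0, 'b': 0.03, 'c': 0.4, 'p': 0.01}

def simulate(params, t_final=60.0):
    def rhs(t, z):
        x, y = z
        return [x * (params['a'] - params['b'] * y),
                y * (-params['c'] + params['p'] * x)]
    return solve_ivp(rhs, (0.0, t_final), [15.0, 15.0],
                     t_eval=np.linspace(0.0, t_final, 3000))

fig, axes = plt.subplots(2, 2, figsize=(10, 8))
for ax, name in zip(axes.flat, ['a', 'b', 'c', 'p']):
    for factor in (0.5, 1.0, 1.5):                       # -50%, baseline, +50%
        params = dict(baseline, **{name: baseline[name] * factor})
        sol = simulate(params)
        ax.plot(sol.y[0], sol.y[1], label=f"{name} x {factor:g}")
    ax.set_title(f"varying {name}")
    ax.set_xlabel('prey x'); ax.set_ylabel('predator y')
    ax.legend()
plt.tight_layout()
plt.show()
```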
Next, the following graphs show what happens when the coefficients a, b, c, and p are varied from the initial coefficients a = 1, b = 0.03, c = 0.4, and p = 0.01, with the initial values for all graphs being x(0) = 15 and y(0) = 15. The first coefficient to be manipulated is a, with the graphs for the varied values shown below.

By increasing a by 50%, the overall ellipse of the trajectory becomes larger and the period shortens from t = 10 to t = 5. By decreasing a by 50%, the overall ellipse of the trajectory becomes smaller and the period lengthens from t = 10 to t = 15. The above graphs make sense because a measures the growth rate of the prey: if there is more prey in the area, the same area can support more predators. The same could be said in reverse: if there were less prey to feed on, fewer predators would be able to survive in the same area.
The next coefficient to be manipulated is b, with the graphs for the varied values shown below.

By increasing b by 50%, the overall ellipse of the trajectory becomes smaller and the period shortens from t = 10 to t = 9. By decreasing b by 50%, the overall ellipse of the trajectory becomes larger and the period lengthens from t = 10 to t = 11. The above graphs make sense because b measures the negative effect of the predator-prey interaction on the rate of change of the prey. If b becomes larger, each interaction removes a greater amount of prey, but if b becomes smaller, the interaction allows more prey to survive.

Another coefficient to be manipulated is c, with the graphs for the varied values shown below.
By increasing c by 50%, the overall ellipse of the trajectory becomes larger and the period shortens from t = 10 to t = 8. By decreasing c by 50%, the overall ellipse of the trajectory becomes smaller and the period lengthens from t = 10 to t = 16. The above graphs make sense because c measures the death rate of the predators. If the predators are dying out more quickly, more prey survive, which in turn allows more predators at certain times as t → ∞. If the predators survive for longer periods of time, there will be less prey in the area, which leads to fewer predators overall, and the predator-prey cycle will be slower than if c were increased.

The last coefficient to be manipulated is p, with the graphs for the varied values shown below.

By increasing p by 50%, the overall ellipse of the trajectory becomes smaller and the period lengthens from t = 10 to t = 11. By decreasing p by 50%, the overall ellipse of the trajectory becomes larger and the period shortens from t = 10 to t = 9. The above graphs make sense because p measures the positive effect of the predator-prey interaction on the rate of change of the predator. If p becomes larger, the interaction increases the rate of growth of the predators and allows less of both the predator and the prey to survive. But if p becomes smaller,
the interaction decreases the rate of growth of the predators, thus allowing more predators and prey to survive in the area.

The coefficients in the Predator-Prey Model do contribute to the overall makeup of the solutions for the given initial populations. Depending on how a coefficient is manipulated, the overall size of the ellipse of the trajectory can become larger or smaller, and the period of the predator-prey interaction can become longer or shorter. We can use the equations to draw several conclusions about the cyclic variation of the predator and prey on such trajectories:

• The predator and prey populations vary sinusoidally with period 2π/√(ac). This period of oscillation is independent of the initial conditions (Boyce & DiPrima, 2001, p. 508). (For the baseline coefficients a = 1 and c = 0.4, this gives 2π/√0.4 ≈ 9.9, consistent with the period of roughly t = 10 observed in the graphs above.)
• The predator and prey populations are out of phase by one-quarter of a cycle. The prey leads and the predator lags (Boyce & DiPrima, 2001, p. 508).
• The average populations of predator and prey over one complete cycle are c/p and a/b, respectively. These are the same as the equilibrium proportions (Boyce & DiPrima, 2001, p. 508).

The last example shows what happens when an external variable, hunting, is figured into the model.

Example 3: The Effect of Hunting Predators

To add the extra variable of hunting predators into the Predator-Prey Model, we subtract a constant term at the end of the equation for the rate of growth of the predators. Thus
$$\frac{dx}{dt} = ax - bxy \qquad \text{and} \qquad \frac{dy}{dt} = -cy + pxy - h$$

where h is the effect of hunting, the killing of a constant number of predators per unit of time. Going back to the coefficients used in Example 2, we will use the equations

$$\frac{dx}{dt} = x - 0.03xy \qquad \text{and} \qquad \frac{dy}{dt} = -0.4y + 0.01xy - 5$$

where h = 5 (the death of 5 predators per unit of time) to generate a three-dimensional graph depicting what happens when hunters are introduced into the model. With the initial conditions x(0) = 40 and y(0) = 40, the above graph shows that when the hunting of predators is introduced, the predators become extinct at about t = 23, which then allows the prey to grow without bound at an exponential rate.
Also, as t increases, the spiral of the trajectory becomes more and more unwound, until the predators in the area become extinct, which allows overpopulation of the prey. If a hunting term were imposed on the equation for the rate of change of the prey as well, both species would lose, as both would become extinct as t → ∞. Therefore, the addition of such a term to the predator equation, the prey equation, or both will lead to the extinction of a species over time.
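A short simulation makes the outcome of Example 3 concrete. The sketch below integrates the hunted system given above until the predator population reaches zero; the event-handling details are an illustrative choice and not part of the paper's Maple work.

```python
# A sketch of the hunted system of Example 3 (h = 5), integrated until the
# predator population reaches zero.
from scipy.integrate import solve_ivp

def hunted(t, z):
    x, y = z
    return [x - 0.03 * x * y, -0.4 * y + 0.01 * x * y - 5.0]

def predators_extinct(t, z):
    return z[1]                       # fires when the predator count crosses zero
predators_extinct.terminal = True
predators_extinct.direction = -1

sol = solve_ivp(hunted, (0.0, 60.0), [40.0, 40.0],
                events=predators_extinct, max_step=0.05)

print("predator population reaches zero near t =", sol.t_events[0])
# The text above reports extinction at roughly t = 23 for these coefficients,
# after which the prey population grows without bound.
```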
Conclusion

Throughout the research process for my topic of systems of differential equations, the use of the mathematical software Maple helped solve the systems and generate directional fields for the solutions. Without the help of Maple, the process of drawing the directional field for a solution would have been very tedious, if not impossible, because of the precision and the different options Maple provides for generating those directional fields. Maple was also helpful in generating three-dimensional graphs to further understanding of the Predator-Prey Model. Again, trying to generate a graph by hand with the precision that Maple produces would have been nearly impossible. In conclusion, the use of Maple within the realm of systems of differential equations is almost essential for research on this sort of topic. With the amount of numerical analysis involved in systems of differential equations, Maple helps simplify the process of producing solutions for the systems.

Researching the topic of systems of differential equations was a very interesting process as I worked through both the theory and the applications associated with the topic. The hardest part of the research process was trying to understand the solutions of systems of differential equations with eigenvalues that were real and distinct, complex, and repeated. Trying to learn the information from books that left out a lot of the key steps for solving each of the different cases was difficult, but at the same time it was rewarding because it felt like I was discovering this topic of mathematics myself. The research got more interesting and exciting when I started to learn about the applications associated with systems of differential equations, because I was able to take the knowledge learned about the topic and apply it to fields such as ecology and sociology with the Predator-Prey Model and Richardson's Arms Race Model. Overall, the
topic of systems of differential equations was a rewarding mathematical research experience and has influenced me to consider pursuing mathematical research as a profession in the future.
References

Boyce, W. E., & DiPrima, R. C. (2001). Elementary differential equations and boundary value problems (7th ed.). New York: John Wiley & Sons, Inc.

Brown, C. (2007). Differential equations: A modeling approach. Thousand Oaks, CA: Sage Publications, Inc.

Hunter, J. E. (1980). Mathematical models of a three-nation arms race. Journal of Conflict Resolution, 24(2), 241-252.

Moore, L. C., & Smith, D. A. (2003). Predator-prey models. Retrieved October 7, 2003, from http://www.math.duke.edu/education/ccp/materials/diffeq/predprey/index.html

Rainville, E. D., Bedient, P. E., & Bedient, R. E. (1997). Elementary differential equations. Upper Saddle River, NJ: Prentice Hall.