Senior Seminar: Systems of Differential Equations

The research and paper behind the focus of my senior project: Systems of Differential Equations.

Systems of Differential Equations
Joshua Dagenais
12-04-09
Mentor: Dr. Arunas Dagys
Table of Contents
Introduction
Section 1: Solving Systems of Differential Equations with Distinct Real Eigenvalues
Section 2: Solving Systems of Differential Equations with Complex Eigenvalues
Section 3: Solving Systems of Differential Equations with Repeated Eigenvalues
Section 4: Solving Systems of Nonhomogeneous Differential Equations
Section 5: Application of Systems of Differential Equations – Arms Races
Section 6: Application of Systems of Differential Equations – Predator-Prey Model
Conclusion
References
Introduction

Many laws and principles that help explain the behavior of the natural world are statements or relations that involve rates at which things change. When expressed in mathematical terms, the relations become equations and the rates become derivatives. Equations that contain these rates or derivatives are called differential equations. Systems of ordinary differential equations therefore arise naturally from laws and principles of the natural world that involve several dependent variables, each of which is a function of a single independent variable. This becomes a mathematical problem consisting of a system of two or more differential equations, and the systems of differential equations that describe these laws or principles are called mathematical models of the process (Boyce & DiPrima, 2001).

A system of first order ordinary differential equations is an interesting mathematical concept because it combines two different areas of mathematics. Dissecting the phrase "system of first order differential equations" into two parts reveals the two areas used to solve these equations. The "system" part of the phrase calls for linear algebra to solve the system of equations, and in this case the system consists of first order differential equations. The latter part of the phrase, "first order differential equations," indicates that solution strategies for single first order equations will also be involved.

So, with linear algebra for systems and differential equations in mind, what other underlying concepts and skills must be learned and explained in order to solve systems of differential equations? For the linear algebra aspect, the topics mentioned briefly in this paper include matrices, characteristic equations, roots of the characteristic equation (eigenvalues), eigenvectors, and the diagonalization of a matrix. For the differential equations aspect, the topics discussed include solving first order differential equations, solving simple diagonal systems (y' = Dy), and recovering solutions of the original system (x = Cy, where each component has the form x(t) = k e^{at}). When everything mentioned is put together, solutions of different types are found for systems of differential equations, and with the help of mathematical software such as Maple, graphs (direction fields) can visually represent the answers of these systems and show that there is actually more than one solution, called a family of solutions. Depending on the types of eigenvalues found for a system of differential equations, different methods are used for eigenvalues that are distinct and real, eigenvalues that are complex, and eigenvalues that are repeated, each of which is represented graphically in a different manner. Along with these methods, techniques for solving homogeneous and nonhomogeneous systems will be explained to further the scope of this subject.

Why do we care about solving systems of differential equations? There are many physical problems that involve a number of separate elements linked together in some manner, such as spring-mass systems, electrical circuits, and interconnected tanks, all of which require solutions of systems of differential equations to be understood and solved.
Other, more advanced applications of the theory behind systems of differential equations include the Predator-Prey Model (Lotka-Volterra Model) and Richardson's Arms Race Model, which connect mathematics with phenomena that could never have been explained without such elegant mathematical equations. The Predator-Prey Model is a system of nonlinear differential equations (even though it is considered an almost linear system), while the Arms Race Model uses systems of differential equations that are nonhomogeneous. Both models are very interesting applications that will be discussed and explained later in this paper. Hopefully, this paper will give the reader insight into what systems of linear differential equations are, how to solve them, how to apply them, and how to understand and interpret the answers derived from problems.
Section 1: Solving Systems of Differential Equations with Distinct Real Eigenvalues

In this section, we will solve systems of differential equations where the eigenvalues found from the characteristic equation are all real and all distinct. To do this, we first take the system

  x1' = p11(t)x1 + ... + p1n(t)xn + g1(t)
  ...
  xn' = pn1(t)x1 + ... + pnn(t)xn + gn(t)

and write it in matrix notation. We write x1, ..., xn in vector form as the column vector x = (x1, ..., xn)^T, and likewise the derivatives as x' = (x1', ..., xn')^T; we put the coefficients p11(t), ..., pnn(t) into an n x n matrix

  P(t) = [ p11(t) ... p1n(t) ; ... ; pn1(t) ... pnn(t) ]

and we write g1(t), ..., gn(t) in vector form as g(t) = (g1(t), ..., gn(t))^T. The resulting equation, using the above vector and matrix notation, is

  x' = P(t)x + g(t)

We will first consider homogeneous systems, where g(t) = 0, thus

  x' = P(t)x

When P(t) is a 1 x 1 matrix, the system reduces to the single first order equation dx/dt = px, whose solution is x = c e^{pt}. Therefore, to solve systems of two or more equations, we look for solutions of the form

  x = ξ e^{rt}

where ξ is a column vector instead of a constant c (because we are dealing with solutions to more than one differential equation, giving multiple constants that form a vector) and r is an exponent to be solved for. Substituting x = ξ e^{rt} into both sides of x' = P(t)x gives

  r ξ e^{rt} = P(t) ξ e^{rt}

Upon canceling e^{rt}, we obtain r ξ = P(t) ξ, or
  (P(t) - rI)ξ = 0

where I is the n x n identity matrix. In order to solve (P(t) - rI)ξ = 0, we will use Theorem 1.

Theorem 1: Let A be an n x n matrix of constant real numbers and let X be an n-dimensional column vector. The system of equations AX = 0 has nontrivial solutions, that is, X ≠ 0, if and only if the determinant of A is zero.

In our case, (P(t) - rI) is the n x n matrix represented by A and ξ is the n-dimensional column vector represented by X. Therefore, in order to find the nontrivial solutions of (P(t) - rI)ξ = 0, we must set the determinant of P(t) - rI equal to zero:

  det [ p11(t) - r ... p1n(t) ; ... ; pn1(t) ... pnn(t) - r ] = 0

Computing the determinant yields a characteristic equation, which resembles a polynomial of degree n, whose roots, denoted by r, are the eigenvalues. After the eigenvalues have been computed, each r is substituted back into (P(t) - rI)ξ = 0 and solved for the nonzero vector ξ, which is called the eigenvector of the matrix P(t) corresponding to that eigenvalue. The eigenvector is an n x 1 column vector with as many entries as there are equations. After finding the eigenvalues and the eigenvectors for those specific values, they are substituted back into the equation

  x = ξ e^{rt}

which gives the following specific solutions
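The determinant condition above can be sketched in code. The following is a minimal illustration (not from the paper) for the constant 2 x 2 case, where det(P - rI) = 0 expands to r^2 - (trace P)r + (det P) = 0 and the eigenvalues are the roots of that quadratic:

```python
# Sketch of the 2 x 2 characteristic-equation computation described above:
# det(P - rI) = r^2 - (p11 + p22) r + (p11*p22 - p12*p21) = 0.
import math

def eigenvalues_2x2(p11, p12, p21, p22):
    """Return the roots of the characteristic equation of [[p11, p12], [p21, p22]]."""
    trace = p11 + p22
    det = p11 * p22 - p12 * p21
    disc = trace * trace - 4 * det  # discriminant of the quadratic
    if disc < 0:
        raise ValueError("complex eigenvalues (handled in Section 2)")
    root = math.sqrt(disc)
    return (trace + root) / 2, (trace - root) / 2

# The coefficient matrix of Example 1 below, assuming entries [[3, -2], [2, -2]]:
r1, r2 = eigenvalues_2x2(3, -2, 2, -2)
print(r1, r2)  # prints 2.0 -1.0, the two distinct real eigenvalues
```

For larger matrices the characteristic polynomial has degree n, which is why the paper turns to Maple for the 3 x 3 examples.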
  x^(1)(t) = [ x11(t) ; ... ; xn1(t) ],  ...,  x^(k)(t) = [ x1k(t) ; ... ; xnk(t) ],  ...

for the initial system. If the Wronskian of x^(1), ..., x^(n) (written W[x^(1), ..., x^(n)]) does not equal zero, then the general solution can be represented as a linear combination of the specific solutions:

  x = c1 x^(1)(t) + ... + ck x^(k)(t)

The following examples illustrate how to solve n x n systems of differential equations with distinct real eigenvalues. The general solution of the given system of equations will be solved for, along with a graph that shows the direction field of the answer.

Example 1: Solve the following 2 x 2 system for x:

  x1' = 3x1 - 2x2
  x2' = 2x1 - 2x2

To solve the problem, we rewrite the equations in matrix form,

  x' = [ 3 -2 ; 2 -2 ] x

which is of the form x' = P(t)x, where
  P(t) = [ 3 -2 ; 2 -2 ]

We then find the eigenvalues of P(t) by finding the characteristic equation and solving for r. Therefore,

  det(P(t) - rI) = det [ 3 - r  -2 ; 2  -2 - r ] = r^2 - r - 2 = (r + 1)(r - 2) = 0

and the eigenvalues of P(t) are r1 = -1 and r2 = 2. Now we compute the eigenvector for each respective eigenvalue by finding the nontrivial solutions of

  [ 3 - r  -2 ; 2  -2 - r ] [ c1 ; c2 ] = [ 0 ; 0 ]

For r1 = -1:

  [ 3 - (-1)  -2 ; 2  -2 - (-1) ] [ c1 ; c2 ] = [ 4  -2 ; 2  -1 ] [ c1 ; c2 ] = [ 0 ; 0 ]  →  4c1 - 2c2 = 0 and 2c1 - c2 = 0  →  c2 = 2c1

(Note that both of the resulting equations in c1 and c2 are equivalent.) One such solution is found by choosing c1 = 1, thus making c2 = 2, giving the eigenvector ξ^(1) = [ 1 ; 2 ]. Knowing that x^(n)(t) = ξ^(n) e^{r_n t}, it follows that x^(1)(t) = c1 [ 1 ; 2 ] e^{-t} is a solution of the initial system. For r2 = 2:

  [ 3 - 2  -2 ; 2  -2 - 2 ] [ c1 ; c2 ] = [ 1  -2 ; 2  -4 ] [ c1 ; c2 ] = [ 0 ; 0 ]  →  c1 - 2c2 = 0 and 2c1 - 4c2 = 0  →  c1 = 2c2
By choosing c1 = 2 to solve the equation, c2 = 1. Proper notation for eigenvectors, where possible, insists that fractions be avoided when representing the numerical values of the eigenvector. Therefore, for r2 = 2, ξ^(2) = [ 2 ; 1 ] and a second solution is

  x^(2)(t) = c2 [ 2 ; 1 ] e^{2t}

Now we check whether we can represent the solutions as a general solution by taking the Wronskian of both specific solutions:

  W[x^(1), x^(2)] = det [ e^{-t}  2e^{2t} ; 2e^{-t}  e^{2t} ] = -3e^{t}

which is never equal to zero. It follows that the solutions x^(1)(t) and x^(2)(t) are linearly independent. Therefore, the general solution of the system x' = P(t)x is

  x(t) = c1 [ 1 ; 2 ] e^{-t} + c2 [ 2 ; 1 ] e^{2t}
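The eigenpairs found above can be verified directly. A quick numerical check (not part of the original paper, assuming the coefficient matrix [[3, -2], [2, -2]] from this example): each eigenvector must satisfy A ξ = r ξ.

```python
# Verify the eigenpairs of Example 1: A*xi must equal r*xi for each pair.
def matvec(A, v):
    """Multiply a 2 x 2 matrix A (list of rows) by a 2-vector v."""
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A = [[3, -2], [2, -2]]
pairs = [(-1, [1, 2]),   # r1 = -1 with eigenvector (1, 2)
         (2,  [2, 1])]   # r2 =  2 with eigenvector (2, 1)

for r, xi in pairs:
    assert matvec(A, xi) == [r * c for c in xi], (r, xi)
print("both eigenpairs satisfy A*xi = r*xi")
```

Since each term ξ e^{rt} then satisfies x' = Ax, any linear combination of the two does as well, which is what the Wronskian test above confirms.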
All the general solutions (represented by the family of red curves in the direction field), combinations of x^(1)(t) and x^(2)(t) with c1 ≠ 0 and c2 ≠ 0, are asymptotic to the line through the eigenvector [ 2 ; 1 ] as t increases, since the e^{2t} term dominates. The blue trajectories represent specific solutions of the system, each trajectory having a different initial value (x1(0) = a and x2(0) = b, where a and b are any real numbers).

For the remaining examples in this section, the derivation of the final solution will be shown without every step. The purpose of these examples is to show the variety of systems of differential equations that have distinct real eigenvalues, such as a 3 x 3 system and a 2 x 2 system with initial conditions given.

Example 2: Solve the following 3 x 3 system for x:

  x1' = x1 + x2 + x3
  x2' = 2x1 + x2 - x3
  x3' = -8x1 - 5x2 - 3x3

that is, x' = [ 1 1 1 ; 2 1 -1 ; -8 -5 -3 ] x. First, we find the eigenvalues of the coefficient matrix from

  det(P(t) - rI) = det [ 1 - r  1  1 ; 2  1 - r  -1 ; -8  -5  -3 - r ] = 0

and solve the resulting characteristic equation in Maple.
Using Maple yields the eigenvalues r1 = -2, r2 = 2, and r3 = -1 and the eigenvectors

  ξ^(1) = [ 4 ; -5 ; -7 ],  ξ^(2) = [ 0 ; 1 ; -1 ],  ξ^(3) = [ 3 ; -4 ; -2 ]

The eigenvectors above are the same as the given Maple output, but manipulated into proper format where all entries are integers and the first nonzero entry is positive. After the eigenvalues and eigenvectors are computed, we find the Wronskian W[x^(1), x^(2), x^(3)] = 12e^{-t} ≠ 0; therefore we can substitute all the eigenvalues and eigenvectors into x^(n) = ξ^(n) e^{r_n t} and express the solution as a linear combination:

  x(t) = x^(1)(t) + x^(2)(t) + x^(3)(t) = c1 [ 4 ; -5 ; -7 ] e^{-2t} + c2 [ 0 ; 1 ; -1 ] e^{2t} + c3 [ 3 ; -4 ; -2 ] e^{-t}

Example 3: Solve the following 2 x 2 system for x, with initial conditions given:

  x1' = 5x1 - x2,  x2' = 3x1 + x2,  that is,  x' = [ 5 -1 ; 3 1 ] x,  with x(0) = [ 2 ; -1 ]
We will start off the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Therefore, r1 = 4 with ξ^(1) = [ 1 ; 1 ], and r2 = 2 with ξ^(2) = [ 1 ; 3 ]. The Wronskian W[x^(1), x^(2)] = 2e^{6t} ≠ 0, so the specific solutions x^(1) and x^(2) can be expressed as the general solution

  x(t) = c1 [ 1 ; 1 ] e^{4t} + c2 [ 1 ; 3 ] e^{2t}

After the general solution has been found, we substitute x(0) = [ 2 ; -1 ] into x(t) to get

  x(0) = c1 [ 1 ; 1 ] e^{4*0} + c2 [ 1 ; 3 ] e^{2*0} = c1 [ 1 ; 1 ] + c2 [ 1 ; 3 ] = [ 2 ; -1 ]

After the equation has been simplified, we multiply c1 and c2 by their respective vectors to yield the following system of equations:

  c1 + c2 = 2  and  c1 + 3c2 = -1
We then solve the system of equations for c1 and c2 to get c1 = 7/2 and c2 = -3/2. Substituting back into the general solution gives the specific solution of the system:

  x(t) = (7/2) [ 1 ; 1 ] e^{4t} - (3/2) [ 1 ; 3 ] e^{2t}

The direction field of the general solution shows the different families of solutions (denoted by the red arrows), with the blue trajectory representing the specific solution for the initial starting value x(0) = [ 2 ; -1 ]. Now, having established the basis for solving systems of differential equations, we will delve into different cases of solving systems where the eigenvalues are not real and/or distinct.
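The final step above, recovering c1 and c2 from the initial condition, is an ordinary 2 x 2 linear solve. A sketch (with the values assumed from Example 3) using Cramer's rule and exact rational arithmetic:

```python
# Solve c1 + c2 = 2 and c1 + 3*c2 = -1 for the constants in Example 3,
# exactly, via Cramer's rule.
from fractions import Fraction

def cramer_2x2(a, b, c, d, e, f):
    """Solve a*c1 + b*c2 = e and c*c1 + d*c2 = f for (c1, c2)."""
    det = Fraction(a * d - b * c)
    c1 = Fraction(e * d - b * f) / det
    c2 = Fraction(a * f - e * c) / det
    return c1, c2

c1, c2 = cramer_2x2(1, 1, 1, 3, 2, -1)
print(c1, c2)  # prints 7/2 -3/2

# Sanity check: c1*(1,1) + c2*(1,3) should reproduce x(0) = (2, -1).
assert [c1 + c2, c1 + 3 * c2] == [2, -1]
```

The same pattern generalizes to an n x n solve for c1, ..., cn when an n-dimensional initial condition is given.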
Section 2: Solving Systems of Differential Equations with Complex Eigenvalues

In this section, we will use what was previously discussed in the section on real and distinct eigenvalues to generate eigenvalues for an n x n system of linear homogeneous equations with constant coefficients, denoted

  x' = P(t)x

Now, if P(t) is real, then the coefficients that make up the characteristic equation for r are real, and any complex eigenvalues must occur in conjugate pairs (Boyce & DiPrima, 2001, p. 384). Therefore, for a 2 x 2 system, r1 = a + bi and r2 = a - bi would be eigenvalues, where a and b are real. It also follows that the corresponding eigenvectors are complex conjugates of each other: r2 = conj(r1) and ξ^(2) = conj(ξ^(1)). To help visualize this, take the equation that was formed in the previous section,

  (P(t) - rI)ξ = 0

and substitute r1 and ξ^(1) into the equation to get

  (P(t) - r1 I)ξ^(1) = 0

which forms a corresponding solution of the system. Now, by taking the complex conjugate of the entire equation, the resulting equation becomes

  (P(t) - conj(r1) I) conj(ξ^(1)) = 0

where P(t) and I are not affected by the conjugation because they both have all real values. The equation then forms another corresponding solution, where r2 = conj(r1) and ξ^(2) = conj(ξ^(1)). Now,
with the eigenvalues and eigenvectors solved for, we can use Euler's formula to express a solution with real and imaginary parts. Euler's formula states

  e^{it} = cos t + i sin t

But, for use with general complex solutions to a system of differential equations, we will use a modified version of the formula,

  e^{(λ + iμ)t} = e^{λt}(cos(μt) + i sin(μt)) = e^{λt} cos(μt) + i e^{λt} sin(μt)

to find the real-valued solutions to the system. We can choose either x^(1)(t) or x^(2)(t) to find the two real-valued solutions, because they are conjugates of each other and both will yield the same real-valued solutions. Using x^(2)(t) with ξ^(2) = a + bi, where the vectors a and b are real, we have

  x^(2)(t) = (a + bi) e^{(λ + iμ)t} = (a + bi) e^{λt}(cos(μt) + i sin(μt))

Expanding the above equation results in

  x^(2)(t) = e^{λt}(a cos(μt) + bi cos(μt) + ai sin(μt) - b sin(μt))

and separating x^(2)(t) into its real and imaginary parts yields

  x^(2)(t) = e^{λt}(a cos(μt) - b sin(μt)) + i e^{λt}(a sin(μt) + b cos(μt))

If x^(2)(t) is written as the sum of two vectors, x^(2)(t) = u(t) + iv(t), then the vectors obtained are

  u(t) = e^{λt}(a cos(μt) - b sin(μt))  and  v(t) = e^{λt}(a sin(μt) + b cos(μt))
We can disregard the i in front of v(t) because it is considered a multiplier of the vector and we are only interested in the real-valued vector solutions. If we had chosen to solve for x^(1)(t) instead of x^(2)(t), we would have obtained the same result except that x^(1)(t) = u(t) - iv(t); i is again a multiplier of the v(t) vector, so it can be disregarded and the answers for u(t) and v(t) are the same as those solved for above. u(t) and v(t) are the resulting real-valued vector solutions of the system.

It is worth mentioning that u(t) and v(t) are linearly independent and can be included in a single general solution. Suppose r1 = λ + iμ and r2 = λ - iμ, and that r3, ..., rn are all real and distinct. Let the corresponding eigenvectors be ξ^(1) = a + bi, ξ^(2) = a - bi, ξ^(3), ..., ξ^(n) (Boyce & DiPrima, 2001, p. 385). Then the general solution of a system of differential equations with complex eigenvalues is

  x(t) = c1 u(t) + c2 v(t) + c3 ξ^(3) e^{r3 t} + ... + cn ξ^(n) e^{rn t}

where u(t) = e^{λt}(a cos(μt) - b sin(μt)), v(t) = e^{λt}(a sin(μt) + b cos(μt)), and P(t) consists of all real coefficients. It is only when P(t) consists of all real coefficients that complex eigenvalues and eigenvectors occur in conjugate pairs (Boyce & DiPrima, 2001, p. 385). The following examples will help illustrate how to solve n x n systems of differential equations with complex eigenvalues. Both the complex and real-valued solutions will be given for each example, and some direction fields will be shown to demonstrate the nature of systems with complex eigenvalues.
Example 1: Solve the following 2 x 2 system for x:

  x1' = 3x1 - 2x2,  x2' = 4x1 - x2,  that is,  x' = [ 3 -2 ; 4 -1 ] x

We will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Therefore, r1 = 1 + 2i and r2 = 1 - 2i, with ξ^(1) = [ 1 ; 1 - i ] and ξ^(2) = [ 1 ; 1 + i ]. To get the eigenvectors into proper form from the Maple output, each eigenvector was scaled so that its first entry is a real integer and all entries are (complex) integers. The Wronskian W[x^(1), x^(2)] = 2i e^{2t} ≠ 0, so the specific solutions x^(1) and x^(2) can be expressed as the general solution in complex form:

  x(t) = c1 [ 1 ; 1 - i ] e^{(1+2i)t} + c2 [ 1 ; 1 + i ] e^{(1-2i)t}
But we want to be able to find the real-valued solutions of the complex general solution, so we will use x^(1) to find the real-valued vectors. Therefore,

  x^(1)(t) = [ 1 ; 1 - i ] e^{(1+2i)t}

Using Euler's formula, x^(1) becomes

  [ 1 ; 1 - i ] e^{t}(cos(2t) + i sin(2t))

After Euler's formula has been applied, we expand the above equation,

  e^{t} [ cos(2t) + i sin(2t) ; cos(2t) + i sin(2t) - i cos(2t) + sin(2t) ]

and separate the real and imaginary elements into

  e^{t} [ cos(2t) ; cos(2t) + sin(2t) ] + i e^{t} [ sin(2t) ; sin(2t) - cos(2t) ]

The result is two real-valued solutions of the form u(t) + iv(t), where

  u(t) = e^{t} [ cos(2t) ; cos(2t) + sin(2t) ]  and  v(t) = e^{t} [ sin(2t) ; sin(2t) - cos(2t) ]

Therefore, the general solution of the system in terms of real-valued solutions is

  x(t) = c1 u(t) + c2 v(t) = c1 e^{t} [ cos(2t) ; cos(2t) + sin(2t) ] + c2 e^{t} [ sin(2t) ; sin(2t) - cos(2t) ]
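A numerical sanity check (not in the paper, assuming A = [[3, -2], [4, -1]] from this example) confirms that the real-valued vector u(t) really does satisfy x' = Ax, by comparing a central-difference derivative of u against A u(t) at a sample point:

```python
# Check that u(t) = e^t (cos 2t, cos 2t + sin 2t) satisfies x' = A x.
import math

def u(t):
    e = math.exp(t)
    return [e * math.cos(2*t), e * (math.cos(2*t) + math.sin(2*t))]

def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A = [[3, -2], [4, -1]]
t, h = 0.7, 1e-6
deriv = [(a - b) / (2*h) for a, b in zip(u(t + h), u(t - h))]  # u'(t), approximately
rhs = matvec(A, u(t))                                          # A u(t)
assert all(abs(d - r) < 1e-5 for d, r in zip(deriv, rhs))
print("u(t) satisfies x' = A x to numerical precision")
```

The same check applied to v(t) passes as well, which is why any combination c1 u + c2 v is a solution.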
The resulting direction field shows families of solutions of the general solution, with the blue trajectories showing specific solutions when initial conditions are given. The direction field creates spiraled solutions where the origin is the center of the spirals, called a spiral point. The direction of the motion is away from the spiral point and the trajectories become unbounded; the spiral point, for this particular solution, is unstable. There are also systems with complex eigenvalues where the general solution has a spiral point that is stable, because all trajectories approach it as t increases.

Example 2: Solve the following 3 x 3 system for x:

  x1' = x1
  x2' = 2x1 + x2 - 2x3
  x3' = 3x1 + 2x2 + x3

that is, x' = [ 1 0 0 ; 2 1 -2 ; 3 2 1 ] x
Again, we will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Thus, the eigenvalues are r1 = 1, r2 = 1 + 2i, and r3 = 1 - 2i. The simplified eigenvectors are

  ξ^(1) = [ 2 ; -3 ; 2 ],  ξ^(2) = [ 0 ; i ; 1 ],  and  ξ^(3) = [ 0 ; -i ; 1 ]

Notice that r1 and ξ^(1) already contain only real values, so no computations are needed to turn them into real-valued solutions as with the complex eigenvalues and eigenvectors. The Wronskian W[x^(1), x^(2), x^(3)] = 4i e^{3t} ≠ 0, so the specific solutions x^(1), x^(2), x^(3) can be expressed as the general solution in complex form:

  x(t) = c1 [ 2 ; -3 ; 2 ] e^{t} + c2 [ 0 ; i ; 1 ] e^{(1+2i)t} + c3 [ 0 ; -i ; 1 ] e^{(1-2i)t}

To find the real-valued solutions of the general solution, we will use x^(2)(t) and Euler's formula in the following equations:
  23. 23. 0  0  0   0    (1 2i )t   t t   t   x (t )   i  e (2)   i  e (cos(2t )  i sin(2t ))  e  cos(2t )   ie  sin(2t )  1 1  sin(2t )    cos(2t )         Therefore,  0   0    t   u (t )  e  cos(2t )  and v(t )  e  sin(2t )  t  sin(2t )    cos(2t )     and the general solution to the system with real-valued solutions is 2  0   0    t t   t   x(t )  c1r1  c2u (t )  c3v(t )  c1  3  e  c2e  cos(2t )   c3e  sin(2t )  1 2  sin(2t )    cos(2t )        Now that we know how to solve systems that yield real and/or imaginary eigenvalues andeigenvectors, we will now focus our attention on the next case if a eigenvalue is repeated whenfound from the characteristic equation. 23
Section 3: Solving Systems of Differential Equations with Repeated Eigenvalues

In this section, we will be solving systems of differential equations where an eigenvalue found from the characteristic equation is repeated. We will still be finding solutions of the equation

  x' = P(t)x

and will still find at least one eigenvalue/eigenvector pair in the way we previously solved systems with distinct eigenvalues. But when solving for the other, repeated eigenvalue, we will see that the second solution takes the form

  x = ξ t e^{rt} + η e^{rt}

where ξ and η are constant vectors. After finding the first solution of the form x^(1)(t) = ξ^(1) e^{rt}, it may seem intuitive to look for a second solution of the form

  x^(2)(t) = ξ^(1) t e^{rt}

because of how repeated roots are handled when finding the solution of a second order differential equation. Substituting this back into x' = P(t)x yields

  r ξ^(1) t e^{rt} + ξ^(1) e^{rt} = P(t) ξ^(1) t e^{rt}  →  r ξ^(1) t e^{rt} + ξ^(1) e^{rt} - P(t) ξ^(1) t e^{rt} = 0

But, for the equation to be satisfied for all t, the coefficients of t e^{rt} and e^{rt} must each be zero (Boyce & DiPrima, 2001, p. 403). The e^{rt} coefficient then forces ξ^(1) = 0, and thus x^(2) = ξ^(1) t e^{rt} alone is not a solution for the second repeated eigenvalue. But, from
  r ξ^(1) t e^{rt} + ξ^(1) e^{rt} - P(t) ξ^(1) t e^{rt} = 0,

we see that there is a term of the form ξ t e^{rt} in the substituted equation along with another term of the form η e^{rt}. Therefore, we need to assume that

  x^(2) = ξ^(1) t e^{rt} + η e^{rt}

where ξ and η are constant vectors. Substituting the above expression into x' = P(t)x gives

  r ξ^(1) t e^{rt} + ξ^(1) e^{rt} + r η e^{rt} = P(t)(ξ^(1) t e^{rt} + η e^{rt})  →  r ξ^(1) t e^{rt} + (ξ^(1) + r η) e^{rt} = P(t)(ξ^(1) t e^{rt} + η e^{rt})

Equating the coefficients of t e^{rt} and e^{rt} gives the following conditions

  P(t) ξ^(1) = r ξ^(1)  →  (P(t) - rI) ξ^(1) = 0
  P(t) η = ξ^(1) + r η  →  (P(t) - rI) η = ξ^(1)

for the determination of ξ^(1) and η. To solve (P(t) - rI) ξ^(1) = 0, all we do is solve for the repeated eigenvalue and its eigenvector just as in previous sections. To find η, we solve a matrix equation of the form

  [ p11(t) - r ... p1n(t) ; ... ; pn1(t) ... pnn(t) - r ] [ η1 ; ... ; ηn ] = [ ξ1 ; ... ; ξn ]

Solving for η1, ..., ηn in the above equation results in the vector η, denoted
  η = [ η1 ; ... ; ηn ]

After finding ξ^(1) and η, we substitute them into x^(2)(t) to get the second specific solution

  x^(2)(t) = ξ^(1) t e^{rt} + η e^{rt}

Any component of η that is a multiple of ξ^(1) can be disregarded, because it contributes only a multiple of the first specific solution x^(1)(t) = ξ^(1) e^{rt}; the two terms above make the new solution. Showing that W[x^(1), x^(2)](t) ≠ 0 proves that x^(1) and x^(2) are linearly independent, thus allowing us to represent the general solution of the system in the form

  x = c1 x^(1)(t) + c2 x^(2)(t) + ... + ck x^(k)(t) = c1 ξ^(1) e^{r1 t} + c2 [ξ^(1) t e^{r1 t} + η e^{r1 t}] + ... + ck ξ^(k) e^{rk t}

where x^(1) and x^(2) include the repeated eigenvalue of multiplicity 2. For the sake of simplicity, we will focus our examples on solving systems that have repeated eigenvalues of only multiplicity 2. Also included in one of the examples is a case where a repeated eigenvalue gives rise to linearly independent eigenvectors of the matrix P(t) (which is easily identifiable using Maple), thus avoiding the complications of solving systems with repeated eigenvalues.
Example 1: Solve the following 2 x 2 system for x:

  x1' = 4x1 + x2,  x2' = -4x1 + 8x2,  that is,  x' = [ 4 1 ; -4 8 ] x

We will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix. Notice that the second eigenvector in the resulting output is [ 0 ; 0 ], a zero multiple of ξ^(1), which is no help in finding the second specific solution of the above system. But the results derived from Maple give us r1 = r2 = 6, ξ^(1) = [ 1 ; 2 ], and x^(1)(t) = [ 1 ; 2 ] e^{6t}. We need to use the equation

  x^(2)(t) = ξ^(1) t e^{rt} + η e^{rt}

to solve for η and thus have a second specific solution to the system. To find the second specific solution, we substitute x^(2)(t) = [ 1 ; 2 ] t e^{6t} + η e^{6t} into x' = [ 4 1 ; -4 8 ] x to get the following expression:
  [ 1 ; 2 ] e^{6t} + 6 [ 1 ; 2 ] t e^{6t} + 6 η e^{6t} = [ 4 1 ; -4 8 ] [ 1 ; 2 ] t e^{6t} + [ 4 1 ; -4 8 ] η e^{6t}

Multiplying out [ 4 1 ; -4 8 ] [ 1 ; 2 ] t e^{6t} and factoring a 6 from the result yields 6 [ 1 ; 2 ] t e^{6t}, so

  [ 1 ; 2 ] e^{6t} + 6 [ 1 ; 2 ] t e^{6t} + 6 η e^{6t} = 6 [ 1 ; 2 ] t e^{6t} + [ 4 1 ; -4 8 ] η e^{6t}

Canceling the 6 [ 1 ; 2 ] t e^{6t} on each side of the equation and rearranging yields

  [ 4 1 ; -4 8 ] η e^{6t} - 6 η e^{6t} = [ 1 ; 2 ] e^{6t}

Factoring out η e^{6t} on the left side of the equation and simplifying gives us

  ( [ 4 1 ; -4 8 ] - 6I ) η e^{6t} = [ 1 ; 2 ] e^{6t}  →  [ -2 1 ; -4 2 ] η = [ 1 ; 2 ]

The end product of the above expression is of the form (P(t) - rI)η = ξ^(1). In this case,
  (P(t) - rI)η = ( [ 4 1 ; -4 8 ] - [ 6 0 ; 0 6 ] ) η = [ 1 ; 2 ]

Thus, to solve for η, we solve

  [ -2 1 ; -4 2 ] [ η1 ; η2 ] = [ 1 ; 2 ]  →  -2η1 + η2 = 1  and  -4η1 + 2η2 = 2  →  η2 = 1 + 2η1

(Note that both of the resulting equations in η1 and η2 are the same.) Choosing η1 = 0 gives η2 = 1, so η = [ 0 ; 1 ]. After solving for η, we substitute it into x^(2)(t) = ξ^(1) t e^{rt} + η e^{rt} to find the second solution of the system:

  x^(2)(t) = [ 1 ; 2 ] t e^{6t} + [ 0 ; 1 ] e^{6t}

The Wronskian W[x^(1), x^(2)] = e^{12t} ≠ 0. Therefore, the specific solutions x^(1) and x^(2) can be expressed as the general solution

  x(t) = c1 [ 1 ; 2 ] e^{6t} + c2 ( [ 1 ; 2 ] t + [ 0 ; 1 ] ) e^{6t}
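The two conditions derived in this section can be checked mechanically. A small verification (not in the paper, assuming A = [[4, 1], [-4, 8]] and r = 6 from this example): the eigenvector ξ must satisfy (A - rI)ξ = 0, and the generalized eigenvector η must satisfy (A - rI)η = ξ.

```python
# Verify the repeated-eigenvalue conditions for Example 1 of Section 3.
def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1],
            A[1][0]*v[0] + A[1][1]*v[1]]

A = [[4, 1], [-4, 8]]
r = 6
B = [[A[0][0] - r, A[0][1]],
     [A[1][0], A[1][1] - r]]  # B = A - rI = [[-2, 1], [-4, 2]]

xi = [1, 2]    # eigenvector for the repeated eigenvalue r = 6
eta = [0, 1]   # generalized eigenvector found above

assert matvec(B, xi) == [0, 0]   # (A - rI) xi = 0
assert matvec(B, eta) == xi      # (A - rI) eta = xi
print("xi and eta satisfy the repeated-eigenvalue conditions")
```

Note that η is only determined up to adding multiples of ξ (here, the free choice η1 = 0), which matches the remark earlier that any ξ-multiple in η merely duplicates x^(1).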
The blue trajectories show specific solutions when initial conditions are given. The origin is called an improper node. If the eigenvalues are negative, then the trajectories are similar but traversed in the inward direction. An improper node is asymptotically stable or unstable, depending on whether the eigenvalues are negative or positive (Boyce & DiPrima, 2001, p. 404).

Example 2: Solve the following 3 x 3 system for x:

  x1' = x1 + 2x2 - x3
  x2' = x2 + x3
  x3' = 2x3

that is, x' = [ 1 2 -1 ; 0 1 1 ; 0 0 2 ] x. We will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix.
From the Maple results: r1 = r2 = 1, r3 = 2, x^(1)(t) = [ 1 ; 0 ; 0 ] e^{t}, and x^(3)(t) = [ 1 ; 1 ; 1 ] e^{2t}. What we need to find is the remaining specific solution x^(2)(t). In this example, we will use the equation (P(t) - rI)η = ξ^(1) to solve for η and substitute it into x^(2)(t) = ξ^(1) t e^{rt} + η e^{rt}. Therefore,

  (P(t) - rI)η = ( [ 1 2 -1 ; 0 1 1 ; 0 0 2 ] - 1*I ) [ η1 ; η2 ; η3 ] = [ 1 ; 0 ; 0 ]

  [ 0 2 -1 ; 0 0 1 ; 0 0 1 ] [ η1 ; η2 ; η3 ] = [ 1 ; 0 ; 0 ]

which gives 2η2 - η3 = 1 and η3 = 0; η1 is free, so taking η1 = 0 yields η2 = 1/2 and η3 = 0.
Substituting what we found for η into x^(2)(t) = ξ^(1) t e^{rt} + η e^{rt} yields

  x^(2)(t) = [ 1 ; 0 ; 0 ] t e^{t} + [ 0 ; 1/2 ; 0 ] e^{t}

The Wronskian W[x^(1), x^(2), x^(3)] = (1/2)e^{4t} ≠ 0. Therefore, the specific solutions x^(1), x^(2), and x^(3) can be expressed as the general solution

  x(t) = c1 [ 1 ; 0 ; 0 ] e^{t} + c2 ( [ 1 ; 0 ; 0 ] t + [ 0 ; 1/2 ; 0 ] ) e^{t} + c3 [ 1 ; 1 ; 1 ] e^{2t}

Example 3: Solve the following 3 x 3 system for x:

  x1' = x2 - 3x3
  x2' = -2x1 + 3x2 - 3x3
  x3' = -2x1 + x2 - x3

that is, x' = [ 0 1 -3 ; -2 3 -3 ; -2 1 -1 ] x. For this example, using Maple can unlock a potential shortcut in solving for the general solution of the above system. Again, we will begin the example by using Maple to find the eigenvalues and eigenvectors of the coefficient matrix.
33. Unlike the other two examples, the Maple output displays two linearly independent eigenvectors for the repeated eigenvalue r2 = r3 = 2, so no generalized eigenvector is needed. This is another shortcut for finding eigenvectors of repeated eigenvalues that a math program such as Maple makes easy to spot. Therefore

x^(1)(t) = (1, 1, 1)^T e^{-2t},  x^(2)(t) = (1, 2, 0)^T e^{2t},  x^(3)(t) = (3, 0, -2)^T e^{2t}

and the general solution to the system is

x = c1 (1, 1, 1)^T e^{-2t} + [ c2 (1, 2, 0)^T + c3 (3, 0, -2)^T ] e^{2t}

A more advanced look at systems with repeated eigenvalues would include eigenvalues of multiplicity higher than 2. The equations for higher multiplicities become more detailed and difficult to solve, but finding the eigenvalues themselves follows the same thought process used for repeated eigenvalues of multiplicity 2. For the next section, we will return to our original form
34. of a differential equation, x' = p1(t)x1 + ... + pn(t)xn + g(t), and solve nonhomogeneous systems in which g(t) ≠ 0.
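The eigenstructure claims in Examples 2 and 3 above can be verified without Maple. The following pure-Python check (my own sketch, not part of the original paper) confirms that Aξ = rξ for each claimed eigenvector and that the generalized eigenvector of Example 2 satisfies (A - I)η = ξ^(1):

```python
# Sketch: verifying the eigenvectors of Examples 2 and 3 by direct
# matrix-vector multiplication, using plain Python lists.

def matvec(A, v):
    """Multiply an n x n matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Example 2: eigenvalue r = 1 is defective (one eigenvector only).
A2 = [[1, 2, 1],
      [0, 1, 1],
      [0, 0, 2]]
xi1 = [1, 0, 0]      # eigenvector for r = 1
eta = [0, 0.5, 0]    # generalized eigenvector: (A - I)eta = xi1
xi3 = [3, 1, 1]      # eigenvector for r = 2

A2_minus_I = [[a - (1 if i == j else 0) for j, a in enumerate(row)]
              for i, row in enumerate(A2)]
check_xi1 = matvec(A2, xi1) == [1 * x for x in xi1]   # A xi1 = 1 * xi1
check_eta = matvec(A2_minus_I, eta) == xi1            # (A - I)eta = xi1
check_xi3 = matvec(A2, xi3) == [2 * x for x in xi3]   # A xi3 = 2 * xi3

# Example 3: r = 2 is repeated but has two independent eigenvectors.
A3 = [[0, 1, -3],
      [-2, 3, -3],
      [-2, 1, -1]]
check_a = matvec(A3, [1, 1, 1]) == [-2, -2, -2]   # r = -2 eigenvector
check_b = matvec(A3, [1, 2, 0]) == [2, 4, 0]      # r = 2 eigenvector
check_c = matvec(A3, [3, 0, -2]) == [6, 0, -4]    # r = 2 eigenvector

print(check_xi1, check_eta, check_xi3, check_a, check_b, check_c)
```

Every check prints True, which is exactly the defective-versus-complete distinction the two examples illustrate.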
35. Section 4: Solving Systems of Nonhomogeneous Differential Equations

Unlike the previous sections, where we solved different types of homogeneous systems with constant coefficients, this section will focus on solving nonhomogeneous systems of the form

x' = P(t)x + g(t)

The following theorem related to nonhomogeneous systems tells us where to start the solution process:

Theorem 2: If x^(1)(t), ..., x^(n)(t) are linearly independent solutions of the n-dimensional homogeneous system x' = P(t)x on the interval a < t < b, and if x_p(t) is any solution of the nonhomogeneous system x' = P(t)x + g(t) on the interval a < t < b, then any solution of the nonhomogeneous system can be written

x = c1 x^(1)(t) + ... + cn x^(n)(t) + x_p(t)

for a unique choice of the constants c1, ..., cn (Rainville, Bedient, & Bedient, 1997, p. 199).

The theorem states that we will need to find a particular solution x_p(t) and add it onto the general solution of the homogeneous part of the nonhomogeneous system. To do that, we will use a variation of parameters technique to find x_p(t) and solve x' = P(t)x + g(t). Solutions of the homogeneous part take the form

x = c1 ξ^(1) e^{r1 t} + ... + cn ξ^(n) e^{rn t}

and the variation of parameters technique suggests we seek a solution of the nonhomogeneous system of the form
36. x_p(t) = c1(t) ξ^(1) e^{r1 t} + ... + cn(t) ξ^(n) e^{rn t}

Direct substitution back into x' = P(t)x + g(t) yields

(r1 c1(t) ξ^(1) e^{r1 t} + ... + rn cn(t) ξ^(n) e^{rn t}) + (c1'(t) ξ^(1) e^{r1 t} + ... + cn'(t) ξ^(n) e^{rn t}) = (P(t) c1(t) ξ^(1) e^{r1 t} + ... + P(t) cn(t) ξ^(n) e^{rn t}) + g(t)

Because each ξ^(k) is an eigenvector of the coefficient matrix, P(t) ξ^(k) = rk ξ^(k), so the terms coming from differentiating the exponentials cancel against the right side, leaving

c1'(t) ξ^(1) e^{r1 t} + ... + cn'(t) ξ^(n) e^{rn t} = g(t)

The resulting equation can be rewritten in matrix form as

[ξ^(1) ... ξ^(n)] (c1'(t)e^{r1 t}, ..., cn'(t)e^{rn t})^T = (g1(t), ..., gn(t))^T

where the kth column of the matrix is the eigenvector ξ^(k). To solve for c1'(t), ..., cn'(t), we use Cramer's Rule on Ax = b, where A is the matrix of eigenvector components, x = (c1'(t)e^{r1 t}, ..., cn'(t)e^{rn t})^T, and b = (g1(t), ..., gn(t))^T. Cramer's Rule states that the system has the unique solution

xk = det(Bk) / det(A)  for k = 1, ..., n

where Bk is A with its kth column replaced by b.
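In code, this solve step is a few lines for the 2 x 2 case. The sketch below is my own (the helper name is not from the paper); it is checked against the data of Example 1 below, where r1 = 1, r2 = 2, ξ^(1) = (1, 1)^T, ξ^(2) = (1, 2)^T, and g(t) = (0, 3e^t)^T, for which c1'(t) = -3 and c2'(t) = 3e^{-t}:

```python
# Sketch: the Cramer's-rule step of variation of parameters for a 2 x 2
# system. Given the eigenvectors, eigenvalues, and g(t), return the
# derivatives c1'(t), c2'(t) of the varied parameters at time t.
import math

def c_primes(xi1, xi2, r1, r2, g, t):
    # Solve [xi1 xi2] (c1' e^{r1 t}, c2' e^{r2 t})^T = g(t) by Cramer's rule.
    det = xi1[0] * xi2[1] - xi2[0] * xi1[1]
    g1, g2 = g(t)
    c1e = (g1 * xi2[1] - xi2[0] * g2) / det   # det(B1) / det(A)
    c2e = (xi1[0] * g2 - g1 * xi1[1]) / det   # det(B2) / det(A)
    return c1e * math.exp(-r1 * t), c2e * math.exp(-r2 * t)

# Consistency check with the data of Example 1 below.
c1p, c2p = c_primes((1, 1), (1, 2), 1, 2,
                    lambda t: (0.0, 3 * math.exp(t)), 0.7)
print(c1p, c2p)  # c1' is the constant -3; c2' equals 3e^{-t}
```

Integrating the returned derivatives (numerically or by hand) then gives c1(t) and c2(t) exactly as in the recipe above.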
37. Therefore

c1'(t)e^{r1 t} = det(B1)/det(A), ..., cn'(t)e^{rn t} = det(Bn)/det(A)

where each Bk is A with its kth column replaced by (g1(t), ..., gn(t))^T. Expanding the determinants gives

c1'(t)e^{r1 t} = a1 g1(t) + ... + an gn(t)  ⇒  c1'(t) = [a1 g1(t) + ... + an gn(t)] e^{-r1 t}
...
cn'(t)e^{rn t} = b1 g1(t) + ... + bn gn(t)  ⇒  cn'(t) = [b1 g1(t) + ... + bn gn(t)] e^{-rn t}

for constants a1, ..., an and b1, ..., bn determined by the eigenvector components. To find the general solution, integrate both sides of the above equations to get c1(t), ..., cn(t), substitute them into x_p(t) = c1(t) ξ^(1) e^{r1 t} + ... + cn(t) ξ^(n) e^{rn t} to find the particular solution, and substitute x_p(t) into x = c1 x^(1)(t) + ... + cn x^(n)(t) + x_p(t) to find the general solution of the system. The following examples will help demonstrate how to solve systems of nonhomogeneous differential equations and lead into an application of nonhomogeneous systems.

Example 1: Solve the following 2 x 2 system for x:

x1' = x2
x2' = -2x1 + 3x2 + 3e^t,  that is,  x' = [0 1; -2 3] x + (0, 3e^t)^T

We will begin the example by using Maple to find the eigenvalues and eigenvectors of the homogeneous part of the system
38. x' = [0 1; -2 3] x

[Maple input and output not reproduced]

The resulting eigenvalues and eigenvectors are r1 = 1, r2 = 2, ξ^(1) = (1, 1)^T, and ξ^(2) = (1, 2)^T. The Wronskian W[x^(1), x^(2)] = e^{3t} ≠ 0; thus, the general solution of the homogeneous part of the system is

x_h = c1 (1, 1)^T e^t + c2 (1, 2)^T e^{2t}

To find the particular solution of the nonhomogeneous system, we will use the variation of parameters technique to find a solution of the form

x_p = c1(t) (1, 1)^T e^t + c2(t) (1, 2)^T e^{2t}
39. We will first substitute x_p and x_p' directly into x' = [0 1; -2 3]x + (0, 3e^t)^T to get

(c1(t)(1, 1)^T e^t + c1'(t)(1, 1)^T e^t + 2c2(t)(1, 2)^T e^{2t} + c2'(t)(1, 2)^T e^{2t}) = [0 1; -2 3](c1(t)(1, 1)^T e^t + c2(t)(1, 2)^T e^{2t}) + (0, 3e^t)^T

Since [0 1; -2 3](1, 1)^T = (1, 1)^T and [0 1; -2 3](1, 2)^T = 2(1, 2)^T, the terms without c1'(t) and c2'(t) cancel, leaving

c1'(t) (1, 1)^T e^t + c2'(t) (1, 2)^T e^{2t} = (0, 3e^t)^T

The final expression can be written in matrix notation as

[1 1; 1 2] (c1'(t)e^t, c2'(t)e^{2t})^T = (0, 3e^t)^T

To solve for c1'(t) and c2'(t), we will apply Cramer's Rule. With det[1 1; 1 2] = 1,

c1'(t)e^t = det[0 1; 3e^t 2] = -3e^t  and  c2'(t)e^{2t} = det[1 0; 1 3e^t] = 3e^t

Thus

c1'(t) = -3  and  c2'(t) = 3e^{-t}

To solve for c1(t) and c2(t), integrate both sides of both equations so that
  40. 40. 3t 3et c1 (t )  and c2 (t )  2 2  1 1and substituting into the partial solution x p  c1 (t )   et  c2 (t )   e 2t yields  1  2 3t 1 t 3et  1  2t t  1 t 1 xp   e    e  3te    3e   2  1 2  2  1  2Therefore, the general solution to the nonhomogenous system is  1 1  1 1 x  xh  x p  c1   et  c2   e 2t  3tet    3et    1  2  1  2Example 2: Solve the following 2 x 2 system for x  x1  2 x1  x2  et   2 1   et    x   x   x2  x1  2 x2  3t   1 2   3t We will begin the example by using Maple to find the eigenvalues and eigenvectors of thehomogenous part of the system  2 1  x   x  1 2 Therefore>> 40
41. The resulting eigenvalues and eigenvectors are r1 = 1, r2 = 3, ξ^(1) = (1, -1)^T, and ξ^(2) = (1, 1)^T. The Wronskian W[x^(1), x^(2)] = 2e^{4t} ≠ 0; thus, the general solution of the homogeneous part of the system is

x_h = c1 (1, -1)^T e^t + c2 (1, 1)^T e^{3t}

To find the particular solution of the nonhomogeneous system, we will use the variation of parameters technique to find a solution of the form

x_p = c1(t) (1, -1)^T e^t + c2(t) (1, 1)^T e^{3t}

Substituting x_p and x_p' directly into x' = [2 1; 1 2]x + (e^t, 3t)^T and cancelling as before leaves

c1'(t) (1, -1)^T e^t + c2'(t) (1, 1)^T e^{3t} = (e^t, 3t)^T

The final expression can be written in matrix notation as

[1 1; -1 1] (c1'(t)e^t, c2'(t)e^{3t})^T = (e^t, 3t)^T

To solve for c1'(t) and c2'(t), we will apply Cramer's Rule to find
42. With det[1 1; -1 1] = 2,

c1'(t)e^t = (e^t - 3t)/2  and  c2'(t)e^{3t} = (e^t + 3t)/2

Thus

c1'(t) = 1/2 - (3t/2)e^{-t}  and  c2'(t) = (1/2)e^{-2t} + (3t/2)e^{-3t}

To solve for c1(t) and c2(t), integrate both sides of both equations so that

c1(t) = t/2 + (3/2)(t + 1)e^{-t}  and  c2(t) = -(1/4)e^{-2t} - (t/2)e^{-3t} - (1/6)e^{-3t}

and substituting into the particular solution x_p = c1(t)(1, -1)^T e^t + c2(t)(1, 1)^T e^{3t} yields

x_p = [ (t/2)e^t + (3/2)(t + 1) ] (1, -1)^T + [ -(1/4)e^t - t/2 - 1/6 ] (1, 1)^T

Therefore, the general solution of the nonhomogeneous system is

x = x_h + x_p = c1 (1, -1)^T e^t + c2 (1, 1)^T e^{3t} + [ (t/2)e^t + (3/2)(t + 1) ] (1, -1)^T + [ -(1/4)e^t - t/2 - 1/6 ] (1, 1)^T
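Both particular solutions can be checked by substituting them back into their systems. The sketch below is my own verification, not code from the paper: it approximates x_p'(t) with a central finite difference and compares it with the right-hand side of each system.

```python
# Sketch: checking the two particular solutions found above by plugging
# them back into their systems, with x' taken by central differences.
import math

def residual(xp, rhs, t, h=1e-6):
    """Max component-wise |xp'(t) - rhs(xp(t), t)|."""
    d = [(a - b) / (2 * h) for a, b in zip(xp(t + h), xp(t - h))]
    r = rhs(xp(t), t)
    return max(abs(di - ri) for di, ri in zip(d, r))

# Example 1: x_p(t) = -3t e^t (1, 1)^T - 3 e^t (1, 2)^T
xp1 = lambda t: (-3 * t * math.exp(t) - 3 * math.exp(t),
                 -3 * t * math.exp(t) - 6 * math.exp(t))
rhs1 = lambda x, t: (x[1], -2 * x[0] + 3 * x[1] + 3 * math.exp(t))

# Example 2 particular solution, written out component-wise
xp2 = lambda t: ((t / 2) * math.exp(t) - math.exp(t) / 4 + t + 4 / 3,
                 -(t / 2) * math.exp(t) - math.exp(t) / 4 - 2 * t - 5 / 3)
rhs2 = lambda x, t: (2 * x[0] + x[1] + math.exp(t),
                     x[0] + 2 * x[1] + 3 * t)

res1 = max(residual(xp1, rhs1, t) for t in (0.0, 0.5, 1.0))
res2 = max(residual(xp2, rhs2, t) for t in (0.0, 0.5, 1.0))
print(res1 < 1e-4, res2 < 1e-4)
```

Both residuals are at the level of finite-difference noise, so each x_p really does solve its nonhomogeneous system.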
43. Section 5: Application of Systems of Differential Equations – Arms Races (Nonhomogeneous Systems of Equations)

In the previous section, we discussed how to solve nonhomogeneous systems of differential equations using a variation of parameters technique. Now we can apply that knowledge to a model that illustrates an arms race between two competing nations. L. F. Richardson, an English meteorologist, first proposed this model (also known as the Richardson model), which tries to explain an arms race between two rival nations mathematically. Richardson himself seems to have believed that his perceptions of the way nations compete militarily might have been useful in preventing the outbreak of hostilities in World War II (Brown, 2007, p. 60). In the model, both nations are self-defensive: each maintains an army and a stock of weapons, each fights back to protect itself, and each regards an expansion of the other's army as a provocation. Both nations therefore spend money (in billions of dollars) on armaments x and y, which are functions of time t measured in years; x(t) and y(t) represent the yearly rates of armament expenditure of the two nations in some standard unit. Richardson made the following assumptions about his model:

- The expenditure for armaments of each country will increase at a rate proportional to the other country's expenditure (each nation's mutual fear is directly proportional to the expenditure of the other nation) (Rainville, Bedient, & Bedient, 1997, p. 228).
- The expenditure for armaments of each country will decrease at a rate proportional to its own expenditure (extensive armament expenditures create a drag on the nation's economy) (Rainville, Bedient, & Bedient, 1997, p. 228).
44. - The rate of change of arms expenditure for a country has a constant component that measures the level of antagonism of that country toward the other (Rainville, Bedient, & Bedient, 1997, p. 228).
- The effects of the three previous assumptions are additive (Rainville, Bedient, & Bedient, 1997, p. 228).

These assumptions make up the differential equations of the arms race system, denoted by

dx/dt = ay - mx + r,  x(0) = x0
dy/dt = bx - ny + s,  y(0) = y0

where a, m, b, and n are all positive constants. The positive terms ay and bx represent the drive to spend more money on arms due to the level of spending of the other nation, and the negative terms -mx and -ny reflect a nation's desire to inhibit future military spending because of the economic burden of its own spending. The constants r and s can take any value because they represent the attitudes of the nations toward each other (negative values represent feelings of good will, while positive values represent feelings of distrust). The initial values x(0) and y(0) represent the initial amount of money (in billions of dollars) each nation spends on armaments. The system can be simplified to

x'(t) = -mx + ay + r
y'(t) = bx - ny + s

and expressed in matrix notation as

X' = P(t)X + B,  that is,  (x'(t), y'(t))^T = [-m a; b -n] (x(t), y(t))^T + (r, s)^T
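Although the examples below solve this model in closed form, the Richardson equations are also easy to explore numerically. The sketch below is mine, with illustrative constants that are not taken from the paper; it advances the system with simple Euler steps.

```python
# Sketch: Euler integration of the Richardson model
#   x' = ay - mx + r,  y' = bx - ny + s
# The constant values below are illustrative, not from the paper.
def richardson(a, m, b, n, r, s, x0, y0, dt=0.001, t_end=20.0):
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        dx = a * y - m * x + r
        dy = b * x - n * y + s
        x, y = x + dt * dx, y + dt * dy
    return x, y

# With the economic drag dominating (m, n large relative to a, b),
# the race settles toward the equilibrium where x' = y' = 0; for these
# symmetric constants that equilibrium is x = y = 1.
x_end, y_end = richardson(a=1.0, m=3.0, b=1.0, n=3.0, r=2.0, s=2.0,
                          x0=5.0, y0=1.0)
print(round(x_end, 3), round(y_end, 3))
```

Raising a and b above m and n instead makes the mutual-fear terms dominate and the integration blow up, which is exactly the runaway case analyzed next.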
45. To solve the system, we will use the knowledge from the previous section to develop the general solution of the homogeneous part. For the nonhomogeneous part of the system, the solution will be a constant vector of the form (f, g)^T, because the vector B is made up of constants; this makes the variation of parameters step much easier. Lastly, the initial values for the trajectories of the solution represent the starting amount of money each country spends on armaments.

General solutions of the arms race system represent one of a few types of races: a stable arms race, a runaway arms race, a disarmament, or a race that turns out stable, runaway, or disarming depending on the initial values. The following examples demonstrate each of these races, along with direction fields to represent them graphically.

Example 1: A Runaway Arms Race

The following system will result in a runaway arms race:

x'(t) = 2x + 4y - 8
y'(t) = 4x + 2y + 2,  that is,  X' = [2 4; 4 2] X + (-8, 2)^T

To find the solution to this arms race, we will first find the general solution of the homogeneous part of the system using Maple:

[Maple input and output not reproduced]
46. Therefore, r1 = -2, ξ^(1) = (1, -1)^T, r2 = 6, and ξ^(2) = (1, 1)^T. The Wronskian W[x^(1), x^(2)] = 2e^{4t} ≠ 0; thus, the general solution of the homogeneous part of the arms race is

x_h = c1 (1, -1)^T e^{-2t} + c2 (1, 1)^T e^{6t}

As mentioned at the beginning of this section, the nonhomogeneous system X' = P(t)X + B has a constant solution of the form (f, g)^T because B is a vector of constants. Substituting X(t) = (f, g)^T for X (so that X' = 0) gives

0 = [2 4; 4 2] (f, g)^T + (-8, 2)^T,  that is,  2f + 4g - 8 = 0 and 4f + 2g + 2 = 0

whose solution is f = -2 and g = 3, so x_n = (-2, 3)^T. Therefore, the general solution of the nonhomogeneous system is

x = x_h + x_n = c1 (1, -1)^T e^{-2t} + c2 (1, 1)^T e^{6t} + (-2, 3)^T
47. Note that lim x(t) and lim y(t) as t → ∞ are both ∞ because of the e^{6t} term. Thus we would predict that the rate at which each nation spends money on armaments increases without bound, resulting in an arms race. The direction field for the nonhomogeneous system is shown below, with the initial conditions x0 = 5, y0 = -2 and x0 = -2, y0 = 5 marked.

[direction field plot not reproduced]

The direction field of the system shows that for any initial value, the solution goes to ∞ as t → ∞. Thus, we have a runaway arms race. If we wanted to solve the system with a given initial condition such as x0 = 5, y0 = -2, we would set up the general solution at t = 0 as

(5, -2)^T = c1 (1, -1)^T e^0 + c2 (1, 1)^T e^0 + (-2, 3)^T
48. and solve for c1 and c2. Therefore

c1 + c2 - 2 = 5 and -c1 + c2 + 3 = -2,  so  c1 + c2 = 7 and c1 - c2 = 5,  giving  c1 = 6 and c2 = 1

Thus, the final solution with the given initial conditions is

x = 6 (1, -1)^T e^{-2t} + (1, 1)^T e^{6t} + (-2, 3)^T

or

x(t) = 6e^{-2t} + e^{6t} - 2
y(t) = -6e^{-2t} + e^{6t} + 3

The role of the initial value is how much each nation will initially spend on armaments, in billions of dollars. Using initial values when solving an arms race system leads to a specific solution describing the race instead of a family of general solutions describing all cases of the system.

Example 2: A Stable Arms Race

The following system will result in a stable arms race:

x'(t) = -5x + 2y + 1
y'(t) = 4x - 3y + 2,  that is,  X' = [-5 2; 4 -3] X + (1, 2)^T

To find the solution to this arms race, we will first find the general solution of the homogeneous part of the system using Maple:
49. [Maple input and output not reproduced]

Therefore, r1 = -7, ξ^(1) = (1, -1)^T, r2 = -1, and ξ^(2) = (1, 2)^T. The Wronskian W[x^(1), x^(2)] = 3e^{-8t} ≠ 0; thus, the general solution of the homogeneous part of the arms race is

x_h = c1 (1, -1)^T e^{-7t} + c2 (1, 2)^T e^{-t}

The constant solution of the nonhomogeneous part is found by substituting X(t) = (f, g)^T into X' = [-5 2; 4 -3] X + (1, 2)^T. Therefore

-5f + 2g + 1 = 0 and 4f - 3g + 2 = 0,  so  f = 1 and g = 2

and the solution of that system is x_n = (1, 2)^T. Thus the general solution of the nonhomogeneous system is

x = x_h + x_n = c1 (1, -1)^T e^{-7t} + c2 (1, 2)^T e^{-t} + (1, 2)^T
50. Note that lim x(t) = 1 and lim y(t) = 2 as t → ∞, because in both equations the terms containing e^{-7t} and e^{-t} go to 0 as t → ∞. All that remains are the constant terms x(t) = 1 and y(t) = 2, which every choice of initial values converges to. The direction field, with a few trajectories denoting initial values for the nonhomogeneous system, is shown below.

[direction field plot not reproduced]

The direction field of the system shows that for any initial value, the solution approaches the point (1, 2) as t → ∞. Thus, we have a stable arms race.
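The contrast between the two races can be reproduced numerically. The sketch below is my own check, not code from the paper: it integrates both systems with Euler steps, compares the runaway race against its exact solution at t = 1, and watches the stable race settle at (1, 2).

```python
# Sketch: Euler integration of the runaway and stable arms races.
import math

def euler(P, B, x0, y0, dt=1e-4, t_end=1.0):
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        dx = P[0][0] * x + P[0][1] * y + B[0]
        dy = P[1][0] * x + P[1][1] * y + B[1]
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Runaway race (Example 1), started at (5, -2): compare the numerical
# value at t = 1 with the exact x(t) = 6e^{-2t} + e^{6t} - 2.
x1, y1 = euler([[2, 4], [4, 2]], (-8, 2), 5.0, -2.0, t_end=1.0)
exact_x1 = 6 * math.exp(-2) + math.exp(6) - 2

# Stable race (Example 2): a distant start still drifts to (1, 2).
x2, y2 = euler([[-5, 2], [4, -3]], (1, 2), 9.0, 0.0, t_end=15.0)
print(x1 > 100, y1 > 100, round(x2, 2), round(y2, 2))
```

The runaway trajectory already exceeds 400 billion by t = 1 and tracks the closed-form solution to within Euler's discretization error, while the stable trajectory is numerically indistinguishable from the equilibrium (1, 2) by t = 15.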
51. Example 3: Disarmament

The following system

x'(t) = -4x + y - 1
y'(t) = x - y - 2,  that is,  X' = [-4 1; 1 -1] X + (-1, -2)^T

will result in disarmament between the competing nations for all initial values. For the sake of simplicity, only the graph of this system and the solution produced by Maple will be shown, because the eigenvalues and eigenvectors generated from P(t) involve complicated radicals that would be difficult to manipulate by hand. The general solution to the system given by the Maple output is

[Maple input and output not reproduced]
52. As you can see from the Maple output, the general solution to the system becomes very complicated, but lim x(t) = -1 and lim y(t) = -3 as t → ∞. Since negative expenditure is impossible, the nations eventually reach a point in time where they decrease the rate at which they spend money on armaments until they are spending no money on the arms race at all. The graph of the system is much more useful for demonstrating an arms race that ends in disarmament. The direction field, with a few trajectories denoting initial values for the nonhomogeneous system, is shown below.

[direction field plot not reproduced]
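In every example, the limiting expenditures are simply the equilibrium of X' = PX + B, found by solving PX = -B. A small sketch of mine using Cramer's rule for the 2 x 2 case:

```python
# Sketch: the constant (equilibrium) solution of X' = P X + B for a
# 2 x 2 system, via Cramer's rule applied to P X = -B.
def equilibrium(P, B):
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    f = (-B[0] * P[1][1] + P[0][1] * B[1]) / det
    g = (-P[0][0] * B[1] + B[0] * P[1][0]) / det
    return f, g

# Disarmament example: P = [[-4, 1], [1, -1]], B = (-1, -2)
f, g = equilibrium([[-4, 1], [1, -1]], (-1, -2))

# Runaway example: same formula recovers its equilibrium (-2, 3);
# there the e^{6t} mode carries solutions away from it instead.
f2, g2 = equilibrium([[2, 4], [4, 2]], (-8, 2))
print(f, g, f2, g2)
```

Whether trajectories actually approach the equilibrium is decided by the eigenvalues of P: both negative here (a stable node at (-1, -3), hence disarmament), but of mixed sign in the runaway race.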
