# Eigenvectors & Eigenvalues: The Road to Diagonalisation


1. Eigenvectors and Eigenvalues. By Christopher Gratton, cg317@exeter.ac.uk
2. Introduction: Diagonal Matrices. Before beginning this topic, we must first clarify the definition of a "Diagonal Matrix". A Diagonal Matrix is an n by n Matrix whose non-diagonal entries are all zero.
3. Introduction: Diagonal Matrices. In this presentation, all Diagonal Matrices will be denoted as diag(d11, d22, ..., dnn), where dnn is the entry in the n-th row and the n-th column of the Diagonal Matrix.
4. Introduction: Diagonal Matrices. For example, the previously given Matrix can be written in the form diag(5, 4, 1, 9).
5. Introduction: Diagonal Matrices. The Effects of a Diagonal Matrix. The Identity Matrix, I, is an example of a Diagonal Matrix; it has the effect of leaving any Vector within a given System unchanged. For example, Iv = v for every Vector v.
6. Introduction: Diagonal Matrices. The Effects of a Diagonal Matrix. However, any other Diagonal Matrix will have the effect of enlarging a Vector along given axes. For example, the Diagonal Matrix diag(2, -1, 3) has the effect of stretching a Vector by a Scale Factor of 2 in the x-Axis, 3 in the z-Axis, and reflecting the Vector in the y-Axis.
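The stretching-and-reflecting effect described above can be checked numerically. A minimal sketch using NumPy, assuming the matrix diag(2, -1, 3) implied by the description (the test vector is an arbitrary choice):

```python
import numpy as np

# Diagonal matrix implied by the slide: scale x by 2, reflect y, scale z by 3
D = np.diag([2.0, -1.0, 3.0])

v = np.array([1.0, 2.0, 3.0])  # arbitrary test vector
result = D @ v

# Each component is scaled by the matching diagonal entry
print(result)  # [ 2. -2.  9.]
```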
7. The Goal. By the end of this PowerPoint, we should be able to understand and apply the idea of Diagonalisation, using Eigenvalues and Eigenvectors.
8. The Goal. The Matrix Point of View. By the end, we should be able to understand how, given an n by n Matrix, A, we can say that A is Diagonalisable if and only if there is an Invertible Matrix, δ, for which the Matrix δ⁻¹Aδ is Diagonal, and why this knowledge is significant.
9. The Points of View. The Square Matrix, A, may be seen as a Linear Operator, F, defined by F(X) = AX, where X is a Column Vector.
10. The Points of View. Furthermore, δ⁻¹Aδ represents the Linear Operator, F, relative to the Basis, or Coordinate System, S, whose Elements are the Columns of δ.
11. The Effects of a Coordinate System. If we are given A, an n by n Matrix of any kind, then it is possible to interpret it as a Linear Transformation in a given Coordinate System of n Dimensions. For example, the Matrix (1/√2)[[1, -1], [1, 1]] has the effect of a 45-degree Anticlockwise Rotation, in this case on the Identity Matrix.
12. The Effects of a Coordinate System. However, it is theorised that it is possible to represent this Linear Transformation as a Diagonal Matrix within another, different Coordinate System. We define the effect upon a given Vector in this new Coordinate System as a scalar multiplication of the Vector, relative to all the axes, by an unknown Scale Factor, without affecting its direction or other properties.
13. The Effects of a Coordinate System. This process can be summarised by the following definition: Av = λv, with A acting in the current Coordinate System and λ in the new Coordinate System. Where: A is the Transformation Matrix; v is a non-Zero Vector to be Transformed; λ is a Scalar in this new Coordinate System that has the same effect on v as A.
14. The Effects of a Coordinate System. In the definition Av = λv: Av means the Matrix A is a Linear Transformation upon the Vector v; λv means λ is the Scalar which results in the same Transformation on v as A.
15. The Effects of a Coordinate System. This can be applied in the following example, using a Matrix, A, and a Vector, v.
16. The Effects of a Coordinate System. In that example, Av = 2v. Thus, when λ = 2, A is equivalent to the Diagonal Matrix 2I which, as discussed previously, has the effect of enlarging the Vector by a Scale Factor of 2 in each Dimension.
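The relation Av = λv can be verified numerically. The 2 by 2 matrix from the slides is not reproduced in this transcript, so the sketch below uses a hypothetical matrix for which (1, -1) is an eigenvector with eigenvalue 2:

```python
import numpy as np

# Hypothetical matrix (the slide's own example matrix is not shown here);
# (1, -1) is an eigenvector of A with eigenvalue 2.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, -1.0])

# A acts on v exactly like multiplication by the scalar 2
print(A @ v)  # [ 2. -2.]
print(2 * v)  # [ 2. -2.]
```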
17. Definitions. Thus, if Av = λv is true, we call v the Eigenvector of A and λ the Eigenvalue of A corresponding to v.
18. Exceptions and Additions.
• We do not count v = 0 as an Eigenvector, as A0 = λ0 for all values of λ.
• λ = 0, however, is allowed as an accepted Eigenvalue.
• If v is a known Eigenvector of a Matrix, then so is cv, for all non-Zero values of the Scalar c.
• If two Vectors, u and v, are both Eigenvectors of a given Matrix, and both have the same resultant Eigenvalue, then u + v will also be an Eigenvector of the Matrix.
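The last two bullet points can be demonstrated numerically. The sketch below uses a hypothetical matrix, chosen so that (1, 1) is an eigenvector with eigenvalue 4:

```python
import numpy as np

# Hypothetical matrix for illustration; (1, 1) is an eigenvector with eigenvalue 4
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])
u = np.array([1.0, 1.0])   # eigenvector, eigenvalue 4
w = np.array([2.0, 2.0])   # another eigenvector with the same eigenvalue

# A non-zero scalar multiple of an eigenvector is still an eigenvector
print(np.allclose(A @ (5 * u), 4 * (5 * u)))  # True

# The sum of two eigenvectors sharing an eigenvalue is also an eigenvector
print(np.allclose(A @ (u + w), 4 * (u + w)))  # True
```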
19. Characteristic Polynomials. Establishing the Essentials. Suppose λ is an Eigenvalue for the Matrix A, relative to the Eigenvector v, and I is the Identity Matrix of the same Dimensions as A. Thus: Av = λv, so λIv − Av = 0, giving (λI − A)v = 0. A non-Zero solution, v, is possible only when λI − A is not Invertible.
20. Characteristic Polynomials. Application of the Knowledge. What this essentially leads to is the finding of all Eigenvalues and Eigenvectors of a specific Matrix. This is done by considering the Matrix, A, in addition to the Identity Matrix, I. We then multiply the Identity Matrix by the unknown quantity, λ.
21. Characteristic Polynomials. Application of the Knowledge. Following this, we take λ lots of the Identity Matrix and subtract the Matrix A from it. We then take the Determinant of the result, det(λI − A), which is a Polynomial equation in λ, and solve it to find the possible values of λ, the Eigenvalues. This is shown in the following example:
22. Characteristic Polynomials. Calculating Eigenvalues from a Matrix. To find the Eigenvalues of the given Matrix, A, we must consider λI − A:
23. Characteristic Polynomials. Calculating Eigenvalues from a Matrix. Then, det(λI − A) is a Polynomial which factorises to (λ − 4)(λ − 2)(λ − 6). Therefore, the Eigenvalues of the Matrix are λ = 4, λ = 2 and λ = 6.
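The original 3 by 3 matrix is not reproduced in this transcript; as a stand-in with the same eigenvalues (4, 2 and 6), the hypothetical upper-triangular matrix below yields the same characteristic polynomial:

```python
import numpy as np

# Stand-in for the slide's matrix: upper-triangular, so its eigenvalues
# 4, 2 and 6 sit on the diagonal
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 6.0]])

# Coefficients of det(lambda*I - A) = lambda^3 - 12*lambda^2 + 44*lambda - 48
print(np.poly(A))                     # [  1. -12.  44. -48.]
print(np.sort(np.linalg.eigvals(A)))  # [2. 4. 6.]
```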
24. Characteristic Polynomials. Calculating Eigenvectors from the Values. With Av = λv, we need to solve (λI − A)v = 0 for all given values of λ. This is done by solving a Homogeneous System of Linear Equations. In other words, we must turn λI − A into Echelon Form and find the values of the unknown components of v.
25. Characteristic Polynomials. Calculating Eigenvectors from the Values. For example, we will take λ = 4 from the previous example:
26. Characteristic Polynomials. Calculating Eigenvectors from the Values. For λ = 4 from the previous example, reducing the System gives the general solution for v:
27. Characteristic Polynomials. Calculating Eigenvectors from the Values. For λ = 4 from the previous example, this is the set of general Eigenvectors for the Eigenvalue of 4.
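This null-space computation can be sketched with NumPy. Since the slide's matrix is not reproduced here, the sketch uses a hypothetical stand-in with eigenvalues 4, 2 and 6; the eigenspace for λ = 4 is recovered as the null space of A − 4I:

```python
import numpy as np

# Hypothetical stand-in matrix with eigenvalues 4, 2 and 6
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 6.0]])

# Null space of (A - 4I) via SVD: rows of Vt whose singular value is
# (numerically) zero span the eigenspace for lambda = 4
M = A - 4 * np.eye(3)
_, s, vt = np.linalg.svd(M)
basis = vt[s < 1e-10]   # here a single row, spanning multiples of (1, 0, 0)

v = basis[0]
print(np.allclose(A @ v, 4 * v))  # True
```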
28. Diagonalisation. Mentioned earlier was the ultimate goal of Diagonalisation; that is to say, finding a Matrix, δ, such that, for a given Matrix, A, the product δ⁻¹Aδ is a Diagonal Matrix.
29. Diagonalisation. There are a few rules that can be derived from this. Firstly, δ must be an Invertible Matrix, as the Inverse, δ⁻¹, is necessary to the calculation. Secondly, the Eigenvectors of A must necessarily be Linearly Independent for this to work. Linear Independence will be covered later.
30. Diagonalisation. Eigenvectors, Eigenvalues and Diagonalisation. It turns out that the Columns of the Matrix δ are the Eigenvectors of the Matrix A. This is why they must be Linearly Independent: the Matrix δ must be Invertible. Furthermore, the Diagonal Entries of the resultant Matrix are the Eigenvalues associated with the corresponding Columns of Eigenvectors.
31. Diagonalisation. Eigenvectors, Eigenvalues and Diagonalisation. For example, from the previous example, we can create a Matrix δ whose Columns are the Eigenvectors corresponding to the Eigenvalues 4, 2 and 6, respectively. It is as follows:
32. Diagonalisation. Eigenvectors, Eigenvalues and Diagonalisation. Furthermore, we can calculate the Inverse, δ⁻¹:
33. Diagonalisation. Eigenvectors, Eigenvalues and Diagonalisation. Thus, the Diagonalisation of A can be created by computing δ⁻¹Aδ. Solving this gives diag(4, 2, 6): the Eigenvalues, in the Order given!
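The whole pipeline can be checked numerically. As the slide's matrix is not reproduced here, the sketch below assumes a hypothetical stand-in with eigenvalues 4, 2 and 6; `np.linalg.eig` returns the eigenvalues together with a matrix δ whose columns are eigenvectors:

```python
import numpy as np

# Hypothetical stand-in matrix with eigenvalues 4, 2 and 6
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 6.0]])

eigenvalues, delta = np.linalg.eig(A)   # columns of delta are eigenvectors

# delta^-1 A delta is diagonal, with the eigenvalues in the order given
D = np.linalg.inv(delta) @ A @ delta
print(np.round(D, 8))
print(eigenvalues)
```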
34. Linear Independence. Introduction. This will be a brief section on Linear Independence, to reinforce that the Eigenvectors of A must be Linearly Independent for Diagonalisation to be implemented.
35. Linear Independence. Linear Independency in x Dimensions. The Vectors v1, v2, ..., vn are classified as a Linearly Independent set of Vectors if the following rule applies: the only values of the Scalars, c1, c2, ..., cn, which make the equation c1v1 + c2v2 + ... + cnvn = 0 true are ci = 0 for all instances of i.
37. Linear Independence. Linear Independency in x Dimensions. If there are any non-Zero values of ci at any instance of i within the equation, then this set of Vectors, v1, ..., vn, is considered Linearly Dependent. It is to note that only one non-Zero instance of ci is needed to make the set Dependent.
38. Linear Independence. Linear Independency in x Dimensions. Therefore, if, say, at one instance, i, the value of ci is non-Zero, then the Vector set is Linearly Dependent. But, if the corresponding Vector, vi, were to be omitted from the set, given all other instances of c were Zero, then the set would, therefore, become Linearly Independent.
39. Linear Independence. Implications of Linear Independence. If the set of Vectors, v1, ..., vn, is Linearly Independent, then it is not possible to write any of the Vectors in the set in terms of any of the other Vectors within the same set. Conversely, if a set of Vectors is Linearly Dependent, then it is possible to write at least one Vector in terms of at least one other Vector.
40. Linear Independence. Implications of Linear Independence. For example, the given Vector set is Linearly Dependent, as one of its Vectors can be written as a Linear Combination of the others.
41. Linear Independence. Implications of Linear Independence. We can say, however, that this Vector set may be considered as Linearly Independent if the Dependent Vector were omitted from the set.
42. Linear Independence. Finding Linear Independency. The previous equation, c1v1 + c2v2 + ... + cnvn = 0, can be more usefully written as a Matrix equation. More significant, additionally, is the idea that this can be translated into a Homogeneous System of x Linear Equations, where x is the Dimension quantity of the System.
44. Linear Independence. Finding Linear Independency. Therefore, the Matrix of Coefficients, A, is an x by n Matrix, where n is the number of Vectors in the System and x is the Dimensions of the System. The Columns of A are equivalent to the Vectors of the System, v1, ..., vn.
45. Linear Independence. Finding Linear Independency. To observe whether the set v1, ..., vn is Linearly Independent or not, we need to put the Matrix A into Echelon Form. If, when in Echelon Form, we can observe that each Column of Unknowns has a Leading Entry, then the set of Vectors is Linearly Independent.
46. Linear Independence. Finding Linear Independency. If not, then the set of Vectors is Linearly Dependent. To find the Coefficients, we can put A into Reduced Echelon Form to consider the general solutions.
47. Linear Independence. Finding Linear Independency: Example. Let us consider whether the following set of Vectors is Linearly Independent:
48. Linear Independence. Finding Linear Independency: Example. These Vectors can be written in the following Matrix form:
49. Linear Independence. Finding Linear Independency: Example. The following EROs put this Matrix into Echelon Form. As this Matrix has a Leading Entry for every Column, we can conclude that the set of Vectors is Linearly Independent.
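The Leading-Entry test is equivalent to checking that the rank of A equals the number of Vectors. Since the slide's vectors are not reproduced in this transcript, the sketch below uses hypothetical vectors:

```python
import numpy as np

# Hypothetical vectors for illustration; each becomes a column of A
v1 = [1.0, 0.0, 2.0]
v2 = [0.0, 1.0, 1.0]
v3 = [1.0, 1.0, 0.0]
A = np.column_stack([v1, v2, v3])

# A Leading Entry in every column of the Echelon Form is equivalent to
# rank(A) equalling the number of vectors
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # True
```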
50. Summary. Thus, to conclude: Av = λv is the formula for Eigenvectors and Eigenvalues. A is a Matrix that has Eigenvectors and Eigenvalues to be calculated; v is an Eigenvector of A; λ is an Eigenvalue of A, corresponding to v.
51. Summary. Thus, to conclude: Av = λv is the formula for Eigenvectors and Eigenvalues. Given A and v, we can find λ by Matrix-Multiplying Av and observing what multiple of v the result is.
52. Summary. Thus, to conclude: det(λI − A) = 0 is the Characteristic Polynomial of A. This is used to find the general set of Eigenvalues of A, and thus its Eigenvectors. This is done by finding the Determinant of λI − A and solving the resultant Polynomial equation to isolate the Eigenvalues.
53. Summary. Thus, to conclude: det(λI − A) = 0 is the Characteristic Polynomial of A. Then, by substituting the Eigenvalues back into (λI − A)v = 0 and reducing the Matrix to Echelon Form, we can find the general set of Eigenvectors for each Eigenvalue.
54. Summary. Thus, to conclude: δ⁻¹Aδ is the Diagonalisation of A. δ is a Matrix created from the Eigenvectors of A, where each Column is an Eigenvector. In order for δ⁻¹ to exist, δ must necessarily be Invertible, which requires that the Eigenvectors of A are Linearly Independent.
55. Summary. Thus, to conclude: δ⁻¹Aδ is the Diagonalisation of A. The resultant δ⁻¹Aδ is a Diagonal Matrix whose diagonal values are the Eigenvalues, each in the same Column position as its associated Eigenvector.