This document introduces eigenvalues and eigenvectors. It provides three key points:
1. Eigenvalues are found by setting the determinant of A - λI equal to 0, where λ is an eigenvalue and I is the identity matrix. This yields a characteristic equation of degree n for an n×n matrix A.
2. Each eigenvalue λ yields an eigenvector x by solving (A - λI)x = 0.
3. Eigenvalues and eigenvectors reveal important properties of how a matrix transforms vectors under multiplication. Vectors whose direction is unchanged by the transformation are eigenvectors, and the factor by which they are scaled is the corresponding eigenvalue.
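The three steps above can be sketched for a 2×2 matrix, where det(A - λI) = 0 is simply a quadratic in λ (a minimal illustration; the matrix [[2, 1], [1, 2]] is an assumed example, not one from the document):

```python
import math

def eig2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] from det(A - lam*I) = 0,
    i.e. lam^2 - (a + d)*lam + (a*d - b*c) = 0 (real roots assumed)."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)   # assumes real eigenvalues
    return (tr + disc) / 2, (tr - disc) / 2

# Example: A = [[2, 1], [1, 2]] has trace 4 and determinant 3,
# so the characteristic equation lam^2 - 4*lam + 3 = 0 gives 3 and 1.
lam1, lam2 = eig2x2(2, 1, 1, 2)
```

Substituting each root back into (A - λI)x = 0 then yields the eigenvectors, here (1, 1) for λ = 3 and (1, -1) for λ = 1.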
K-Notes are concise study materials intended for quick revision near the end of preparation for exams like GATE. Each K-Note covers the concepts of a subject in 40 pages or fewer. They are useful for final preparation and travel. Students should use K-Notes in the last 2 months before the exam, practicing questions after reviewing each note. The document then provides a summary of key concepts in linear algebra and matrices, including matrix properties, operations, inverses, and systems of linear equations.
The document discusses the eigenvalue-eigenvector problem, which has applications in solving differential equations, modeling population growth, and calculating matrix powers. It provides mathematical background on homogeneous systems of equations where the eigenvalues are the roots of the characteristic polynomial. Iterative methods like the power method are presented for finding the dominant or lowest eigenvalue of a matrix. Physical examples of mass-spring systems are given where the eigenvalues correspond to vibration frequencies and the eigenvectors to mode shapes.
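The power method mentioned here can be sketched in a few lines (a minimal illustration with an assumed 2×2 test matrix; practical implementations add a convergence test rather than a fixed iteration count):

```python
def power_method(A, x, iters=50):
    """Power iteration: repeatedly apply A and normalise. The Rayleigh
    quotient of the final vector approximates the dominant eigenvalue."""
    n = len(A)
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(abs(v) for v in y)            # normalise to avoid overflow
        x = [v / m for v in y]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] * Ax[i] for i in range(n)) / sum(v * v for v in x)

# Assumed test matrix: dominant eigenvalue 3, eigenvector (1, 1).
lam = power_method([[2.0, 1.0], [1.0, 2.0]], [1.0, 0.0])
```

The iterate converges toward the dominant eigenvector, which in the mass-spring setting is the corresponding mode shape.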
This document provides an overview of matrix algebra concepts for business students. It defines key terms like matrix, order, types of matrices including identity, diagonal and triangular matrices, and matrix operations such as addition, subtraction and multiplication. It also explains determinants, which indicate whether a system of linear equations has a unique solution. For a 2×2 matrix the determinant is the difference of the products of the diagonal and off-diagonal elements; larger determinants are computed by cofactor expansion. This document serves as a basic introduction and recap of matrix algebra.
Eigenvalues and eigenfunctions are key concepts in linear algebra. An eigenfunction is a function that when operated on by a linear operator produces a constant multiplied version of itself. The constant is the corresponding eigenvalue. Eigenvalues are the solutions to the characteristic polynomial of the linear operator. Eigenfunctions are not unique as any constant multiple of an eigenfunction is also an eigenfunction with the same eigenvalue. The spectrum of an operator is the set of all its eigenvalues.
This document provides an introduction to basic matrix theory concepts. It defines what a matrix is, explains how to represent vectors as matrices, and covers key matrix concepts like the diagonal matrix, unit matrix, zero matrix, and transpose. It also demonstrates how to add, subtract, and multiply matrices by following specific rules like multiplying rows by columns. Worked examples are provided for adding, subtracting, multiplying, and transposing matrices as well as finding products of matrix operations.
The eigenvalues of a Hermitian matrix are always real. For a Hermitian matrix A (one with A* = A), the quadratic form v*Av is real for any vector v, because (v*Av)* = v*A*v = v*Av. Now, if λ is an eigenvalue of A corresponding to the eigenvector v ≠ 0, then:
v*Av = v*(λv) = λ(v*v)   (since Av = λv)
Both v*Av and v*v are real, and v*v > 0 for v ≠ 0, so
λ = v*Av / v*v
must be real. A real symmetric matrix is a special case of a Hermitian matrix (with x'Ax in place of x*Ax), so its eigenvalues are real for the same reason.
So in summary, the eigenvalues of both Hermitian and real symmetric matrices are real.
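A quick numerical check of this result, using an assumed 2×2 Hermitian matrix (not one from the document) and its quadratic characteristic polynomial:

```python
import cmath

# A Hermitian 2x2 matrix: equal to its own conjugate transpose.
A = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]

# Characteristic polynomial: lam^2 - tr(A)*lam + det(A) = 0.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
# Both roots come out with zero imaginary part, as the theorem predicts.
```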
This document discusses matrix algebra concepts such as determinants, inverses, eigenvalues, and rank. It provides the following key points:
- The determinant of a square matrix is a number that characterizes properties like singularity. It is defined as a signed sum of products of entries, one taken from each row and each column.
- Cramer's rule provides a formula for solving systems of linear equations using determinants, but it is only practical for small matrices up to 3x3 or 4x4 due to computational complexity.
- A matrix is singular if its determinant is zero, meaning its rows and columns are linearly dependent. The rank of a matrix is the size of the largest non-singular sub-matrix. A full rank matrix has rank equal to the smaller of its numbers of rows and columns.
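Cramer's rule for the 2×2 case can be sketched as follows (the coefficients below are an assumed example system):

```python
def cramer2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] @ (x, y) = (b1, b2) by Cramer's rule:
    each unknown is a ratio of determinants."""
    D = a11 * a22 - a12 * a21        # coefficient determinant
    if D == 0:
        raise ValueError("singular system: no unique solution")
    Dx = b1 * a22 - a12 * b2         # column 1 replaced by (b1, b2)
    Dy = a11 * b2 - b1 * a21         # column 2 replaced by (b1, b2)
    return Dx / D, Dy / D

# 2x + y = 5 and x + 3y = 10 has the unique solution x = 1, y = 3.
x, y = cramer2(2, 1, 1, 3, 5, 10)
```

For larger systems the determinants grow expensive to evaluate, which is exactly the practicality limit the summary notes.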
This document provides a tutorial on basic MATLAB commands for creating, manipulating, and operating on vectors and matrices. It describes how to create vectors and matrices, change their entries, perform matrix multiplication and inversion, extract submatrices, and create special matrices like identity and diagonal matrices. Examples are provided to illustrate various commands like eye, inv, backslash, and how to input vectors, matrices, and create M-files for functions and scripts.
Applications and vector subspaces in the... (emojose107)
This work focuses on the teaching of Linear Algebra in engineering degree programmes. The concepts of this branch of mathematics are studied in the basic courses of the first years of those programmes: vectors, matrices, systems of linear equations, vector spaces, linear transformations, eigenvalues and eigenvectors, and diagonalization of matrices.
Row space | Column Space | Null space | Rank | Nullity (Vishvesh Jasani)
This document discusses row space, column space, and null space of matrices. It defines these concepts and provides theorems about how elementary row operations do not change the row space or null space of a matrix. Examples are given of finding bases for the row space and column space of matrices and determining the rank and nullity of matrices. Key topics covered include the definitions of row space, column space, and null space; how elementary row operations affect these subspaces; using row echelon form to determine bases; and relating rank, nullity, and the dimensions of subspaces.
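The rank computation the summary describes, reducing to echelon form and counting pivot rows, can be sketched directly (the test matrix is an assumed example; nullity then follows from the rank-nullity theorem):

```python
def rank(M, eps=1e-12):
    """Rank via forward elimination: count nonzero pivot rows."""
    M = [row[:] for row in M]                 # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        p = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if p is None:
            continue                          # no pivot in this column
        M[r], M[p] = M[p], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

# Row 2 is twice row 1, so only two rows are independent:
A = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 0.0, 1.0]]
# rank(A) = 2, hence nullity = 3 - 2 = 1.
```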
The document summarizes key concepts related to systems of linear equations and linear algebra, including:
1) A system of n linear equations can be expressed in matrix form as Ax = b, where A is the coefficient matrix, x is the vector of unknowns, and b is the constant vector. If b = 0, the system is homogeneous, otherwise it is nonhomogeneous.
2) If the coefficient matrix A is nonsingular, the system Ax = b has a unique solution that can be found by computing x = A^-1b. If A is singular, the system may have no solution or infinitely many solutions.
3) A set of vectors is linearly dependent if there exist scalars, not all zero, such that the corresponding linear combination of the vectors equals the zero vector.
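In the 2×2 case, linear dependence of two vectors reduces to a vanishing determinant, which is easy to check directly (a minimal sketch; the vectors are assumed examples):

```python
def dependent_2d(u, v):
    """Two vectors in R^2 are linearly dependent exactly when the
    determinant of the matrix with u and v as columns is zero."""
    return u[0] * v[1] - u[1] * v[0] == 0

dependent_2d((1, 2), (2, 4))   # dependent: (2, 4) = 2 * (1, 2)
dependent_2d((1, 0), (0, 1))   # independent: the standard basis
```

The same determinant is what decides between the unique-solution and singular cases for Ax = b above.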
On α-characteristic Equations and α-minimal Polynomial of Rectangular Matrices (IOSR Journals)
In this paper, we study rectangular matrices which satisfy the criteria of the Cayley-Hamilton theorem for a square matrix. Various results on characteristic polynomials, characteristic equations, eigenvalues, and α-minimal polynomials of rectangular matrices are proved.
AMS SUBJECT CLASSIFICATION CODE: 17D20(γ,δ).
The Computational Algorithm for Supported Solutions Set of Linear Diophantine... (IJMER)
This document presents an algorithm for computing the minimal supported set of solutions to a system of linear Diophantine equations in the ring of integer numbers. The algorithm is based on a modified TSS method. It first considers the case of a homogeneous linear Diophantine equation and constructs a base set of solutions using the coefficients of the equation. It then extends this approach to a system of homogeneous linear Diophantine equations by constructing the base set for each equation and showing that the vectors form a base for the solution set of the overall system. The algorithm and its properties are illustrated with examples.
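For a single linear Diophantine equation ax + by = c, a base solution comes from the extended Euclidean algorithm, a standard building block (sketched here with assumed coefficients; the paper's TSS-based construction itself is not reproduced):

```python
def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    ax + by = c is solvable over the integers exactly when g divides c."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

# 12x + 18y = 6: gcd(12, 18) = 6 divides 6, so a solution exists.
g, x, y = ext_gcd(12, 18)
```

All remaining integer solutions differ from this base solution by multiples of (b/g, -a/g), the homogeneous solutions.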
A matrix is a set of elements organized into rows and columns. Basic matrix operations include addition, subtraction, and multiplication. A matrix can be multiplied by another matrix if the number of columns of the first equals the number of rows of the second. The determinant of a matrix is a value that is used to determine properties of the matrix such as invertibility. Cramer's rule can be used to solve systems of linear equations involving matrices.
Module on using a scientific calculator as a teaching aid (ABM) in Mathematics (Norsyazana Kamarudin)
This document provides information about discriminants of quadratic equations. It defines quadratic equations and explains that the discriminant, which is b^2 - 4ac, provides information about the number and type of roots. A positive discriminant indicates two real roots, a zero discriminant indicates one real root, and a negative discriminant indicates no real roots. Examples of solving quadratic equations with a scientific calculator are provided. Worksheets ask students to determine the type of roots and solutions for different quadratic equations using the discriminant and with or without a calculator.
The document contains notes from a previous linear algebra class covering the following topics:
1. There will be a quiz tomorrow on sections 1.1-1.3 focusing on concepts rather than lengthy calculations.
2. Previous topics included systems of linear equations, row reduction, pivot positions, basic and free variables, and the span of vectors.
3. Determining if a vector is in the span of other vectors is equivalent to checking if the corresponding linear system is consistent.
4. Examples are provided of determining if homogeneous systems have non-trivial solutions based on the presence of free variables. The general solution of a homogeneous system is expressed in parametric vector form.
This document contains an unsolved mathematics paper from 1983 consisting of 25 multiple choice questions testing concepts in algebra, geometry, trigonometry, and calculus. The paper is divided into three sections - single answer multiple choice questions, true/false statements, and fill in the blank questions. Sample questions include solving equations, finding roots, determining geometric properties of figures, evaluating integrals and derivatives, and identifying monotonic behavior of functions.
The document discusses equations and their definitions and classifications. It defines equality, equations, identities, variables, terms of an equation, numerical and literal equations, types of equations including polynomial, rational, radical, and absolute value equations. It provides examples of solving linear, rational, and word problems involving equations. Key steps in solving equations are outlined such as isolating the variable, using properties of equality, and verifying solutions.
This document contains a lecture outline on ordinary differential equations (ODEs) given by Neela Nataraj. It begins with basic concepts of differential equations, including definitions and classification by type, order, and linearity. Examples of first and second order ODEs as well as first and second order partial differential equations are provided. The document then discusses the general form of ODEs and classification by order, defining order as the highest derivative appearing in the equation.
This document discusses different methods for solving systems of linear equations, including Cramer's rule, elimination methods, and Gaussian elimination. Cramer's rule uses determinants to find the values of variables by dividing the determinant of the coefficients by the primary determinant. Elimination methods remove one unknown using row operations like addition and subtraction. Gaussian elimination transforms the coefficient matrix into triangular form using row operations, then back substitution can find the unique solution.
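Gaussian elimination followed by back substitution, as described, can be sketched as follows (the 2×2 system is an assumed example; partial pivoting is added for numerical stability):

```python
def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting, then back substitution.
    A minimal sketch for small, nonsingular systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]       # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))   # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):                          # eliminate below pivot
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                         # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 2x + y = 5 and x + 3y = 10 gives x = 1, y = 3.
x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```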
This document discusses systems of equations and inequalities. It covers evaluating functions of two variables, solving systems of equations using substitution and elimination methods, graphing systems of equations, and solving linear inequalities symbolically and graphically. The document is a module on these topics, with learning objectives, examples, and explanations of key concepts.
This document discusses the application of vector spaces and subspaces in biotechnology. It begins by introducing the importance of linear algebra in scientific and technological development. It then defines the objectives as understanding vector spaces, subspaces, and dimensionality. Examples of vector spaces and subspaces are provided. Applications include using these concepts to create classification methods for diseases, animals and plants. In conclusions, it is stated that these concepts facilitate study and development in biotechnology.
This document defines the derivative and the four step rule for calculating derivatives. It explains that the derivative represents the slope of the tangent line to a function's graph at a given point. The formal definition of the derivative is presented as the limit of the difference quotient. The four step rule outlines the process for calculating the derivative of a function f(x) by adding an increment to x and y, isolating the change in y, dividing by the change in x, and taking the limit as the change in x approaches zero.
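The limit of the difference quotient can be checked numerically (a sketch using a central difference rather than the document's forward quotient; f(x) = x^2 at x = 3 is an assumed example):

```python
def derivative(f, x, h=1e-6):
    """Central-difference approximation to the derivative: the
    difference quotient (f(x+h) - f(x-h)) / (2h) for small h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The slope of the tangent to x^2 at x = 3 is 6.
slope = derivative(lambda x: x * x, 3.0)
```

Shrinking h traces the limit process of the four step rule, up to floating-point round-off.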
The document provides examples to illustrate how to find the eigenvalues and eigenvectors of a matrix.
1) For a 2x2 matrix, the characteristic polynomial is computed by taking the determinant of the matrix minus λ times the identity matrix, det(A - λI). The roots of the characteristic polynomial are the eigenvalues. The corresponding eigenvectors are found by solving the eigenvalue equation (A - λI)x = 0.
2) For a triangular matrix, the eigenvalues are simply the diagonal elements. The eigenvectors are again found by solving (A - λI)x = 0 for each such λ.
3) The document provides a numerical example to demonstrate finding the eigenvalues (3, 1, -2) and eigenvectors of a 3x3 matrix.
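The triangular-matrix claim is easy to verify: det(A - λI) vanishes at each diagonal entry. The matrix below is an assumed triangular example built to have the eigenvalues 3, 1, -2 quoted above; it is not the document's own 3x3 matrix.

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Upper triangular, so its eigenvalues are the diagonal entries 3, 1, -2.
A = [[3, 5, -1],
     [0, 1,  4],
     [0, 0, -2]]
# det(A - lam*I) evaluated at each diagonal entry is zero.
vals = [det3([[A[r][c] - (lam if r == c else 0) for c in range(3)]
              for r in range(3)]) for lam in (3, 1, -2)]
```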
The document provides information about eigenvalues and eigenvectors. It begins by defining eigenvalues and eigenvectors, and how they relate to a matrix A. It then describes how to compute the eigenvalues and eigenvectors of a matrix by finding the characteristic polynomial and solving the characteristic equation. Two examples are provided to illustrate this process. The document also discusses eigenspaces and proves that the set of eigenvectors for a given eigenvalue forms a subspace. It introduces the concept of diagonalization of matrices using similarity transformations.
The document provides information about quadratic equations including:
1) It defines a quadratic equation as a polynomial equation of the second degree in the form ax^2 + bx + c = 0, where a ≠ 0. The constants a, b, and c are the quadratic, linear, and constant coefficients.
2) There are three main methods to solve quadratic equations: factoring, completing the square, or using the quadratic formula.
3) The discriminant, b^2 - 4ac, determines the nature of the roots - two real roots if positive, one real root if zero, or two complex roots if negative.
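The discriminant classification can be sketched as a small solver (a minimal illustration; the sample equations are assumed):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, classified by the discriminant."""
    disc = b * b - 4 * a * c
    if disc > 0:                      # two distinct real roots
        r = math.sqrt(disc)
        return ((-b + r) / (2 * a), (-b - r) / (2 * a))
    if disc == 0:                     # one repeated real root
        return (-b / (2 * a),)
    return ()                         # negative discriminant: no real roots

solve_quadratic(1, -5, 6)   # x^2 - 5x + 6 = 0, discriminant 1 > 0
solve_quadratic(1, 2, 1)    # (x + 1)^2 = 0, discriminant 0
solve_quadratic(1, 0, 1)    # x^2 + 1 = 0, discriminant -4 < 0
```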
This document discusses matrix algebra concepts such as determinants, inverses, eigenvalues, and rank. It provides the following key points:
- The determinant of a square matrix is a number that characterizes properties like singularity. It is defined as the sum of products of the matrix elements.
- Cramer's rule provides a formula for solving systems of linear equations using determinants, but it is only practical for small matrices up to 3x3 or 4x4 due to computational complexity.
- A matrix is singular if its determinant is zero, meaning its rows and columns are linearly dependent. The rank of a matrix is the size of the largest non-singular sub-matrix. A full rank matrix has
This document provides a tutorial on basic MATLAB commands for creating, manipulating, and operating on vectors and matrices. It describes how to create vectors and matrices, change their entries, perform matrix multiplication and inversion, extract submatrices, and create special matrices like identity and diagonal matrices. Examples are provided to illustrate various commands like eye, inv, backslash, and how to input vectors, matrices, and create M-files for functions and scripts.
Aplicaciones y subespacios y subespacios vectoriales en laemojose107
se enfoca en la enseñanza del Álgebra Lineal en carreras de ingeniería. Los conceptos vinculados a esta rama de las matemáticas se estudian en los cursos básicos de los primeros años de los planes de estudio en esas carreras. Se estudian conceptos tales como vectores, matrices, sistemas de ecuaciones lineales, espacios vectoriales, transformaciones lineales, valores y. vectores propios, y diagonalización de matrices.
Row space | Column Space | Null space | Rank | NullityVishvesh Jasani
This document discusses row space, column space, and null space of matrices. It defines these concepts and provides theorems about how elementary row operations do not change the row space or null space of a matrix. Examples are given of finding bases for the row space and column space of matrices and determining the rank and nullity of matrices. Key topics covered include the definitions of row space, column space, and null space; how elementary row operations affect these subspaces; using row echelon form to determine bases; and relating rank, nullity, and the dimensions of subspaces.
The document summarizes key concepts related to systems of linear equations and linear algebra, including:
1) A system of n linear equations can be expressed in matrix form as Ax = b, where A is the coefficient matrix, x is the vector of unknowns, and b is the constant vector. If b = 0, the system is homogeneous, otherwise it is nonhomogeneous.
2) If the coefficient matrix A is nonsingular, the system Ax = b has a unique solution that can be found by computing x = A^-1b. If A is singular, the system may have no solution or infinitely many solutions.
3) A set of vectors is linearly dependent if there exist scalar multiples of
On α-characteristic Equations and α-minimal Polynomial of Rectangular MatricesIOSR Journals
In this paper, we study rectangular matrices which satisfy the criteria of the Cayley-Hamilton
theorem for a square matrix.Various results on characteristic polynomials, characteristic equations,
eigenvalues and α-minimal polynomial of rectangular matrices are proved.
AMS SUBJECT CLASSIFICATION CODE: 17D20(γ,δ).
The Computational Algorithm for Supported Solutions Set of Linear Diophantine...IJMER
This document presents an algorithm for computing the minimal supported set of solutions to a system of linear Diophantine equations in the ring of integer numbers. The algorithm is based on a modified TSS method. It first considers the case of a homogeneous linear Diophantine equation and constructs a base set of solutions using the coefficients of the equation. It then extends this approach to a system of homogeneous linear Diophantine equations by constructing the base set for each equation and showing that the vectors form a base for the solution set of the overall system. The algorithm and its properties are illustrated with examples.
A matrix is a set of elements organized into rows and columns. Basic matrix operations include addition, subtraction, and multiplication. A matrix can be multiplied by another matrix if the number of columns of the first equals the number of rows of the second. The determinant of a matrix is a value that is used to determine properties of the matrix such as invertibility. Cramer's rule can be used to solve systems of linear equations involving matrices.
Modul penggunaan kalkulator sainstifik sebagai ABM dalam MatematikNorsyazana Kamarudin
This document provides information about discriminants of quadratic equations. It defines quadratic equations and explains that the discriminant, which is b^2 - 4ac, provides information about the number and type of roots. A positive discriminant indicates two real roots, a zero discriminant indicates one real root, and a negative discriminant indicates no real roots. Examples of solving quadratic equations with a scientific calculator are provided. Worksheets ask students to determine the type of roots and solutions for different quadratic equations using the discriminant and with or without a calculator.
The document contains notes from a previous linear algebra class covering the following topics:
1. There will be a quiz tomorrow on sections 1.1-1.3 focusing on concepts rather than lengthy calculations.
2. Previous topics included systems of linear equations, row reduction, pivot positions, basic and free variables, and the span of vectors.
3. Determining if a vector is in the span of other vectors is equivalent to checking if the corresponding linear system is consistent.
4. Examples are provided of determining if homogeneous systems have non-trivial solutions based on the presence of free variables. The general solution of a homogeneous system is expressed in parametric vector form.
This document contains an unsolved mathematics paper from 1983 consisting of 25 multiple choice questions testing concepts in algebra, geometry, trigonometry, and calculus. The paper is divided into three sections - single answer multiple choice questions, true/false statements, and fill in the blank questions. Sample questions include solving equations, finding roots, determining geometric properties of figures, evaluating integrals and derivatives, and identifying monotonic behavior of functions.
The document discusses equations and their definitions and classifications. It defines equality, equations, identities, variables, terms of an equation, numerical and literal equations, types of equations including polynomial, rational, radical, and absolute value equations. It provides examples of solving linear, rational, and word problems involving equations. Key steps in solving equations are outlined such as isolating the variable, using properties of equality, and verifying solutions.
This document contains a lecture outline on ordinary differential equations (ODEs) given by Neela Nataraj. It begins with basic concepts of differential equations, including definitions and classification by type, order, and linearity. Examples of first and second order ODEs as well as first and second order partial differential equations are provided. The document then discusses the general form of ODEs and classification by order, defining order as the highest derivative appearing in the equation.
This document discusses different methods for solving systems of linear equations, including Cramer's rule, elimination methods, and Gaussian elimination. Cramer's rule uses determinants to find the values of variables by dividing the determinant of the coefficients by the primary determinant. Elimination methods remove one unknown using row operations like addition and subtraction. Gaussian elimination transforms the coefficient matrix into triangular form using row operations, then back substitution can find the unique solution.
This document discusses systems of equations and inequalities. It covers evaluating functions of two variables, solving systems of equations using substitution and elimination methods, graphing systems of equations, and solving linear inequalities symbolically and graphically. The document is a module on these topics, with learning objectives, examples, and explanations of key concepts.
This document discusses the application of vector spaces and subspaces in biotechnology. It begins by introducing the importance of linear algebra in scientific and technological development. It then defines the objectives as understanding vector spaces, subspaces, and dimensionality. Examples of vector spaces and subspaces are provided. Applications include using these concepts to create classification methods for diseases, animals and plants. In conclusions, it is stated that these concepts facilitate study and development in biotechnology.
This document defines the derivative and the four step rule for calculating derivatives. It explains that the derivative represents the slope of the tangent line to a function's graph at a given point. The formal definition of the derivative is presented as the limit of the difference quotient. The four step rule outlines the process for calculating the derivative of a function f(x) by adding an increment to x and y, isolating the change in y, dividing by the change in x, and taking the limit as the change in x approaches zero.
The document provides examples to illustrate how to find the eigenvalues and eigenvectors of a matrix.
1) For a 2x2 matrix, the characteristic polynomial is computed by taking the determinant of the matrix minus the identity matrix. The roots of the characteristic polynomial are the eigenvalues. The corresponding eigenvectors are found by solving the original eigenvalue equation.
2) For a triangular matrix, the eigenvalues are the diagonal elements. The eigenvectors are found by setting rows corresponding to non-diagonal elements to zero.
3) The document provides a numerical example to demonstrate finding the eigenvalues (3, 1, -2) and eigenvectors of a 3x3 matrix.
The document provides information about eigenvalues and eigenvectors. It begins by defining eigenvalues and eigenvectors, and how they relate to a matrix A. It then describes how to compute the eigenvalues and eigenvectors of a matrix by finding the characteristic polynomial and solving the characteristic equation. Two examples are provided to illustrate this process. The document also discusses eigenspaces and proves that the set of eigenvectors for a given eigenvalue forms a subspace. It introduces the concept of diagonalization of matrices using similarity transformations.
The document provides information about quadratic equations including:
1) It defines a quadratic equation as a polynomial equation of the second degree in the form ax² + bx + c = 0, where a ≠ 0. The constants a, b, and c are the quadratic, linear, and constant coefficients.
2) There are three main methods to solve quadratic equations: factoring, completing the square, or using the quadratic formula.
3) The discriminant, b² - 4ac, determines the nature of the roots: two distinct real roots if positive, one repeated real root if zero, or two complex conjugate roots if negative.
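The three discriminant cases are easy to check numerically. A minimal sketch in plain Python (the sample coefficients are illustrative, not taken from the source document):

```python
import cmath

def roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula.

    Returns the two roots and the discriminant b^2 - 4ac.
    """
    disc = b * b - 4 * a * c
    r = cmath.sqrt(disc)              # cmath handles negative discriminants
    return (-b + r) / (2 * a), (-b - r) / (2 * a), disc

r1, r2, d = roots(1, -3, 2)           # disc = 1 > 0: two real roots, 2 and 1
assert d > 0 and {r1, r2} == {2, 1}

r1, r2, d = roots(1, -2, 1)           # disc = 0: one repeated real root, 1
assert d == 0 and r1 == r2 == 1

r1, r2, d = roots(1, 0, 1)            # disc = -4 < 0: complex pair i and -i
assert d < 0 and {r1, r2} == {1j, -1j}
```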
This document provides an overview of ordinary differential equations with constant coefficients. It defines key terms like order, degree, homogeneous and non-homogeneous equations. It describes the general forms of linear differential equations and how to find the complementary function and particular integral to determine the general solution. Specifically, it outlines four cases for determining the complementary function based on whether the roots of the auxiliary equation are real/complex and distinct/repeated. It also includes two examples of solving second and fourth order linear differential equations.
This document provides an introduction to matrix algebra and random vectors. It defines key concepts such as vectors, matrices, matrix operations, and properties of positive definite matrices. Vectors are defined as arrays of real numbers that can be added or multiplied by scalars. Matrices are rectangular arrays of numbers that can be added or multiplied. Positive definite matrices are matrices where the quadratic form is always nonnegative. The eigenvalues and eigenvectors of a symmetric positive definite matrix allow geometric interpretation of distances defined by the matrix.
The document proposes a new method called Spectral Regression Discriminant Analysis (SRDA) to address the computational challenges of Linear Discriminant Analysis (LDA) on large, high-dimensional datasets. SRDA combines spectral graph analysis and regression to reduce the time complexity of LDA from quadratic to linear. It works by using the eigenvectors of the within-class scatter matrix to define a regression problem, the solution of which provides the projection vectors that maximize class separability. Experiments on four datasets show SRDA has comparable classification accuracy to LDA but can scale to much larger problems.
The document discusses eigenvalues, eigenvectors, and diagonalization of matrices. It begins by defining eigenvalues and eigenvectors and providing an example of finding them for a matrix. It then discusses computing eigenvalues and eigenvectors, including using the characteristic equation and polynomial. The document explains diagonalization of matrices, including when a matrix is diagonalizable. It provides examples of finding eigenvalues, eigenvectors, and diagonalizing symmetric matrices. It concludes by defining orthogonal matrices.
This document discusses integrals involving exponential functions. It shows that integrating an exponential introduces a factor of 1 over the constant in the exponent. It evaluates the important definite integral of e^(-ax) from 0 to infinity, which equals 1/a. It also evaluates the Gaussian integral of e^(-ax²) from -infinity to infinity, which equals √(π/a), via the double integral of e^(-a(x²+y²)) over the plane. Taking derivatives of these integrals with respect to a generates related integrals involving powers of x that are useful in the kinetic theory of gases.
The document discusses pseudospectra as an alternative to eigenvalues for analyzing non-normal matrices and operators. It defines three equivalent definitions of pseudospectra: (1) the set of points where the resolvent is larger than ε-1, (2) the set of points that are eigenvalues of a perturbed matrix with perturbation smaller than ε, and (3) the set of points where the resolvent applied to a unit vector is larger than ε. It also shows that pseudospectra are nested sets and their intersection is the spectrum. The definitions extend to operators on Hilbert spaces using singular values.
This document provides information about ACE Educational Academy, an institution that provides training in electrical engineering. It includes a foreword about the book "Electromagnetic Fields" written by Venugopala Swamy for exams like GATE and engineering services. The book aims to explain electromagnetic field concepts simply. The document then lists topics that will be covered in the book, including vector analysis, electric fields, capacitance, Maxwell's equations, and inductance of simple geometries. It provides some introductory information about using vector analysis concepts like gradient, divergence and curl to solve electromagnetic field problems.
1. The matrix is not invertible as it has repeated rows.
2. The eigenvalue is 0 since a matrix is not invertible if it has 0 as an eigenvalue.
3. The eigenvectors corresponding to 0 can be found by reducing the matrix A - 0I to row echelon form. This gives the equation x1 + x2 + x3 = 0 with x2 and x3 as free variables, so two linearly independent eigenvectors are (1, -1, 0) and (1, 0, -1).
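The reduction above can be checked directly. The matrix itself is not reproduced in this summary, so as a hedged illustration assume the simplest 3x3 matrix with repeated rows, the all-ones matrix; its row reduction gives exactly the equation x1 + x2 + x3 = 0:

```python
# Sketch: verify that (1, -1, 0) and (1, 0, -1) are eigenvectors for
# eigenvalue 0.  The matrix from the text is not shown, so we assume a
# hypothetical 3x3 matrix with repeated rows (all ones) for illustration.

def matvec(A, v):
    """Multiply matrix A (given as a list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 1, 1],
     [1, 1, 1],
     [1, 1, 1]]          # repeated rows -> singular -> 0 is an eigenvalue

for v in [(1, -1, 0), (1, 0, -1)]:
    # A v = 0 = 0 * v, so each v is an eigenvector for eigenvalue 0
    assert matvec(A, v) == [0, 0, 0]
```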
The document defines eigenvalues and eigenvectors. An eigenvector is a non-zero vector whose direction does not change when a linear transformation is applied. The associated scalar multiplier is the eigenvalue. Eigenvalues are found by setting the determinant of A - λI equal to 0. This characteristic equation has roots that are the eigenvalues. Eigenvectors correspond to distinct eigenvalues and are nonzero solutions to (λI - A)x = 0. The document provides examples of finding eigenvalues and eigenvectors and lists several properties of eigenvalues and eigenvectors.
This document discusses three problems related to partial differential equations:
1) Finding eigenfunctions and eigenvalues for an operator and bounding the maximum value of a Rayleigh quotient.
2) Solving the Laplacian eigenproblem in a cylinder using finite differences and comparing to analytical solutions.
3) Solving the wave equation for a vibrating string driven by an oscillating force and deriving the Green's function.
The document discusses several key concepts:
1. It defines union and intersection of sets, with the union being elements in A or B or both, and intersection being elements in both A and B.
2. It provides examples of multiplying polynomials by distributing one polynomial over the other.
3. It explains how to convert between degrees and radians, noting radians can be understood as the central angle that subtends an arc of length equal to the radius.
The document discusses eigenvalues and eigenvectors. It provides examples of finding the eigenvalues and eigenvectors of different matrices. The key steps are:
1) Find the characteristic equation of the matrix by calculating the determinant of (A - λI).
2) Solve the characteristic equation to find the eigenvalues.
3) For each eigenvalue, solve the system (A - λI)x = 0 to find the corresponding eigenvector(s).
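For a 2 by 2 matrix, those three steps fit in a short function. The sketch below is illustrative (plain Python with the quadratic formula, not code from the document): the characteristic polynomial is built from the trace and determinant, and an eigenvector is read off a row of A - λI.

```python
import cmath

def eig2x2(A):
    """Eigenvalue/eigenvector pairs of a 2x2 matrix A = [[a, b], [c, d]].

    det(A - lam*I) = lam^2 - (a + d)*lam + (a*d - b*c); its roots are the
    eigenvalues.  For each root, a row of A - lam*I is (p, q), so (q, -p)
    lies in the nullspace and serves as an eigenvector.
    """
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)     # complex-safe square root
    eigenvalues = [(tr + disc) / 2, (tr - disc) / 2]
    pairs = []
    for lam in eigenvalues:
        p, q = a - lam, b                    # first row of A - lam*I
        if p == 0 and q == 0:                # zero row: fall back to row 2
            p, q = c, d - lam
        pairs.append((lam, (q, -p)))
    return pairs

# Example: the Markov matrix [.8 .3; .2 .7] has eigenvalues 1 and 1/2
for lam, x in eig2x2([[0.8, 0.3], [0.2, 0.7]]):
    print(lam, x)
```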
Here are the key steps to find the eigenvalues of the given matrix:
1) Write the characteristic equation: det(A - λI) = 0
2) Expand the determinant: (1-λ)(-2-λ) - 4 = 0
3) Simplify and factor: λ² + λ - 6 = (λ - 2)(λ + 3) = 0
4) Find the roots: λ1 = 2, λ2 = -3
Therefore, the eigenvalues of the given matrix are 2 and -3.
The document discusses eigenvalues and eigenvectors. It defines an eigenvalue problem as finding scale constants (λ) and nonzero vectors (X) such that when a square matrix (A) multiplies a vector (X), it produces a vector in the same direction but scaled by λ. The characteristic polynomial is used to find the eigenvalues by setting its determinant equal to 0. Once the eigenvalues are obtained, the corresponding eigenvectors can be found by solving the homogeneous system (A - λI)X = 0. Examples are provided to demonstrate finding the eigenvalues and eigenvectors of different matrices.
This document discusses various matrix decomposition techniques including least squares, eigendecomposition, and singular value decomposition. It begins with an introduction to the importance of linear algebra and decompositions for applications. Then it provides examples of using least squares to fit curves to data and find regression lines. It defines eigenvalues and eigenvectors and provides examples of eigendecomposition. It also discusses diagonalization of matrices and using the eigendecomposition to raise matrices to powers. Finally, it discusses singular value decomposition and its applications.
Chapter 6
Eigenvalues and Eigenvectors

6.1 Introduction to Eigenvalues
Linear equations Ax = b come from steady state problems. Eigenvalues have their greatest importance in dynamic problems. The solution of du/dt = Au is changing with time: growing or decaying or oscillating. We can't find it by elimination. This chapter enters a new part of linear algebra, based on Ax = λx. All matrices in this chapter are square.

A good model comes from the powers A, A², A³, ... of a matrix. Suppose you need the hundredth power A¹⁰⁰. The starting matrix A becomes unrecognizable after a few steps, and A¹⁰⁰ is very close to [.6 .6; .4 .4]:

A = [.8 .3; .2 .7],  A² = [.70 .45; .30 .55],  A³ = [.650 .525; .350 .475],  ...,  A¹⁰⁰ ≈ [.6000 .6000; .4000 .4000]

A¹⁰⁰ was found by using the eigenvalues of A, not by multiplying 100 matrices. Those eigenvalues (here they are 1 and 1/2) are a new way to see into the heart of a matrix.

To explain eigenvalues, we first explain eigenvectors. Almost all vectors change direction, when they are multiplied by A. Certain exceptional vectors x are in the same direction as Ax. Those are the "eigenvectors". Multiply an eigenvector by A, and the vector Ax is a number λ times the original x.

The basic equation is Ax = λx. The number λ is an eigenvalue of A.

The eigenvalue λ tells whether the special vector x is stretched or shrunk or reversed or left unchanged, when it is multiplied by A. We may find λ = 2 or 1/2 or -1 or 1. The eigenvalue λ could be zero! Then Ax = 0x means that this eigenvector x is in the nullspace.

If A is the identity matrix, every vector has Ax = x. All vectors are eigenvectors of I. All eigenvalues "lambda" are λ = 1. This is unusual to say the least. Most 2 by 2 matrices have two eigenvector directions and two eigenvalues. We will show that det(A - λI) = 0.
This section will explain how to compute the x's and λ's. It can come early in the course because we only need the determinant of a 2 by 2 matrix. Let me use det(A - λI) = 0 to find the eigenvalues for this first example, and then derive it properly in equation (3).

Example 1  The matrix A has two eigenvalues λ = 1 and λ = 1/2. Look at det(A - λI):

A = [.8 .3; .2 .7],  det(A - λI) = det [.8-λ .3; .2 .7-λ] = λ² - (3/2)λ + 1/2 = (λ - 1)(λ - 1/2).

I factored the quadratic into λ - 1 times λ - 1/2, to see the two eigenvalues λ = 1 and λ = 1/2. For those numbers, the matrix A - λI becomes singular (zero determinant). The eigenvectors x1 and x2 are in the nullspaces of A - I and A - (1/2)I.
(A - I)x1 = 0 is Ax1 = x1, and the first eigenvector is (.6, .4).
(A - (1/2)I)x2 = 0 is Ax2 = (1/2)x2, and the second eigenvector is (1, -1):

x1 = (.6, .4) and Ax1 = [.8 .3; .2 .7] (.6, .4) = x1  (Ax = x means that λ1 = 1)

x2 = (1, -1) and Ax2 = [.8 .3; .2 .7] (1, -1) = (.5, -.5)  (this is (1/2)x2, so λ2 = 1/2).
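Both statements in Example 1 can be verified with a few lines of arithmetic; a minimal sketch in plain Python:

```python
# Verify Example 1: A x1 = 1 * x1 and A x2 = (1/2) * x2.

A = [[0.8, 0.3],
     [0.2, 0.7]]
x1 = [0.6, 0.4]        # claimed eigenvector for lambda = 1
x2 = [1.0, -1.0]       # claimed eigenvector for lambda = 1/2

def matvec(A, v):
    """Multiply matrix A (list of rows) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

Ax1 = matvec(A, x1)    # should equal x1
Ax2 = matvec(A, x2)    # should equal 0.5 * x2 = (0.5, -0.5)

assert all(abs(u - v) < 1e-12 for u, v in zip(Ax1, x1))
assert all(abs(u - 0.5 * v) < 1e-12 for u, v in zip(Ax2, x2))
```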
If x1 is multiplied again by A, we still get x1. Every power of A will give Aⁿx1 = x1. Multiplying x2 by A gave (1/2)x2, and if we multiply again we get (1/2)² times x2.

When A is squared, the eigenvectors stay the same. The eigenvalues are squared.

This pattern keeps going, because the eigenvectors stay in their own directions (Figure 6.1) and never get mixed. The eigenvectors of A¹⁰⁰ are the same x1 and x2. The eigenvalues of A¹⁰⁰ are 1¹⁰⁰ = 1 and (1/2)¹⁰⁰ = very small number.
[Figure 6.1 shows Ax1 = x1 = (.6, .4) for λ1 = 1, Ax2 = λ2x2 = (.5, -.5) for λ2 = .5 with x2 = (1, -1), and then A²x1 = (1)²x1 and A²x2 = (.5)²x2 = (.25, -.25), so A²x = λ²x with λ² = 1 and λ² = .25.]

Figure 6.1: The eigenvectors keep their directions. A² has eigenvalues 1² and (.5)².
Other vectors do change direction. But all other vectors are combinations of the two eigenvectors. The first column of A is the combination x1 + (.2)x2:

Separate into eigenvectors:  (.8, .2) = x1 + (.2)x2 = (.6, .4) + (.2, -.2).  (1)
Multiplying by A gives (.7, .3), the first column of A². Do it separately for x1 and (.2)x2. Of course Ax1 = x1. And A multiplies x2 by its eigenvalue 1/2:

Multiply each xi by λi:  A(.8, .2) = (.7, .3) is x1 + (1/2)(.2)x2 = (.6, .4) + (.1, -.1).

Each eigenvector is multiplied by its eigenvalue, when we multiply by A. We didn't need these eigenvectors to find A². But it is the good way to do 99 multiplications. At every step x1 is unchanged and x2 is multiplied by (1/2), so we have (1/2)⁹⁹:
A⁹⁹(.8, .2) is really x1 + (.2)(1/2)⁹⁹x2 = (.6, .4) + [very small vector].

This is the first column of A¹⁰⁰. The number we originally wrote as .6000 was not exact. We left out (.2)(1/2)⁹⁹, which wouldn't show up for 30 decimal places.
The eigenvector x1 is a "steady state" that doesn't change (because λ1 = 1). The eigenvector x2 is a "decaying mode" that virtually disappears (because λ2 = .5). The higher the power of A, the closer its columns approach the steady state.

We mention that this particular A is a Markov matrix. Its entries are positive and every column adds to 1. Those facts guarantee that the largest eigenvalue is λ = 1 (as we found). Its eigenvector x1 = (.6, .4) is the steady state, which all columns of Aᵏ will approach. Section 8.3 shows how Markov matrices appear in applications like Google.
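The claim that the columns of Aᵏ approach the steady state (.6, .4) can be checked by brute force; a minimal sketch in plain Python, with no libraries assumed:

```python
# Repeatedly multiplying by the Markov matrix A shows the columns of A^n
# approaching the steady state (0.6, 0.4), as described in the text.

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.8, 0.3], [0.2, 0.7]]
P = [[1.0, 0.0], [0.0, 1.0]]     # identity; becomes A^n after n multiplies
for _ in range(100):
    P = matmul(P, A)             # P is now A^100

# Both columns of A^100 are extremely close to the steady state (0.6, 0.4),
# since the decaying mode carries a factor (1/2)^100.
for j in range(2):
    assert abs(P[0][j] - 0.6) < 1e-12
    assert abs(P[1][j] - 0.4) < 1e-12
```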
For projections we can spot the steady state (λ = 1) and the nullspace (λ = 0).

Example 2  The projection matrix P = [.5 .5; .5 .5] has eigenvalues λ = 1 and λ = 0.

Its eigenvectors are x1 = (1, 1) and x2 = (1, -1). For those vectors, Px1 = x1 (steady state) and Px2 = 0 (nullspace). This example illustrates Markov matrices and singular matrices and (most important) symmetric matrices. All have special λ's and x's:

1. Each column of P = [.5 .5; .5 .5] adds to 1, so λ = 1 is an eigenvalue.
2. P is singular, so λ = 0 is an eigenvalue.
3. P is symmetric, so its eigenvectors (1, 1) and (1, -1) are perpendicular.

The only eigenvalues of a projection matrix are 0 and 1. The eigenvectors for λ = 0 (which means Px = 0x) fill up the nullspace. The eigenvectors for λ = 1 (which means Px = x) fill up the column space. The nullspace is projected to zero. The column space projects onto itself. The projection keeps the column space and destroys the nullspace:
Project each part:  v = (1, -1) + (2, 2) projects onto Pv = (0, 0) + (2, 2).

Special properties of a matrix lead to special eigenvalues and eigenvectors.
That is a major theme of this chapter (it is captured in a table at the very end).
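Example 2 works in exact arithmetic, so the checks below need no tolerances; the test vector v = (3, 1) is the sum (1, -1) + (2, 2) from the display:

```python
# Check Example 2: P x1 = x1 (steady state), P x2 = 0 (nullspace),
# and the projection of v = (3, 1) = (1, -1) + (2, 2) onto (2, 2).

P = [[0.5, 0.5], [0.5, 0.5]]

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

assert matvec(P, [1, 1]) == [1.0, 1.0]     # eigenvalue 1: column space kept
assert matvec(P, [1, -1]) == [0.0, 0.0]    # eigenvalue 0: nullspace destroyed
assert matvec(P, [3, 1]) == [2.0, 2.0]     # only the (2, 2) part survives
```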
Projections have λ = 0 and 1. Permutations have all |λ| = 1. The next matrix R (a reflection and at the same time a permutation) is also special.

Example 3  The reflection matrix R = [0 1; 1 0] has eigenvalues 1 and -1.

The eigenvector (1, 1) is unchanged by R. The second eigenvector is (1, -1); its signs are reversed by R. A matrix with no negative entries can still have a negative eigenvalue! The eigenvectors for R are the same as for P, because reflection = 2(projection) - I:

R = 2P - I:  [0 1; 1 0] = 2 [.5 .5; .5 .5] - [1 0; 0 1].  (2)

Here is the point. If Px = λx then 2Px = 2λx. The eigenvalues are doubled when the matrix is doubled. Now subtract Ix = x. The result is (2P - I)x = (2λ - 1)x. When a matrix is shifted by I, each λ is shifted by 1. No change in eigenvectors.
Figure 6.2: Projections P have eigenvalues 1 and 0. Reflections R have λ = 1 and -1. A typical x changes direction, but not the eigenvectors x1 and x2.
Key idea: The eigenvalues of R and P are related exactly as the matrices are related:

The eigenvalues of R = 2P - I are 2(1) - 1 = 1 and 2(0) - 1 = -1.

The eigenvalues of R² are λ². In this case R² = I. Check: (1)² = 1 and (-1)² = 1.
The Equation for the Eigenvalues

For projections and reflections we found λ's and x's by geometry: Px = x, Px = 0, Rx = -x. Now we use determinants and linear algebra. This is the key calculation in the chapter; almost every application starts by solving Ax = λx.

First move λx to the left side. Write the equation Ax = λx as (A - λI)x = 0. The matrix A - λI times the eigenvector x is the zero vector. The eigenvectors make up the nullspace of A - λI. When we know an eigenvalue λ, we find an eigenvector by solving (A - λI)x = 0.

Eigenvalues first. If (A - λI)x = 0 has a nonzero solution, A - λI is not invertible. The determinant of A - λI must be zero. This is how to recognize an eigenvalue λ:

Eigenvalues  The number λ is an eigenvalue of A if and only if A - λI is singular:

det(A - λI) = 0.  (3)

This "characteristic equation" det(A - λI) = 0 involves only λ, not x. When A is n by n, the equation has degree n. Then A has n eigenvalues, and each λ leads to x:

For each λ, solve (A - λI)x = 0 or Ax = λx to find an eigenvector x.
Example 4  A = [1 2; 2 4] is already singular (zero determinant). Find its λ's and x's.

When A is singular, λ = 0 is one of the eigenvalues. The equation Ax = 0x has solutions. They are the eigenvectors for λ = 0. But det(A - λI) = 0 is the way to find all λ's and x's. Always subtract λI from A:

Subtract λ from the diagonal to find A - λI = [1-λ 2; 2 4-λ].  (4)

Take the determinant "ad - bc" of this 2 by 2 matrix. From (1 - λ) times (4 - λ), the "ad" part is λ² - 5λ + 4. The "bc" part, not containing λ, is 2 times 2:

det [1-λ 2; 2 4-λ] = (1 - λ)(4 - λ) - (2)(2) = λ² - 5λ.  (5)

Set this determinant λ² - 5λ to zero. One solution is λ = 0 (as expected, since A is singular). Factoring into λ times λ - 5, the other root is λ = 5:

det(A - λI) = λ² - 5λ = 0 yields the eigenvalues λ1 = 0 and λ2 = 5.
Now find the eigenvectors. Solve (A - λI)x = 0 separately for λ1 = 0 and λ2 = 5:

(A - 0I)x = [1 2; 2 4] (y, z) = (0, 0) yields an eigenvector (y, z) = (2, -1) for λ1 = 0.

(A - 5I)x = [-4 2; 2 -1] (y, z) = (0, 0) yields an eigenvector (y, z) = (1, 2) for λ2 = 5.

The matrices A - 0I and A - 5I are singular (because 0 and 5 are eigenvalues). The eigenvectors (2, -1) and (1, 2) are in the nullspaces: (A - λI)x = 0 is Ax = λx.

We need to emphasize: There is nothing exceptional about λ = 0. Like every other number, zero might be an eigenvalue and it might not. If A is singular, it is. The eigenvectors fill the nullspace: Ax = 0x = 0. If A is invertible, zero is not an eigenvalue. We shift A by a multiple of I to make it singular.

In the example, the shifted matrix A - 5I is singular and 5 is the other eigenvalue.
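Example 4 can be replayed end to end in a few lines; a sketch using nothing beyond the quadratic formula and integer arithmetic:

```python
# Example 4 end-to-end: det(A - lam*I) = lam^2 - 5*lam for A = [[1,2],[2,4]],
# giving eigenvalues 0 and 5; then check the eigenvectors (2,-1) and (1,2).

import math

a, b, c, d = 1, 2, 2, 4
tr, det = a + d, a * d - b * c            # trace 5, determinant 0
disc = math.sqrt(tr * tr - 4 * det)       # lam^2 - tr*lam + det = 0
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2
assert (lam1, lam2) == (0.0, 5.0)

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1, 2], [2, 4]]
assert matvec(A, [2, -1]) == [0, 0]       # A x = 0 x   for lam = 0
assert matvec(A, [1, 2]) == [5, 10]       # A x = 5 x   for lam = 5
```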
Summary  To solve the eigenvalue problem for an n by n matrix, follow these steps:

1. Compute the determinant of A - λI. With λ subtracted along the diagonal, this determinant starts with λⁿ or -λⁿ. It is a polynomial in λ of degree n.
2. Find the roots of this polynomial, by solving det(A - λI) = 0. The n roots are the n eigenvalues of A. They make A - λI singular.
3. For each eigenvalue λ, solve (A - λI)x = 0 to find an eigenvector x.

A note on the eigenvectors of 2 by 2 matrices. When A - λI is singular, both rows are multiples of a vector (a, b). The eigenvector is any multiple of (b, -a). The example had λ = 0 and λ = 5:

λ = 0: rows of A - 0I in the direction (1, 2); eigenvector in the direction (2, -1).
λ = 5: rows of A - 5I in the direction (-4, 2); eigenvector in the direction (2, 4).

Previously we wrote that last eigenvector as (1, 2). Both (1, 2) and (2, 4) are correct. There is a whole line of eigenvectors; any nonzero multiple of x is as good as x. MATLAB's eig(A) divides by the length, to make the eigenvector into a unit vector.

We end with a warning. Some 2 by 2 matrices have only one line of eigenvectors. This can only happen when two eigenvalues are equal. (On the other hand, A = I has equal eigenvalues and plenty of eigenvectors.) Similarly some n by n matrices don't have n independent eigenvectors. Without n eigenvectors, we don't have a basis. We can't write every v as a combination of eigenvectors. In the language of the next section, we can't diagonalize a matrix without n independent eigenvectors.
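The (b, -a) observation turns into a one-line eigenvector recipe for 2 by 2 matrices. A hedged sketch (the helper name is mine, not from the text):

```python
# For a singular 2x2 matrix, every row is a multiple of some (a, b), and
# (b, -a) is perpendicular to it, hence in the nullspace.  Applied to
# A - lam*I this yields an eigenvector directly.

def eigvec_2x2(A, lam):
    """An eigenvector of A for eigenvalue lam, via the (b, -a) rule."""
    row = [A[0][0] - lam, A[0][1]]        # first row of A - lam*I
    if row == [0, 0]:                     # zero row: use the second instead
        row = [A[1][0], A[1][1] - lam]
    a, b = row
    return (b, -a)

A = [[1, 2], [2, 4]]
assert eigvec_2x2(A, 0) == (2, -1)        # matches the text for lam = 0
assert eigvec_2x2(A, 5) == (2, 4)         # a multiple of (1, 2), also valid
```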
Good News, Bad News

Bad news first: If you add a row of A to another row, or exchange rows, the eigenvalues usually change. Elimination does not preserve the λ's. The triangular U has its eigenvalues sitting along the diagonal; they are the pivots. But they are not the eigenvalues of A! Eigenvalues are changed when row 1 is added to row 2:

U = [1 3; 0 0] has λ = 0 and λ = 1;  A = [1 3; 2 6] has λ = 0 and λ = 7.

Good news second: The product λ1 times λ2 and the sum λ1 + λ2 can be found quickly from the matrix. For this A, the product is 0 times 7. That agrees with the determinant (which is 0). The sum of eigenvalues is 0 + 7. That agrees with the sum down the main diagonal (the trace is 1 + 6). These quick checks always work:

The product of the n eigenvalues equals the determinant.
The sum of the n eigenvalues equals the sum of the n diagonal entries.

The sum of the entries on the main diagonal is called the trace of A:

λ1 + λ2 + ··· + λn = trace = a11 + a22 + ··· + ann.  (6)

Those checks are very useful. They are proved in Problems 16-17 and again in the next section. They don't remove the pain of computing λ's. But when the computation is wrong, they generally tell us so. To compute the correct λ's, go back to det(A - λI) = 0.

The determinant test makes the product of the λ's equal to the product of the pivots (assuming no row exchanges). But the sum of the λ's is not the sum of the pivots, as the example showed. The individual λ's have almost nothing to do with the pivots. In this new part of linear algebra, the key equation is really nonlinear: λ multiplies x.

Why do the eigenvalues of a triangular matrix lie on its diagonal?
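Both quick checks are one-liners in code. A sketch using the matrices U and A from this section:

```python
# Quick checks from the text: for A = [[1, 3], [2, 6]] the eigenvalues are
# 0 and 7; their product equals the determinant and their sum the trace.

A = [[1, 3], [2, 6]]
trace = A[0][0] + A[1][1]                      # 1 + 6 = 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # 6 - 6 = 0

lam1, lam2 = 0, 7                              # roots of det(A - lam*I)
assert lam1 * lam2 == det                      # product = determinant
assert lam1 + lam2 == trace                    # sum = trace

# U = [[1, 3], [0, 0]] is triangular: its eigenvalues 1 and 0 sit on the
# diagonal -- and they differ from the eigenvalues of A after elimination.
U = [[1, 3], [0, 0]]
assert (U[0][0], U[1][1]) == (1, 0)
```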
Imaginary Eigenvalues

One more bit of news (not too terrible). The eigenvalues might not be real numbers.

Example 5  The 90° rotation Q = [0 -1; 1 0] has no real eigenvectors. Its eigenvalues are λ = i and λ = -i. Sum of λ's = trace = 0. Product = determinant = 1.

After a rotation, no vector Qx stays in the same direction as x (except x = 0, which is useless). There cannot be an eigenvector, unless we go to imaginary numbers. Which we do.

To see how i can help, look at Q², which is -I. If Q is rotation through 90°, then Q² is rotation through 180°. Its eigenvalues are -1 and -1. (Certainly -Ix = -1x.) Squaring Q will square each λ, so we must have λ² = -1. The eigenvalues of the 90° rotation matrix Q are +i and -i, because i² = -1.

Those λ's come as usual from det(Q - λI) = 0. This equation gives λ² + 1 = 0. Its roots are i and -i. We meet the imaginary number i also in the eigenvectors:

Complex eigenvectors:  [0 -1; 1 0] (1, i) = -i (1, i)  and  [0 -1; 1 0] (i, 1) = i (i, 1).

Somehow these complex vectors x1 = (1, i) and x2 = (i, 1) keep their direction as they are rotated. Don't ask me how. This example makes the all-important point that real matrices can easily have complex eigenvalues and eigenvectors. The particular eigenvalues i and -i also illustrate two special properties of Q:

1. Q is an orthogonal matrix, so the absolute value of each λ is |λ| = 1.
2. Q is a skew-symmetric matrix, so each λ is pure imaginary.
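A short check with Python's complex arithmetic reproduces the eigenvalues ±i and the relation Qx = -ix for x = (1, i):

```python
# The 90-degree rotation Q = [[0, -1], [1, 0]] has characteristic equation
# lam^2 + 1 = 0.  cmath finds the roots i and -i, and we verify one
# complex eigenpair from the text.

import cmath

Q = [[0, -1], [1, 0]]
tr = Q[0][0] + Q[1][1]                        # trace = 0
det = Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]   # determinant = 1
disc = cmath.sqrt(tr * tr - 4 * det)          # sqrt(-4) = 2i
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert {lam1, lam2} == {1j, -1j}              # eigenvalues are +i and -i

x = [1, 1j]                                   # complex eigenvector (1, i)
Qx = [sum(q * v for q, v in zip(row, x)) for row in Q]
assert Qx == [-1j * v for v in x]             # Q x = -i x
```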
A symmetric matrix (Aᵀ = A) can be compared to a real number. A skew-symmetric matrix (Aᵀ = -A) can be compared to an imaginary number. An orthogonal matrix (AᵀA = I) can be compared to a complex number with |λ| = 1. For the eigenvalues, those are more than analogies: they are theorems to be proved in Section 6.4.

The eigenvectors for all these special matrices are perpendicular. Somehow (i, 1) and (1, i) are perpendicular (Chapter 10 explains the dot product of complex vectors).
Eigshow in MATLAB
There is a MATLAB demo (just type eigshow), displaying the eigenvalue problem for a 2 by 2 matrix. It starts with the unit vector $x = (1, 0)$. The mouse makes this vector move around the unit circle. At the same time the screen shows $Ax$, in color and also moving. Possibly $Ax$ is ahead of $x$. Possibly $Ax$ is behind $x$. Sometimes $Ax$ is parallel to $x$. At that parallel moment, $Ax = \lambda x$ (at $x_1$ and $x_2$ in the second figure).

The eigenvalue $\lambda$ is the length of $Ax$, when the unit eigenvector $x$ lines up. The built-in choices for $A$ illustrate three possibilities: $0$, $1$, or $2$ directions where $Ax$ crosses $x$.
0. There are no real eigenvectors. $Ax$ stays behind or ahead of $x$. This means the eigenvalues and eigenvectors are complex, as they are for the rotation $Q$.

1. There is only one line of eigenvectors (unusual). The moving directions $Ax$ and $x$ touch but don't cross over. This happens for the last 2 by 2 matrix below.

2. There are eigenvectors in two independent directions. This is typical! $Ax$ crosses $x$ at the first eigenvector $x_1$, and it crosses back at the second eigenvector $x_2$. Then $Ax$ and $x$ cross again at $-x_1$ and $-x_2$.
You can mentally follow $x$ and $Ax$ for these five matrices. Under the matrices I will count their real eigenvectors. Can you see where $Ax$ lines up with $x$?

$$A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix} \qquad \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \qquad \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \qquad \begin{bmatrix} 1 & -1 \\ 1 & -1 \end{bmatrix} \qquad \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$$

$$2 \qquad\qquad 2 \qquad\qquad 0 \qquad\qquad 1 \qquad\qquad 1$$
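The count $0$, $1$, or $2$ for a 2 by 2 matrix can be read off from the characteristic polynomial $\lambda^2 - (\text{trace})\lambda + \det$: a positive discriminant gives two real eigen-directions, a negative one gives none, and a zero discriminant gives a single line (unless $A$ is a multiple of $I$). Here is a plain-Python sketch; the minus signs typed into the third and fourth matrices are assumptions chosen to be consistent with the counts 0 and 1 above (a rotation and a defective rank-one matrix).

```python
# Count independent real eigenvector directions of a 2 by 2 matrix
# from the discriminant of lambda^2 - (trace) lambda + det.

def real_eigendirections(A):
    (a, b), (c, d) = A
    trace, det = a + d, a*d - b*c
    disc = trace*trace - 4*det
    if disc > 0:
        return 2                 # two distinct real eigenvalues
    if disc < 0:
        return 0                 # complex pair, no real eigenvectors
    # repeated eigenvalue: two directions only if A is a multiple of I
    return 2 if (b == 0 and c == 0 and a == d) else 1

matrices = [
    [[2, 0], [0, 1]],    # two real directions
    [[0, 1], [1, 0]],    # reflection: two
    [[0, 1], [-1, 0]],   # rotation: none (assumed signs)
    [[1, -1], [1, -1]],  # repeated lambda = 0, defective: one (assumed signs)
    [[1, 1], [0, 1]],    # shear: one
]
print([real_eigendirections(A) for A in matrices])  # [2, 2, 0, 1, 1]
```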
6.1. Introduction to Eigenvalues  291
When A is singular (rank one), its column space is a line. The vector Ax goes up
and down that line while x circles around. One eigenvector x is along the line. Another
eigenvector appears when $Ax_2 = 0$. Zero is an eigenvalue of a singular matrix.
REVIEW OF THE KEY IDEAS

1. $Ax = \lambda x$ says that eigenvectors $x$ keep the same direction when multiplied by $A$.

2. $Ax = \lambda x$ also says that $\det(A - \lambda I) = 0$. This determines $n$ eigenvalues.

3. The eigenvalues of $A^2$ and $A^{-1}$ are $\lambda^2$ and $\lambda^{-1}$, with the same eigenvectors.

4. The sum of the $\lambda$'s equals the sum down the main diagonal of $A$ (the trace). The product of the $\lambda$'s equals the determinant.

5. Projections $P$, reflections $R$, $90°$ rotations $Q$ have special eigenvalues $1, 0, -1, i, -i$. Singular matrices have $\lambda = 0$. Triangular matrices have $\lambda$'s on their diagonal.
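Key ideas 2 and 4 can be confirmed in a few lines. This sketch uses plain Python with the standard cmath module (not the book's MATLAB): it solves $\lambda^2 - (\text{trace})\lambda + \det = 0$ for the rotation $Q$ and checks that the eigenvalues sum to the trace and multiply to the determinant.

```python
import cmath

# Eigenvalues of a 2 by 2 matrix from the characteristic equation
# lambda^2 - (trace) lambda + det = 0, then a check of the trace
# and determinant rules from the review list.

def eig2(A):
    (a, b), (c, d) = A
    tr, det = a + d, a*d - b*c
    root = cmath.sqrt(tr*tr - 4*det)   # complex sqrt handles real or complex pairs
    return (tr + root) / 2, (tr - root) / 2

Q = [[0, -1], [1, 0]]          # the 90-degree rotation
l1, l2 = eig2(Q)
assert l1 == 1j and l2 == -1j  # eigenvalues +i and -i
assert l1 + l2 == 0            # sum of eigenvalues = trace
assert l1 * l2 == 1            # product of eigenvalues = determinant
```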
WORKED EXAMPLES

6.1 A  Find the eigenvalues and eigenvectors of $A$ and $A^2$ and $A^{-1}$ and $A + 4I$:

$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix} \quad\text{and}\quad A^2 = \begin{bmatrix} 5 & 4 \\ 4 & 5 \end{bmatrix}.$$

Check the trace $\lambda_1 + \lambda_2$ and the determinant $\lambda_1 \lambda_2$ for $A$ and also $A^2$.
Solution  The eigenvalues of $A$ come from $\det(A - \lambda I) = 0$:

$$\det(A - \lambda I) = \begin{vmatrix} 2 - \lambda & 1 \\ 1 & 2 - \lambda \end{vmatrix} = \lambda^2 - 4\lambda + 3 = 0.$$

This factors into $(\lambda - 1)(\lambda - 3) = 0$ so the eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = 3$. For the trace, the sum $2 + 2$ agrees with $1 + 3$. The determinant $3$ agrees with the product $\lambda_1 \lambda_2 = 3$.
The eigenvectors come separately by solving $(A - \lambda I)x = 0$ which is $Ax = \lambda x$:

$$\lambda = 1: \quad (A - I)x = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \quad\text{gives the eigenvector}\quad x_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

$$\lambda = 3: \quad (A - 3I)x = \begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \quad\text{gives the eigenvector}\quad x_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
$A^2$ and $A^{-1}$ and $A + 4I$ keep the same eigenvectors as $A$. Their eigenvalues are $\lambda^2$ and $\lambda^{-1}$ and $\lambda + 4$:

$$A^2 \text{ has eigenvalues } 1^2 = 1 \text{ and } 3^2 = 9 \qquad A^{-1} \text{ has } \frac{1}{1} \text{ and } \frac{1}{3} \qquad A + 4I \text{ has } 1 + 4 = 5 \text{ and } 3 + 4 = 7$$

The trace of $A^2$ is $5 + 5$, which agrees with $1 + 9$. The determinant is $25 - 16 = 9$.
Notes for later sections: $A$ has orthogonal eigenvectors (Section 6.4 on symmetric matrices). $A$ can be diagonalized since $\lambda_1 \neq \lambda_2$ (Section 6.2). $A$ is similar to any 2 by 2 matrix with eigenvalues 1 and 3 (Section 6.6). $A$ is a positive definite matrix (Section 6.5) since $A = A^T$ and the $\lambda$'s are positive.
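Worked Example 6.1 A can be checked numerically with a plain-Python sketch (no linear-algebra library): the eigenvectors $(1, -1)$ and $(1, 1)$ are shared by $A$, $A^2$, and $A + 4I$, with eigenvalues $\lambda$, $\lambda^2$, and $\lambda + 4$.

```python
# Check A x = lam x for A = [[2, 1], [1, 2]] with eigenpairs
# (1, (1, -1)) and (3, (1, 1)), then the squared eigenvalues for A^2
# and the shifted eigenvalues for A + 4I.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

A = [[2, 1], [1, 2]]

for lam, x in [(1, [1, -1]), (3, [1, 1])]:
    Ax = matvec(A, x)
    assert Ax == [lam * c for c in x]                  # A x = lam x
    assert matvec(A, Ax) == [lam**2 * c for c in x]    # A^2 x = lam^2 x
    A4x = [Ax[i] + 4 * x[i] for i in range(2)]
    assert A4x == [(lam + 4) * c for c in x]           # (A + 4I) x = (lam + 4) x

print("eigenvalues 1 and 3 confirmed; A + 4I gives 5 and 7, A^2 gives 1 and 9")
```

(The eigenvalues $\frac{1}{1}$ and $\frac{1}{3}$ of $A^{-1}$ follow from $Ax = \lambda x \Rightarrow A^{-1}x = \frac{1}{\lambda}x$, with no need to compute $A^{-1}$ itself.)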
6.1 B  Find the eigenvalues and eigenvectors of this 3 by 3 matrix $A$:

Symmetric matrix. Singular matrix. Trace $1 + 2 + 1 = 4$.

$$A = \begin{bmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix}$$
Solution  Since all rows of $A$ add to zero, the vector $x = (1, 1, 1)$ gives $Ax = 0$. This is an eigenvector for the eigenvalue $\lambda = 0$. To find $\lambda_2$ and $\lambda_3$ I will compute the 3 by 3 determinant:
$$\det(A - \lambda I) = \begin{vmatrix} 1 - \lambda & -1 & 0 \\ -1 & 2 - \lambda & -1 \\ 0 & -1 & 1 - \lambda \end{vmatrix} = (1 - \lambda)(2 - \lambda)(1 - \lambda) - 2(1 - \lambda)$$

$$= (1 - \lambda)\left[(2 - \lambda)(1 - \lambda) - 2\right] = (1 - \lambda)(-\lambda)(3 - \lambda).$$
The factor $(-\lambda)$ confirms that $\lambda = 0$ is a root, and an eigenvalue of $A$. The other factors $(1 - \lambda)$ and $(3 - \lambda)$ give the other eigenvalues $1$ and $3$, adding to $4$ (the trace). Each eigenvalue $0, 1, 3$ corresponds to an eigenvector:
$$x_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} \quad Ax_1 = 0x_1 \qquad x_2 = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} \quad Ax_2 = 1x_2 \qquad x_3 = \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} \quad Ax_3 = 3x_3.$$
I notice again that eigenvectors are perpendicular when $A$ is symmetric.

The 3 by 3 matrix produced a third-degree (cubic) polynomial for $\det(A - \lambda I) = -\lambda^3 + 4\lambda^2 - 3\lambda$. We were lucky to find simple roots $\lambda = 0, 1, 3$. Normally we would use a command like eig(A), and the computation will never even use determinants (Section 9.3 shows a better way for large matrices).
The full command $[S, D] = \text{eig}(A)$ will produce unit eigenvectors in the columns of the eigenvector matrix $S$. The first one happens to have three minus signs, reversed from $(1, 1, 1)$ and divided by $\sqrt{3}$. The eigenvalues of $A$ will be on the diagonal of the eigenvalue matrix (typed as D but soon called $\Lambda$).
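As a confirmation of Worked Example 6.1 B (again a plain-Python sketch rather than MATLAB's eig), the three eigenpairs and the perpendicularity of the eigenvectors can be verified exactly in integer arithmetic:

```python
# The symmetric singular matrix A of example 6.1 B has eigenvalues 0, 1, 3
# with perpendicular eigenvectors (1,1,1), (1,0,-1), (1,-2,1).

A = [[1, -1, 0],
     [-1, 2, -1],
     [0, -1, 1]]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(3)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

eigenpairs = [(0, [1, 1, 1]), (1, [1, 0, -1]), (3, [1, -2, 1])]

for lam, x in eigenpairs:
    assert matvec(A, x) == [lam * c for c in x]   # A x = lam x

# symmetric A: eigenvectors for different eigenvalues are perpendicular
assert dot([1, 1, 1], [1, 0, -1]) == 0
assert dot([1, 1, 1], [1, -2, 1]) == 0
assert dot([1, 0, -1], [1, -2, 1]) == 0
print("all eigenpairs and orthogonality verified")
```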
Problem Set 6.1
1  The example at the start of the chapter has powers of this matrix $A$:

$$A = \begin{bmatrix} .8 & .3 \\ .2 & .7 \end{bmatrix} \quad\text{and}\quad A^2 = \begin{bmatrix} .70 & .45 \\ .30 & .55 \end{bmatrix} \quad\text{and}\quad A^\infty = \begin{bmatrix} .6 & .6 \\ .4 & .4 \end{bmatrix}.$$

Find the eigenvalues of these matrices. All powers have the same eigenvectors.

(a) Show from $A$ how a row exchange can produce different eigenvalues.

(b) Why is a zero eigenvalue not changed by the steps of elimination?
2  Find the eigenvalues and the eigenvectors of these two matrices:

$$A = \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix} \quad\text{and}\quad A + I = \begin{bmatrix} 2 & 4 \\ 2 & 4 \end{bmatrix}.$$

$A + I$ has the ______ eigenvectors as $A$. Its eigenvalues are ______ by 1.
3  Compute the eigenvalues and eigenvectors of $A$ and $A^{-1}$. Check the trace!

$$A = \begin{bmatrix} 0 & 2 \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad A^{-1} = \begin{bmatrix} -1/2 & 1 \\ 1/2 & 0 \end{bmatrix}.$$

$A^{-1}$ has the ______ eigenvectors as $A$. When $A$ has eigenvalues $\lambda_1$ and $\lambda_2$, its inverse has eigenvalues ______.
4  Compute the eigenvalues and eigenvectors of $A$ and $A^2$:

$$A = \begin{bmatrix} -1 & 3 \\ 2 & 0 \end{bmatrix} \quad\text{and}\quad A^2 = \begin{bmatrix} 7 & -3 \\ -2 & 6 \end{bmatrix}.$$

$A^2$ has the same ______ as $A$. When $A$ has eigenvalues $\lambda_1$ and $\lambda_2$, $A^2$ has eigenvalues ______. In this example, why is $\lambda_1^2 + \lambda_2^2 = 13$?
5  Find the eigenvalues of $A$ and $B$ (easy for triangular matrices) and $A + B$:

$$A = \begin{bmatrix} 3 & 0 \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 1 & 1 \\ 0 & 3 \end{bmatrix} \quad\text{and}\quad A + B = \begin{bmatrix} 4 & 1 \\ 1 & 4 \end{bmatrix}.$$

Eigenvalues of $A + B$ (are equal to)(are not equal to) eigenvalues of $A$ plus eigenvalues of $B$.
6  Find the eigenvalues of $A$ and $B$ and $AB$ and $BA$:

$$A = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 1 & 2 \\ 0 & 1 \end{bmatrix} \quad\text{and}\quad AB = \begin{bmatrix} 1 & 2 \\ 1 & 3 \end{bmatrix} \quad\text{and}\quad BA = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}.$$

(a) Are the eigenvalues of $AB$ equal to eigenvalues of $A$ times eigenvalues of $B$?

(b) Are the eigenvalues of $AB$ equal to the eigenvalues of $BA$?
7  Elimination produces $A = LU$. The eigenvalues of $U$ are on its diagonal; they are the ______. The eigenvalues of $L$ are on its diagonal; they are all ______. The eigenvalues of $A$ are not the same as ______.

8  (a) If you know that $x$ is an eigenvector, the way to find $\lambda$ is to ______.

(b) If you know that $\lambda$ is an eigenvalue, the way to find $x$ is to ______.

9  What do you do to the equation $Ax = \lambda x$, in order to prove (a), (b), and (c)?

(a) $\lambda^2$ is an eigenvalue of $A^2$, as in Problem 4.

(b) $\lambda^{-1}$ is an eigenvalue of $A^{-1}$, as in Problem 3.

(c) $\lambda + 1$ is an eigenvalue of $A + I$, as in Problem 2.
10  Find the eigenvalues and eigenvectors for both of these Markov matrices $A$ and $A^\infty$. Explain from those answers why $A^{100}$ is close to $A^\infty$:

$$A = \begin{bmatrix} .6 & .2 \\ .4 & .8 \end{bmatrix} \quad\text{and}\quad A^\infty = \begin{bmatrix} 1/3 & 1/3 \\ 2/3 & 2/3 \end{bmatrix}.$$
11  Here is a strange fact about 2 by 2 matrices with eigenvalues $\lambda_1 \neq \lambda_2$: The columns of $A - \lambda_1 I$ are multiples of the eigenvector $x_2$. Any idea why this should be?
12  Find three eigenvectors for this matrix $P$ (projection matrices have $\lambda = 1$ and $0$):

$$\text{Projection matrix} \quad P = \begin{bmatrix} .2 & .4 & 0 \\ .4 & .8 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$

If two eigenvectors share the same $\lambda$, so do all their linear combinations. Find an eigenvector of $P$ with no zero components.
13  From the unit vector $u = \left(\frac{1}{6}, \frac{1}{6}, \frac{3}{6}, \frac{5}{6}\right)$ construct the rank one projection matrix $P = uu^T$. This matrix has $P^2 = P$ because $u^T u = 1$.

(a) $Pu = u$ comes from $(uu^T)u = u(\;\underline{\;\;\;}\;)$. Then $u$ is an eigenvector with $\lambda = 1$.

(b) If $v$ is perpendicular to $u$ show that $Pv = 0$. Then $\lambda = 0$.

(c) Find three independent eigenvectors of $P$ all with eigenvalue $\lambda = 0$.
14  Solve $\det(Q - \lambda I) = 0$ by the quadratic formula to reach $\lambda = \cos\theta \pm i\sin\theta$:

$$Q = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \quad\text{rotates the } xy \text{ plane by the angle } \theta. \text{ No real } \lambda\text{'s.}$$

Find the eigenvectors of $Q$ by solving $(Q - \lambda I)x = 0$. Use $i^2 = -1$.
15  Every permutation matrix leaves $x = (1, 1, \ldots, 1)$ unchanged. Then $\lambda = 1$. Find two more $\lambda$'s (possibly complex) for these permutations, from $\det(P - \lambda I) = 0$:

$$P = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \quad\text{and}\quad P = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}.$$
16  The determinant of $A$ equals the product $\lambda_1 \lambda_2 \cdots \lambda_n$. Start with the polynomial $\det(A - \lambda I)$ separated into its $n$ factors (always possible). Then set $\lambda = 0$:

$$\det(A - \lambda I) = (\lambda_1 - \lambda)(\lambda_2 - \lambda) \cdots (\lambda_n - \lambda) \quad\text{so}\quad \det A = \underline{\;\;\;\;\;\;}.$$

Check this rule in Example 1 where the Markov matrix has $\lambda = 1$ and $\frac{1}{2}$.
17  The sum of the diagonal entries (the trace) equals the sum of the eigenvalues:

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \quad\text{has}\quad \det(A - \lambda I) = \lambda^2 - (a + d)\lambda + ad - bc = 0.$$

The quadratic formula gives the eigenvalues $\lambda = (a + d + \sqrt{\;\;\;})/2$ and $\lambda = \underline{\;\;\;\;\;\;}$. Their sum is ______. If $A$ has $\lambda_1 = 3$ and $\lambda_2 = 4$ then $\det(A - \lambda I) = \underline{\;\;\;\;\;\;}$.

18  If $A$ has $\lambda_1 = 4$ and $\lambda_2 = 5$ then $\det(A - \lambda I) = (\lambda - 4)(\lambda - 5) = \lambda^2 - 9\lambda + 20$. Find three matrices that have trace $a + d = 9$ and determinant $20$ and $\lambda = 4, 5$.
19  A 3 by 3 matrix $B$ is known to have eigenvalues $0, 1, 2$. This information is enough to find three of these (give the answers where possible):

(a) the rank of $B$

(b) the determinant of $B^T B$

(c) the eigenvalues of $B^T B$

(d) the eigenvalues of $(B^2 + I)^{-1}$.
20  Choose the last rows of $A$ and $C$ to give eigenvalues $4, 7$ and $1, 2, 3$:

$$\text{Companion matrices} \quad A = \begin{bmatrix} 0 & 1 \\ \ast & \ast \end{bmatrix} \qquad C = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ \ast & \ast & \ast \end{bmatrix}.$$
21  The eigenvalues of $A$ equal the eigenvalues of $A^T$. This is because $\det(A - \lambda I)$ equals $\det(A^T - \lambda I)$. That is true because ______. Show by an example that the eigenvectors of $A$ and $A^T$ are not the same.

22  Construct any 3 by 3 Markov matrix $M$: positive entries down each column add to 1. Show that $M^T(1, 1, 1) = (1, 1, 1)$. By Problem 21, $\lambda = 1$ is also an eigenvalue of $M$. Challenge: A 3 by 3 singular Markov matrix with trace $\frac{1}{2}$ has what $\lambda$'s?
23  Find three 2 by 2 matrices that have $\lambda_1 = \lambda_2 = 0$. The trace is zero and the determinant is zero. $A$ might not be the zero matrix but check that $A^2 = 0$.

24  This matrix is singular with rank one. Find three $\lambda$'s and three eigenvectors:

$$A = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \begin{bmatrix} 2 & 1 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 2 \\ 4 & 2 & 4 \\ 2 & 1 & 2 \end{bmatrix}.$$
25  Suppose $A$ and $B$ have the same eigenvalues $\lambda_1, \ldots, \lambda_n$ with the same independent eigenvectors $x_1, \ldots, x_n$. Then $A = B$. Reason: Any vector $x$ is a combination $c_1 x_1 + \cdots + c_n x_n$. What is $Ax$? What is $Bx$?

26  The block $B$ has eigenvalues $1, 2$ and $C$ has eigenvalues $3, 4$ and $D$ has eigenvalues $5, 7$. Find the eigenvalues of the 4 by 4 matrix $A$:

$$A = \begin{bmatrix} B & C \\ 0 & D \end{bmatrix} = \begin{bmatrix} 0 & 1 & 3 & 0 \\ -2 & 3 & 0 & 4 \\ 0 & 0 & 6 & 1 \\ 0 & 0 & 1 & 6 \end{bmatrix}.$$
27  Find the rank and the four eigenvalues of $A$ and $C$:

$$A = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix} \quad\text{and}\quad C = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \end{bmatrix}.$$

28  Subtract $I$ from the previous $A$. Find the $\lambda$'s and then the determinants of

$$B = A - I = \begin{bmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{bmatrix} \quad\text{and}\quad C = I - A = \begin{bmatrix} 0 & -1 & -1 & -1 \\ -1 & 0 & -1 & -1 \\ -1 & -1 & 0 & -1 \\ -1 & -1 & -1 & 0 \end{bmatrix}.$$
29  (Review) Find the eigenvalues of $A$, $B$, and $C$:

$$A = \begin{bmatrix} 1 & 2 & 3 \\ 0 & 4 & 5 \\ 0 & 0 & 6 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 3 & 0 & 0 \end{bmatrix} \quad\text{and}\quad C = \begin{bmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \end{bmatrix}.$$

30  When $a + b = c + d$ show that $(1, 1)$ is an eigenvector and find both eigenvalues:

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$$
31  If we exchange rows 1 and 2 and columns 1 and 2, the eigenvalues don't change. Find eigenvectors of $A$ and $B$ for $\lambda = 11$. Rank one gives $\lambda_2 = \lambda_3 = 0$.

$$A = \begin{bmatrix} 1 & 2 & 1 \\ 3 & 6 & 3 \\ 4 & 8 & 4 \end{bmatrix} \quad\text{and}\quad B = PAP^T = \begin{bmatrix} 6 & 3 & 3 \\ 2 & 1 & 1 \\ 8 & 4 & 4 \end{bmatrix}.$$
32  Suppose $A$ has eigenvalues $0, 3, 5$ with independent eigenvectors $u, v, w$.

(a) Give a basis for the nullspace and a basis for the column space.

(b) Find a particular solution to $Ax = v + w$. Find all solutions.

(c) $Ax = u$ has no solution. If it did then ______ would be in the column space.

33  Suppose $u, v$ are orthonormal vectors in $\mathbf{R}^2$, and $A = uv^T$. Compute $A^2 = uv^T uv^T$ to discover the eigenvalues of $A$. Check that the trace of $A$ agrees with $\lambda_1 + \lambda_2$.

34  Find the eigenvalues of this permutation matrix $P$ from $\det(P - \lambda I) = 0$. Which vectors are not changed by the permutation? They are eigenvectors for $\lambda = 1$. Can you find three more eigenvectors?

$$P = \begin{bmatrix} 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}.$$
Challenge Problems
35  There are six 3 by 3 permutation matrices $P$. What numbers can be the determinants of $P$? What numbers can be pivots? What numbers can be the trace of $P$? What four numbers can be eigenvalues of $P$, as in Problem 15?

36  Is there a real 2 by 2 matrix (other than $I$) with $A^3 = I$? Its eigenvalues must satisfy $\lambda^3 = 1$. They can be $e^{2\pi i/3}$ and $e^{-2\pi i/3}$. What trace and determinant would this give? Construct a rotation matrix as $A$ (which angle of rotation?).
37  (a) Find the eigenvalues and eigenvectors of $A$. They depend on $c$:

$$A = \begin{bmatrix} .4 & 1 - c \\ .6 & c \end{bmatrix}.$$

(b) Show that $A$ has just one line of eigenvectors when $c = 1.6$.

(c) This is a Markov matrix when $c = .8$. Then $A^n$ will approach what matrix $A^\infty$?