1. Quiz 4 will cover sections 3.3, 5.1, and 5.2 and will be on Thursday, February 18.
2. To find the nth power of a matrix A that has been diagonalized as A = PDP⁻¹, one raises the diagonal elements of D to the nth power to obtain Dⁿ, leaving P and P⁻¹ unchanged, so that Aⁿ = PDⁿP⁻¹.
3. An n×n matrix is diagonalizable if and only if it has n linearly independent eigenvectors, allowing it to be written as A = PDP⁻¹, where the columns of P are the eigenvectors and the diagonal elements of D are the corresponding eigenvalues.
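The power rule Aⁿ = PDⁿP⁻¹ can be checked numerically. A minimal sketch, using a small symmetric matrix chosen for illustration (not taken from the quiz material):

```python
import numpy as np

# Assumed 2x2 example: A is diagonalizable with eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are the eigenvectors
D = np.diag(eigvals)            # D holds the eigenvalues on its diagonal

# A^5 via diagonalization: only the diagonal entries of D are raised
# to the 5th power; P and P^-1 are left unchanged.
A5 = P @ np.diag(eigvals**5) @ np.linalg.inv(P)

# Compare against direct repeated multiplication.
direct = np.linalg.matrix_power(A, 5)
assert np.allclose(A5, direct)
```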
This document discusses linear independence, basis, and dimension in linear algebra. It defines linear independence as vectors being linearly independent if the only solution that produces the zero vector is the trivial solution with all coefficients equal to zero. A basis is defined as a set of linearly independent vectors that span the vector space. The dimension of a vector space is the number of vectors in any basis of that space. The dimensions of the four fundamental subspaces (row space, column space, nullspace, and left nullspace) of a matrix are defined in terms of the rank of the matrix.
Linear algebra - vector space - 1: introduction to vector space and subspace (Manikanta Satyala)
This document discusses the key differences between scalar and vector quantities. Scalars only have magnitude, while vectors have both magnitude and direction. It then defines vector spaces as sets of vectors that are closed under vector addition and scalar multiplication. Examples of vector spaces include n-dimensional spaces, matrix spaces, polynomial spaces, and function spaces. Subspaces are also introduced as vector spaces that are subsets of a larger vector space and satisfy the same properties.
(1) The document discusses inner product spaces and related linear algebra concepts such as orthogonal vectors and bases, Gram-Schmidt process, orthogonal complements, and orthogonal projections.
(2) Key topics covered include defining inner products and their properties, finding orthogonal vectors and constructing orthogonal bases, using Gram-Schmidt process to orthogonalize a set of vectors, defining and finding orthogonal complements of subspaces, and computing orthogonal projections of vectors.
(3) Examples are provided to demonstrate computing orthogonal bases, orthogonal complements, and orthogonal projections in inner product spaces.
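The Gram-Schmidt step described above, subtracting projections onto the earlier basis vectors, can be sketched as follows; the input vectors are illustrative assumptions, not the document's examples:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors by subtracting,
    from each vector, its projections onto the earlier basis vectors."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, q) * q for q in basis)  # remove projections
        basis.append(w / np.linalg.norm(w))           # normalize to unit length
    return basis

# Assumed basis of R^2.
q1, q2 = gram_schmidt([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
assert abs(np.dot(q1, q2)) < 1e-12          # orthogonal
assert abs(np.linalg.norm(q1) - 1) < 1e-12  # unit length
```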
The document discusses diagonalization of matrices. Diagonalization involves finding an invertible matrix P such that P⁻¹AP is a diagonal matrix. The procedure for diagonalizing a matrix A includes: (1) finding the eigenvalues of A, (2) finding linearly independent eigenvectors corresponding to the eigenvalues, (3) forming the matrix P with the eigenvectors as columns, and (4) showing that P⁻¹AP is a diagonal matrix with the eigenvalues along the diagonal. An example demonstrates finding the eigenvectors of a 2x2 matrix A and constructing the matrix P to diagonalize A.
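The four-step procedure can be exercised end to end on an assumed 2x2 example (the matrix below is illustrative, not the document's):

```python
import numpy as np

# Assumed example with eigenvalues 5 and 2.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-3: eigenvalues, eigenvectors, and P with eigenvectors as columns.
eigvals, P = np.linalg.eig(A)

# Step 4: P^-1 A P is diagonal with the eigenvalues along the diagonal.
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(eigvals))
```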
This document defines and provides examples of linear differential equations. It discusses:
1) Linear differential equations can be written in the form y' + P(x)y = Q(x) or x' + P(y)x = Q(y); multiplying both sides by an integrating factor μ turns the left-hand side into a total derivative.
2) First-order linear differential equations of the form y' + P(x)y = Q(x) have the integrating factor IF = e^∫P(x)dx. The general solution is y·(IF) = ∫Q(x)·(IF)dx + C.
3) Bernoulli's equation is a differential equation of the form y' + P(x)y = Q(x)y^n, whose general solution is obtained by substituting v = y^(1-n), which reduces the equation to a linear one.
The document discusses concepts related to partial differentiation and its applications. It covers topics like tangent planes, linear approximations, differentials, Taylor expansions, maxima and minima problems, and the Lagrange method. Specifically, it defines the tangent plane to a surface at a point using partial derivatives, describes how to find the linear approximation of functions, and explains how to find maximum and minimum values of functions using critical points and the second derivative test.
The document discusses vector spaces and related linear algebra concepts. It defines vector spaces and lists the axioms that must be satisfied. Examples of vector spaces include the set of all pairs of real numbers and the space of 2x2 symmetric matrices. The document also discusses subspaces, linear combinations, span, basis, dimension, row space, column space, null space, rank, nullity, and change of basis. It provides examples and explanations of these fundamental linear algebra topics.
This document provides information about vector spaces and subspaces. It defines a vector space as a set of objects called vectors that can be added together and multiplied by scalars, subject to certain rules. A subspace is a subset of a vector space that is closed under vector addition and scalar multiplication. The null space of a matrix is the set of solutions to the homogeneous equation Ax=0 and is a subspace. The column space of a matrix is the set of all linear combinations of its columns and is also a subspace. Examples are provided to illustrate these concepts.
The document discusses inner product spaces and orthonormal bases. It defines an inner product space as a vector space with an inner product defined on it. An orthonormal basis is introduced as a set of orthogonal unit vectors that form a basis. The Gram-Schmidt process is presented as a method for transforming a basis into an orthonormal basis. Properties of inner products, such as the Cauchy-Schwarz inequality and orthogonal projections, are covered.
1) An eigenvector of a square matrix A is a non-zero vector x that satisfies the equation Ax = λx, where λ is the corresponding eigenvalue.
2) The zero vector cannot be an eigenvector, but λ = 0 can be an eigenvalue.
3) For a matrix A, the eigenvectors and eigenvalues can be found by solving the system of equations (A - λI)x = 0, where λI is the identity matrix multiplied by the eigenvalue λ.
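As a quick numerical check of (A - λI)x = 0, one can verify that each computed eigenpair makes the residual vanish; the matrix here is an assumed example:

```python
import numpy as np

# Assumed 2x2 symmetric example with eigenvalues 3 and -1.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
for lam, x in zip(eigvals, eigvecs.T):       # one eigenvector per column
    residual = (A - lam * np.eye(2)) @ x     # should be the zero vector
    assert np.allclose(residual, 0.0)
```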
- Rolle's theorem states that if a function f(x) is continuous on a closed interval [a,b] and differentiable on the open interval (a,b), with f(a) = f(b), then there exists at least one value c in (a,b) where the derivative f'(c) = 0.
- The mean value theorem states that if a function f(x) is continuous on a closed interval [a,b] and differentiable on the open interval (a,b), then there exists a value c in (a,b) such that the average rate of change of f over [a,b] equals the instantaneous rate of change at c, i.e. f'(c) = (f(b) - f(a))/(b - a).
The document provides an overview of vector spaces and related linear algebra concepts. It defines vector spaces, subspaces, basis, dimension, and rank. Key points include:
- A vector space is a set that is closed under vector addition and scalar multiplication. It must satisfy certain axioms.
- A subspace is a subset of a vector space that is also a vector space.
- A basis is a minimal set of linearly independent vectors that span the entire vector space. The dimension of a vector space is the number of vectors in its basis.
- The rank of a matrix is the number of non-zero rows in its row-reduced echelon form. It measures the matrix's linear independence: the number of linearly independent rows equals the number of linearly independent columns.
This document discusses Newton's forward and backward difference interpolation formulas for equally spaced data points. It provides the formulations for calculating the forward and backward differences up to the kth order. For equally spaced points, the forward difference formula approximates a function f(x) using its kth forward difference at the initial point x0. Similarly, the backward difference formula approximates f(x) using its kth backward difference at x0. The document includes an example problem of using these formulas to estimate the Bessel function and exercises involving interpolation of the gamma function and exponential function.
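A minimal sketch of the forward-difference formula for equally spaced points, using assumed sample data rather than the document's Bessel-function example:

```python
import numpy as np

def forward_difference_table(y):
    """Column k holds the kth forward differences: Δ^k y0, Δ^k y1, ..."""
    table = [np.asarray(y, dtype=float)]
    while len(table[-1]) > 1:
        table.append(np.diff(table[-1]))
    return table

def newton_forward(x0, h, y, x):
    """Newton's forward formula at x, for nodes x0, x0+h, x0+2h, ..."""
    s = (x - x0) / h
    result, coeff = 0.0, 1.0
    for k, col in enumerate(forward_difference_table(y)):
        result += coeff * col[0]          # term: C(s, k) * Δ^k y0
        coeff *= (s - k) / (k + 1)        # update binomial coefficient C(s, k+1)
    return result

# Assumed data: f(x) = x^2 sampled at 0, 1, 2; a quadratic is recovered exactly.
val = newton_forward(0.0, 1.0, [0.0, 1.0, 4.0], 1.5)
assert abs(val - 2.25) < 1e-12
```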
This document provides an overview of complex analysis, including:
1) Limits and their uniqueness in complex analysis, such as the limit of a function f(z) as z approaches z0.
2) The definition of a continuous function in complex analysis as one where the limit exists at each point in the domain and equals the function value.
3) Analytic functions, which are differentiable in some neighborhood of each point in their domain.
This document discusses Rolle's Theorem from calculus. Rolle's Theorem states that if a function f is continuous on a closed interval [a,b] and differentiable on the open interval (a,b), and if f(a) = f(b), then there exists at least one value c in the interval (a,b) where the derivative of f is equal to 0. The document provides an example of applying Rolle's Theorem to show that the derivative of the function f(x) = x^2 - 3x + 2 is equal to 0 at some point between the two x-intercepts of the function.
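The example can be verified directly: f(x) = x^2 - 3x + 2 has x-intercepts at 1 and 2, and f'(x) = 2x - 3 vanishes at c = 1.5, which lies between them:

```python
# Rolle's theorem check for f(x) = x^2 - 3x + 2.
f = lambda x: x**2 - 3*x + 2
df = lambda x: 2*x - 3          # derivative, computed by hand

assert f(1) == 0 and f(2) == 0  # equal values at the two x-intercepts
c = 1.5
assert 1 < c < 2 and df(c) == 0 # c lies in (1, 2) with f'(c) = 0
```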
This document discusses eigenvalues, eigenvectors, and diagonalization of matrices. It defines eigenvalues as the roots of the characteristic equation of a matrix. Eigenvectors are non-zero vectors that satisfy AX = λX, where λ is the eigenvalue. Diagonalization is the process of transforming a matrix A into a diagonal matrix D using a similarity transformation with an invertible matrix P, such that D = P⁻¹AP. The document provides examples to illustrate these concepts and lists various properties of eigenvalues and eigenvectors.
Linear differential equation with constant coefficients (Sanjay Singh)
The document discusses linear differential equations with constant coefficients. It defines the order, auxiliary equation, complementary function, particular integral and general solution. It provides examples of determining the complementary function and particular integral for different types of linear differential equations. It also discusses Legendre's linear equations, Cauchy-Euler equations, and solving simultaneous linear differential equations.
The document defines row echelon form and reduced row echelon form for matrices. Row echelon form requires that leading 1's occur farther to the right in lower rows. Reduced row echelon form further requires that all entries above leading 1's are zero. The document also discusses Gauss elimination method and elementary row operations for transforming a matrix into row echelon or reduced row echelon form.
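A minimal sketch of reduction to reduced row echelon form using the three elementary row operations (interchange, scale, add a multiple of one row to another); the matrix is an assumed example:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce M to reduced row echelon form via elementary row operations."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(M[r:, c]))   # partial pivoting
        if abs(M[pivot, c]) < tol:
            continue                              # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]             # row interchange
        M[r] /= M[r, c]                           # scale to a leading 1
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]            # zero out above and below
        r += 1
    return M

# Assumed example: an invertible 3x3 matrix reduces to the identity.
A = np.array([[1, 2, 3],
              [2, 4, 7],
              [1, 3, 5]])
R = rref(A)
assert np.allclose(R, np.eye(3))
```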
The document discusses various methods to compute the rank of a matrix:
1) Using Gauss elimination, where the rank is the number of pivot columns in the echelon form of the matrix.
2) Using determinants of sub-matrices (minors), where the rank is the largest order of a non-zero minor.
3) Transforming the matrix to normal form using row and column operations, where the rank is the number of non-zero rows of the resulting identity matrix.
Worked examples are provided to illustrate computing the rank of matrices using these different methods.
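Two of these rank methods can be illustrated on an assumed matrix whose third row is the sum of the first two, so its rank is 2:

```python
import numpy as np

# Assumed rank-deficient example: row 3 = row 1 + row 2.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [1.0, 3.0, 7.0]])

rank = np.linalg.matrix_rank(A)
assert rank == 2

# Minor-based view: the full 3x3 minor vanishes, but a 2x2 minor does not,
# so the largest order of a non-zero minor is 2.
assert abs(np.linalg.det(A)) < 1e-9
assert abs(np.linalg.det(A[:2, :2])) > 1e-9
```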
Eigenvalues and eigenvectors, engineering (shubham211)
The individual items in a matrix are called its elements or entries.[4] Provided that they are the same size (have the same number of rows and the same number of columns), two matrices can be added or subtracted element by element. The rule for matrix multiplication, however, is that two matrices can be multiplied only when the number of columns in the first equals the number of rows in the second. Any matrix can be multiplied element-wise by a scalar from its associated field. A major application of matrices is to represent linear transformations, that is, generalizations of linear functions such as f(x) = 4x. For example, the rotation of vectors in three dimensional space is a linear transformation which can be represented by a rotation matrix R: if v is a column vector (a matrix with only one column) describing the position of a point in space, the product Rv is a column vector describing the position of that point after a rotation. The product of two transformation matrices is a matrix that represents the composition of two linear transformations. Another application of matrices is in the solution of systems of linear equations. If the matrix is square, it is possible to deduce some of its properties by computing its determinant. For example, a square matrix has an inverse if and only if its determinant is not zero. Insight into the geometry of a linear transformation is obtainable (along with other information) from the matrix's eigenvalues and eigenvectors.
Applications of matrices are found in most scientific fields. In every branch of physics, including classical mechanics, optics, electromagnetism, quantum mechanics, and quantum electrodynamics, they are used to study physical phenomena, such as the motion of rigid bodies. In computer graphics, they are used to project a 3-dimensional image onto a 2-dimensional screen. In probability theory and statistics, stochastic matrices are used to describe sets of probabilities; for instance, they are used within the PageRank algorithm that ranks the pages in a Google search.[5] Matrix calculus generalizes classical analytical notions such as derivatives and exponentials to higher dimensions.
A major branch of numerical analysis is devoted to the development of efficient algorithms for matrix computations, a subject that is centuries old and is today an expanding area of research. Matrix decomposition methods simplify computations, both theoretically and practically. Algorithms that are tailored to particular matrix structures, such as sparse matrices and near-diagonal matrices, expedite computations in finite element method and other computations. Infinite matrices occur in planetary theory and in atomic theory. A simple example of an infinite matrix is the matrix representing the derivative operator, which acts on the Taylor series of a function
This document provides information about eigenvalues and eigenvectors. It defines eigenvalues and eigenvectors as scalars (λ) and vectors (x) that satisfy the equation Ax = λx, where A is a matrix. It discusses properties of eigenvalues including that the sum of eigenvalues is the trace of A, and the product is the determinant. The characteristic equation is defined as det(A - λI) = 0, where the roots are the eigenvalues. Cayley-Hamilton theorem states that every matrix satisfies its own characteristic equation. Examples are given to demonstrate Cayley-Hamilton theorem.
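For a 2x2 matrix, the Cayley-Hamilton theorem and the trace/determinant properties above can be verified directly; the matrix is an assumed example:

```python
import numpy as np

# Characteristic equation of a 2x2 matrix: λ^2 - trace(A)·λ + det(A) = 0.
# Cayley-Hamilton: A satisfies it, i.e. A^2 - trace(A)·A + det(A)·I = 0.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

tr = np.trace(A)            # sum of the eigenvalues
det = np.linalg.det(A)      # product of the eigenvalues

residue = A @ A - tr * A + det * np.eye(2)
assert np.allclose(residue, 0.0)

# The eigenvalue properties from the summary also check out numerically.
eigvals = np.linalg.eigvals(A)
assert np.isclose(eigvals.sum(), tr)
assert np.isclose(eigvals.prod(), det)
```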
1) An inner product space is a vector space with an inner product defined that satisfies certain properties like linearity and positive-definiteness.
2) The Gram-Schmidt process is used to transform a basis into an orthogonal basis and then an orthonormal basis by successively subtracting projections.
3) The angle between two vectors in an inner product space can be computed using the inner product and the norms of the vectors.
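The angle formula cos θ = ⟨u,v⟩ / (‖u‖·‖v‖) can be sketched with the standard dot product on R²; the vectors are assumed for illustration:

```python
import math

# Assumed vectors in R^2 under the standard dot product.
u = (1.0, 0.0)
v = (1.0, 1.0)

inner = sum(a * b for a, b in zip(u, v))          # <u, v>
norm_u = math.sqrt(sum(a * a for a in u))         # ||u||
norm_v = math.sqrt(sum(b * b for b in v))         # ||v||

theta = math.acos(inner / (norm_u * norm_v))
assert math.isclose(theta, math.pi / 4)           # 45 degrees
```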
1. The document announces that students should bring any questions about their exam 1 grades without delay, and that the homework for exam 2 has been uploaded and may be updated. It also notes that the last day to drop the class is February 4th and there is no class on that date.
2. The document covers topics from the last class including computing 3x3 determinants, determinants of triangular matrices, and techniques for larger matrices.
3. The document then provides examples of computing determinants and discusses important properties including that row operations do not change the determinant value while row interchanges flip the sign, and multiplying a row scales the determinant.
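The three determinant properties can be confirmed on an assumed 2x2 matrix:

```python
import numpy as np

# Assumed example with det(A) = -1.
A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
d = np.linalg.det(A)

# 1) Adding a multiple of one row to another leaves the determinant unchanged.
B = A.copy(); B[1] += 4 * B[0]
assert np.isclose(np.linalg.det(B), d)

# 2) Interchanging two rows flips the sign.
C = A[[1, 0]]
assert np.isclose(np.linalg.det(C), -d)

# 3) Multiplying a row by k scales the determinant by k.
E = A.copy(); E[0] *= 3
assert np.isclose(np.linalg.det(E), 3 * d)
```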
The document contains announcements from a class instructor. It notifies students that if they have not been able to access the class website or did not receive an email, to contact the instructor. It also reminds students that homeworks are posted on the class website and to check for any updates.
1. A complex number λ is an eigenvalue of a matrix A if there exists a non-zero vector x such that Ax = λx.
2. If a matrix has complex eigenvalues, it provides important information about the matrix, such as in problems involving vibrations and rotations in space.
3. For a complex eigenvalue λ = a + bi, a is called the real part and b is called the imaginary part. The absolute value |λ| represents the "length" or magnitude of the eigenvalue.
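A rotation matrix is a standard source of complex eigenvalues, matching the summary's mention of rotations; for a 90-degree planar rotation the eigenvalues are ±i, each with |λ| = 1:

```python
import numpy as np

# 90-degree rotation in the plane; its characteristic equation is λ^2 + 1 = 0.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

eigvals = np.linalg.eigvals(R)
assert np.allclose(sorted(eigvals.imag), [-1.0, 1.0])  # imaginary parts ±1
assert np.allclose(eigvals.real, 0.0)                  # real parts 0
assert np.allclose(np.abs(eigvals), 1.0)               # |a + bi| = sqrt(a^2 + b^2)
```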
The document contains announcements and information about a class. It announces corrections to lecture slides, the last day to drop the class with a refund, and provides definitions and examples related to echelon form, reduced row echelon form, pivot positions, and solving systems of linear equations.
The document contains announcements about an exam, practice exam, review sessions, and exam grading for a class. It states that Exam 2 will be on Thursday, February 25 in class. A practice exam will be uploaded by 2 pm that day. Optional review topics will be covered the next day but will not be on the exam. A review session will be held on Wednesday with office hours from 1-4 pm. It also reminds students that a different class starts on Monday and to collect graded exams on Friday between 7 am and 6 pm.
- Quiz 4 will be tomorrow covering sections 3.3, 5.1, and 5.2 of the textbook. It will include 3 problems on Cramer's rule, finding eigenvectors given eigenvalues, and finding characteristic polynomials/eigenvalues of 2x2 and 3x3 matrices. Students must show all work.
- Chapter 6 objectives include extending geometric concepts like length, distance, and perpendicularity to Rn. These concepts are useful for least squares fitting of experimental data to a system of equations.
- The inner product of two vectors u and v in Rn is defined as their dot product, which is the sum of the component-wise products of corresponding elements in u and v.
1. Quiz 4 will be held after the next lecture. Exam 2 will be on Feb 25 and cover material from Exam 1 through what is covered on Feb 22.
2. A practice exam will be uploaded on Feb 22 after the remaining material is covered. Optional topics on Feb 23 will not be covered on the exam.
3. Review session on Feb 24 in class. Office hours on Feb 24 from 1-4pm.
The document contains notes from a previous linear algebra class covering the following topics:
1. There will be a quiz tomorrow on sections 1.1-1.3 focusing on concepts rather than lengthy calculations.
2. Previous topics included systems of linear equations, row reduction, pivot positions, basic and free variables, and the span of vectors.
3. Determining if a vector is in the span of other vectors is equivalent to checking if the corresponding linear system is consistent.
4. Examples are provided of determining if homogeneous systems have non-trivial solutions based on the presence of free variables. The general solution of a homogeneous system is expressed in parametric vector form.
Quiz 2 will cover sections 1.4, 1.5, 1.7, and 1.8 on Wednesday January 27. Students with issues on quiz 1 should discuss with the instructor as soon as possible. The solution to quiz 1 will be posted on the website by Monday.
The document discusses linear transformations and provides examples of applying linear transformations to vectors. It defines key concepts such as the domain, co-domain, and range of a transformation. Examples are provided of interesting linear transformations including rotation and reflection transformations. Solutions to examples involving finding the image of vectors under given linear transformations are shown.
- There will be no class on Monday for Martin Luther King Day.
- Quiz 1 will be held in class on Wednesday and will cover sections 1.1, 1.2, and 1.3.
- Students should know all definitions clearly for the quiz, which will focus on conceptual understanding rather than lengthy calculations.
The document defines key concepts in vector spaces including vector space, subspace, span of a set of vectors, and basis. It provides examples to illustrate these concepts. Specifically:
- A vector space is a set of objects called vectors that can be added together and multiplied by scalars, satisfying certain properties.
- A subspace is a subset of a vector space that is itself a vector space under the operations of the original space.
- The span of a set of vectors S is the set of all possible linear combinations of the vectors in S.
- A basis is a set of vectors that spans a vector space and is linearly independent. It provides a standard representation for vectors in the space.
This document contains notes on diagonalization, eigenvalues, and eigenvectors. It discusses how to solve recurrence relations using matrix multiplication and how to raise matrices to arbitrary powers by diagonalizing them. Diagonalization involves finding an invertible matrix P such that P⁻¹MP is a diagonal matrix D. The columns of P are the eigenvectors of M, and the entries of D are the corresponding eigenvalues. This reduces raising M to a power to raising the simpler diagonal matrix D to the same power.
This document discusses diagonalization of matrices. It defines similarity of matrices and notes that similar matrices have the same characteristic polynomial and eigenvalues. It then discusses diagonalizing matrices by finding the eigenvalues and corresponding eigenvectors, constructing a change of basis matrix P from the eigenvectors, and constructing a diagonal matrix D from the eigenvalues. It provides examples of diagonalizing matrices with real and complex eigenvalues.
The document discusses various types of matrices:
- Row and column matrices are matrices with only one row or column respectively.
- A square matrix has the same number of rows and columns.
- A diagonal matrix has non-zero elements only along its main diagonal.
- An identity matrix has ones along its main diagonal and zeros elsewhere.
- A scalar matrix is a diagonal matrix whose diagonal entries are all equal (a scalar multiple of the identity matrix).
- A null matrix has all elements equal to zero.
The document also discusses properties such as the transpose of a matrix, symmetric matrices, and how to add, subtract and multiply matrices.
The document discusses linear transformations and linear independence. It contains examples and explanations of:
1) How a matrix A can transform a vector x from R4 to a new vector b in R2, representing the linear transformation.
2) How finding vectors x such that Ax=b is equivalent to finding pre-images of b under the transformation A.
3) Key concepts related to linear transformations like domain and range.
Eigenvalues and eigenvectors of symmetric matrices (Ivan Mateev)
The document discusses eigenvalues and eigenvectors of symmetric matrices. It provides an overview of linear transformations and how they can be represented by matrices. It then explains that eigenvectors are vectors whose direction is unchanged by a linear transformation (up to sign), with the eigenvalues as the corresponding scale factors. The document outlines methods for computing eigenvalues and eigenvectors, including reduction to tridiagonal form, Householder transformations, and Sturm sequences to optimize the computation. Faster algorithms are needed, as the current methods are slow.
This document summarizes key concepts regarding eigenvalues and eigenvectors of matrices:
- Eigenvalues are scalars such that there exist non-zero eigenvectors satisfying Ax = λx.
- The characteristic equation states that λ is an eigenvalue if and only if it satisfies det(A - λI) = 0.
- A matrix is diagonalizable if it can be written as A = PDP⁻¹, where D is a diagonal matrix of eigenvalues and P is a matrix of corresponding eigenvectors. Powers of a diagonalizable matrix are then easy to compute by raising the eigenvalues to the desired power.
Chapter 3: Linear Systems and Matrices - Part 3/Slides (Chaimae Baroudi)
The document discusses determinants of matrices. Some key points:
- The determinant (det) of a square matrix is a single number that can be used to determine properties of the matrix, such as invertibility.
- Formulas are given for calculating the determinant of matrices based on their size, such as the cofactor expansion method.
- Certain types of matrices have simple determinant values: the determinant of a triangular matrix (and in particular of a diagonal matrix) is the product of its diagonal entries.
On Fully Indecomposable Quaternion Doubly Stochastic Matrices (ijtsrd)
In recent years, fully indecomposable matrices have played a vital role in various research topics. For example, they have been used in establishing a necessary condition for a matrix to have a positive inverse; likewise, in simultaneous row and column scaling subordinate to the unitarily invariant norms, the minimal condition number for diagonalizable, substochastic matrices and Kronecker products is achieved for fully indecomposable matrices. The existence of diagonal matrices D1 and D2, with strictly positive diagonal elements, such that D1 AD2 is quaternion doubly stochastic, is established for an n × n non-negative fully indecomposable matrix A. A related scaling for fully indecomposable non-negative rectangular matrices is also discussed. Dr. Gunasekaran K. | Mrs. Seethadevi R. "On Fully Indecomposable Quaternion Doubly Stochastic Matrices" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd25351.pdf Paper URL: https://www.ijtsrd.com/mathemetics/other/25351/on-fully-indecomposable-quaternion-doubly-stochastic-matrices/dr-gunasekaran-k
This document discusses various matrix decomposition techniques including least squares, eigendecomposition, and singular value decomposition. It begins with an introduction to the importance of linear algebra and decompositions for applications. Then it provides examples of using least squares to fit curves to data and find regression lines. It defines eigenvalues and eigenvectors and provides examples of eigendecomposition. It also discusses diagonalization of matrices and using the eigendecomposition to raise matrices to powers. Finally, it discusses singular value decomposition and its applications.
This document discusses several key linear algebra concepts:
1) A square matrix is diagonalizable if it can be transformed into a diagonal matrix by conjugation with an invertible matrix. Diagonalizable matrices can be easily raised to high powers.
2) Eigenvectors are vectors whose direction is unchanged by the matrix transformation, and eigenvalues are the corresponding scaling factors.
3) Orthogonal matrices preserve lengths and angles when multiplying vectors. The Cayley-Hamilton theorem states that every matrix satisfies its own characteristic equation.
K-Notes are concise study materials intended for quick revision near the end of preparation for exams like GATE. Each K-Note covers the concepts from a subject in 40 pages or less. They are useful for final preparation and travel. Students should use K-Notes in the last 2 months before the exam, practicing questions after reviewing each note. The document then provides a summary of key concepts in linear algebra and matrices, including matrix properties, operations, inverses, and systems of linear equations.
This document provides an overview of row space, column space, and null space of matrices. It defines these concepts and gives examples of finding bases for the row space, column space, and null space. It also introduces the rank-nullity theorem and defines the rank and nullity of a matrix. Examples are provided to demonstrate calculating the rank and nullity. The document appears to be teaching notes for a linear algebra course.
Matrices: CMSC 56 | Discrete Mathematical Structure for Computer Science, November 30, 2018. Instructor: Allyn Joy D. Calcaben, College of Arts & Sciences, University of the Philippines Visayas.
Section 0.7 Quadratic Equations from Precalculus Prerequisite.docx (bagotjesusa)
This document provides an overview of solving quadratic equations through various methods including:
- Extracting square roots to solve equations of the form x^2 = c
- Completing the square to transform equations into the form (x + b/2a)^2 = d
- Using the quadratic formula to solve any quadratic equation of the form ax^2 + bx + c = 0
It also provides strategies for determining the best approach, such as factoring if possible or using the quadratic formula if not. Examples are worked through to demonstrate each technique.
Eigen value and vector of linear transformation.pptx (AtulTiwari892261)
The document discusses eigenvalues and eigenvectors of linear transformations. It defines eigenvalues as the scalars by which eigenvectors are scaled when a linear transformation is applied; eigenvectors are non-zero vectors that change only in scale, not direction, under the transformation. The document provides theorems and examples for finding the eigenvalues and eigenvectors of matrices, including finding their characteristic equations and solving homogeneous systems, and determines the dimensions of the eigenspaces corresponding to each eigenvalue.
The document provides information about a test for candidates applying for an M.Tech in Computer Science. It describes:
1) The test will have two parts - a morning objective test (Test MIII) and an afternoon short answer test (Test CS).
2) The CS test booklet will have two groups - Group A covering analytical ability and mathematics at the B.Sc. pass level, and Group B covering advanced topics in mathematics, statistics, physics, computer science, and engineering at the B.Sc. Hons. and B.Tech. levels.
3) Sample questions are provided for both Group A (mathematical reasoning and basic concepts) and Group B (advanced topics in real analysis
The document describes a test for candidates applying for an M.Tech. in Computer Science. [The test consists of two parts - an objective test in the morning and a short answer test in the afternoon. The short answer test has two groups - Group A covers analytical ability and mathematics at the B.Sc. level, while Group B covers additional topics in mathematics, statistics, physics, computer science, or engineering depending on the candidate's choice.] The document provides sample questions testing concepts in mathematics including algebra, calculus, number theory, and logic.
This document discusses properties of operations on matrices such as addition, subtraction, scalar multiplication, and matrix multiplication. Some key points made:
- Matrices behave similarly to real numbers under addition and subtraction, following the same commutative, associative, identity, and inverse properties.
- Scalar multiplication of matrices also follows similar properties to real numbers.
- Matrix multiplication is not commutative in general and does not follow other properties of real number multiplication like cancellation.
- For a matrix to have a multiplicative inverse (inverse matrix), it must be a square matrix and not all square matrices have inverses.
- Powers and exponentials of matrices can be defined analogously to real numbers using repeated matrix multiplication.
This document discusses vector algebra concepts including:
1. Vectors can represent quantities that have both magnitude and direction, unlike scalars which only have magnitude.
2. Common vector operations include addition, subtraction, and determining the resultant or sum of multiple vectors.
3. The dot product of two vectors produces a scalar value that can indicate whether vectors are parallel or perpendicular and define physical quantities like work and electric fields.
4. The cross product of two vectors produces a new vector that is perpendicular to the original vectors and can define quantities like angular velocity and motion in electromagnetic fields.
Here are the key steps to find the eigenvalues of the given matrix:
1) Write the characteristic equation: det(A - λI) = 0
2) Expand the determinant: (1-λ)(-2-λ) - 4 = 0
3) Simplify and factor: λ² + λ − 6 = (λ − 2)(λ + 3) = 0
4) Find the roots: λ₁ = 2, λ₂ = −3
Therefore, the eigenvalues of the given matrix are 2 and −3.
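The steps above can be checked numerically. The matrix itself is not reproduced in the summary, so the sketch below assumes A = [[1, 2], [2, -2]], a hypothetical matrix whose characteristic polynomial matches the expansion (1 − λ)(−2 − λ) − 4:

```python
import numpy as np

# Hypothetical matrix consistent with det(A - lambda*I) = (1-l)(-2-l) - 4;
# the original matrix is not shown in the summary above.
A = np.array([[1.0, 2.0],
              [2.0, -2.0]])

# Eigenvalues, rounded to suppress floating-point noise.
vals = sorted(np.linalg.eigvals(A).round(6).tolist())
print(vals)  # → [-3.0, 2.0]
```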
This document provides contact information for math assignment help, including a phone number and email address. It then presents solutions to several problems from a linear algebra textbook. The problems cover topics like writing a quadratic form as a sum of squares, finding the closest line and plane of best fit to a set of points, orthonormal vectors, and determinants. Solutions are provided in mathematical notation and include working steps.
This document provides definitions and notation for set theory concepts. It defines what a set is, ways to describe sets (explicitly by listing elements or implicitly using set builder notation), and basic set relationships like subset, proper subset, union, intersection, complement, power set, and Cartesian product. It also discusses Russell's paradox and defines important sets like the natural numbers. Key identities for set operations like idempotent, commutative, associative, distributive, De Morgan's laws, and complement laws are presented. Proofs of identities using logical equivalences and membership tables are demonstrated.
The document discusses the process for finding the eigenvalues of a square matrix. It begins by defining the characteristic equation as det(A - λI) = 0, where A is the matrix and λI subtracts λ from the diagonal. The characteristic polynomial is obtained by computing this determinant. For a 2x2 matrix, it is a quadratic equation that can be factored to find the two eigenvalues. Larger matrices may require numerical methods. The sum of eigenvalues equals the trace, and their product equals the determinant. A matrix will always have n eigenvalues for its size n. An example problem is presented to demonstrate the full process.
1. The matrix is not invertible as it has repeated rows.
2. 0 is an eigenvalue, since a matrix fails to be invertible exactly when 0 is one of its eigenvalues.
3. The eigenvectors corresponding to 0 can be found by reducing the matrix A - 0I to row echelon form. This gives the equation x1 + x2 + x3 = 0 with x2 and x3 as free variables, so two linearly independent eigenvectors are (1, -1, 0) and (1, 0, -1).
Eigenvalues and Eigenvectors (Tacoma Narrows Bridge video included) (Prasanth George)
- There is a quiz tomorrow on sections 3.1 and 3.2 of the course material. Calculators will not be allowed and determinants must be calculated using the methods learned.
- Eigenvalues and eigenvectors are related to the linear transformation of a matrix A acting on a vector x. They give a better understanding of the transformation.
- The 1940 collapse of the Tacoma Narrows Bridge is explained by oscillations caused by the wind frequency matching the bridge's natural frequency, which is the eigenvalue of smallest magnitude based on a mathematical model of the bridge. Eigenvalues are important for engineering structure design.
1. Quiz 3 will cover sections 3.1 and 3.2 on February 11th. No calculators will be allowed and determinants must be found using the methods taught.
2. The homework problems have been updated, so students should check for the latest list.
3. To find the inverse of a 3x3 matrix A, first form the adjugate of A (denoted adj A), the transpose of the matrix of cofactors, then divide adj A by the determinant of A.
The document contains announcements and information about an exam for a class. It includes the following key points:
- Students should bring any grade-related questions about Exam 1 without delay. The homework for Exam 2 has been uploaded.
- The professor is planning to cover chapters 3, 5, and 6 for Exam 2.
- The last day for students to drop the class with a grade of "W" is February 4th.
The document contains announcements for an upcoming exam:
1. Students should bring any grade related questions about quiz 2 without delay. Test 1 will be on February 1st covering sections 1.1-1.5, 1.7-1.8, 2.1-2.3 and 2.8-2.9.
2. A sample exam 1 will be posted by that evening. Students should review for the exam after the lecture.
3. The instructor will be available in their office all day the following day to answer any questions.
It also provides tips for preparing for the exam, including doing homework problems and sample exams within the time limit to practice time management.
The document contains announcements and information about an upcoming exam:
- A quiz and test are scheduled. Sample exams and review sessions will be provided.
- Exam 1 will cover several sections of the textbook and the professor will be available for questions.
- Tips are provided for studying including doing homework, examples, and practicing sample exams.
- Sections about subspaces and column/null spaces of matrices are summarized, including properties and examples.
Quiz 2 will be held on January 27 covering sections 1.4, 1.5, 1.7, and 1.8. Test 1 is scheduled for February 1. The document then provides steps to find the inverse of a 2x2 matrix, notes that a matrix is not invertible if its determinant is 0, and gives an example of finding the inverse of a 3x3 matrix using row reduction of the augmented matrix.
The document discusses the following:
1. There will be a quiz on Jan 27 covering sections 1.4, 1.5, 1.7, and 1.8 and any issues with quiz 1 should be discussed asap.
2. Test 1 will be on Feb 1 in class with more details to come.
3. Matrix multiplication is defined only when the number of columns of the first matrix equals the number of rows of the second matrix.
1. Announcements
Quiz 4 will be on Thursday, Feb 18, covering sections 3.3, 5.1 and 5.2.
Check the grade sheet for any mistakes or omissions.
2–4. Last Class
1. Characteristic equation and characteristic polynomial of a square matrix.
2. Finding eigenvalues of a 2 × 2 matrix and the characteristic polynomial of a 3 × 3 matrix.
3. The characteristic equation of a 2 × 2 matrix is a quadratic equation which can be factorized (or solved with the quadratic formula) to give the eigenvalues.
5–8. Section 5.3 Diagonalization
1. To factorize the given matrix A in the form A = PDP⁻¹, where D and P give information about the eigenvalues and eigenvectors.
2. Useful in computing higher powers of A quickly (without multiplying A many times).
3. This factorization is very useful in "decoupling" complicated dynamical systems (differential equations).
4. D in the above factorization stands for a diagonal matrix. Properties of diagonal matrices make life a lot easier.
13–14. Powers of Diagonal Matrices
What about A²³? Based on the above pattern:
    A²³ = [ 4²³   0  ]
          [  0   6²³ ]
For a general exponent k,
    Aᵏ = [ 4ᵏ   0  ]
         [ 0    6ᵏ ]
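The pattern above is easy to check numerically; a minimal sketch using numpy (the library is my assumption, not part of the slides), with the slides' matrix [4 0; 0 6]:

```python
import numpy as np

# The diagonal matrix from the slides' pattern: diag(4, 6).
D = np.diag([4, 6])

k = 23
# Raising a diagonal matrix to a power = raising each diagonal entry.
Dk_fast = np.diag(np.diag(D) ** k)
# Compare against k actual matrix multiplications.
Dk_slow = np.linalg.matrix_power(D, k)

print(np.array_equal(Dk_fast, Dk_slow))  # → True
```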
15–17. Observations
1. Raising a diagonal matrix to a power is the same as raising the diagonal elements to that power, and the result is still a diagonal matrix.
2. Please note that this will work for any diagonal matrix (3 × 3 or any size).
3. DO NOT do this to a general matrix (not even a triangular matrix).
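The warning in point 3 is worth seeing concretely. A short sketch (numpy assumed; the triangular matrix is illustrative, not from the slides):

```python
import numpy as np

# Illustrative upper-triangular matrix (not from the slides).
T = np.array([[2, 1],
              [0, 3]])

entrywise = T ** 2        # squares each entry -- WRONG for T squared
true_square = T @ T       # the actual matrix product T*T

# The off-diagonal entries disagree, so entrywise powers are only
# valid for diagonal matrices.
print(np.array_equal(entrywise, true_square))  # → False
```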
20–23. Diagonalization
A square matrix A is diagonalizable if
1. A is similar to a diagonal matrix D, which means
2. we can write A = PDP⁻¹ for some invertible matrix P.
If A = PDP⁻¹, what is A²?
    A² = (PDP⁻¹)(PDP⁻¹) = PD(P⁻¹P)DP⁻¹ = PD²P⁻¹    (using P⁻¹P = I)
Similarly,
    A³ = (PD²P⁻¹)(PDP⁻¹) = PD²(P⁻¹P)DP⁻¹ = PD³P⁻¹
and so on.
24–26. Taking Advantage of Diagonal Matrix
To find Aᵏ of any square matrix A,
1. Diagonalize A; in other words, factorize A as PDP⁻¹ for suitable D and invertible P.
2. Raise the diagonal entries of D to k; no change to P and P⁻¹.
3. Find the product PDᵏP⁻¹.
The following theorem says exactly when we can diagonalize a square matrix A. (very important)
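The three steps above can be sketched with numpy's eigendecomposition (the matrix A below is illustrative, not from the slides):

```python
import numpy as np

# Illustrative symmetric matrix (not from the slides); eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Step 1: diagonalize A = P D P^-1.
eigvals, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
D = np.diag(eigvals)

# Step 2: raise only the diagonal entries of D to the power k.
k = 5
Dk = np.diag(eigvals ** k)

# Step 3: form the product P D^k P^-1.
Ak = P @ Dk @ np.linalg.inv(P)

# Cross-check against repeated matrix multiplication.
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # → True
```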
27–29. The Diagonalization Theorem
Theorem
An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
In fact, A = PDP⁻¹, where D is a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A.
If this is done, the diagonal entries of D are the eigenvalues of A that correspond to the respective eigenvectors in P.
30. Notes
1. You can arrange the eigenvalues of A in any order you like to form D.
2. Arrange the linearly independent eigenvectors of A as columns to form P. This must correspond to how you write D.
3. This means the first column of P must be an eigenvector for the first eigenvalue in D, the second column of P an eigenvector for the second eigenvalue in D, and so on. This is very important.
4. Of course, you could write P first and arrange the eigenvalues of D accordingly.
34. Example 2, section 5.3
Let A = PDP^{-1}. For the given P and D, compute A^4.
\[ P = \begin{pmatrix} 2 & -3 \\ -3 & 5 \end{pmatrix}, \qquad D = \begin{pmatrix} 1 & 0 \\ 0 & 1/2 \end{pmatrix} \]
Solution: Since A = PDP^{-1}, A^4 = PD^4P^{-1}.
\[ D^4 = \begin{pmatrix} 1^4 & 0 \\ 0 & (1/2)^4 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1/16 \end{pmatrix} \]
Here det P = 10 − 9 = 1, so we can find P^{-1} (interchange the main diagonal entries, change the signs of the off-diagonal entries, divide by det P = 1):
\[ P^{-1} = \begin{pmatrix} 5 & 3 \\ 3 & 2 \end{pmatrix} \]
Then
\[ A^4 = PD^4P^{-1} = \begin{pmatrix} 2 & -3 \\ -3 & 5 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1/16 \end{pmatrix} \begin{pmatrix} 5 & 3 \\ 3 & 2 \end{pmatrix} = \begin{pmatrix} 2 & -3/16 \\ -3 & 5/16 \end{pmatrix} \begin{pmatrix} 5 & 3 \\ 3 & 2 \end{pmatrix} \]
\[ = \begin{pmatrix} 10 - 9/16 & 6 - 6/16 \\ -15 + 15/16 & -9 + 10/16 \end{pmatrix} = \begin{pmatrix} 151/16 & 90/16 \\ -225/16 & -134/16 \end{pmatrix} = \frac{1}{16}\begin{pmatrix} 151 & 90 \\ -225 & -134 \end{pmatrix} \]
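As a quick numerical check of this example (a NumPy sketch, not part of the original slides):

```python
import numpy as np

# P and D as given in Example 2
P = np.array([[2.0, -3.0],
              [-3.0, 5.0]])
D = np.diag([1.0, 0.5])

# A^4 = P D^4 P^{-1}: only the diagonal of D is raised to the 4th power
A4 = P @ np.diag(np.diag(D) ** 4) @ np.linalg.inv(P)
```

Multiplying `A4` by 16 recovers the integer matrix [[151, 90], [−225, −134]] from the slide.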
43. Example 4, section 5.3 (slightly modified)
Let A = \begin{pmatrix} -2 & 12 \\ -1 & 5 \end{pmatrix}. Use the factorization PDP^{-1} to compute A^6, where
\[ P = \begin{pmatrix} 3 & 4 \\ 1 & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \]
Solution: Since A = PDP^{-1}, A^6 = PD^6P^{-1}.
\[ D^6 = \begin{pmatrix} 2^6 & 0 \\ 0 & 1^6 \end{pmatrix} = \begin{pmatrix} 64 & 0 \\ 0 & 1 \end{pmatrix} \]
Here det P = 3 − 4 = −1, so we can find P^{-1} (interchange the main diagonal entries, change the signs of the off-diagonal entries, divide by det P = −1):
\[ P^{-1} = \begin{pmatrix} -1 & 4 \\ 1 & -3 \end{pmatrix} \]
Then
\[ A^6 = PD^6P^{-1} = \begin{pmatrix} 3 & 4 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 64 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -1 & 4 \\ 1 & -3 \end{pmatrix} = \begin{pmatrix} 192 & 4 \\ 64 & 1 \end{pmatrix} \begin{pmatrix} -1 & 4 \\ 1 & -3 \end{pmatrix} = \begin{pmatrix} -188 & 756 \\ -63 & 253 \end{pmatrix} \]
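A numerical check of this example (NumPy sketch, not part of the original slides) confirms both that the given factorization reproduces A and that the computed power is right:

```python
import numpy as np

# A, P, D as given in Example 4
A = np.array([[-2.0, 12.0],
              [-1.0, 5.0]])
P = np.array([[3.0, 4.0],
              [1.0, 1.0]])
D = np.diag([2.0, 1.0])

# A^6 = P D^6 P^{-1}
A6 = P @ np.diag(np.diag(D) ** 6) @ np.linalg.inv(P)
```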
49. Example 6, section 5.3
The matrix A is factored in the form PDP^{-1}. Find the eigenvalues of A and a basis for each eigenspace.
\[ \begin{pmatrix} 4 & 0 & -2 \\ 2 & 5 & 4 \\ 0 & 0 & 5 \end{pmatrix} = \begin{pmatrix} -2 & 0 & -1 \\ 0 & 1 & 2 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 2 & 1 & 4 \\ -1 & 0 & -2 \end{pmatrix} \]
Solution: The eigenvalues of A are the entries of the diagonal matrix D. Here the eigenvalues are λ = 5, 5, 4. Note that 5 has multiplicity 2 (it is repeated).
The eigenvectors for λ = 5 are the first 2 columns of P:
\[ \begin{pmatrix} -2 \\ 0 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \]
An eigenvector for λ = 4 is the last column of P:
\[ \begin{pmatrix} -1 \\ 2 \\ 0 \end{pmatrix} \]
53. Steps to Diagonalize an n × n Matrix
1. First find the eigenvalues of A using the characteristic equation (from section 5.2). Eigenvalues will be provided in the problem for difficult 3 × 3 matrices and for larger matrices that are not triangular.
2. Find the eigenvectors for each eigenvalue (based on section 5.1).
3. Make sure you have n linearly independent eigenvectors. Otherwise you cannot diagonalize.
4. If you are successful with step 3, write P and D carefully. (Make sure that the columns of P and the diagonal entries of D correspond to each other.)
5. For a 2 × 2 matrix, compute P^{-1} and verify that PDP^{-1} = A. For 3 × 3 and larger matrices, compute the products AP and PD and make sure they are exactly the same.
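The steps above can be sketched as a small routine. This is an illustrative NumPy version (the helper name `diagonalize` and the rank tolerance are our assumptions, not from the slides); `np.linalg.eig` replaces the hand computation in steps 1 and 2:

```python
import numpy as np

def diagonalize(A, tol=1e-10):
    """Return (P, D) with A = P D P^{-1}, or raise if A is defective."""
    eigvals, P = np.linalg.eig(A)               # steps 1-2, numerically
    n = A.shape[0]
    # Step 3: we need n linearly independent eigenvector columns
    if np.linalg.matrix_rank(P, tol=tol) < n:
        raise ValueError("fewer than n independent eigenvectors")
    D = np.diag(eigvals)                        # step 4: matching order
    assert np.allclose(A @ P, P @ D)            # step 5: check AP = PD
    return P, D

P, D = diagonalize(np.array([[2.0, 3.0], [4.0, 1.0]]))
```

The rank check is the numerical analogue of step 3; for a defective matrix the eigenvector columns returned by `eig` are (nearly) dependent and the routine raises.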
58. Important
Theorem
An n × n matrix with n distinct eigenvalues is diagonalizable.
This is because (from section 5.1):
Theorem
Eigenvectors corresponding to distinct eigenvalues are linearly independent.
60. Important
1. If there are no repeated eigenvalues, diagonalization is guaranteed.
2. The presence of repeated eigenvalues does not immediately mean that diagonalization fails.
3. If we can get enough linearly independent eigenvectors from the repeated eigenvalue, we can still diagonalize.
4. For example, suppose a 3 × 3 matrix has eigenvalues 2, 2, and 4. If we can get 2 linearly independent eigenvectors for the eigenvalue 2, we are fine. If the eigenvalue 2 gives only one independent eigenvector, diagonalization fails.
65. Example 8, section 5.3
Diagonalize A = \begin{pmatrix} 5 & 1 \\ 0 & 5 \end{pmatrix} if possible.
Solution: What are the eigenvalues of A? We could write the characteristic equation and solve if necessary, but look carefully at A: it is triangular, so the eigenvalues are λ = 5, 5.
Since 5 is a repeated eigenvalue, there is a possibility that diagonalization may fail, but we have to find the eigenvectors to confirm this. Start with the matrix A − 5I:
\[ A - 5I = \begin{pmatrix} 5 & 1 \\ 0 & 5 \end{pmatrix} - \begin{pmatrix} 5 & 0 \\ 0 & 5 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \]
From the first row, x_2 = 0, and x_1 is free. Thus an eigenvector is
\[ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} x_1 \\ 0 \end{pmatrix} = x_1 \begin{pmatrix} 1 \\ 0 \end{pmatrix} \]
Fix x_1 = 1, and an eigenvector is \begin{pmatrix} 1 \\ 0 \end{pmatrix}.
We are unable to find another eigenvector for λ = 5 that would give us 2 linearly independent eigenvectors. So A is NOT diagonalizable.
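The shortage of eigenvectors here can be seen numerically: the eigenspace for λ = 5 is the null space of A − 5I, and its dimension is 2 minus the rank of A − 5I (NumPy sketch, not part of the original slides):

```python
import numpy as np

# Example 8: triangular A with the repeated eigenvalue 5
A = np.array([[5.0, 1.0],
              [0.0, 5.0]])

# dim(eigenspace for lambda = 5) = 2 - rank(A - 5I);
# here it comes out to 1, so A cannot supply 2 independent eigenvectors
geom_mult = 2 - np.linalg.matrix_rank(A - 5.0 * np.eye(2))
```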
72. Example 10, section 5.3
Diagonalize A = \begin{pmatrix} 2 & 3 \\ 4 & 1 \end{pmatrix} if possible.
Solution: We have to write the characteristic equation and solve it to find the eigenvalues. So,
\[ \begin{vmatrix} 2-\lambda & 3 \\ 4 & 1-\lambda \end{vmatrix} = 0 \]
\[ \implies (2-\lambda)(1-\lambda) - 12 = 0 \implies 2 - 3\lambda + \lambda^2 - 12 = 0 \implies \lambda^2 - 3\lambda - 10 = 0 \]
\[ \implies (\lambda - 5)(\lambda + 2) = 0 \implies \lambda = 5, \ \lambda = -2 \]
Since we have distinct eigenvalues, we can surely diagonalize A. First find an eigenvector for each eigenvalue.
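The eigenvalue computation above can be cross-checked by finding the roots of the characteristic polynomial λ² − 3λ − 10 numerically (NumPy sketch, not part of the original slides):

```python
import numpy as np

# Coefficients of lambda^2 - 3*lambda - 10, highest degree first
roots = np.roots([1.0, -3.0, -10.0])
```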
83. Example 10, section 5.3
For λ = −2,
\[ A + 2I = \begin{pmatrix} 2 & 3 \\ 4 & 1 \end{pmatrix} + \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} 4 & 3 \\ 4 & 3 \end{pmatrix} \xrightarrow{R2 - R1} \begin{pmatrix} 4 & 3 \\ 0 & 0 \end{pmatrix} \]
x_2 is a free variable, and from the first row, x_1 = -\frac{3}{4} x_2:
\[ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -\frac{3}{4} x_2 \\ x_2 \end{pmatrix} = x_2 \begin{pmatrix} -\frac{3}{4} \\ 1 \end{pmatrix} \]
Pick x_2 = 4, and an eigenvector for λ = −2 is \begin{pmatrix} -3 \\ 4 \end{pmatrix}.
86. Example 10, section 5.3
We can now write P using these 2 eigenvectors as columns:
\[ P = \begin{pmatrix} 1 & -3 \\ 1 & 4 \end{pmatrix} \]
D has the eigenvalues as its diagonal entries, in the same order:
\[ D = \begin{pmatrix} 5 & 0 \\ 0 & -2 \end{pmatrix} \]
Also, since det P = 7,
\[ P^{-1} = \begin{pmatrix} 4/7 & 3/7 \\ -1/7 & 1/7 \end{pmatrix} \]
As a check,
\[ PDP^{-1} = \begin{pmatrix} 1 & -3 \\ 1 & 4 \end{pmatrix} \begin{pmatrix} 5 & 0 \\ 0 & -2 \end{pmatrix} \begin{pmatrix} 4/7 & 3/7 \\ -1/7 & 1/7 \end{pmatrix} = \begin{pmatrix} 5 & 6 \\ 5 & -8 \end{pmatrix} \begin{pmatrix} 4/7 & 3/7 \\ -1/7 & 1/7 \end{pmatrix} = \begin{pmatrix} 2 & 3 \\ 4 & 1 \end{pmatrix} = A \]
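This final check can also be done numerically (NumPy sketch, not part of the original slides):

```python
import numpy as np

# P and D assembled in Example 10
A = np.array([[2.0, 3.0],
              [4.0, 1.0]])
P = np.array([[1.0, -3.0],
              [1.0, 4.0]])
D = np.diag([5.0, -2.0])

detP = np.linalg.det(P)           # should be 7
recon = P @ D @ np.linalg.inv(P)  # should reproduce A
```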
90. Example 12, section 5.3
Diagonalize A = \begin{pmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{pmatrix} if possible, given that λ = 2, 8 are the eigenvalues.
Solution: Only 2 eigenvalues, λ = 2 and λ = 8, are given. This means one of them must be repeated. One way to check is to compare the trace of the matrix, 4 + 4 + 4 = 12, with the sum of the eigenvalues, 2 + 8 + ?. Since these must be equal, ? must be 2.
Since the eigenvalue 2 is repeated, it is possible (but not yet certain) that A may not be diagonalizable. Finding the eigenvectors for λ = 2 is the only way to find out.
99. Example 12, section 5.3
This means A is diagonalizable. We still have to find an eigenvector for λ = 8. For λ = 8,
\[ A - 8I = \begin{pmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{pmatrix} - \begin{pmatrix} 8 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 8 \end{pmatrix} = \begin{pmatrix} -4 & 2 & 2 \\ 2 & -4 & 2 \\ 2 & 2 & -4 \end{pmatrix} \]
Divide all rows by 2 and interchange the first 2 rows:
\[ \begin{pmatrix} 1 & -2 & 1 \\ -2 & 1 & 1 \\ 1 & 1 & -2 \end{pmatrix} \xrightarrow[R3 - R1]{R2 + 2R1} \begin{pmatrix} 1 & -2 & 1 \\ 0 & -3 & 3 \\ 0 & 3 & -3 \end{pmatrix} \xrightarrow{R3 + R2} \begin{pmatrix} 1 & -2 & 1 \\ 0 & -3 & 3 \\ 0 & 0 & 0 \end{pmatrix} \]
x_3 is a free variable. From the second row, x_2 = x_3. From the first row, x_1 = 2x_2 − x_3 = x_3:
\[ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_3 \\ x_3 \\ x_3 \end{pmatrix} = x_3 \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \]
An eigenvector for λ = 8 is \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}.
106. Example 12, section 5.3
We can now write P using these 3 eigenvectors as columns:
\[ P = \begin{pmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \]
D has the eigenvalues as its diagonal entries, in the same order:
\[ D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 8 \end{pmatrix} \]
Find the products AP and PD (you must compute these carefully):
\[ AP = \begin{pmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{pmatrix} \begin{pmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} = \begin{pmatrix} -2 & -2 & 8 \\ 2 & 0 & 8 \\ 0 & 2 & 8 \end{pmatrix} \]
\[ PD = \begin{pmatrix} -1 & -1 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 8 \end{pmatrix} = \begin{pmatrix} -2 & -2 & 8 \\ 2 & 0 & 8 \\ 0 & 2 & 8 \end{pmatrix} \]
Since AP = PD, the diagonalization A = PDP^{-1} is correct.
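The AP = PD check from step 5, applied to this example numerically (NumPy sketch, not part of the original slides):

```python
import numpy as np

# A, P, D from Example 12
A = np.array([[4.0, 2.0, 2.0],
              [2.0, 4.0, 2.0],
              [2.0, 2.0, 4.0]])
P = np.array([[-1.0, -1.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
D = np.diag([2.0, 2.0, 8.0])

# Both products should equal [[-2,-2,8],[2,0,8],[0,2,8]]
AP = A @ P
PD = P @ D
```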