This document provides solutions to problems from Problem Set 7 in 18.06 Linear Algebra. It includes solutions to 6 problems involving eigenvalues and eigenvectors of matrices. Key details include:
- Finding eigenvalues and eigenvectors of specific matrices like A = [matrix]
- Showing that the characteristic polynomial of a matrix A, evaluated at A, equals the zero matrix (the Cayley-Hamilton theorem), using its diagonalization
- Deriving that the inverse of an invertible matrix A can be written as a polynomial function of A
- Explaining that the eigenvalues of a matrix A are also the eigenvalues of its transpose AT, while the eigenvectors may differ.
18.06 Problem Set 7 - Solutions
Due Wednesday, 07 November 2007 at 4 pm in 2-106.

Problem 1: (12=3+3+3+3) Consider the matrix

    A = [ −1   3  −1   1
          −3   5   1  −1
          10 −10 −10  14
           4  −4  −4   8 ].
(a) If one eigenvector is v1 = (1, 1, 0, 0)^T, find its eigenvalue λ1.

Solution: Av1 = (2, 2, 0, 0)^T = 2v1, thus λ1 = 2.
(b) Show that det(A) = 0. Give another eigenvalue λ2, and find the corresponding eigenvector v2.

Solution: Gauss elimination on A produces a free column, so Av = 0 has a nonzero solution and A is singular: det(A) = 0. Since the determinant is the product of all eigenvalues, one eigenvalue must be zero, so λ2 = 0. To find v2, we solve the system Av2 = 0; elimination gives one solution v2 = (2, 1, 1, 0)^T.
(c) Given the eigenvalue λ3 = 4, write down a linear system which can be solved to find the eigenvector v3.

Solution: The system is Av3 = 4v3, or (A − 4I)v3 = 0:

    [ −5   3  −1   1
      −3   1   1  −1
      10 −10 −14  14
       4  −4  −4   4 ] v3 = 0.

The solution is v3 = (0, 0, 1, 1)^T.
(d) What is the trace of A? Use this to find λ4.

Solution: The trace of A is tr(A) = −1 + 5 − 10 + 8 = 2. Since

    tr(A) = λ1 + λ2 + λ3 + λ4,

the previous parts tell us λ4 = 2 − (2 + 0 + 4) = −4.
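The eigenpairs above can be checked numerically. The following is a small sketch using NumPy (an assumed dependency, not part of the original solutions):

```python
import numpy as np

# Matrix A from Problem 1.
A = np.array([[-1,   3,  -1,  1],
              [-3,   5,   1, -1],
              [10, -10, -10, 14],
              [ 4,  -4,  -4,  8]], dtype=float)

# Eigenpairs found in parts (a)-(c): check A v = lambda v for each.
pairs = [(2.0, np.array([1.0, 1.0, 0.0, 0.0])),   # part (a)
         (0.0, np.array([2.0, 1.0, 1.0, 0.0])),   # part (b)
         (4.0, np.array([0.0, 0.0, 1.0, 1.0]))]   # part (c)
for lam, v in pairs:
    assert np.allclose(A @ v, lam * v)

# Part (d): the trace equals the sum of all four eigenvalues.
lam4 = np.trace(A) - (2 + 0 + 4)
print(lam4)  # -4.0
```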
Problem 2: (12=3+3+3+3)

(a) Suppose n × n matrices A, B have the same eigenvalues λ1, . . . , λn, with the same independent eigenvectors x1, . . . , xn. Show that A = B.

Solution: Let S be the eigenvector matrix and Λ the diagonal matrix consisting of the eigenvalues. Then we have A = SΛS^−1 and also B = SΛS^−1. Thus A = B.
(b) Find the 2 × 2 matrix A having eigenvalues λ1 = 2, λ2 = 5 with corresponding eigenvectors x1 = (1, 0)^T and x2 = (1, 1)^T.

Solution: We have S = [1 1; 0 1] and Λ = [2 0; 0 5]. Thus

    A = SΛS^−1 = [1 1; 0 1] [2 0; 0 5] [1 −1; 0 1] = [2 3; 0 5].
(c) Find two different 2 × 2 matrices A, B, both having the same eigenvalues λ1 = λ2 = 2, and both having the same single eigenvector (1, 0)^T.

Solution: Let A = [a b; c d]. Since (1, 0)^T is an eigenvector with eigenvalue 2, we see

    [a b; c d] (1, 0)^T = (2, 0)^T,

which implies a = 2, c = 0. Since a + d = tr(A) = 2 + 2, we see d = 2. Thus A = [2 b; 0 2]. It is very easy to check that any such matrix has the double eigenvalue λ1 = λ2 = 2. Finally, the condition that A has only one independent eigenvector forces b ≠ 0 (if b = 0, then A = 2I and every nonzero vector is an eigenvector).

Similarly B has the same form, so we can take different nonzero values of b for A and B. For example, A = [2 1; 0 2] and B = [2 3; 0 2].
(d) Find a matrix which has two different sets of independent eigenvectors.

Solution: For the identity matrix, any set of independent vectors is a set of independent eigenvectors.
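The construction in part (b) can be replayed numerically. This sketch (assuming NumPy) simply forms A = SΛS^−1 from the given eigenpairs:

```python
import numpy as np

# Part (b): columns of S are the eigenvectors x1 = (1,0)^T and x2 = (1,1)^T,
# and Lambda holds the matching eigenvalues 2 and 5 on its diagonal.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Lam = np.diag([2.0, 5.0])

A = S @ Lam @ np.linalg.inv(S)
print(A)  # matches the hand computation [2 3; 0 5]
```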
Problem 3: (12=3+3+3+3) Let A = [1 4; 2 3].

(a) Find all eigenvalues and corresponding eigenvectors of A.

Solution: The characteristic equation is

    λ^2 − 4λ − 5 = 0,

thus the eigenvalues are λ1 = 5, λ2 = −1.

For λ1 = 5, the eigenvector v1 is a solution to [−4 4; 2 −2] v1 = 0, which implies v1 = (1, 1)^T.

For λ2 = −1, the eigenvector v2 is a solution to [2 4; 2 4] v2 = 0, which implies v2 = (−2, 1)^T.
(b) Calculate A^100 (not by multiplying A 100 times!).

Solution: We have

    A = [1 −2; 1 1] [5 0; 0 −1] [1 −2; 1 1]^−1,

thus, using (−1)^100 = 1,

    A^100 = [1 −2; 1 1] [5 0; 0 −1]^100 [1 −2; 1 1]^−1
          = [1 −2; 1 1] [5^100 0; 0 1] [1/3 2/3; −1/3 1/3]
          = [ (5^100 + 2)/3    (2·5^100 − 2)/3
              (5^100 − 1)/3    (2·5^100 + 1)/3 ].
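The closed form for A^100 can be verified exactly with Python's arbitrary-precision integers (a sketch, not part of the original solution):

```python
# Multiply 2x2 integer matrices exactly (Python ints never overflow).
def matmul2(X, Y):
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

A = [[1, 4], [2, 3]]

# Brute-force A^100 by repeated multiplication (fine for a one-off check).
P = [[1, 0], [0, 1]]
for _ in range(100):
    P = matmul2(P, A)

# Closed form from the diagonalization; the divisions are exact
# because 5^100 is congruent to 1 mod 3.
t = 5**100
closed = [[(t + 2) // 3, (2*t - 2) // 3],
          [(t - 1) // 3, (2*t + 1) // 3]]
assert P == closed
```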
(c) Find all eigenvalues and corresponding eigenvectors of A^3 − A + I.

Solution: Using the diagonalization above, we have

    A^3 − A + I = [1 −2; 1 1] ([5 0; 0 −1]^3 − [5 0; 0 −1] + [1 0; 0 1]) [1 −2; 1 1]^−1
                = [1 −2; 1 1] ([125 0; 0 −1] − [5 0; 0 −1] + [1 0; 0 1]) [1 −2; 1 1]^−1
                = [1 −2; 1 1] [121 0; 0 1] [1 −2; 1 1]^−1.

Thus the eigenvalues of A^3 − A + I are

    λ1 = 121 = 5^3 − 5 + 1,    λ2 = 1 = (−1)^3 − (−1) + 1.

The corresponding eigenvectors are still v1 = (1, 1)^T and v2 = (−2, 1)^T.
(d) For any polynomial function f, what are the eigenvalues and eigenvectors of
f(A)? Prove your statement.
Solution The eigenvalues of f(A) are f(5) and f(−1), with the same eigenvectors as above. To prove this, write
f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ··· + a₁x + a₀.
Then
f(A) = aₙAⁿ + aₙ₋₁Aⁿ⁻¹ + ··· + a₁A + a₀I.
If λ is an eigenvalue of A with eigenvector v, i.e. Av = λv, then we have
A²v = A(Av) = A(λv) = λAv = λ²v,
and in general, by induction,
Aᵏv = λᵏv.
This implies
f(A)v = aₙλⁿv + aₙ₋₁λⁿ⁻¹v + ··· + a₁λv + a₀v = f(λ)v,
so v is also an eigenvector of f(A), corresponding to the eigenvalue f(λ).
(Thanks to Brian: in the special case f(5) = f(−1), any nonzero vector in the plane is an eigenvector. In this case it is easy to see that f(A) is in fact a scalar multiple of the identity.)
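This fact is easy to check numerically; the following NumPy sketch (an illustration, not part of the original solution) uses f(x) = x³ − x + 1 from part (c):

```python
import numpy as np

# For f(x) = x^3 - x + 1, f(A) should have eigenvalues f(5) = 121 and
# f(-1) = 1, with the same eigenvectors v1 = (1, 1), v2 = (-2, 1) as A.
A = np.array([[1.0, 4.0], [2.0, 3.0]])
fA = np.linalg.matrix_power(A, 3) - A + np.eye(2)

v1 = np.array([1.0, 1.0])
v2 = np.array([-2.0, 1.0])

assert np.allclose(fA @ v1, 121 * v1)   # eigenvalue f(5) = 121
assert np.allclose(fA @ v2, 1 * v2)     # eigenvalue f(-1) = 1
print(sorted(np.linalg.eigvals(fA).real))  # [1.0, 121.0] up to roundoff
```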
Problem 4: (10=4+3+3) For simplicity, assume that A has n independent eigenvectors, and is thus diagonalizable. (However, the statement below also holds for general matrices.)
(a) Since A is diagonalizable, we can write A = SΛS⁻¹ as in class. Substitute this into the characteristic polynomial
p(A) = (A − λ₁I)(A − λ₂I) ··· (A − λₙI)
to show that p(A) = 0. (This is called the Cayley–Hamilton theorem.)
Solution We have
p(A) = (A − λ₁I)(A − λ₂I) ··· (A − λₙI)
= (SΛS⁻¹ − Sλ₁IS⁻¹)(SΛS⁻¹ − Sλ₂IS⁻¹) ··· (SΛS⁻¹ − SλₙIS⁻¹)
= S(Λ − λ₁I)S⁻¹ · S(Λ − λ₂I)S⁻¹ · S ··· S⁻¹ · S(Λ − λₙI)S⁻¹
= S · diag(0, λ₂ − λ₁, …, λₙ − λ₁) · diag(λ₁ − λ₂, 0, …, λₙ − λ₂) ··· diag(λ₁ − λₙ, λ₂ − λₙ, …, 0) · S⁻¹
= S · 0 · S⁻¹ = 0,
since the k-th diagonal factor has a zero in the k-th position, so the product of these diagonal matrices is the zero matrix.
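A quick numerical sanity check (an illustrative NumPy sketch, not part of the original solution) for a random matrix:

```python
import numpy as np

# Verify Cayley-Hamilton numerically: p(A) = prod_k (A - λ_k I) = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
evals = np.linalg.eigvals(A)

p_of_A = np.eye(4, dtype=complex)
for lam in evals:
    p_of_A = p_of_A @ (A - lam * np.eye(4))

print(np.allclose(p_of_A, 0))  # True up to roundoff
```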
(b) Test the Cayley–Hamilton theorem for the matrix A = [1 1; 1 0]. Then write A⁻¹ as a polynomial function of A. [Hint: move the I term to one side of the equation p(A) = 0 to write A · (something) = I.]
Solution The characteristic polynomial of A is p(λ) = λ² − λ − 1. Thus we need to check A² − A − I = 0:
A² − A − I = [1 1; 1 0]² − [1 1; 1 0] − [1 0; 0 1]
= [2 1; 1 1] − [1 1; 1 0] − [1 0; 0 1]
= [0 0; 0 0].
From A² − A − I = 0 we have A² − A = I; in other words, A(A − I) = I. We conclude that A⁻¹ = A − I.
(c) Use the Cayley–Hamilton theorem above to show that, for any invertible matrix A, A⁻¹ can always be written as a polynomial in A. (Inverting by elimination is usually much more practical, however!)
Solution Suppose A is invertible; then det A ≠ 0. From the Cayley–Hamilton theorem we have
p(A) = (A − λ₁I)(A − λ₂I) ··· (A − λₙI) = 0.
Write
p(x) = xⁿ + aₙ₋₁xⁿ⁻¹ + ··· + a₁x + a₀;
then
a₀ = (−1)ⁿ λ₁λ₂ ··· λₙ = (−1)ⁿ det(A) ≠ 0,
and we have
p(A) = Aⁿ + aₙ₋₁Aⁿ⁻¹ + ··· + a₁A + a₀I = 0.
Rewrite this as
−(1/a₀)Aⁿ − (aₙ₋₁/a₀)Aⁿ⁻¹ − ··· − (a₁/a₀)A = I,
or
A · (−(1/a₀)Aⁿ⁻¹ − (aₙ₋₁/a₀)Aⁿ⁻² − ··· − (a₁/a₀)I) = I,
which implies
A⁻¹ = −(1/a₀)Aⁿ⁻¹ − (aₙ₋₁/a₀)Aⁿ⁻² − ··· − (a₁/a₀)I.
This completes the proof.
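The proof translates directly into a computation. Here is a NumPy sketch (illustrative, not part of the original solution) for the matrix of part (b), using `np.poly` to obtain the characteristic polynomial coefficients:

```python
import numpy as np

# A^{-1} as a polynomial in A: from p(A) = 0,
# A^{-1} = -(1/a0) (A^{n-1} + a_{n-1} A^{n-2} + ... + a1 I).
A = np.array([[1.0, 1.0], [1.0, 0.0]])
n = A.shape[0]
coeffs = np.poly(A)      # [1, a_{n-1}, ..., a1, a0] of the char. polynomial
a0 = coeffs[-1]

inv_poly = np.zeros_like(A)
for k, c in enumerate(coeffs[:-1]):              # c multiplies x^{n-k}
    inv_poly += c * np.linalg.matrix_power(A, n - 1 - k)
inv_poly *= -1.0 / a0

print(np.allclose(inv_poly, np.linalg.inv(A)))  # True
```

For this A the polynomial collapses to A⁻¹ = A − I, matching part (b).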
Problem 5: (10=5+5) (a) If A (an n × n matrix) has n nonnegative eigenvalues λₖ and independent eigenvectors xₖ, and if we define "√A" as the matrix with eigenvalues √λₖ and the same eigenvectors, show that (√A)² = A.
Solution We can decompose A as A = SΛS⁻¹, where S is the matrix whose columns are the eigenvectors of A, and
Λ = diag(λ₁, λ₂, …, λₙ)
is the diagonal eigenvalue matrix. Then by definition, √A is the matrix with S as its eigenvector matrix and √λᵢ as its eigenvalues. In other words, √A = S√ΛS⁻¹, where
√Λ = diag(√λ₁, √λ₂, …, √λₙ).
Obviously √Λ·√Λ = Λ, so
(√A)² = √A·√A = S√ΛS⁻¹ · S√ΛS⁻¹ = SΛS⁻¹ = A.
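A small NumPy check (illustrative; the symmetric matrix below is an arbitrary choice with positive eigenvalues, not one from the problem set):

```python
import numpy as np

# Build sqrt(A) = S sqrt(Λ) S^{-1} and verify that its square is A.
A = np.array([[2.0, 1.0], [1.0, 2.0]])   # eigenvalues 1 and 3
evals, S = np.linalg.eig(A)
sqrtA = S @ np.diag(np.sqrt(evals)) @ np.linalg.inv(S)

print(np.allclose(sqrtA @ sqrtA, A))  # True
```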
(b) Given √A from part (a), is the only other matrix whose square is A given by −√A? Why or why not?
Solution No. We can change some (but not all) of the √λᵢ above to −√λᵢ; we then get a new matrix S√ΛS⁻¹ whose square is A, but which is neither √A nor −√A. So in general there are at least 2ⁿ possible matrices whose square is A.
As an example, we can take √Λ to be diag(−√λ₁, √λ₂, …, √λₙ).
Problem 6: (10) If λ is an eigenvalue of A, is it also an eigenvalue of Aᵀ? What about the eigenvectors? Justify your answers.
Solution The eigenvalues of A must be eigenvalues of Aᵀ. In fact, the characteristic equation for Aᵀ is det(Aᵀ − λI) = 0. Since
det(Aᵀ − λI) = det((Aᵀ − λI)ᵀ) = det(A − λI),
the characteristic equation for Aᵀ coincides with the characteristic equation for A. This implies that the eigenvalues of Aᵀ are the same as the eigenvalues of A.
However, the eigenvectors of Aᵀ don't have to be the same as the eigenvectors of A. In fact, the eigenvector of A corresponding to eigenvalue λ lies in the nullspace of A − λI, while the eigenvector of Aᵀ corresponding to λ lies in the left nullspace of A − λI, so they are different in general.
As an example, let
A = [1 0 0; 1 1 0; 0 0 1];
then both A and Aᵀ have eigenvalues λ₁ = λ₂ = λ₃ = 1. The eigenvectors of A are
v₁ = (0, 1, 0)ᵀ, v₂ = (0, 0, 1)ᵀ,
while the eigenvectors of Aᵀ are
v₁ = (1, 0, 0)ᵀ, v₂ = (0, 0, 1)ᵀ.
Finally, we point out that the number of independent eigenvectors of Aᵀ is the same as the number of independent eigenvectors of A. In fact, for each eigenvalue λ, the numbers of independent eigenvectors of Aᵀ and of A both equal n − rank(A − λI), since rank(A − λI) = rank(Aᵀ − λI).
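The example can be verified by machine; the following NumPy sketch (an illustration, not part of the original solution) uses the 3 × 3 matrix above:

```python
import numpy as np

# A and A^T share eigenvalues but can have different eigenvectors.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# Same characteristic polynomial, hence same eigenvalues (all 1 here):
assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(A.T)))

# v = (1, 0, 0) is an eigenvector of A^T but not of A:
v = np.array([1.0, 0.0, 0.0])
print(np.allclose(A.T @ v, v), np.allclose(A @ v, v))  # True False
```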
Problem 7: (10=2+2+2+2+2) This problem refers to similar matrices in the sense defined by section 6.6 (page 343) of the text.
(a) A, B, and C are square matrices where A is similar to B and B is similar to C.
Is A similar to C? Why or why not?
Solution Yes, A is similar to C.
By definition, A similar to B means A = PBP⁻¹ for some invertible P, and B similar to C means B = QCQ⁻¹. Thus we have A = PBP⁻¹ = PQCQ⁻¹P⁻¹ = (PQ)C(PQ)⁻¹, which implies that A is similar to C.
(b) If A is similar to Q, where Q is an orthogonal matrix (QᵀQ = I), is A orthogonal too? Why or why not?
Solution No, A doesn't have to be orthogonal.
Counterexample: it is easy to check that Q = [0 1; 1 0] is orthogonal, while
A = [1 1; 0 1] · Q · [1 1; 0 1]⁻¹ = [1 0; 1 −1]
is not an orthogonal matrix.
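The counterexample is easy to confirm numerically (an illustrative NumPy sketch, not part of the original solution):

```python
import numpy as np

# Similarity does not preserve orthogonality: A = M Q M^{-1}.
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
M = np.array([[1.0, 1.0], [0.0, 1.0]])
A = M @ Q @ np.linalg.inv(M)

print(np.allclose(Q.T @ Q, np.eye(2)))  # True:  Q is orthogonal
print(np.allclose(A.T @ A, np.eye(2)))  # False: A is not
print(A)                                 # entries [[1, 0], [1, -1]]
```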
(c) If A is similar to U where U is triangular, is A triangular too? Why or why not?
Solution No.
One can use the same example above: A is triangular, Q is similar to A but Q
is not triangular.
(d) If A is similar to B where B is symmetric, is A symmetric too? Why or why
not?
Solution No.
One can use the same example in part (b): Q is symmetric, A is similar to Q
but A is not symmetric.
(e) For a given A, does the set of all matrices similar to A form a subspace of the
set of all matrices (under ordinary matrix addition)? Why or why not?
Solution No.
The zero matrix will not lie in this set. In fact, if the zero matrix is similar to A, then A = P0P⁻¹ = 0. In other words, the set is a subspace only if A is itself the zero matrix.
Problem 8: (10=4+3+3) The Pell numbers are the sequence p₀ = 0, p₁ = 1, pₙ = 2pₙ₋₁ + pₙ₋₂ (n > 1).
(a) Analyzing the Pell numbers in the same way that we analyzed the Fibonacci
sequence, via eigenvectors and eigenvalues of a 2 × 2 matrix, find a closed-form
expression for pn.
Solution Rewriting the recurrence in matrix form, we have
[pₙ₊₁; pₙ] = [2 1; 1 0] · [pₙ; pₙ₋₁].
The characteristic equation of the matrix above is
λ² − 2λ − 1 = 0,
so the eigenvalues are
λ₁ = 1 + √2,  λ₂ = 1 − √2.
The eigenvector corresponding to λ₁ is given by
[1 − √2, 1; 1, −1 − √2] v₁ = 0  ⟹  v₁ = (1, √2 − 1)ᵀ.
Similarly, the eigenvector corresponding to λ₂ is given by
[1 + √2, 1; 1, −1 + √2] v₂ = 0  ⟹  v₂ = (1 − √2, 1)ᵀ.
So we have
[2 1; 1 0] = S · [1 + √2, 0; 0, 1 − √2] · S⁻¹,  where S = [1, 1 − √2; √2 − 1, 1].
Thus
[2 1; 1 0]ⁿ = S · [(1 + √2)ⁿ, 0; 0, (1 − √2)ⁿ] · S⁻¹
= 1/(4 − 2√2) · [(1 + √2)ⁿ + (√2 − 1)²(1 − √2)ⁿ, (√2 − 1)((1 + √2)ⁿ − (1 − √2)ⁿ); (√2 − 1)((1 + √2)ⁿ − (1 − √2)ⁿ), (√2 − 1)²(1 + √2)ⁿ + (1 − √2)ⁿ].
So pₙ = (1/(2√2)) · ((1 + √2)ⁿ − (1 − √2)ⁿ).
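The closed form can be checked directly against the recurrence (a Python sketch, not part of the original solution):

```python
import numpy as np

# Compare p_n = ((1+√2)^n - (1-√2)^n) / (2√2) with the recurrence
# p_0 = 0, p_1 = 1, p_n = 2 p_{n-1} + p_{n-2}.
def pell_recurrence(n):
    p_prev, p = 0, 1
    for _ in range(n - 1):
        p_prev, p = p, 2 * p + p_prev
    return p if n > 0 else 0

def pell_closed(n):
    s = np.sqrt(2.0)
    return ((1 + s)**n - (1 - s)**n) / (2 * s)

assert all(round(pell_closed(n)) == pell_recurrence(n) for n in range(11))
print([pell_recurrence(n) for n in range(8)])  # [0, 1, 2, 5, 12, 29, 70, 169]
```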
(b) Prove that the ratio (pₙ₋₁ + pₙ)/pₙ tends to √2 as n grows.
Solution Since −1 < 1 − √2 < 0, from the above expression we see that, for large n,
pₙ = (1/(2√2)) · ((1 + √2)ⁿ − (1 − √2)ⁿ)
approaches (1 + √2)ⁿ/(2√2). Thus pₙ₋₁/pₙ tends to 1/(1 + √2) = √2 − 1 (after rationalizing the denominator). This implies that (pₙ₋₁ + pₙ)/pₙ = 1 + pₙ₋₁/pₙ tends to √2 as n grows to ∞.
(c) Using MATLAB or a calculator, evaluate the ratio from (b) up to n = 10 or so, and subtract √2 to find the difference for each n. Compare this method of computing √2 to Newton's method from 18.01:¹ starting with x = 1, repeatedly replace x by (x + 2/x)/2. Which technique goes faster to √2?
Solution We use MATLAB. The ratios from (b) are given by
>> x=0;y=1;z=y;
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
    0.0858
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
   -0.0142
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
    0.0025
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
   -4.2046e-04
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
    7.2152e-05
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
   -1.2379e-05
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
    2.1239e-06
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
   -3.6440e-07
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
    6.2522e-08
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
   -1.0727e-08
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
    1.8405e-09
>> y=2*y+x;x=z;z=y;(x+y)/y-sqrt(2)
ans =
   -3.1577e-10

¹The square root √y is the root of f(x) = x² − y, and Newton's method replaces x by x − f(x)/f′(x) = (x + y/x)/2. Actually, this technique for square roots was known to the Babylonians 3000 years ago.
while Newton's method gives
>> x=1;
>> x=(x+2/x)/2;x-sqrt(2)
ans =
    0.0858
>> x=(x+2/x)/2;x-sqrt(2)
ans =
    0.0025
>> x=(x+2/x)/2;x-sqrt(2)
ans =
    2.1239e-06
>> x=(x+2/x)/2;x-sqrt(2)
ans =
    1.5947e-12
>> x=(x+2/x)/2;x-sqrt(2)
ans =
   -2.2204e-16
(all further iterations return -2.2204e-16, i.e. machine precision).
Newton's method is faster. In fact, the difference between the Pell ratios and √2 decreases exponentially (as the other eigenvector's contribution disappears), so the number of digits of accuracy (the log of the error) increases linearly with the number of iterations ("linear convergence"). In contrast, Newton's method roughly squares the error, doubling the number of significant digits with each iteration ("quadratic convergence"), which is far faster. (Of course, the error stops improving once machine precision is reached.)
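The same experiment in Python (an assumed translation of the MATLAB runs above, not part of the original solution) makes the contrast visible:

```python
import math

# Pell-ratio error vs. Newton error for sqrt(2), side by side.
p_prev, p = 0, 1
x = 1.0
for n in range(1, 6):
    p_prev, p = p, 2 * p + p_prev            # next Pell number
    pell_err = (p_prev + p) / p - math.sqrt(2)
    x = (x + 2 / x) / 2                       # one Newton step for x^2 = 2
    newton_err = x - math.sqrt(2)
    print(f"n={n}: pell_err={pell_err:+.3e}  newton_err={newton_err:+.3e}")
```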
Problem 9: (14=2+2+2+2+2+2+2) This is a MATLAB problem on symmetric matrices.
(a) Use MATLAB to construct a random 4 × 4 symmetric matrix (A = rand(4,4); A = A' * A), and find its eigenvalues via the command eig(A).
Solution The code and output:
>> A=rand(4,4);A=A’*A
A =
2.3346 1.1384 2.5606 1.4507
1.1384 0.7860 1.2743 0.9531
2.5606 1.2743 2.8147 1.6487
1.4507 0.9531 1.6487 1.8123
>> eig(A)
ans =
0.0009
0.1441
0.7379
6.8648
(b) Construct 1000 random vectors as the columns of X via X = rand(4,1000) - 0.5; and for each vector xₖ compute dₖ = xₖᵀAxₖ / xₖᵀxₖ via the command d = diag(X'*A*X) ./ diag(X'*X); (which computes a vector d of these 1000 ratios).
Solution The code:
>> X=rand(4,1000)-0.5;
>> d=diag(X’*A*X)./diag(X’*X);
(c) Find the minimum ratio dₖ via min(d) and the maximum via max(d). How do
these compare to the eigenvalues? Try increasing from 1000 to 10000 above to see
if your hypothesis holds up.
Solution The code and output:
>> min(d),max(d)
ans =
0.0122
ans =
6.8117
It seems that the minimum dₖ is close to the smallest eigenvalue of A, and the maximum dₖ is close to the largest eigenvalue of A. Moreover, the values of dₖ are bounded below and above by the smallest and largest eigenvalues, respectively.
Increasing 1000 to 10000 further supports this observation:
>> X=rand(4,10000)-0.5;
>> d=diag(X’*A*X)./diag(X’*X);
>> min(d),max(d)
ans =
0.0050
ans =
6.8451
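A NumPy version of the experiment (an assumed translation; the random seed differs, so the numbers won't match the MATLAB output) confirms the bound:

```python
import numpy as np

# Rayleigh quotients of a symmetric matrix lie between its extreme eigenvalues.
rng = np.random.default_rng(1)
B = rng.random((4, 4))
A = B.T @ B                                  # symmetric, like A'*A in MATLAB
X = rng.random((4, 10000)) - 0.5
d = np.sum(X * (A @ X), axis=0) / np.sum(X * X, axis=0)

evals = np.linalg.eigvalsh(A)                # sorted eigenvalues
print(evals[0] - 1e-9 <= d.min(), d.max() <= evals[-1] + 1e-9)  # True True
```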
(d) Find the eigenvectors of A via [S,L] = eig(A): L is the diagonal matrix Λ
of eigenvalues, and S is the matrix whose columns are the eigenvectors, so that
AS = SL. Does S have any special properties? (Hint: try looking at det(S) and
inv(S) compared to S.) (We will see in class, eventually, that these and other nice
properties come from A being symmetric.)
Solution The code and output:
>> [S,L]=eig(A)
S =
0.7196 0.1621 0.3613 0.5705
0.0416 -0.9401 -0.1444 0.3061
-0.6924 0.1247 0.3268 0.6310
0.0321 0.2729 -0.8613 0.4274
L =
0.0009 0 0 0
0 0.1441 0 0
0 0 0.7379 0
0 0 0 6.8648
>> det(S)
ans =
-1
>> inv(S)
ans =
0.7196 0.0416 -0.6924 0.0321
0.1621 -0.9401 0.1247 0.2729
0.3613 -0.1444 0.3268 -0.8613
0.5705 0.3061 0.6310 0.4274
We can see that S⁻¹ = Sᵀ, which means that S is an orthogonal matrix. In other words, for symmetric matrices, we can choose the eigenvectors so that they form an orthonormal basis!
(e) If you wanted to pick a vector x to get the maximum possible d, what would x
and the resulting d be? What about the minimum?
Solution We have seen from part (c) that the maximum possible d is the maximum eigenvalue, so we should take x to be an eigenvector corresponding to the maximum eigenvalue. Similarly, to get the minimum possible d, we take x to be an eigenvector corresponding to the minimum eigenvalue.
(f) Repeat the process for a 2 × 2 complex asymmetric A and a complex X. This
time, plot the resulting (complex) dk values in the complex plane as black dots, and
the (complex) eigenvalues as red circles:
A = rand(2,2) + i*rand(2,2);
X = rand(2,1000)-0.5+i*(rand(2,1000)-0.5);
d = diag(X’*A*X) ./ diag(X’*X);
v = eig(A)
plot(real(d), imag(d), ’k.’, real(v), imag(v), ’ro’)
You should get an elliptical region with the eigenvalues as foci.
Solution The code:
>> A=rand(2,2)+i*rand(2,2);
>> X=rand(2,1000)-0.5+i*(rand(2,1000)-0.5);
>> d=diag(X’*A*X)./diag(X’*X);
>> v=eig(A)
v =
0.4377 + 0.2842i
1.2904 + 0.8951i
>> plot(real(d),imag(d),’k.’,real(v),imag(v),’ro’)
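For reference, here is a NumPy sketch of the same computation (an assumed translation, with the plot call omitted; the checked bound, that the sampled ratios lie within the disk of radius ‖A‖₂, always contains the numerical range):

```python
import numpy as np

# Complex Rayleigh quotients x* A x / x* x sample the numerical range of A.
rng = np.random.default_rng(2)
A = rng.random((2, 2)) + 1j * rng.random((2, 2))
X = (rng.random((2, 1000)) - 0.5) + 1j * (rng.random((2, 1000)) - 0.5)
d = np.sum(X.conj() * (A @ X), axis=0) / np.sum(X.conj() * X, axis=0)
v = np.linalg.eigvals(A)

# Every ratio is bounded by the spectral norm of A:
print(np.all(np.abs(d) <= np.linalg.norm(A, 2) + 1e-12))  # True
```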
>> axis equal
The output graph (Figure 1: numerical range) shows the dₖ values filling an elliptical region in the complex plane, with the eigenvalues as the foci.
(g) It is hard to see in part (f) that the eigenvalues are foci unless they are close
together, which is unlikely. Construct a random non-symmetric A with eigenvalues
1 + 1i and 1.05 + 0.9i by performing a random 2 × 2 similarity transformation on
D=diag([1+1i,1.05+0.9i]) (i.e. construct a random S and multiply A = inv(S)
* D * S), and then repeat (f) with this A.
Solution The code:
>> S=rand(2,2);A=inv(S)*diag([1+1*i,1.05+0.9*i])*S;
>> X=rand(2,1000)-0.5+i*(rand(2,1000)-0.5);
>> d=diag(X’*A*X)./diag(X’*X);
>> v=eig(A)
v =