 Eigenvalue problems occur in many areas of science and engineering, such as structural analysis
 Eigenvalues are also important in analyzing numerical methods
 Theory and algorithms apply to complex matrices as well as real matrices
 With complex matrices, we use conjugate transpose, A^H, instead of usual transpose, A^T
 Matrix expands or shrinks any vector lying in direction of eigenvector by scalar factor
 Expansion or contraction factor is given by corresponding eigenvalue
 Eigenvalues and eigenvectors decompose complicated behavior of general linear transformation into simpler actions
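This scaling behavior is easy to see numerically. A minimal sketch (the matrix and vectors below are illustrative choices, not from the slides):

```python
import numpy as np

# Illustrative symmetric matrix whose eigenpairs are known in closed form:
# eigenvalues 3 and 1, with eigenvectors [1, 1] and [1, -1].
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v = np.array([1.0, 1.0])   # eigenvector for eigenvalue 3
w = np.array([1.0, 0.0])   # a generic, non-eigen direction

print(A @ v)   # [3. 3.] -- same direction, stretched by the eigenvalue 3
print(A @ w)   # [2. 1.] -- direction changes as well as length
```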
 Properties of eigenvalue problem affecting choice of algorithm and software:
◦ Are all eigenvalues needed, or only a few?
◦ Are only eigenvalues needed, or are corresponding eigenvectors also needed?
◦ Is matrix real or complex?
◦ Is matrix relatively small and dense, or large and sparse?
◦ Does matrix have any special properties, such as symmetry, or is it a general matrix?
 Condition of eigenvalue problem is sensitivity of eigenvalues and eigenvectors to changes in matrix
 Conditioning of eigenvalue problem is not same as conditioning of solution to linear system for same matrix
 Different eigenvalues and eigenvectors are not necessarily equally sensitive to perturbations in matrix
 Linear Algebra and Eigenvalues
 Orthogonal Matrices and Similarity Transformations
 The Power Method
 Householder’s Method
 The QR Algorithm
 Singular Value Decomposition
 Survey of Methods and Software
The Power method is an iterative technique used to determine the dominant eigenvalue of a matrix, that is, the eigenvalue with the largest magnitude. By modifying the method slightly, it can also be used to determine other eigenvalues. One useful feature of the Power method is that it produces not only an eigenvalue, but also an associated eigenvector. In fact, the Power method is often applied to find an eigenvector for an eigenvalue that is determined by some other means.
* Iterative Solutions:
 Highest Eigenvalue: Power Method
 Lowest Eigenvalue: Inverse Power Method
 Other Eigenvalues: Eigenvalue Substitution
To apply the Power method, we assume that the n × n matrix A has n eigenvalues λ1, λ2, . . . , λn with an associated collection of linearly independent eigenvectors {v(1), v(2), v(3), . . . , v(n)}. Moreover, we assume that A has precisely one eigenvalue, λ1, that is largest in magnitude, so that

|λ1| > |λ2| ≥ |λ3| ≥ · · · ≥ |λn| ≥ 0.

If x is any vector in R^n, the fact that {v(1), v(2), v(3), . . . , v(n)} is linearly independent implies that constants β1, β2, . . . , βn exist with

x = β1 v(1) + β2 v(2) + · · · + βn v(n).
Target: y = Ax = λx

Start with the all-ones vector: x(0) = [1, 1, . . . , 1]^T

Iteration 1:
y(1) = A x(0) = λ(1) x(1)
λ(1) = element of y(1) with highest absolute value
x(1) = (1/λ(1)) y(1) = (1/λ(1)) A x(0)

Iteration 2:
y(2) = A x(1) = (1/λ(1)) A^2 x(0)
λ(2) = element of y(2) with highest absolute value
x(2) = (1/λ(2)) y(2) = (1/(λ(1) λ(2))) A^2 x(0)

. . .

In general:
y(k) = A x(k−1) → x(k) = (1/λ(k)) y(k), so x(k) is proportional to A^k x(0)
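The iteration above can be sketched in code. A minimal version, assuming the slides' normalization (divide by the entry of largest magnitude) and using the 2 × 2 matrix from the later example, whose dominant eigenvalue is 4 + √29 ≈ 9.385; the function name and tolerances are illustrative:

```python
import numpy as np

def power_method(A, max_iter=100, tol=1e-10):
    """Power method with the slides' scaling:
    y(k) = A x(k-1); lam(k) = entry of y(k) of largest magnitude;
    x(k) = y(k) / lam(k)."""
    x = np.ones(A.shape[0])                # x(0) = [1, 1, ..., 1]^T
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                          # y(k) = A x(k-1)
        lam_new = y[np.argmax(np.abs(y))]  # signed entry of largest magnitude
        x = y / lam_new                    # x(k) = y(k) / lam(k)
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, x

A = np.array([[3.0, 7.0],
              [4.0, 5.0]])
lam, v = power_method(A)
print(lam)   # ~ 9.385, the dominant eigenvalue
```

Normalizing by the signed dominant entry (rather than its absolute value) lets the iteration converge even when the dominant eigenvalue is negative.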
Assume x = d1 v1 + d2 v2 + . . . + dn vn, where the vi are linearly independent eigenvectors. Then:

Ax = λ1 d1 v1 + λ2 d2 v2 + . . . + λn dn vn
A^2 x = λ1^2 d1 v1 + λ2^2 d2 v2 + . . . + λn^2 dn vn

After k iterations:

A^k x = λ1^k d1 v1 + λ2^k d2 v2 + . . . + λn^k dn vn
(1/λ1^k) A^k x = d1 v1 + (λ2/λ1)^k d2 v2 + . . . + (λn/λ1)^k dn vn

If |λ1| is considerably larger than |λ2|, . . . , |λn|:

(1/λ1^k) A^k x → d1 v1
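The decay of the (λ2/λ1)^k terms can be observed numerically. A short check using the 2 × 2 matrix from the example below (its eigenvalue ratio |λ2/λ1| ≈ 0.148, so the non-dominant term shrinks by about that factor each step):

```python
import numpy as np

A = np.array([[3.0, 7.0],
              [4.0, 5.0]])
lam1 = 4 + np.sqrt(29)        # dominant eigenvalue, ~9.385

x = np.array([1.0, 1.0])      # x(0) with nonzero component d1 along v1
for k in range(1, 6):
    x = A @ x                 # x now holds A^k x(0)
    print(k, x / lam1**k)     # approaches d1 * v1 as k grows
```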
Target: y = A^−1 x = λ^−1 x = α x

Let B = A^−1, so Bx = αx. At iteration k:

(1/α1^k) B^k x → d1 v1

The dominant eigenvalue α of B = A^−1 corresponds to the eigenvalue λ of A of smallest absolute value. Instead of forming A^−1 explicitly, use an LU factorization of A to solve for y(k):

A y(k) = x(k−1)

Take the dominant element of y(k) as α, iterate until convergence, and then the smallest eigenvalue is λ = α^−1.
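A minimal sketch of the inverse power method as described above. One caveat: np.linalg.solve below stands in for reusing a single LU factorization of A (which is what one would do in practice); the function name and tolerances are illustrative:

```python
import numpy as np

def inverse_power_method(A, max_iter=100, tol=1e-12):
    """Power method applied to B = A^(-1): the dominant alpha of B
    corresponds to the eigenvalue of A of smallest magnitude."""
    x = np.ones(A.shape[0])
    alpha = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(A, x)            # A y(k) = x(k-1)
        alpha_new = y[np.argmax(np.abs(y))]  # dominant element of y(k)
        x = y / alpha_new
        if abs(alpha_new - alpha) < tol:
            break
        alpha = alpha_new
    return 1.0 / alpha_new, x                # smallest eigenvalue = 1/alpha

A = np.array([[3.0, 7.0],
              [4.0, 5.0]])
lam, v = inverse_power_method(A)
print(lam)   # ~ -1.385, the eigenvalue of smallest magnitude
```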
Find the smallest eigenvalue (in magnitude, sign ignored) of the following matrix:

A = [ 3  7 ]
    [ 4  5 ]   →   det(A) = −13

Solution:

det(A − λI) = det [ 3 − λ    7    ]
                  [   4    5 − λ  ]  = 0   →   λ1 = 9.38, λ2 = −1.38

A^−1 = [ −5/13    7/13 ]
       [  4/13   −3/13 ]
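The arithmetic in this example can be cross-checked numerically (the NumPy calls below are only a verification aid, not part of the method itself):

```python
import numpy as np

A = np.array([[3.0, 7.0],
              [4.0, 5.0]])

print(np.linalg.det(A))               # ~ -13
print(np.linalg.inv(A) * 13)          # ~ [[-5, 7], [4, -3]], i.e. A^(-1) above
print(np.sort(np.linalg.eigvals(A)))  # ~ [-1.385, 9.385]
```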
 The power method can be used to compute the dominant eigenvalue (real) and a corresponding eigenvector.
 Variants of the power method can compute the smallest eigenvalue or the eigenvalue closest to a given number (shift).
 General projection methods approximate the eigenvectors of a matrix with vectors belonging to a subspace of approximants whose dimension is smaller than that of the matrix.
 The subspace iteration method is a generalization of the power method that computes a given number of dominant eigenvalues and their corresponding eigenvectors.
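The "eigenvalue closest to a given number (shift)" variant mentioned above is commonly realized as shifted inverse iteration on A − σI. A sketch under that reading (the function name and the shift value are illustrative):

```python
import numpy as np

def shifted_inverse_iteration(A, sigma, max_iter=100, tol=1e-12):
    """Inverse power method on (A - sigma*I): converges to the
    eigenvalue of A closest to the shift sigma."""
    B = A - sigma * np.eye(A.shape[0])
    x = np.ones(A.shape[0])
    alpha = 0.0
    for _ in range(max_iter):
        y = np.linalg.solve(B, x)            # (A - sigma I) y(k) = x(k-1)
        alpha_new = y[np.argmax(np.abs(y))]  # dominant element of y(k)
        x = y / alpha_new
        if abs(alpha_new - alpha) < tol:
            break
        alpha = alpha_new
    return sigma + 1.0 / alpha_new, x        # eigenvalue of A nearest sigma

A = np.array([[3.0, 7.0],
              [4.0, 5.0]])   # eigenvalues ~9.385 and ~-1.385
lam, v = shifted_inverse_iteration(A, sigma=8.0)
print(lam)   # ~ 9.385, the eigenvalue nearest the shift 8
```

The shift must not coincide exactly with an eigenvalue, or A − σI is singular; in practice a nearby shift only makes convergence faster.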

Editor's Notes

  • George Mason University, Department of Mathematical Sciences