Lesson 16: The Spectral Theorem and Applications
Comments

  • slide 32: "If A is diagonalizable, there are n linearly independent eigenvectors, so any v can be written as a linear combination of them." From my understanding, v is just any eigenvector. That would mean the eigenvectors are linearly independent and yet can be written as linear combinations of each other. As I understand it, the definition of linearly independent vectors is that they can NOT be written as linear combinations of each other. So either I have made a mistake somewhere or the slide is wrong. Please tell me where the problem is.
  • The 'v' in the first bullet point is any eigenvector, but the 'v' in the third bullet point is any vector in R^n. Sorry about the confusing reuse of the same variable. :-)
  • Oh I see, thanks a lot for the clarification!
  • slide 15: your \ref{} didn't compile successfully.
  • @waggonerc Should be equation (1).

Presentation Transcript

  • Lesson 16 (S&H, Section 14.6): The Spectral Theorem and Applications. Math 20, October 26, 2007. Announcements: Welcome, parents! Problem Set 6 is on the website, due October 31. OH: Mondays 1–2, Tuesdays 3–4, Wednesdays 1–3 (SC 323). Prob. Sess.: Sundays 6–7 (SC B-10), Tuesdays 1–2 (SC 116).
  • Outline Hatsumon Concept Review Eigenbusiness Diagonalization The Spectral Theorem The split case The symmetric case Iterations Applications Back to Fibonacci Markov chains
  • A famous math problem: "A certain man had one pair of rabbits together in a certain enclosed place, and one wishes to know how many are created from the pair in one year when it is the nature of them in a single month to bear another pair, and in the second month those born to bear also. Because the abovewritten pair in the first month bore, you will double it; there will be two pairs in one month." (Leonardo of Pisa, 1170s or 1180s–1250, a/k/a Fibonacci)
  • Diagram of rabbits: f(0) = 1, f(1) = 1, f(2) = 2, f(3) = 3, f(4) = 5, f(5) = 8
  • An equation for the rabbits. Let f(k) be the number of pairs of rabbits in month k. Each new month we have the same rabbits as last month, plus every pair of rabbits at least one month old producing a new pair of rabbits. So f(k) = f(k − 1) + f(k − 2).
  • Some Fibonacci numbers:

      k    : 0  1  2  3  4  5  6   7   8   9   10  11   12
      f(k) : 1  1  2  3  5  8  13  21  34  55  89  144  233

    Question: Can we find an explicit formula for f(k)?
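The slides contain no code, but a minimal Python sketch that generates this table directly from the recurrence gives a baseline against which to check the explicit formula derived later; nothing here is assumed beyond the recurrence above.

```python
# Generate the table f(0..n) from the recurrence
# f(k) = f(k-1) + f(k-2), with f(0) = f(1) = 1 (the rabbit convention).
def fib_table(n):
    f = [1, 1]
    for k in range(2, n + 1):
        f.append(f[k - 1] + f[k - 2])
    return f

print(fib_table(12))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
```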
  • Outline Hatsumon Concept Review Eigenbusiness Diagonalization The Spectral Theorem The split case The symmetric case Iterations Applications Back to Fibonacci Markov chains
  • Concept Review. Definition. Let A be an n × n matrix. The number λ is called an eigenvalue of A if there exists a nonzero vector x ∈ R^n such that Ax = λx. (1) Every nonzero vector satisfying (1) is called an eigenvector of A associated with the eigenvalue λ.
  • Diagonalization Procedure: (1) Find the eigenvalues and eigenvectors. (2) Arrange the eigenvectors in a matrix P and the corresponding eigenvalues in a diagonal matrix D. (3) If you have "enough" eigenvectors (that is, one for each column of A), the original matrix is diagonalizable and equal to PDP^{-1}. Pitfalls: repeated eigenvalues, nonreal eigenvalues.
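As an illustration of this procedure, here is a minimal NumPy sketch; the matrix A is a made-up example with distinct eigenvalues, not one from the lesson.

```python
import numpy as np

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding eigenvectors.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)

# If P is invertible (i.e., we have "enough" eigenvectors),
# then A = P D P^{-1}.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```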
  • Outline Hatsumon Concept Review Eigenbusiness Diagonalization The Spectral Theorem The split case The symmetric case Iterations Applications Back to Fibonacci Markov chains
  • Question: Under what conditions on A would you be able to guarantee that A is diagonalizable?
  • Theorem (Baby Spectral Theorem). Suppose A_{n×n} has n distinct real eigenvalues. Then A is diagonalizable.
  • Theorem (Spectral Theorem for Symmetric Matrices). Suppose A_{n×n} is symmetric, that is, A^T = A. Then A is diagonalizable. In fact, the eigenvectors can be chosen to be pairwise orthogonal with length one, which means that P^{-1} = P^T. Thus a symmetric matrix can be diagonalized as A = PDP^T.
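A quick NumPy illustration of the symmetric case, using np.linalg.eigh (the symmetric eigensolver) and a made-up symmetric matrix; the checks confirm that P is orthogonal and that A = PDP^T.

```python
import numpy as np

# For a symmetric matrix, eigh returns real eigenvalues and an
# orthonormal set of eigenvectors as the columns of P.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)

print(np.allclose(P.T @ P, np.eye(2)))  # True: columns are orthonormal
print(np.allclose(A, P @ D @ P.T))      # True: A = P D P^T
```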
  • Powers of diagonalizable matrices. Remember, if A is diagonalizable then

      A^k = (PDP^{-1})^k = (PDP^{-1})(PDP^{-1}) \cdots (PDP^{-1})    (k factors)
          = PD(P^{-1}P)D(P^{-1}P) \cdots D(P^{-1}P)DP^{-1}
          = PD^k P^{-1}
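The same identity in NumPy, reusing the made-up matrix from the diagonalization sketch above: raising D to the k-th power just raises each diagonal entry to the k-th power.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)
k = 10
# A^k = P D^k P^{-1}; D^k is computed entrywise on the diagonal.
Ak = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)

print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```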
  • Another way to look at it. If v is an eigenvector corresponding to eigenvalue λ, then A^k v = λ^k v. If v_1, ..., v_n are eigenvectors with eigenvalues λ_1, ..., λ_n, then

      A^k (c_1 v_1 + \cdots + c_n v_n) = c_1 \lambda_1^k v_1 + \cdots + c_n \lambda_n^k v_n

    If A is diagonalizable, there are n linearly independent eigenvectors, so any vector v in R^n can be written as a linear combination of them.
  • Outline Hatsumon Concept Review Eigenbusiness Diagonalization The Spectral Theorem The split case The symmetric case Iterations Applications Back to Fibonacci Markov chains
  • Setting up the Fibonacci sequence. Recall the Fibonacci sequence defined by f(k+2) = f(k) + f(k+1), f(0) = 1, f(1) = 1. Let g(k) = f(k+1). Then g(k+1) = f(k+2) = f(k) + f(k+1) = f(k) + g(k). So if y(k) = \begin{pmatrix} f(k) \\ g(k) \end{pmatrix}, we have

      y(k+1) = \begin{pmatrix} f(k+1) \\ g(k+1) \end{pmatrix} = \begin{pmatrix} g(k) \\ f(k) + g(k) \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} y(k)

    So if A is this matrix, then y(k) = A^k y(0).
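A minimal sketch of this matrix formulation in NumPy; k = 12 keeps the entries well within integer range.

```python
import numpy as np

# y(k) = A^k y(0), where y(k) = (f(k), g(k)) and g(k) = f(k+1).
A = np.array([[0, 1],
              [1, 1]])
y0 = np.array([1, 1])   # (f(0), g(0)) = (f(0), f(1)) = (1, 1)

y12 = np.linalg.matrix_power(A, 12) @ y0
print(y12)  # [233 377], i.e. f(12) = 233 and f(13) = 377
```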
  • Diagonalize. The eigenvalues of A = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} are found by solving

      0 = \det \begin{pmatrix} -\lambda & 1 \\ 1 & 1-\lambda \end{pmatrix} = (-\lambda)(1-\lambda) - 1 = \lambda^2 - \lambda - 1

    The roots are φ = (1 + √5)/2 and φ̄ = (1 − √5)/2. Notice that φ + φ̄ = 1 and φφ̄ = −1. (These facts make later calculations simpler.)
  • Eigenvectors. We row reduce to find the eigenvectors:

      A - \varphi I = \begin{pmatrix} -\varphi & 1 \\ 1 & 1-\varphi \end{pmatrix} = \begin{pmatrix} -\varphi & 1 \\ 1 & \bar\varphi \end{pmatrix} \to \begin{pmatrix} -\varphi & 1 \\ 0 & 0 \end{pmatrix}

    (adding 1/φ times the first row to the second kills it, since 1/φ = −φ̄ and so φ̄ + 1/φ = 0). So \begin{pmatrix} 1 \\ \varphi \end{pmatrix} is an eigenvector for A corresponding to the eigenvalue φ. Similarly, \begin{pmatrix} 1 \\ \bar\varphi \end{pmatrix} is an eigenvector for A corresponding to the eigenvalue φ̄. So now we know that

      y(k) = c_1 \varphi^k \begin{pmatrix} 1 \\ \varphi \end{pmatrix} + c_2 \bar\varphi^k \begin{pmatrix} 1 \\ \bar\varphi \end{pmatrix}
  • What are the constants? To find c_1 and c_2, we solve

      \begin{pmatrix} 1 \\ 1 \end{pmatrix} = c_1 \begin{pmatrix} 1 \\ \varphi \end{pmatrix} + c_2 \begin{pmatrix} 1 \\ \bar\varphi \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ \varphi & \bar\varphi \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}

      \implies \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ \varphi & \bar\varphi \end{pmatrix}^{-1} \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \frac{1}{\varphi - \bar\varphi} \begin{pmatrix} 1 - \bar\varphi \\ \varphi - 1 \end{pmatrix} = \frac{1}{\sqrt{5}} \begin{pmatrix} \varphi \\ -\bar\varphi \end{pmatrix}

    using φ − φ̄ = √5, 1 − φ̄ = φ, and φ − 1 = −φ̄.
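The same solve can be checked numerically; this sketch just feeds the 2×2 system to np.linalg.solve and compares against the closed-form answer.

```python
import numpy as np

# Solve y(0) = c1*(1, phi) + c2*(1, phibar) for the constants c1, c2.
phi = (1 + np.sqrt(5)) / 2
phibar = (1 - np.sqrt(5)) / 2

M = np.array([[1.0, 1.0],
              [phi, phibar]])
c = np.linalg.solve(M, np.array([1.0, 1.0]))

print(np.allclose(c, [phi / np.sqrt(5), -phibar / np.sqrt(5)]))  # True
```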
  • Finally. Putting this all together we have

      y(k) = \frac{\varphi}{\sqrt{5}} \varphi^k \begin{pmatrix} 1 \\ \varphi \end{pmatrix} - \frac{\bar\varphi}{\sqrt{5}} \bar\varphi^k \begin{pmatrix} 1 \\ \bar\varphi \end{pmatrix}

      \begin{pmatrix} f(k) \\ g(k) \end{pmatrix} = \frac{1}{\sqrt{5}} \begin{pmatrix} \varphi^{k+1} - \bar\varphi^{k+1} \\ \varphi^{k+2} - \bar\varphi^{k+2} \end{pmatrix}

    So

      f(k) = \frac{1}{\sqrt{5}} \left[ \left( \frac{1+\sqrt{5}}{2} \right)^{k+1} - \left( \frac{1-\sqrt{5}}{2} \right)^{k+1} \right]
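A numerical check of this closed form against the table above; rounding absorbs floating-point error, since the φ̄ term shrinks to zero quickly.

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
phibar = (1 - np.sqrt(5)) / 2

# f(k) = (phi^(k+1) - phibar^(k+1)) / sqrt(5)
def f_closed(k):
    return (phi ** (k + 1) - phibar ** (k + 1)) / np.sqrt(5)

print([round(f_closed(k)) for k in range(13)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233]
```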
  • Markov Chains. Recall the setup: T is a transition matrix giving the probabilities of switching from any state to any of the other states. We seek a steady-state vector, i.e., a probability vector u such that Tu = u. This is nothing more than an eigenvector of eigenvalue 1!
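A sketch of finding the steady state numerically; the 2-state matrix T below is a made-up column-stochastic example (columns sum to 1), not one from the lesson.

```python
import numpy as np

T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
eigvals, V = np.linalg.eig(T)

# Pick the eigenvector for the eigenvalue closest to 1, then rescale
# it so its entries sum to 1, making it a probability vector.
u = V[:, np.argmin(np.abs(eigvals - 1))]
u = u / u.sum()

print(u)                      # approximately [0.6667 0.3333]
print(np.allclose(T @ u, u))  # True: Tu = u
```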
  • Theorem. If T is a regular doubly-stochastic matrix, then 1 is an eigenvalue of T, and all other eigenvalues of T have absolute value less than 1.
  • Let u be an eigenvector of eigenvalue 1, scaled so it's a probability vector. Let v_2, ..., v_n be eigenvectors corresponding to the other eigenvalues λ_2, ..., λ_n. Then for any initial state x(0) = c_1 u + c_2 v_2 + \cdots + c_n v_n, we have

      x(k) = T^k x(0) = T^k (c_1 u + c_2 v_2 + \cdots + c_n v_n) = c_1 u + c_2 \lambda_2^k v_2 + \cdots + c_n \lambda_n^k v_n

    Since |λ_i| < 1 for i ≥ 2, x(k) → c_1 u. Since each x(k) is a probability vector, c_1 = 1. Hence x(k) → u.
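The convergence itself can be watched directly by iterating the chain, with the same made-up T as above.

```python
import numpy as np

# x(k) = T^k x(0) converges to the steady state u regardless of the
# initial probability vector x(0).
T = np.array([[0.9, 0.2],
              [0.1, 0.8]])
x = np.array([1.0, 0.0])   # start entirely in state 1

for _ in range(50):
    x = T @ x

print(x)  # approximately [0.6667 0.3333], the steady-state vector u
```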