Toeplitz and Circulant Matrices: A review

Robert M. Gray
Department of Electrical Engineering
Stanford University
Stanford 94305, USA
rmgray@stanford.edu

Contents

Chapter 1 Introduction
  1.1 Toeplitz and Circulant Matrices
  1.2 Examples
  1.3 Goals and Prerequisites
Chapter 2 The Asymptotic Behavior of Matrices
  2.1 Eigenvalues
  2.2 Matrix Norms
  2.3 Asymptotically Equivalent Sequences of Matrices
  2.4 Asymptotically Absolutely Equal Distributions
Chapter 3 Circulant Matrices
  3.1 Eigenvalues and Eigenvectors
  3.2 Matrix Operations on Circulant Matrices
Chapter 4 Toeplitz Matrices
  4.1 Sequences of Toeplitz Matrices
  4.2 Bounds on Eigenvalues of Toeplitz Matrices
  4.3 Banded Toeplitz Matrices
  4.4 Wiener Class Toeplitz Matrices
Chapter 5 Matrix Operations on Toeplitz Matrices
  5.1 Inverses of Toeplitz Matrices
  5.2 Products of Toeplitz Matrices
  5.3 Toeplitz Determinants
Chapter 6 Applications to Stochastic Time Series
  6.1 Moving Average Processes
  6.2 Autoregressive Processes
  6.3 Factorization
Acknowledgements
References

Abstract

$$\begin{pmatrix} t_0 & t_{-1} & t_{-2} & \cdots & t_{-(n-1)} \\ t_1 & t_0 & t_{-1} & & \\ t_2 & t_1 & t_0 & \ddots & \\ \vdots & & \ddots & \ddots & \\ t_{n-1} & \cdots & & & t_0 \end{pmatrix}$$

The fundamental theorems on the asymptotic behavior of eigenvalues, inverses, and products of banded Toeplitz matrices and Toeplitz matrices with absolutely summable elements are derived in a tutorial manner. Mathematical elegance and generality are sacrificed for conceptual simplicity and insight in the hope of making these results available to engineers lacking either the background or endurance to attack the mathematical literature on the subject. By limiting the generality of the matrices considered, the essential ideas and results can be conveyed in a more intuitive manner without the mathematical machinery required for the most general cases. As an application the results are applied to the study of the covariance matrices and their factors of linear models of discrete time random processes.
1 Introduction

1.1 Toeplitz and Circulant Matrices

A Toeplitz matrix is an n × n matrix Tn = [tk,j; k, j = 0, 1, . . . , n − 1] where tk,j = tk−j, i.e., a matrix of the form

$$T_n=\begin{pmatrix} t_0 & t_{-1} & t_{-2} & \cdots & t_{-(n-1)} \\ t_1 & t_0 & t_{-1} & & \\ t_2 & t_1 & t_0 & \ddots & \\ \vdots & & \ddots & \ddots & \\ t_{n-1} & \cdots & & & t_0 \end{pmatrix}. \qquad (1.1)$$

Such matrices arise in many applications. For example, suppose that

$$x=(x_0,x_1,\ldots,x_{n-1})'=\begin{pmatrix} x_0\\ x_1\\ \vdots\\ x_{n-1}\end{pmatrix}$$

is a column vector (the prime denotes transpose) denoting an "input" and that tk is zero for k < 0. Then the vector

$$y=T_nx=\begin{pmatrix} t_0 & 0 & 0 & \cdots & 0 \\ t_1 & t_0 & 0 & & \\ t_2 & t_1 & t_0 & \ddots & \\ \vdots & & \ddots & \ddots & \\ t_{n-1} & \cdots & & & t_0 \end{pmatrix}\begin{pmatrix} x_0\\ x_1\\ x_2\\ \vdots\\ x_{n-1}\end{pmatrix}=\begin{pmatrix} t_0x_0\\ t_1x_0+t_0x_1\\ \sum_{i=0}^{2}t_{2-i}x_i\\ \vdots\\ \sum_{i=0}^{n-1}t_{n-1-i}x_i\end{pmatrix}$$

with entries

$$y_k=\sum_{i=0}^{k}t_{k-i}x_i$$

represents the output of the discrete time causal time-invariant filter h with "impulse response" tk. Equivalently, this is a matrix and vector formulation of a discrete-time convolution of a discrete time input with a discrete time filter.

As another example, suppose that {Xn} is a discrete time random process with mean function given by the expectations mk = E(Xk) and covariance function given by the expectations KX(k, j) = E[(Xk − mk)(Xj − mj)]. Signal processing theory such as prediction, estimation, detection, classification, regression, and communications and information theory are most thoroughly developed under the assumption that the mean is constant and that the covariance is Toeplitz, i.e., KX(k, j) = KX(k − j), in which case the process is said to be weakly stationary. (The terms "covariance stationary" and "second order stationary" also are used when the covariance is assumed to be Toeplitz.) In this case the n × n covariance matrices Kn = [KX(k, j); k, j = 0, 1, . . . , n − 1] are Toeplitz matrices. Much of the theory of weakly stationary processes involves applications of Toeplitz matrices.
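As a concrete numerical illustration of the filtering interpretation above (a Python sketch added here, not part of the original text; the impulse response and input are arbitrary choices), the product Tn x with the lower triangular Toeplitz matrix reproduces the first n samples of the discrete convolution of {tk} with {xk}:

    import numpy as np
    from scipy.linalg import toeplitz

    # Hypothetical impulse response t_k (zero for k < 0) and input x_k.
    t = np.array([1.0, 0.5, 0.25, 0.125])      # t_0, t_1, t_2, t_3
    x = np.array([2.0, -1.0, 0.0, 3.0])
    n = len(x)

    # Lower triangular Toeplitz matrix: first column (t_0, ..., t_{n-1}),
    # first row (t_0, 0, ..., 0) since t_k = 0 for k < 0.
    Tn = toeplitz(t[:n], np.r_[t[0], np.zeros(n - 1)])

    y_matrix = Tn @ x                     # y_k = sum_{i=0}^k t_{k-i} x_i
    y_conv = np.convolve(t, x)[:n]        # first n samples of the convolution

    print(np.allclose(y_matrix, y_conv))  # True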
Toeplitz matrices also arise in solutions to differential and integral equations, spline functions, and problems and methods in physics, mathematics, statistics, and signal processing.

A common special case of Toeplitz matrices — which will result in significant simplification and play a fundamental role in developing more general results — results when every row of the matrix is a right cyclic shift of the row above it so that tk = t−(n−k) = tk−n for k = 1, 2, . . . , n − 1. In this case the picture becomes

$$C_n=\begin{pmatrix} t_0 & t_{-1} & t_{-2} & \cdots & t_{-(n-1)} \\ t_{-(n-1)} & t_0 & t_{-1} & & \\ t_{-(n-2)} & t_{-(n-1)} & t_0 & \ddots & \\ \vdots & & \ddots & \ddots & \\ t_{-1} & t_{-2} & \cdots & & t_0 \end{pmatrix}. \qquad (1.2)$$

A matrix of this form is called a circulant matrix. Circulant matrices arise, for example, in applications involving the discrete Fourier transform (DFT) and the study of cyclic codes for error correction.

A great deal is known about the behavior of Toeplitz matrices — the most common and complete references being Grenander and Szegő [16] and Widom [33]. A more recent text devoted to the subject is Böttcher and Silbermann [5]. Unfortunately, however, the necessary level of mathematical sophistication for understanding reference [16] is frequently beyond that of one species of applied mathematician for whom the theory can be quite useful but is relatively little understood. This caste consists of engineers doing relatively mathematical (for an engineering background) work in any of the areas mentioned. This apparent dilemma provides the motivation for attempting a tutorial introduction on Toeplitz matrices that proves the essential theorems using the simplest possible and most intuitive mathematics. Some simple and fundamental methods that are deeply buried (at least to the untrained mathematician) in [16] are here made explicit.

The most famous and arguably the most important result describing Toeplitz matrices is Szegő's theorem for sequences of Toeplitz matrices {Tn} which deals with the behavior of the eigenvalues as n goes to infinity.
A complex scalar α is an eigenvalue of a matrix A if there is a nonzero vector x such that

$$Ax=\alpha x, \qquad (1.3)$$

in which case we say that x is a (right) eigenvector of A. If A is Hermitian, that is, if A∗ = A, where the asterisk denotes conjugate transpose, then the eigenvalues of the matrix are real and hence α∗ = α, where the asterisk denotes the conjugate in the case of a complex scalar. When this is the case we assume that the eigenvalues {αi} are ordered in a nonincreasing manner so that α0 ≥ α1 ≥ α2 ≥ · · ·. This eases the approximation of sums by integrals and entails no loss of generality. Szegő's theorem deals with the asymptotic behavior of the eigenvalues {τn,i; i = 0, 1, . . . , n − 1} of a sequence of Hermitian Toeplitz matrices Tn = [tk−j; k, j = 0, 1, 2, . . . , n − 1]. The theorem requires that several technical conditions be satisfied, including the existence of the Fourier series with coefficients tk related to each other by

$$f(\lambda)=\sum_{k=-\infty}^{\infty}t_ke^{ik\lambda};\quad \lambda\in[0,2\pi] \qquad (1.4)$$

$$t_k=\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)e^{-ik\lambda}\,d\lambda. \qquad (1.5)$$

Thus the sequence {tk} determines the function f and vice versa, hence the sequence of matrices is often denoted as Tn(f). If Tn(f) is Hermitian, that is, if Tn(f)∗ = Tn(f), then t−k = tk∗ and f is real-valued.

Under suitable assumptions the Szegő theorem states that

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}F(\tau_{n,k})=\frac{1}{2\pi}\int_0^{2\pi}F(f(\lambda))\,d\lambda \qquad (1.6)$$

for any function F that is continuous on the range of f. Thus, for example, choosing F(x) = x results in

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\tau_{n,k}=\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)\,d\lambda, \qquad (1.7)$$

so that the arithmetic mean of the eigenvalues of Tn(f) converges to the integral of f.
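Before continuing, here is a small numerical check of the Szegő limit (a Python sketch, not in the original): for the Wiener-class symbol f(λ) = 2 + 2 cos λ, i.e. t0 = 2, t±1 = 1 and tk = 0 otherwise (an arbitrary choice), equation (1.6) with F(x) = x² says the average of the squared eigenvalues of Tn(f) should approach (1/2π)∫f(λ)² dλ = 6.

    import numpy as np
    from scipy.linalg import toeplitz

    # f(lambda) = 2 + 2 cos(lambda): t_0 = 2, t_{+-1} = 1, all other t_k = 0.
    def T(n):
        col = np.zeros(n)
        col[0], col[1] = 2.0, 1.0
        return toeplitz(col)                     # Hermitian Toeplitz T_n(f)

    lam = np.linspace(0.0, 2.0 * np.pi, 20001)
    rhs = np.trapz((2.0 + 2.0 * np.cos(lam)) ** 2, lam) / (2.0 * np.pi)   # = 6

    for n in (8, 64, 512):
        tau = np.linalg.eigvalsh(T(n))
        print(n, np.mean(tau ** 2), rhs)         # (1/n) sum tau_{n,k}^2 -> 6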
The trace Tr(A) of a matrix A is the sum of its diagonal elements, which in turn from linear algebra is the sum of the eigenvalues of A if the matrix A is Hermitian. Thus (1.7) implies that

$$\lim_{n\to\infty}\frac{1}{n}\operatorname{Tr}(T_n(f))=\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)\,d\lambda. \qquad (1.8)$$

Similarly, for any power s

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\tau_{n,k}^s=\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)^s\,d\lambda. \qquad (1.9)$$

If f is real and such that the eigenvalues τn,k ≥ m > 0 for all n, k, then F(x) = ln x is a continuous function on [m, ∞) and the Szegő theorem can be applied to show that

$$\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\ln\tau_{n,i}=\frac{1}{2\pi}\int_0^{2\pi}\ln f(\lambda)\,d\lambda. \qquad (1.10)$$

From linear algebra, however, the determinant of a matrix Tn(f) is given by the product of its eigenvalues,

$$\det(T_n(f))=\prod_{i=0}^{n-1}\tau_{n,i},$$

so that (1.10) becomes

$$\lim_{n\to\infty}\ln\det(T_n(f))^{1/n}=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\ln\tau_{n,i}=\frac{1}{2\pi}\int_0^{2\pi}\ln f(\lambda)\,d\lambda. \qquad (1.11)$$

As we shall later see, if f has a lower bound m > 0, then indeed all the eigenvalues will share the lower bound and the above derivation applies. Determinants of Toeplitz matrices are called Toeplitz determinants and (1.11) describes their limiting behavior.
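As a quick numerical sanity check of (1.11) (a Python sketch added here, not in the original), take the symbol f(λ) = 3 + 2 cos λ, which is bounded below by m = 1 > 0 (an arbitrary choice), and compare det(Tn(f))^(1/n) with the exponential of the integral of ln f:

    import numpy as np
    from scipy.linalg import toeplitz

    # f(lambda) = 3 + 2 cos(lambda), so t_0 = 3, t_{+-1} = 1; bounded below by 1.
    def T(n):
        col = np.zeros(n)
        col[0], col[1] = 3.0, 1.0
        return toeplitz(col)

    lam = np.linspace(0.0, 2.0 * np.pi, 20001)
    limit = np.exp(np.trapz(np.log(3.0 + 2.0 * np.cos(lam)), lam) / (2.0 * np.pi))

    for n in (4, 32, 256):
        sign, logdet = np.linalg.slogdet(T(n))
        print(n, np.exp(logdet / n), limit)   # det(T_n)^(1/n) -> exp((1/2pi) int ln f)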
1.2 Examples

A few examples from statistical signal processing and information theory illustrate the application of the theorem. These are described with a minimum of background in order to highlight how the asymptotic eigenvalue distribution theorem allows one to evaluate results for processes using results from finite-dimensional vectors.

The differential entropy rate of a Gaussian process

Suppose that {Xn; n = 0, 1, . . .} is a random process described by probability density functions fX^n(x^n) for the random vectors X^n = (X0, X1, . . . , Xn−1) defined for all n = 0, 1, 2, . . .. The Shannon differential entropy h(X^n) is defined by the integral

$$h(X^n)=-\int f_{X^n}(x^n)\ln f_{X^n}(x^n)\,dx^n$$

and the differential entropy rate of the random process is defined by the limit

$$h(X)=\lim_{n\to\infty}\frac{1}{n}h(X^n)$$

if the limit exists. (See, for example, Cover and Thomas [7].)

A stationary zero mean Gaussian random process is completely described by its mean and correlation function rk,j = rk−j = E[Xk Xj] or, equivalently, by its power spectral density function f, the Fourier transform of the covariance function:

$$f(\lambda)=\sum_{n=-\infty}^{\infty}r_ne^{in\lambda},\qquad r_k=\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)e^{-i\lambda k}\,d\lambda.$$

For a fixed positive integer n, the probability density function is

$$f_{X^n}(x^n)=\frac{e^{-\frac{1}{2}x^{n\prime}R_n^{-1}x^n}}{(2\pi)^{n/2}\det(R_n)^{1/2}},$$

where Rn is the n × n covariance matrix with entries rk−j. A straightforward multidimensional integration using the properties of Gaussian random vectors yields the differential entropy

$$h(X^n)=\frac{1}{2}\ln\bigl((2\pi e)^n\det R_n\bigr).$$
The problem at hand is to evaluate the entropy rate

$$h(X)=\lim_{n\to\infty}\frac{1}{n}h(X^n)=\frac{1}{2}\ln(2\pi e)+\lim_{n\to\infty}\frac{1}{2n}\ln\det(R_n).$$

The matrix Rn is the Toeplitz matrix Tn generated by the power spectral density f and det(Rn) is a Toeplitz determinant and we have immediately from (1.11) that

$$h(X)=\frac{1}{2}\ln(2\pi e)+\frac{1}{2}\,\frac{1}{2\pi}\int_0^{2\pi}\ln f(\lambda)\,d\lambda. \qquad (1.12)$$

This is a typical use of (1.6) to evaluate the limit of a sequence of finite-dimensional quantities, in this case specified by the determinants of a sequence of Toeplitz matrices.

The Shannon rate-distortion function of a Gaussian process

As another example of the application of (1.6), consider the evaluation of the rate-distortion function of Shannon information theory for a stationary discrete time Gaussian random process with 0 mean, covariance KX(k, j) = tk−j, and power spectral density f(λ) given by (1.4). The rate-distortion function characterizes the optimal tradeoff of distortion and bit rate in data compression or source coding systems. The derivation details can be found, e.g., in Berger [3], Section 4.5, but the point here is simply to provide an example of an application of (1.6). The result is found by solving an n-dimensional optimization in terms of the eigenvalues τn,k of Tn(f) and then taking limits to obtain parametric expressions for distortion and rate:

$$D_\theta=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\min(\theta,\tau_{n,k})$$

$$R_\theta=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\max\Bigl(0,\frac{1}{2}\ln\frac{\tau_{n,k}}{\theta}\Bigr).$$
The theorem can be applied to turn this limiting sum involving eigenvalues into an integral involving the power spectral density:

$$D_\theta=\frac{1}{2\pi}\int_0^{2\pi}\min(\theta,f(\lambda))\,d\lambda$$

$$R_\theta=\frac{1}{2\pi}\int_0^{2\pi}\max\Bigl(0,\frac{1}{2}\ln\frac{f(\lambda)}{\theta}\Bigr)d\lambda.$$

Again an infinite dimensional problem is solved by first solving a finite dimensional problem involving the eigenvalues of matrices, and then using the asymptotic eigenvalue theorem to find an integral expression for the limiting result.

One-step prediction error

Another application with a similar development is the one-step prediction error problem. Suppose that Xn is a weakly stationary random process with covariance tk−j. A classic problem in estimation theory is to find the best linear predictor based on the previous n values of Xi, i = 0, 1, 2, . . . , n − 1,

$$\hat{X}_n=\sum_{i=1}^{n}a_iX_{n-i},$$

in the sense of minimizing the mean squared error E[(Xn − X̂n)²] over all choices of coefficients ai. It is well known (see, e.g., [14]) that the minimum is given by the ratio of Toeplitz determinants det Tn+1/det Tn. The question is to what this ratio converges in the limit as n goes to ∞. This is not quite in a form suitable for application of the theorem, but we have already evaluated the limit of (det Tn)^(1/n) in (1.11) and for large n we have that

$$(\det T_n)^{1/n}\approx\exp\Bigl(\frac{1}{2\pi}\int_0^{2\pi}\ln f(\lambda)\,d\lambda\Bigr)\approx(\det T_{n+1})^{1/(n+1)}$$

and hence in particular that

$$(\det T_{n+1})^{1/(n+1)}\approx(\det T_n)^{1/n}$$

so that

$$\frac{\det T_{n+1}}{\det T_n}\approx(\det T_n)^{1/n}\to\exp\Bigl(\frac{1}{2\pi}\int_0^{2\pi}\ln f(\lambda)\,d\lambda\Bigr),$$
providing the desired limit. These arguments can be made exact, but it is hoped they make the point that the asymptotic eigenvalue distribution theorem for Hermitian Toeplitz matrices can be quite useful for evaluating limits of solutions to finite-dimensional problems.

Further examples

The Toeplitz distribution theorems have also found application in more complicated information theoretic evaluations, including the channel capacity of Gaussian channels [30, 29] and the rate-distortion functions of autoregressive sources [11]. The examples described here were chosen because they were in the author's area of competence, but similar applications crop up in a variety of areas. A Google search using the title of this document shows diverse applications of the eigenvalue distribution theorem and related results, including such areas as coding, spectral estimation, watermarking, harmonic analysis, speech enhancement, interference cancellation, image restoration, sensor networks for detection, adaptive filtering, graphical models, noise reduction, and blind equalization.

1.3 Goals and Prerequisites

The primary goal of this work is to prove a special case of Szegő's asymptotic eigenvalue distribution theorem in Theorem 4.2. The assumptions used here are less general than Szegő's, but this permits more straightforward proofs which require far less mathematical background. In addition to the fundamental theorems, several related results that naturally follow but do not appear to be collected together anywhere are presented. We do not attempt to survey the fields of applications of these results, as such a survey would be far beyond the author's stamina and competence. A few applications are noted by way of examples.

The essential prerequisites are a knowledge of matrix theory, an engineer's knowledge of Fourier series and random processes, and calculus (Riemann integration). A first course in analysis would be helpful, but it is not assumed. Several of the occasional results required of analysis are
usually contained in one or more courses in the usual engineering curriculum, e.g., the Cauchy-Schwarz and triangle inequalities. Hopefully the only unfamiliar results are a corollary to the Courant-Fischer theorem and the Weierstrass approximation theorem. The latter is an intuitive result which is easily believed even if not formally proved. More advanced results from Lebesgue integration, measure theory, functional analysis, and harmonic analysis are not used.

Our approach is to relate the properties of Toeplitz matrices to those of their simpler, more structured special case — the circulant or cyclic matrix. These two matrices are shown to be asymptotically equivalent in a certain sense and this is shown to imply that eigenvalues, inverses, products, and determinants behave similarly. This approach provides a simplified and direct path to the basic eigenvalue distribution and related theorems. This method is implicit but not immediately apparent in the more complicated and more general results of Grenander in Chapter 7 of [16]. The basic results for the special case of a banded Toeplitz matrix appeared in [12], a tutorial treatment of the simplest case which was in turn based on the first draft of this work. The results were subsequently generalized using essentially the same simple methods, but they remain less general than those of [16].

As an application several of the results are applied to study certain models of discrete time random processes. Two common linear models are studied and some intuitively satisfying results on covariance matrices and their factors are given.

We sacrifice mathematical elegance and generality for conceptual simplicity in the hope that this will bring an understanding of the interesting and useful properties of Toeplitz matrices to a wider audience, specifically to those who have lacked either the background or the patience to tackle the mathematical literature on the subject.
2 The Asymptotic Behavior of Matrices

We begin with relevant definitions and a prerequisite theorem and proceed to a discussion of the asymptotic eigenvalue, product, and inverse behavior of sequences of matrices. The major use of the theorems of this chapter is to relate the asymptotic behavior of a sequence of complicated matrices to that of a simpler asymptotically equivalent sequence of matrices.

2.1 Eigenvalues

Any complex matrix A can be written as

$$A=URU^*, \qquad (2.1)$$

where the asterisk ∗ denotes conjugate transpose, U is unitary, i.e., U⁻¹ = U∗, and R = {rk,j} is an upper triangular matrix ([18], p. 79). The eigenvalues of A are the principal diagonal elements of R. If A is normal, i.e., if A∗A = AA∗, then R is a diagonal matrix, which we denote as R = diag(αk; k = 0, 1, . . . , n − 1) or, more simply, R = diag(αk). If A is Hermitian, then it is also normal and its eigenvalues are real.

A matrix A is nonnegative definite if x∗Ax ≥ 0 for all nonzero vectors x.
The matrix is positive definite if the inequality is strict for all nonzero vectors x. (Some books refer to these properties as positive definite and strictly positive definite, respectively.) If a Hermitian matrix is nonnegative definite, then its eigenvalues are all nonnegative. If the matrix is positive definite, then the eigenvalues are all (strictly) positive.

The extreme values of the eigenvalues of a Hermitian matrix H can be characterized in terms of the Rayleigh quotient RH(x) of the matrix and a complex-valued vector x defined by

$$R_H(x)=\frac{x^*Hx}{x^*x}. \qquad (2.2)$$

As the result is both important and simple to prove, we state and prove it formally. The result will be useful in specifying the interval containing the eigenvalues of a Hermitian matrix. Usually in books on matrix theory it is proved as a corollary to the variational description of eigenvalues given by the Courant-Fischer theorem (see, e.g., [18], p. 116, for the case of real symmetric matrices), but the following result is easily demonstrated directly.

Lemma 2.1. Given a Hermitian matrix H, let ηM and ηm be the maximum and minimum eigenvalues of H, respectively. Then

$$\eta_m=\min_x R_H(x)=\min_{z:z^*z=1}z^*Hz \qquad (2.3)$$

$$\eta_M=\max_x R_H(x)=\max_{z:z^*z=1}z^*Hz. \qquad (2.4)$$

Proof. Suppose that em and eM are eigenvectors corresponding to the minimum and maximum eigenvalues ηm and ηM, respectively. Then RH(em) = ηm and RH(eM) = ηM and therefore

$$\eta_m\ge\min_x R_H(x) \qquad (2.5)$$

$$\eta_M\le\max_x R_H(x). \qquad (2.6)$$

Since H is Hermitian we can write H = UAU∗, where U is unitary and
A is the diagonal matrix of the eigenvalues ηk, and therefore

$$\frac{x^*Hx}{x^*x}=\frac{x^*UAU^*x}{x^*x}=\frac{y^*Ay}{y^*y}=\frac{\sum_{k=1}^{n}|y_k|^2\eta_k}{\sum_{k=1}^{n}|y_k|^2},$$

where y = U∗x and we have taken advantage of the fact that U is unitary so that x∗x = y∗y. But for all vectors y, this ratio is bounded below by ηm and above by ηM and hence for all vectors x

$$\eta_m\le R_H(x)\le\eta_M, \qquad (2.7)$$

which with (2.5)–(2.6) completes the proof of the left-hand equalities of the lemma. The right-hand equalities are easily seen to hold since if x minimizes (maximizes) the Rayleigh quotient, then the normalized vector x/(x∗x)^(1/2) satisfies the constraint of the minimization (maximization) to the right, hence the minimum (maximum) of the Rayleigh quotient must be bigger (smaller) than the constrained minimum (maximum) to the right. Conversely, if x achieves the rightmost optimization, then the same x yields a Rayleigh quotient of the same optimum value. □
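A small numerical illustration of Lemma 2.1 (a Python sketch added here; the random Hermitian matrix is an arbitrary choice): the Rayleigh quotient evaluated at random vectors never leaves [ηm, ηM], and the extreme eigenvectors attain the extremes in (2.3)–(2.4).

    import numpy as np

    rng = np.random.default_rng(0)
    n = 6
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    H = (M + M.conj().T) / 2                      # a random Hermitian matrix

    eta, vecs = np.linalg.eigh(H)                 # eigenvalues in ascending order
    eta_m, eta_M = eta[0], eta[-1]

    def rayleigh(x):
        return np.real(x.conj() @ H @ x / (x.conj() @ x))

    # Random trial vectors stay inside [eta_m, eta_M] ...
    trials = [rayleigh(rng.standard_normal(n) + 1j * rng.standard_normal(n))
              for _ in range(1000)]
    print(eta_m <= min(trials) and max(trials) <= eta_M)   # True

    # ... and the extreme eigenvectors attain the extremes (2.3)-(2.4).
    print(np.isclose(rayleigh(vecs[:, 0]), eta_m),
          np.isclose(rayleigh(vecs[:, -1]), eta_M))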
The following lemma is useful when studying non-Hermitian matrices and products of Hermitian matrices. First note that if A is an arbitrary complex matrix, then the matrix A∗A is both Hermitian and nonnegative definite. It is Hermitian because (A∗A)∗ = A∗A and it is nonnegative definite since if for any complex vector x we define the complex vector y = Ax, then

$$x^*(A^*A)x=y^*y=\sum_{k=1}^{n}|y_k|^2\ge 0.$$

Lemma 2.2. Let A be a matrix with eigenvalues αk. Define the eigenvalues of the Hermitian nonnegative definite matrix A∗A to be λk ≥ 0. Then

$$\sum_{k=0}^{n-1}\lambda_k\ge\sum_{k=0}^{n-1}|\alpha_k|^2, \qquad (2.8)$$

with equality iff (if and only if) A is normal.

Proof. The trace of a matrix is the sum of the diagonal elements of a matrix. The trace is invariant to unitary operations so that it also is equal to the sum of the eigenvalues of a matrix, i.e.,

$$\operatorname{Tr}\{A^*A\}=\sum_{k=0}^{n-1}(A^*A)_{k,k}=\sum_{k=0}^{n-1}\lambda_k. \qquad (2.9)$$

From (2.1), A = URU∗ and hence

$$\operatorname{Tr}\{A^*A\}=\operatorname{Tr}\{R^*R\}=\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}|r_{j,k}|^2=\sum_{k=0}^{n-1}|\alpha_k|^2+\sum_{k\ne j}|r_{j,k}|^2\ge\sum_{k=0}^{n-1}|\alpha_k|^2. \qquad (2.10)$$

Equation (2.10) will hold with equality iff R is diagonal and hence iff A is normal. □

Lemma 2.2 is a direct consequence of Schur's theorem ([18], pp. 229–231) and is also proved in [16], p. 106.

2.2 Matrix Norms

To study the asymptotic equivalence of matrices we require a metric on the linear space of matrices. A convenient metric for our purposes is a norm of the difference of two matrices. A norm N(A) on the space of n × n matrices satisfies the following properties:

(1) N(A) ≥ 0 with equality if and only if A = 0, the all zero matrix.
(2) For any two matrices A and B,
$$N(A+B)\le N(A)+N(B). \qquad (2.11)$$
(3) For any scalar c and matrix A, N(cA) = |c|N(A).
The triangle inequality in (2.11) will be used often as is the following direct consequence:

$$N(A-B)\ge|N(A)-N(B)|. \qquad (2.12)$$

Two norms — the operator or strong norm and the Hilbert-Schmidt or weak norm (also called the Frobenius norm or Euclidean norm when the scaling term is removed) — will be used here ([16], pp. 102–103).

Let A be a matrix with eigenvalues αk and let λk ≥ 0 be the eigenvalues of the Hermitian nonnegative definite matrix A∗A. The strong norm ‖A‖ is defined by

$$\|A\|=\max_x R_{A^*A}(x)^{1/2}=\max_{z:z^*z=1}\bigl[z^*A^*Az\bigr]^{1/2}. \qquad (2.13)$$

From Lemma 2.1

$$\|A\|^2=\max_k\lambda_k\triangleq\lambda_M. \qquad (2.14)$$

The strong norm of A can be bounded below by letting eM be the normalized eigenvector of A corresponding to αM, the eigenvalue of A having largest absolute value:

$$\|A\|^2=\max_{z:z^*z=1}z^*A^*Az\ge(e_M^*A^*)(Ae_M)=|\alpha_M|^2. \qquad (2.15)$$

If A is itself Hermitian, then its eigenvalues αk are real and the eigenvalues λk of A∗A are simply λk = αk². This follows since if e(k) is an eigenvector of A with eigenvalue αk, then A∗Ae(k) = αk A∗e(k) = αk² e(k). Thus, in particular, if A is Hermitian then

$$\|A\|=\max_k|\alpha_k|=|\alpha_M|. \qquad (2.16)$$

The weak norm (or Hilbert-Schmidt norm) of an n × n matrix A = [ak,j] is defined by

$$|A|=\Bigl(\frac{1}{n}\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}|a_{k,j}|^2\Bigr)^{1/2}=\Bigl(\frac{1}{n}\operatorname{Tr}[A^*A]\Bigr)^{1/2}=\Bigl(\frac{1}{n}\sum_{k=0}^{n-1}\lambda_k\Bigr)^{1/2}. \qquad (2.17)$$
The quantity √n |A| is sometimes called the Frobenius norm or Euclidean norm. From Lemma 2.2 we have

$$|A|^2\ge\frac{1}{n}\sum_{k=0}^{n-1}|\alpha_k|^2,\ \text{with equality iff $A$ is normal.} \qquad (2.18)$$

The Hilbert-Schmidt norm is the "weaker" of the two norms since

$$\|A\|^2=\max_k\lambda_k\ge\frac{1}{n}\sum_{k=0}^{n-1}\lambda_k=|A|^2. \qquad (2.19)$$

A matrix is said to be bounded if it is bounded in both norms.

The weak norm is usually the most useful and easiest to handle of the two, but the strong norm provides a useful bound for the product of two matrices as shown in the next lemma.

Lemma 2.3. Given two n × n matrices G = {gk,j} and H = {hk,j}, then

$$|GH|\le\|G\|\,|H|. \qquad (2.20)$$

Proof. Expanding terms yields

$$|GH|^2=\frac{1}{n}\sum_i\sum_j\Bigl|\sum_k g_{i,k}h_{k,j}\Bigr|^2=\frac{1}{n}\sum_i\sum_j\sum_k\sum_m g_{i,k}g_{i,m}^*h_{k,j}h_{m,j}^*=\frac{1}{n}\sum_j h_j^*G^*Gh_j, \qquad (2.21)$$

where hj is the jth column of H. From (2.13),

$$h_j^*G^*Gh_j\le\|G\|^2h_j^*h_j$$

and therefore

$$|GH|^2\le\|G\|^2\frac{1}{n}\sum_j h_j^*h_j=\|G\|^2|H|^2. \qquad\Box$$

Lemma 2.3 is the matrix equivalent of (7.3a) of ([16], p. 103). Note that the lemma does not require that G or H be Hermitian.
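A short numerical check of the two norms and of Lemma 2.3 (a Python sketch, not part of the original; the matrices are random choices). Note that the weak norm of the text carries the 1/n factor, so it is the Frobenius norm divided by √n.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 8
    G = rng.standard_normal((n, n))
    H = rng.standard_normal((n, n))

    strong = lambda A: np.linalg.norm(A, 2)                 # operator (strong) norm
    weak = lambda A: np.linalg.norm(A, 'fro') / np.sqrt(n)  # Hilbert-Schmidt (weak) norm

    print(strong(G) >= weak(G))                  # (2.19): strong dominates weak
    print(weak(G @ H) <= strong(G) * weak(H))    # Lemma 2.3: |GH| <= ||G|| |H|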
2.3 Asymptotically Equivalent Sequences of Matrices

We will be considering sequences of n × n matrices that approximate each other as n becomes large. As might be expected, we will use the weak norm of the difference of two matrices as a measure of the "distance" between them. Two sequences of n × n matrices {An} and {Bn} are said to be asymptotically equivalent if

(1) An and Bn are uniformly bounded in strong (and hence in weak) norm:
$$\|A_n\|,\ \|B_n\|\le M<\infty,\quad n=1,2,\ldots \qquad (2.22)$$
and
(2) An − Bn = Dn goes to zero in weak norm as n → ∞:
$$\lim_{n\to\infty}|A_n-B_n|=\lim_{n\to\infty}|D_n|=0.$$

Asymptotic equivalence of the sequences {An} and {Bn} will be abbreviated An ∼ Bn.

We can immediately prove several properties of asymptotic equivalence which are collected in the following theorem.

Theorem 2.1. Let {An} and {Bn} be sequences of matrices with eigenvalues {αn,i} and {βn,i}, respectively.

(1) If An ∼ Bn, then
$$\lim_{n\to\infty}|A_n|=\lim_{n\to\infty}|B_n|. \qquad (2.23)$$
(2) If An ∼ Bn and Bn ∼ Cn, then An ∼ Cn.
(3) If An ∼ Bn and Cn ∼ Dn, then AnCn ∼ BnDn.
(4) If An ∼ Bn and ‖An⁻¹‖, ‖Bn⁻¹‖ ≤ K < ∞, all n, then An⁻¹ ∼ Bn⁻¹.
(5) If AnBn ∼ Cn and ‖An⁻¹‖ ≤ K < ∞, then Bn ∼ An⁻¹Cn.
(6) If An ∼ Bn, then there are finite constants m and M such that
$$m\le\alpha_{n,k},\ \beta_{n,k}\le M,\quad n=1,2,\ldots\quad k=0,1,\ldots,n-1. \qquad (2.24)$$
Proof.

(1) Eq. (2.23) follows directly from (2.12).
(2) |An − Cn| = |An − Bn + Bn − Cn| ≤ |An − Bn| + |Bn − Cn| → 0 as n → ∞.
(3) Applying Lemma 2.3 yields
$$|A_nC_n-B_nD_n|=|A_nC_n-A_nD_n+A_nD_n-B_nD_n|\le\|A_n\|\,|C_n-D_n|+\|D_n\|\,|A_n-B_n|\xrightarrow{n\to\infty}0.$$
(4)
$$|A_n^{-1}-B_n^{-1}|=|B_n^{-1}B_nA_n^{-1}-B_n^{-1}A_nA_n^{-1}|\le\|B_n^{-1}\|\cdot\|A_n^{-1}\|\cdot|B_n-A_n|\xrightarrow{n\to\infty}0.$$
(5)
$$|B_n-A_n^{-1}C_n|=|A_n^{-1}A_nB_n-A_n^{-1}C_n|\le\|A_n^{-1}\|\,|A_nB_n-C_n|\xrightarrow{n\to\infty}0.$$
(6) If An ∼ Bn then they are uniformly bounded in strong norm by some finite number M and hence from (2.15), |αn,k| ≤ M and |βn,k| ≤ M and hence −M ≤ αn,k, βn,k ≤ M. So the result holds for m = −M and it may hold for larger m, e.g., m = 0 if the matrices are all nonnegative definite. □

The above results will be useful in several of the later proofs. Asymptotic equivalence of matrices will be shown to imply that eigenvalues, products, and inverses behave similarly. The following lemma provides a prelude of the type of result obtainable for eigenvalues and will itself serve as the essential part of the more general results to follow. It shows that if the weak norm of the difference of the two matrices is small, then the sums of the eigenvalues of each must be close.
Lemma 2.4. Given two matrices A and B with eigenvalues {αk} and {βk}, respectively, then

$$\Bigl|\frac{1}{n}\sum_{k=0}^{n-1}\alpha_k-\frac{1}{n}\sum_{k=0}^{n-1}\beta_k\Bigr|\le|A-B|.$$

Proof: Define the difference matrix D = A − B = {dk,j} so that

$$\sum_{k=0}^{n-1}\alpha_k-\sum_{k=0}^{n-1}\beta_k=\operatorname{Tr}(A)-\operatorname{Tr}(B)=\operatorname{Tr}(D).$$

Applying the Cauchy-Schwarz inequality (see, e.g., [22], p. 17) to Tr(D) yields

$$|\operatorname{Tr}(D)|^2=\Bigl|\sum_{k=0}^{n-1}d_{k,k}\Bigr|^2\le n\sum_{k=0}^{n-1}|d_{k,k}|^2\le n\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}|d_{k,j}|^2=n^2|D|^2. \qquad (2.25)$$

Taking the square root and dividing by n proves the lemma. □

An immediate consequence of the lemma is the following corollary.

Corollary 2.1. Given two sequences of asymptotically equivalent matrices {An} and {Bn} with eigenvalues {αn,k} and {βn,k}, respectively, then

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}(\alpha_{n,k}-\beta_{n,k})=0, \qquad (2.26)$$

and hence if either limit exists individually,

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{n,k}=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\beta_{n,k}. \qquad (2.27)$$

Proof. Let Dn = {dk,j} = An − Bn. Eq. (2.26) is equivalent to

$$\lim_{n\to\infty}\frac{1}{n}\operatorname{Tr}(D_n)=0. \qquad (2.28)$$
Dividing (2.25) by n², and taking the limit, results in

$$0\le\Bigl|\frac{1}{n}\operatorname{Tr}(D_n)\Bigr|^2\le|D_n|^2\xrightarrow{n\to\infty}0 \qquad (2.29)$$

from the lemma, which implies (2.28) and hence (2.27). □

The previous corollary can be interpreted as saying the sample or arithmetic means of the eigenvalues of two matrices are asymptotically equal if the matrices are asymptotically equivalent. It is easy to see that if the matrices are Hermitian, a similar result holds for the means of the squared eigenvalues. From (2.12) and (2.18),

$$|D_n|\ge\bigl|\,|A_n|-|B_n|\,\bigr|=\Biggl|\sqrt{\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{n,k}^2}-\sqrt{\frac{1}{n}\sum_{k=0}^{n-1}\beta_{n,k}^2}\Biggr|\xrightarrow{n\to\infty}0$$

if |Dn| → 0 as n → ∞, yielding the following corollary.

Corollary 2.2. Given two sequences of asymptotically equivalent Hermitian matrices {An} and {Bn} with eigenvalues {αn,k} and {βn,k}, respectively, then

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}(\alpha_{n,k}^2-\beta_{n,k}^2)=0, \qquad (2.30)$$

and hence if either limit exists individually,

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{n,k}^2=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\beta_{n,k}^2. \qquad (2.31)$$

Both corollaries relate limiting sample (arithmetic) averages of eigenvalues or moments of an eigenvalue distribution rather than individual eigenvalues. Equations (2.27) and (2.31) are special cases of the following fundamental theorem of asymptotic eigenvalue distribution.
Theorem 2.2. Let {An} and {Bn} be asymptotically equivalent sequences of matrices with eigenvalues {αn,k} and {βn,k}, respectively. Then for any positive integer s the sequences of matrices {Anˢ} and {Bnˢ} are also asymptotically equivalent,

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}(\alpha_{n,k}^s-\beta_{n,k}^s)=0, \qquad (2.32)$$

and hence if either separate limit exists,

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{n,k}^s=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\beta_{n,k}^s. \qquad (2.33)$$

Proof. Let An = Bn + Dn as in the proof of Corollary 2.1 and consider Anˢ − Bnˢ ≜ ∆n. Since the eigenvalues of Anˢ are αn,kˢ, (2.32) can be written in terms of ∆n as

$$\lim_{n\to\infty}\frac{1}{n}\operatorname{Tr}(\Delta_n)=0. \qquad (2.34)$$

The matrix ∆n is a sum of several terms each being a product of Dn's and Bn's, but containing at least one Dn (to see this use the binomial theorem applied to matrices to expand Anˢ). Repeated application of Lemma 2.3 thus gives

$$|\Delta_n|\le K|D_n|\xrightarrow{n\to\infty}0, \qquad (2.35)$$

where K does not depend on n. Equation (2.35) allows us to apply Corollary 2.1 to the matrices Anˢ and Bnˢ to obtain (2.34) and hence (2.32). □

Theorem 2.2 is the fundamental theorem concerning asymptotic eigenvalue behavior of asymptotically equivalent sequences of matrices. Most of the succeeding results on eigenvalues will be applications or specializations of (2.33).

Since (2.33) holds for any positive integer s we can add sums corresponding to different values of s to each side of (2.33). This observation leads to the following corollary.
Corollary 2.3. Suppose that {An} and {Bn} are asymptotically equivalent sequences of matrices with eigenvalues {αn,k} and {βn,k}, respectively, and let f(x) be any polynomial. Then

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\bigl(f(\alpha_{n,k})-f(\beta_{n,k})\bigr)=0 \qquad (2.36)$$

and hence if either limit exists separately,

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(\alpha_{n,k})=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}f(\beta_{n,k}). \qquad (2.37)$$

Proof. Suppose that f(x) = Σ_{s=0}^{m} aₛxˢ. Then summing (2.32) over s yields (2.36). If either of the two limits exists, then (2.36) implies that both exist and that they are equal. □

Corollary 2.3 can be used to show that (2.37) can hold for any analytic function f(x) since such functions can be expanded into complex Taylor series, which can be viewed as polynomials with a possibly infinite number of terms. Some effort is needed, however, to justify the interchange of limits, which can be accomplished if the Taylor series converges uniformly. If An and Bn are Hermitian, however, then a much stronger result is possible. In this case the eigenvalues of both matrices are real and we can invoke the Weierstrass approximation theorem ([6], p. 66) to immediately generalize Corollary 2.3. This theorem, our one real excursion into analysis, is stated below for reference.

Theorem 2.3. (Weierstrass) If F(x) is a continuous complex function on [a, b], there exists a sequence of polynomials pn(x) such that

$$\lim_{n\to\infty}p_n(x)=F(x)$$

uniformly on [a, b].

Stated simply, any continuous function defined on a real interval can be approximated arbitrarily closely and uniformly by a polynomial. Applying Theorem 2.3 to Corollary 2.3 immediately yields the following theorem:
Theorem 2.4. Let {An} and {Bn} be asymptotically equivalent sequences of Hermitian matrices with eigenvalues {αn,k} and {βn,k}, respectively. From Theorem 2.1 there exist finite numbers m and M such that

$$m\le\alpha_{n,k},\ \beta_{n,k}\le M,\quad n=1,2,\ldots\quad k=0,1,\ldots,n-1. \qquad (2.38)$$

Let F(x) be an arbitrary function continuous on [m, M]. Then

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\bigl(F(\alpha_{n,k})-F(\beta_{n,k})\bigr)=0, \qquad (2.39)$$

and hence if either of the limits exists separately,

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}F(\alpha_{n,k})=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}F(\beta_{n,k}). \qquad (2.40)$$

Theorem 2.4 is the matrix equivalent of Theorem 7.4a of [16]. When two real sequences {αn,k; k = 0, 1, . . . , n−1} and {βn,k; k = 0, 1, . . . , n−1} satisfy (2.38) and (2.39), they are said to be asymptotically equally distributed ([16], p. 62, where the definition is attributed to Weyl).

As an example of the use of Theorem 2.4 we prove the following corollary on the determinants of asymptotically equivalent sequences of matrices.

Corollary 2.4. Let {An} and {Bn} be asymptotically equivalent sequences of Hermitian matrices with eigenvalues {αn,k} and {βn,k}, respectively, such that αn,k, βn,k ≥ m > 0. Then if either limit exists,

$$\lim_{n\to\infty}(\det A_n)^{1/n}=\lim_{n\to\infty}(\det B_n)^{1/n}. \qquad (2.41)$$

Proof. From Theorem 2.4 we have for F(x) = ln x

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\ln\alpha_{n,k}=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\ln\beta_{n,k}$$

and hence

$$\lim_{n\to\infty}\exp\Bigl(\frac{1}{n}\sum_{k=0}^{n-1}\ln\alpha_{n,k}\Bigr)=\lim_{n\to\infty}\exp\Bigl(\frac{1}{n}\sum_{k=0}^{n-1}\ln\beta_{n,k}\Bigr)$$
or equivalently

$$\lim_{n\to\infty}\exp\Bigl[\frac{1}{n}\ln\det A_n\Bigr]=\lim_{n\to\infty}\exp\Bigl[\frac{1}{n}\ln\det B_n\Bigr],$$

from which (2.41) follows. □

With suitable mathematical care the above corollary can be extended to cases where αn,k, βn,k > 0 provided additional constraints are imposed on the matrices. For example, if the matrices are assumed to be Toeplitz matrices, then the result holds even if the eigenvalues can get arbitrarily small but remain strictly positive. (See the discussion on p. 66 and in Section 3.1 of [16] for the required technical conditions.) The difficulty with allowing the eigenvalues to approach 0 is that their logarithms are not bounded. Furthermore, the function ln x is not continuous at x = 0, so Theorem 2.4 does not apply. Nonetheless, it is possible to say something about the asymptotic eigenvalue distribution in such cases and this issue is revisited in Theorem 5.2(d).

In this section the concept of asymptotic equivalence of matrices was defined and its implications studied. The main consequences are the behavior of inverses and products (Theorem 2.1) and eigenvalues (Theorems 2.2 and 2.4). These theorems do not concern individual entries in the matrices or individual eigenvalues, rather they describe an "average" behavior. Thus saying An⁻¹ ∼ Bn⁻¹ means that |An⁻¹ − Bn⁻¹| → 0 as n → ∞ and says nothing about convergence of individual entries in the matrix. In certain cases stronger results on a type of elementwise convergence are possible using the stronger norm of Baxter [1, 2]. Baxter's results are beyond the scope of this work.
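To make the results of this section concrete, the following Python sketch (added here, not in the original) compares two asymptotically equivalent Hermitian sequences: Bn is a tridiagonal Toeplitz matrix and An differs from it only in two corner entries (anticipating the circulant completion used in Chapter 4), so the strong norms stay bounded while |An − Bn| = √(2/n) → 0. The eigenvalue averages of any continuous function then agree in the limit, as Theorem 2.4 asserts.

    import numpy as np
    from scipy.linalg import toeplitz

    def B(n):                      # tridiagonal Hermitian Toeplitz matrix
        col = np.zeros(n)
        col[0], col[1] = 2.0, 1.0
        return toeplitz(col)

    def A(n):                      # same matrix with the two corner entries filled in
        M = B(n)
        M[0, -1] = M[-1, 0] = 1.0  # keeps A Hermitian; |A_n - B_n| = sqrt(2/n) -> 0
        return M

    F = np.cos                     # any continuous F on [m, M] works (Theorem 2.4)
    for n in (16, 128, 1024):
        a = np.linalg.eigvalsh(A(n))
        b = np.linalg.eigvalsh(B(n))
        print(n, abs(np.mean(F(a)) - np.mean(F(b))))   # -> 0 as n grows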
2.4 Asymptotically Absolutely Equal Distributions

It is possible to strengthen Theorem 2.4 and some of the interim results used in its derivation using reasonably elementary methods. The key additional idea required is the Wielandt-Hoffman theorem [34], a result from matrix theory that is of independent interest. The theorem is stated and a proof following Wilkinson [35] is presented for completeness. This section can be skipped by readers not interested in the stronger notion of equal eigenvalue distributions as it is not needed in the sequel. The bounds of Theorem 2.5 and Lemma 2.5 are of interest in their own right and are included as they strengthen the traditional bounds.

Theorem 2.5. (Wielandt-Hoffman theorem) Given two Hermitian matrices A and B with eigenvalues αk and βk, respectively, then

$$\frac{1}{n}\sum_{k=0}^{n-1}|\alpha_k-\beta_k|^2\le|A-B|^2.$$

Proof: Since A and B are Hermitian, we can write them as A = U diag(αk)U∗, B = W diag(βk)W∗, where U and W are unitary. Since the weak norm is not affected by multiplication by a unitary matrix,

$$|A-B|=|U\operatorname{diag}(\alpha_k)U^*-W\operatorname{diag}(\beta_k)W^*|=|\operatorname{diag}(\alpha_k)U^*-U^*W\operatorname{diag}(\beta_k)W^*|=|\operatorname{diag}(\alpha_k)U^*W-U^*W\operatorname{diag}(\beta_k)|=|\operatorname{diag}(\alpha_k)Q-Q\operatorname{diag}(\beta_k)|,$$

where Q = U∗W = {qi,j} is also unitary. The (i, j) entry in the matrix diag(αk)Q − Q diag(βk) is (αi − βj)qi,j and hence

$$|A-B|^2=\frac{1}{n}\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}|\alpha_i-\beta_j|^2|q_{i,j}|^2\triangleq\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}|\alpha_i-\beta_j|^2p_{i,j}, \qquad (2.42)$$

where we have defined pi,j = (1/n)|qi,j|². Since Q is unitary, we also have that

$$\sum_{i=0}^{n-1}|q_{i,j}|^2=\sum_{j=0}^{n-1}|q_{i,j}|^2=1 \qquad (2.43)$$

or

$$\sum_{i=0}^{n-1}p_{i,j}=\sum_{j=0}^{n-1}p_{i,j}=\frac{1}{n}. \qquad (2.44)$$

This can be interpreted in probability terms: pi,j = (1/n)|qi,j|² is a probability mass function or pmf on {0, 1, . . . , n − 1}² with uniform marginal probability mass functions. Recall that it is assumed that the
eigenvalues are ordered so that α0 ≥ α1 ≥ α2 ≥ · · · and β0 ≥ β1 ≥ β2 ≥ · · ·.

We claim that for all such matrices P satisfying (2.44), the right-hand side of (2.42) is minimized by P = (1/n)I, where I is the identity matrix, so that

$$\sum_{i=0}^{n-1}\sum_{j=0}^{n-1}|\alpha_i-\beta_j|^2p_{i,j}\ge\frac{1}{n}\sum_{i=0}^{n-1}|\alpha_i-\beta_i|^2,$$

which will prove the result. To see this suppose the contrary. Let ℓ be the smallest integer in {0, 1, . . . , n − 1} such that P has a nonzero element off the diagonal in either row ℓ or in column ℓ. If there is a nonzero element in row ℓ off the diagonal, say pℓ,a, then there must also be a nonzero element in column ℓ off the diagonal, say pb,ℓ, in order for the constraints (2.44) to be satisfied. Since ℓ is the smallest such value, ℓ < a and ℓ < b. Let x be the smaller of pℓ,a and pb,ℓ. Form a new matrix P′ by adding x to pℓ,ℓ and pb,a and subtracting x from pb,ℓ and pℓ,a. The new matrix still satisfies the constraints and it has a zero in either position (b, ℓ) or (ℓ, a). Furthermore the norm of P′ has changed from that of P by an amount

$$x\bigl[(\alpha_\ell-\beta_\ell)^2+(\alpha_b-\beta_a)^2-(\alpha_\ell-\beta_a)^2-(\alpha_b-\beta_\ell)^2\bigr]=-2x(\alpha_\ell-\alpha_b)(\beta_\ell-\beta_a)\le 0$$

since ℓ < a, ℓ < b, the eigenvalues are nonincreasing, and x is positive. Continuing in this fashion all nonzero off-diagonal elements can be zeroed out without increasing the norm, proving the result. □

From the Cauchy-Schwarz inequality

$$\sum_{k=0}^{n-1}|\alpha_k-\beta_k|\le\sqrt{\sum_{k=0}^{n-1}(\alpha_k-\beta_k)^2}\sqrt{\sum_{k=0}^{n-1}1^2}=\sqrt{n\sum_{k=0}^{n-1}(\alpha_k-\beta_k)^2},$$

which with the Wielandt-Hoffman theorem yields the following strengthening of Lemma 2.4,

$$\frac{1}{n}\sum_{k=0}^{n-1}|\alpha_k-\beta_k|\le\sqrt{\frac{1}{n}\sum_{k=0}^{n-1}(\alpha_k-\beta_k)^2}\le|A_n-B_n|,$$
which we formalize as the following lemma.

Lemma 2.5. Given two Hermitian matrices A and B with eigenvalues αk and βk in nonincreasing order, respectively, then

$$\frac{1}{n}\sum_{k=0}^{n-1}|\alpha_k-\beta_k|\le|A-B|.$$

Note in particular that the absolute values are outside the sum in Lemma 2.4 and inside the sum in Lemma 2.5. As was done in the weaker case, the result can be used to prove a stronger version of Theorem 2.4. This line of reasoning, using the Wielandt-Hoffman theorem, was pointed out by William F. Trench who used special cases in his paper [23]. Similar arguments have become standard for treating eigenvalue distributions for Toeplitz and Hankel matrices. See, for example, [32, 9, 4]. The following theorem provides the derivation. The specific statement and its proof follow from a private communication from William F. Trench. See also [31, 24, 25, 26, 27, 28].

Theorem 2.6. Let An and Bn be asymptotically equivalent sequences of Hermitian matrices with eigenvalues αn,k and βn,k in nonincreasing order, respectively. From Theorem 2.1 there exist finite numbers m and M such that

$$m\le\alpha_{n,k},\ \beta_{n,k}\le M,\quad n=1,2,\ldots\quad k=0,1,\ldots,n-1. \qquad (2.45)$$

Let F(x) be an arbitrary function continuous on [m, M]. Then

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}|F(\alpha_{n,k})-F(\beta_{n,k})|=0. \qquad (2.46)$$

The theorem strengthens the result of Theorem 2.4 because of the magnitude inside the sum. Following Trench [24] in this case the eigenvalues are said to be asymptotically absolutely equally distributed.

Proof: From Lemma 2.5

$$\frac{1}{n}\sum_{k=0}^{n-1}|\alpha_{n,k}-\beta_{n,k}|\le|A_n-B_n|, \qquad (2.47)$$
which implies (2.46) for the case F(r) = r. For any nonnegative integer j

$$|\alpha_{n,k}^j-\beta_{n,k}^j|\le j\max(|m|,|M|)^{j-1}|\alpha_{n,k}-\beta_{n,k}|. \qquad (2.48)$$

By way of explanation consider a, b ∈ [m, M]. Simple long division shows that

$$\frac{a^j-b^j}{a-b}=\sum_{l=1}^{j}a^{j-l}b^{l-1}$$

so that

$$\Bigl|\frac{a^j-b^j}{a-b}\Bigr|=\Bigl|\sum_{l=1}^{j}a^{j-l}b^{l-1}\Bigr|\le\sum_{l=1}^{j}|a|^{j-l}|b|^{l-1}\le j\max(|m|,|M|)^{j-1},$$

which proves (2.48). This immediately implies that (2.46) holds for functions of the form F(r) = rʲ for positive integers j, which in turn means the result holds for any polynomial. If F is an arbitrary continuous function on [m, M], then from Theorem 2.3 given ε > 0 there is a polynomial P such that

$$|P(u)-F(u)|\le\varepsilon,\quad u\in[m,M].$$
Using the triangle inequality,

$$\frac{1}{n}\sum_{k=0}^{n-1}|F(\alpha_{n,k})-F(\beta_{n,k})|=\frac{1}{n}\sum_{k=0}^{n-1}|F(\alpha_{n,k})-P(\alpha_{n,k})+P(\alpha_{n,k})-P(\beta_{n,k})+P(\beta_{n,k})-F(\beta_{n,k})|$$

$$\le\frac{1}{n}\sum_{k=0}^{n-1}|F(\alpha_{n,k})-P(\alpha_{n,k})|+\frac{1}{n}\sum_{k=0}^{n-1}|P(\alpha_{n,k})-P(\beta_{n,k})|+\frac{1}{n}\sum_{k=0}^{n-1}|P(\beta_{n,k})-F(\beta_{n,k})|$$

$$\le 2\varepsilon+\frac{1}{n}\sum_{k=0}^{n-1}|P(\alpha_{n,k})-P(\beta_{n,k})|.$$

As n → ∞ the remaining sum goes to 0, which proves the theorem since ε can be made arbitrarily small. □
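A quick numerical illustration of the Wielandt-Hoffman bound of Theorem 2.5 (a Python sketch added here; A and B are random Hermitian matrices): with both eigenvalue lists sorted in nonincreasing order, (1/n)Σ|αk − βk|² never exceeds |A − B|².

    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    def hermitian(M):
        return (M + M.conj().T) / 2

    A = hermitian(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    B = hermitian(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

    alpha = np.sort(np.linalg.eigvalsh(A))[::-1]     # nonincreasing order
    beta = np.sort(np.linalg.eigvalsh(B))[::-1]

    lhs = np.mean(np.abs(alpha - beta) ** 2)         # (1/n) sum |alpha_k - beta_k|^2
    rhs = np.linalg.norm(A - B, 'fro') ** 2 / n      # |A - B|^2 (weak norm squared)
    print(lhs <= rhs, lhs, rhs)                      # True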
3 Circulant Matrices

A circulant matrix C is a Toeplitz matrix having the form

$$C=\begin{pmatrix} c_0 & c_1 & c_2 & \cdots & c_{n-1} \\ c_{n-1} & c_0 & c_1 & \cdots & c_{n-2} \\ c_{n-2} & c_{n-1} & c_0 & \cdots & c_{n-3} \\ \vdots & & & \ddots & \vdots \\ c_1 & c_2 & c_3 & \cdots & c_0 \end{pmatrix}, \qquad (3.1)$$

where each row is a cyclic shift of the row above it. The structure can also be characterized by noting that the (k, j) entry of C, Ck,j, is given by

$$C_{k,j}=c_{(j-k)\bmod n}.$$

The properties of circulant matrices are well known and easily derived ([18], p. 267, [8]). Since these matrices are used both to approximate and explain the behavior of Toeplitz matrices, it is instructive to present one version of the relevant derivations here.
3.1 Eigenvalues and Eigenvectors

The eigenvalues ψk and the eigenvectors y(k) of C are the solutions of

$$Cy=\psi y \qquad (3.2)$$

or, equivalently, of the n difference equations

$$\sum_{k=0}^{m-1}c_{n-m+k}y_k+\sum_{k=m}^{n-1}c_{k-m}y_k=\psi y_m;\quad m=0,1,\ldots,n-1. \qquad (3.3)$$

Changing the summation dummy variable results in

$$\sum_{k=0}^{n-1-m}c_ky_{k+m}+\sum_{k=n-m}^{n-1}c_ky_{k-(n-m)}=\psi y_m;\quad m=0,1,\ldots,n-1. \qquad (3.4)$$

One can solve difference equations as one solves differential equations — by guessing an intuitive solution and then proving that it works. Since the equation is linear with constant coefficients a reasonable guess is yk = ρᵏ (analogous to y(t) = eˢᵗ in linear time invariant differential equations). Substitution into (3.4) and cancellation of ρᵐ yields

$$\sum_{k=0}^{n-1-m}c_k\rho^k+\rho^{-n}\sum_{k=n-m}^{n-1}c_k\rho^k=\psi.$$

Thus if we choose ρ⁻ⁿ = 1, i.e., ρ is one of the n distinct complex nth roots of unity, then we have an eigenvalue

$$\psi=\sum_{k=0}^{n-1}c_k\rho^k \qquad (3.5)$$

with corresponding eigenvector

$$y=n^{-1/2}\bigl(1,\rho,\rho^2,\ldots,\rho^{n-1}\bigr)', \qquad (3.6)$$

where the prime denotes transpose and the normalization is chosen to give the eigenvector unit energy. Choosing ρm as the complex nth root of unity, ρm = e^(−2πim/n), we have eigenvalue

$$\psi_m=\sum_{k=0}^{n-1}c_ke^{-2\pi imk/n} \qquad (3.7)$$
and eigenvector

$$y^{(m)}=\frac{1}{\sqrt{n}}\bigl(1,e^{-2\pi im/n},\ldots,e^{-2\pi im(n-1)/n}\bigr)'.$$

Thus from the definition of eigenvalues and eigenvectors,

$$Cy^{(m)}=\psi_my^{(m)},\quad m=0,1,\ldots,n-1. \qquad (3.8)$$

Equation (3.7) should be familiar to those with standard engineering backgrounds as simply the discrete Fourier transform (DFT) of the sequence {ck}. Thus we can recover the sequence {ck} from the ψk by the Fourier inversion formula. In particular,

$$\frac{1}{n}\sum_{m=0}^{n-1}\psi_me^{2\pi i\ell m/n}=\frac{1}{n}\sum_{m=0}^{n-1}\Bigl(\sum_{k=0}^{n-1}c_ke^{-2\pi imk/n}\Bigr)e^{2\pi i\ell m/n}=\sum_{k=0}^{n-1}c_k\frac{1}{n}\sum_{m=0}^{n-1}e^{2\pi i(\ell-k)m/n}=c_\ell, \qquad (3.9)$$

where we have used the orthogonality of the complex exponentials:

$$\sum_{m=0}^{n-1}e^{2\pi imk/n}=n\delta_{k\bmod n}=\begin{cases}n & k\bmod n=0\\ 0 & \text{otherwise}\end{cases}, \qquad (3.10)$$

where δ is the Kronecker delta,

$$\delta_m=\begin{cases}1 & m=0\\ 0 & \text{otherwise.}\end{cases}$$

Thus the eigenvalues of a circulant matrix comprise the DFT of the first row of the circulant matrix, and conversely the first row of a circulant matrix is the inverse DFT of the eigenvalues.
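This DFT relationship is easy to check numerically. The Python sketch below (not from the original text) builds the circulant matrix of (3.1) entrywise from an arbitrary first row c, verifies Cy(m) = ψm y(m) with ψ the DFT of c, and recovers c by the inverse DFT as in (3.9).

    import numpy as np

    c = np.array([1.0, 2.0, 0.5, -1.0, 0.25])       # arbitrary first row c_0,...,c_{n-1}
    n = len(c)

    # C_{k,j} = c_{(j-k) mod n}, as in (3.1).
    k, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    C = c[(j - k) % n]

    psi = np.fft.fft(c)                             # (3.7): eigenvalues = DFT of first row
    m = np.arange(n)
    U = np.exp(-2j * np.pi * np.outer(m, m) / n) / np.sqrt(n)   # columns are y^(m)

    print(np.allclose(C @ U, U * psi))              # (3.8): C y^(m) = psi_m y^(m)
    print(np.allclose(np.fft.ifft(psi), c))         # (3.9): first row = inverse DFT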
Eq. (3.8) can be written as a single matrix equation

$$CU=U\Psi, \qquad (3.11)$$

where

$$U=[y^{(0)}|y^{(1)}|\cdots|y^{(n-1)}]=n^{-1/2}[e^{-2\pi imk/n};\ m,k=0,1,\ldots,n-1]$$

is the matrix composed of the eigenvectors as columns, and Ψ = diag(ψk) is the diagonal matrix with diagonal elements ψ0, ψ1, . . . , ψn−1. Furthermore, (3.10) implies that U is unitary. By way of details, denote the (k, j)th element of UU∗ by ak,j and observe that ak,j will be the product of the kth row of U, which is {e^(−2πimk/n)/√n; m = 0, 1, . . . , n−1}, times the jth column of U∗, which is {e^(2πimj/n)/√n; m = 0, 1, . . . , n − 1}, so that

$$a_{k,j}=\frac{1}{n}\sum_{m=0}^{n-1}e^{2\pi im(j-k)/n}=\delta_{(k-j)\bmod n}$$

and hence UU∗ = I. Similarly, U∗U = I. Thus (3.11) implies that

$$C=U\Psi U^* \qquad (3.12)$$

$$\Psi=U^*CU. \qquad (3.13)$$

Since C is unitarily similar to a diagonal matrix it is normal.

3.2 Matrix Operations on Circulant Matrices

The following theorem summarizes the properties derived in the previous section regarding eigenvalues and eigenvectors of circulant matrices and provides some easy implications.

Theorem 3.1. Every circulant matrix C has eigenvectors

$$y^{(m)}=\frac{1}{\sqrt{n}}\bigl(1,e^{-2\pi im/n},\ldots,e^{-2\pi im(n-1)/n}\bigr)',\quad m=0,1,\ldots,n-1,$$

and corresponding eigenvalues

$$\psi_m=\sum_{k=0}^{n-1}c_ke^{-2\pi imk/n}$$

and can be expressed in the form C = UΨU∗, where U has the eigenvectors as columns in order and Ψ is diag(ψk). In particular all circulant matrices share the same eigenvectors, the same matrix U works for all circulant matrices, and any matrix of the form C = UΨU∗ is circulant.

Let C = {ck−j} and B = {bk−j} be circulant n × n matrices with eigenvalues

$$\psi_m=\sum_{k=0}^{n-1}c_ke^{-2\pi imk/n},\qquad \beta_m=\sum_{k=0}^{n-1}b_ke^{-2\pi imk/n},$$
respectively. Then

(1) C and B commute and
$$CB=BC=U\gamma U^*,$$
where γ = diag(ψmβm), and CB is also a circulant matrix.
(2) C + B is a circulant matrix and
$$C+B=U\Omega U^*,$$
where Ω = {(ψm + βm)δk−m}.
(3) If ψm ≠ 0 for m = 0, 1, . . . , n − 1, then C is nonsingular and
$$C^{-1}=U\Psi^{-1}U^*.$$

Proof. We have C = UΨU∗ and B = UΦU∗ where Ψ = diag(ψm) and Φ = diag(βm).

(1) CB = UΨU∗UΦU∗ = UΨΦU∗ = UΦΨU∗ = BC. Since ΨΦ is diagonal, the first part of the theorem implies that CB is circulant.
(2) C + B = U(Ψ + Φ)U∗.
(3) If Ψ is nonsingular, then CUΨ⁻¹U∗ = UΨU∗UΨ⁻¹U∗ = UΨΨ⁻¹U∗ = UU∗ = I. □

Circulant matrices are an especially tractable class of matrices since inverses, products, and sums are also circulant matrices and hence both straightforward to construct and normal. In addition the eigenvalues of such matrices can easily be found exactly and the same eigenvectors work for all circulant matrices.

We shall see that suitably chosen sequences of circulant matrices asymptotically approximate sequences of Toeplitz matrices and hence results similar to those in Theorem 3.1 will hold asymptotically for sequences of Toeplitz matrices.
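Theorem 3.1 is what makes circulant matrices computationally convenient: every operation reduces to pointwise work on the DFT of the first row. The Python sketch below (an added illustration, not from the original) inverts a circulant matrix by inverting its eigenvalues, as in part (3), and checks the result against a direct inverse; the first row is an arbitrary choice with nonzero eigenvalues.

    import numpy as np

    c = np.array([4.0, 1.0, 0.0, 0.0, 1.0])        # first row of C; arbitrary choice
    n = len(c)
    k, j = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
    C = c[(j - k) % n]                             # C_{k,j} = c_{(j-k) mod n}

    psi = np.fft.fft(c)                            # eigenvalues of C
    assert np.all(psi != 0)                        # Theorem 3.1(3): C is nonsingular

    # C^{-1} = U Psi^{-1} U* is itself circulant, with first row = inverse DFT of 1/psi.
    c_inv = np.fft.ifft(1.0 / psi).real
    C_inv = c_inv[(j - k) % n]

    print(np.allclose(C_inv, np.linalg.inv(C)))    # True
    print(np.allclose(C @ C_inv, np.eye(n)))       # True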
4 Toeplitz Matrices

4.1 Sequences of Toeplitz Matrices

Given the simplicity of sums, products, eigenvalues, inverses, and determinants of circulant matrices, an obvious approach to the study of asymptotic properties of sequences of Toeplitz matrices is to approximate them by asymptotically equivalent sequences of circulant matrices and then apply the results developed thus far. Such results are most easily derived when strong assumptions are placed on the sequence of Toeplitz matrices which keep the structure of the matrices simple and allow them to be well approximated by a natural and simple sequence of related circulant matrices. Increasingly general results require correspondingly more complicated constructions and proofs.

Consider the infinite sequence {tk} and define the corresponding sequence of n × n Toeplitz matrices Tn = [tk−j; k, j = 0, 1, . . . , n − 1] as in (1.1). Toeplitz matrices can be classified by the restrictions placed on the sequence tk. The simplest class results if there is a finite m for which tk = 0, |k| > m, in which case Tn is said to be a banded Toeplitz matrix. A banded Toeplitz matrix has the appearance of (4.1), possessing a finite number of diagonals with nonzero entries and zeros everywhere
else, so that the nonzero entries lie within a "band" including the main diagonal:

$$T_n=\begin{pmatrix}
t_0 & t_{-1} & \cdots & t_{-m} & & & 0\\
t_1 & t_0 & & & \ddots & & \\
\vdots & & \ddots & & & \ddots & \\
t_m & \cdots & t_1 & t_0 & t_{-1} & \cdots & t_{-m}\\
 & \ddots & & & \ddots & & \vdots\\
 & & \ddots & & & t_0 & t_{-1}\\
0 & & & t_m & \cdots & t_1 & t_0
\end{pmatrix}. \qquad (4.1)$$

In the more general case where the tk are not assumed to be zero for large k, there are two common constraints placed on the infinite sequence {tk; k = . . . , −2, −1, 0, 1, 2, . . .} which defines all of the matrices Tn in the sequence. The most general is to assume that the tk are square summable, i.e., that

$$\sum_{k=-\infty}^{\infty}|t_k|^2<\infty. \qquad (4.2)$$

Unfortunately this case requires mathematical machinery beyond that assumed here; i.e., Lebesgue integration and a relatively advanced knowledge of Fourier series. We will make the stronger assumption that the tk are absolutely summable, i.e., that

$$\sum_{k=-\infty}^{\infty}|t_k|<\infty. \qquad (4.3)$$

Note that (4.3) is indeed a stronger constraint than (4.2) since

$$\sum_{k=-\infty}^{\infty}|t_k|^2\le\Bigl(\sum_{k=-\infty}^{\infty}|t_k|\Bigr)^2. \qquad (4.4)$$
The assumption of absolute summability greatly simplifies the mathematics, but does not alter the fundamental concepts of Toeplitz and circulant matrices involved. As the main purpose here is tutorial and we wish chiefly to relay the flavor and an intuitive feel for the results, we will confine interest to the absolutely summable case. The main advantage of (4.3) over (4.2) is that it ensures the existence of the Fourier series f(λ) defined by

$$f(\lambda)=\sum_{k=-\infty}^{\infty}t_ke^{ik\lambda}=\lim_{n\to\infty}\sum_{k=-n}^{n}t_ke^{ik\lambda}. \qquad (4.5)$$

Not only does the limit in (4.5) converge if (4.3) holds, it converges uniformly for all λ, that is, we have that

$$\Bigl|f(\lambda)-\sum_{k=-n}^{n}t_ke^{ik\lambda}\Bigr|=\Bigl|\sum_{k=-\infty}^{-n-1}t_ke^{ik\lambda}+\sum_{k=n+1}^{\infty}t_ke^{ik\lambda}\Bigr|\le\Bigl|\sum_{k=-\infty}^{-n-1}t_ke^{ik\lambda}\Bigr|+\Bigl|\sum_{k=n+1}^{\infty}t_ke^{ik\lambda}\Bigr|\le\sum_{k=-\infty}^{-n-1}|t_k|+\sum_{k=n+1}^{\infty}|t_k|,$$

where the right-hand side does not depend on λ and it goes to zero as n → ∞ from (4.3). Thus given ε there is a single N, not depending on λ, such that

$$\Bigl|f(\lambda)-\sum_{k=-n}^{n}t_ke^{ik\lambda}\Bigr|\le\varepsilon,\quad \text{all }\lambda\in[0,2\pi],\ \text{if }n\ge N. \qquad (4.6)$$

Furthermore, if (4.3) holds, then f(λ) is Riemann integrable and the tk can be recovered from f from the ordinary Fourier inversion formula:

$$t_k=\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)e^{-ik\lambda}\,d\lambda. \qquad (4.7)$$

As a final useful property of this case, f(λ) is a continuous function of λ ∈ [0, 2π] except possibly at a countable number of points.
A sequence of Toeplitz matrices Tn = [tk−j] for which the tk are absolutely summable is said to be in the Wiener class. Similarly, a function f(λ) defined on [0, 2π] is said to be in the Wiener class if it has a Fourier series with absolutely summable Fourier coefficients. It will often be of interest to begin with a function f in the Wiener class and then define the sequence of n × n Toeplitz matrices

$$T_n(f)=\Bigl[\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)e^{-i(k-j)\lambda}\,d\lambda;\ k,j=0,1,\ldots,n-1\Bigr], \qquad (4.8)$$

which will then also be in the Wiener class. The Toeplitz matrix Tn(f) will be Hermitian if and only if f is real. More specifically, Tn(f)∗ = Tn(f) if and only if tk−j = t∗j−k for all k, j or, equivalently, t∗k = t−k for all k. If t∗k = t−k, however,

$$f^*(\lambda)=\sum_{k=-\infty}^{\infty}t_k^*e^{-ik\lambda}=\sum_{k=-\infty}^{\infty}t_{-k}e^{-ik\lambda}=\sum_{k=-\infty}^{\infty}t_ke^{ik\lambda}=f(\lambda),$$

so that f is real. Conversely, if f is real, then

$$t_k^*=\frac{1}{2\pi}\int_0^{2\pi}f^*(\lambda)e^{ik\lambda}\,d\lambda=\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)e^{ik\lambda}\,d\lambda=t_{-k}.$$
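The Python sketch below (not in the original) builds Tn(f) directly from (4.8) for a real Wiener-class symbol by evaluating the Fourier coefficients with a Riemann sum, and checks that the resulting matrix is Hermitian; the particular f is an arbitrary choice.

    import numpy as np
    from scipy.linalg import toeplitz

    def toeplitz_from_symbol(f, n, grid=4096):
        """Approximate T_n(f) of (4.8) with a Riemann sum for the Fourier coefficients."""
        lam = np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False)
        vals = f(lam)
        ks = np.arange(-(n - 1), n)
        t = np.array([np.mean(vals * np.exp(-1j * k * lam)) for k in ks])  # t_{-(n-1)},...,t_{n-1}
        col = t[n - 1:]          # t_0, t_1, ..., t_{n-1}       (first column)
        row = t[n - 1::-1]       # t_0, t_{-1}, ..., t_{-(n-1)} (first row)
        return toeplitz(col, row)

    f = lambda lam: 2.0 + 2.0 * np.cos(lam) + np.cos(2.0 * lam)   # real, Wiener class
    Tn = toeplitz_from_symbol(f, 8)
    print(np.allclose(Tn, Tn.conj().T))         # Hermitian since f is real
    print(np.round(Tn[0, :4].real, 6))          # ~ (2, 1, 0.5, 0): the Fourier coefficients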
It will be of interest to characterize the maximum and minimum magnitude of the eigenvalues of Toeplitz matrices and how these relate to the maximum and minimum values of the corresponding functions f. Problems arise, however, if the function f has a maximum or minimum at an isolated point. To avoid such difficulties we define the essential supremum Mf = ess sup f of a real valued function f as the smallest number a for which f(x) ≤ a except on a set of total length or measure 0. In particular, if f(x) > a only at isolated points x and not on any interval of nonzero length, then Mf ≤ a. Similarly, the essential infimum mf = ess inf f is defined as the largest value of a for which f(x) ≥ a except on a set of total length or measure 0. The key idea here is to view Mf and mf as the maximum and minimum values of f, where the extra verbiage is to avoid technical difficulties arising from the values of f on sets that do not affect the integrals. Functions f in the Wiener class are bounded since

$$|f(\lambda)|\le\sum_{k=-\infty}^{\infty}|t_ke^{ik\lambda}|\le\sum_{k=-\infty}^{\infty}|t_k| \qquad (4.9)$$

so that

$$m_{|f|},\ M_{|f|}\le\sum_{k=-\infty}^{\infty}|t_k|. \qquad (4.10)$$

4.2 Bounds on Eigenvalues of Toeplitz Matrices

In this section Lemma 2.1 is used to obtain bounds on the eigenvalues of Hermitian Toeplitz matrices and an upper bound on the strong norm for general Toeplitz matrices.

Lemma 4.1. Let τn,k be the eigenvalues of a Toeplitz matrix Tn(f). If Tn(f) is Hermitian, then

$$m_f\le\tau_{n,k}\le M_f. \qquad (4.11)$$

Whether or not Tn(f) is Hermitian,

$$\|T_n(f)\|\le 2M_{|f|}, \qquad (4.12)$$

so that the sequence of Toeplitz matrices {Tn(f)} is uniformly bounded over n if the essential supremum of |f| is finite.

Proof. From Lemma 2.1,

$$\max_k\tau_{n,k}=\max_x\,(x^*T_n(f)x)/(x^*x) \qquad (4.13)$$

$$\min_k\tau_{n,k}=\min_x\,(x^*T_n(f)x)/(x^*x)$$
so that

$$x^*T_n(f)x=\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}t_{k-j}x_kx_j^*=\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}\Bigl[\frac{1}{2\pi}\int_0^{2\pi}f(\lambda)e^{i(k-j)\lambda}\,d\lambda\Bigr]x_kx_j^*=\frac{1}{2\pi}\int_0^{2\pi}\Bigl|\sum_{k=0}^{n-1}x_ke^{ik\lambda}\Bigr|^2f(\lambda)\,d\lambda \qquad (4.14)$$

and likewise

$$x^*x=\sum_{k=0}^{n-1}|x_k|^2=\frac{1}{2\pi}\int_0^{2\pi}\Bigl|\sum_{k=0}^{n-1}x_ke^{ik\lambda}\Bigr|^2d\lambda. \qquad (4.15)$$

Combining (4.14)–(4.15) results in

$$m_f\le\frac{x^*T_n(f)x}{x^*x}=\frac{\displaystyle\int_0^{2\pi}f(\lambda)\Bigl|\sum_{k=0}^{n-1}x_ke^{ik\lambda}\Bigr|^2d\lambda}{\displaystyle\int_0^{2\pi}\Bigl|\sum_{k=0}^{n-1}x_ke^{ik\lambda}\Bigr|^2d\lambda}\le M_f, \qquad (4.16)$$

which with (4.13) yields (4.11).

We have already seen in (2.16) that if Tn(f) is Hermitian, then ‖Tn(f)‖ = maxk|τn,k| = |τn,M|. Since |τn,M| ≤ max(|Mf|, |mf|) ≤ M|f|, (4.12) holds for Hermitian matrices. Suppose that Tn(f) is not Hermitian or, equivalently, that f is not real. Any function f can be written in terms of its real and imaginary parts, f = fr + ifi, where both fr and fi are real. In particular, fr = (f + f∗)/2 and fi = (f − f∗)/2i. From the triangle inequality for norms,

$$\|T_n(f)\|=\|T_n(f_r+if_i)\|=\|T_n(f_r)+iT_n(f_i)\|\le\|T_n(f_r)\|+\|T_n(f_i)\|\le M_{|f_r|}+M_{|f_i|}.$$
Since |(f ± f∗)/2| ≤ (|f| + |f∗|)/2 ≤ M|f|, we have M|fr| + M|fi| ≤ 2M|f|, proving (4.12). □

Note for later use that the weak norm of a Toeplitz matrix takes a particularly simple form. Let Tn(f) = {tk−j}; then by collecting equal terms we have

$$|T_n(f)|^2=\frac{1}{n}\sum_{k=0}^{n-1}\sum_{j=0}^{n-1}|t_{k-j}|^2=\frac{1}{n}\sum_{k=-(n-1)}^{n-1}(n-|k|)|t_k|^2=\sum_{k=-(n-1)}^{n-1}(1-|k|/n)|t_k|^2. \qquad (4.17)$$

We are now ready to put all the pieces together to study the asymptotic behavior of Tn(f). If we can find an asymptotically equivalent sequence of circulant matrices, then all of the results regarding circulant matrices and asymptotically equivalent sequences of matrices apply. The main difference between the derivations for the simple sequence of banded Toeplitz matrices and the more general case is the sequence of circulant matrices chosen. Hence to gain some feel for the matrix chosen, we first consider the simpler banded case where the answer is obvious. The results are then generalized in a natural way.

4.3 Banded Toeplitz Matrices

Let Tn be a sequence of banded Toeplitz matrices of order m + 1, that is, ti = 0 unless |i| ≤ m. Since we are interested in the behavior of Tn for large n we choose n ≫ m. As is easily seen from (4.1), Tn looks like a circulant matrix except for the upper left and lower right-hand corners, i.e., each row is the row above shifted to the right one place. We can make a banded Toeplitz matrix exactly into a circulant if we fill in the upper right and lower left corners with the appropriate entries.
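The following Python sketch (added here; the corner-filling construction is the one just described, while the particular banded symbol is arbitrary) builds both the banded Toeplitz matrix and its circulant completion and shows that the weak norm of their difference decays like 1/√n while the strong norms stay bounded — exactly the two ingredients of asymptotic equivalence from Chapter 2.

    import numpy as np
    from scipy.linalg import toeplitz

    # Banded symbol: t_0 = 2, t_{+-1} = 1, t_{+-2} = 0.5 (m = 2); an arbitrary choice.
    t_pos = np.array([2.0, 1.0, 0.5])     # t_0, t_1, t_2
    t_neg = np.array([2.0, 1.0, 0.5])     # t_0, t_{-1}, t_{-2}
    m = 2

    def T(n):                              # banded Toeplitz matrix as in (4.1)
        col = np.zeros(n); col[:m + 1] = t_pos
        row = np.zeros(n); row[:m + 1] = t_neg
        return toeplitz(col, row)

    def C(n):                              # circulant obtained by filling in the corners
        c = np.zeros(n)
        c[0] = t_pos[0]
        for k in range(1, m + 1):
            c[k] = t_neg[k]                # c_k = t_{-k}
            c[n - k] = t_pos[k]            # c_{n-k} = t_k
        kk, jj = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')
        return c[(jj - kk) % n]

    for n in (8, 64, 512):
        weak_diff = np.linalg.norm(T(n) - C(n), 'fro') / np.sqrt(n)
        print(n, weak_diff, np.linalg.norm(T(n), 2), np.linalg.norm(C(n), 2))
        # |T_n - C_n| -> 0 while the strong norms stay bounded (by 2 sum|t_k| = 10)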
