Week 6 Linear Algebra II: Presentation Transcript

• AMATH 460: Mathematical Methods for Quantitative Finance
  6. Linear Algebra II
  Kjell Konis, Acting Assistant Professor, Applied Mathematics, University of Washington (Copyright © 2013)
• Outline
  1 Transposes and Permutations
  2 Vector Spaces and Subspaces
  3 Variance-Covariance Matrices
  4 Computing Covariance Matrices
  5 Orthogonal Matrices
  6 Singular Value Factorization
  7 Eigenvalues and Eigenvectors
  8 Solving Least Squares Problems
• Transposes
  Let A be an m × n matrix; the transpose of A is denoted by A^T. The columns of A^T are the rows of A. If the dimension of A is m × n then the dimension of A^T is n × m.
  A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad A^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}
  Properties: (A^T)^T = A; the transpose of A + B is A^T + B^T; the transpose of AB is (AB)^T = B^T A^T; the transpose of A^{-1} is (A^{-1})^T = (A^T)^{-1}.
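  A quick numerical check of these properties (a minimal R sketch; the matrices A and B are made up for illustration):
    A <- matrix(1:6, nrow = 2, byrow = TRUE)      # the 2 x 3 matrix from the slide
    B <- matrix(7:12, nrow = 2, byrow = TRUE)     # another arbitrary 2 x 3 matrix
    all.equal(t(t(A)), A)                         # (A^T)^T = A
    all.equal(t(A + B), t(A) + t(B))              # transpose of a sum
    all.equal(t(A %*% t(B)), B %*% t(A))          # (AC)^T = C^T A^T with C = t(B)
    M <- A %*% t(B)                               # an invertible 2 x 2 matrix
    all.equal(t(solve(M)), solve(t(M)))           # (M^{-1})^T = (M^T)^{-1}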
• Symmetric Matrices
  A symmetric matrix satisfies A = A^T, which implies that a_{ij} = a_{ji}.
  A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 4 \\ 3 & 4 & 9 \end{pmatrix} = A^T
  A diagonal matrix is automatically symmetric:
  D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 9 \end{pmatrix} = D^T
• Products R^T R, RR^T and LDL^T
  Let R be any m × n matrix. The matrix A = R^T R is symmetric:
  A^T = (R^T R)^T = R^T (R^T)^T = R^T R = A
  The matrix A = RR^T is also symmetric:
  A^T = (RR^T)^T = (R^T)^T R^T = RR^T = A
  Many problems that start with a rectangular matrix R end up with R^T R or RR^T, or both!
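  In R, crossprod(R) computes R^T R and tcrossprod(R) computes RR^T; a small check that both are symmetric (a sketch, with an arbitrary simulated R):
    R <- matrix(rnorm(12), nrow = 4)    # arbitrary 4 x 3 matrix
    isSymmetric(crossprod(R))           # R^T R is 3 x 3 and symmetric: TRUE
    isSymmetric(tcrossprod(R))          # R R^T is 4 x 4 and symmetric: TRUE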
• Permutation Matrices
  An n × n permutation matrix P has the rows of I in any order. There are 6 possible 3 × 3 permutation matrices: I, P_{21}, P_{31}, P_{32}, P_{32}P_{21}, and P_{21}P_{32}, where P_{ij} exchanges rows i and j of I. P^{-1} is the same as P^T.
  Example:
  P_{32}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix}, \qquad P_{32}\begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix} = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}
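  A short check of these facts in R (a sketch; P32 is built by reordering the rows of the identity matrix):
    P32 <- diag(3)[c(1, 3, 2), ]        # permutation matrix that exchanges rows 2 and 3
    P32 %*% c(1, 2, 3)                  # gives (1, 3, 2)
    all.equal(solve(P32), t(P32))       # P^{-1} = P^T: TRUE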
• Spaces of Vectors
  The space R^n consists of all column vectors with n components. For example, R^3 consists of all column vectors with 3 components and is called 3-dimensional space. The space R^2 is the xy plane: the two components of the vector are the x and y coordinates, and the tail starts at the origin (0, 0).
  Two essential vector operations go on inside the vector space: add two vectors in R^n, and multiply a vector in R^n by a scalar. The result lies in the same vector space R^n.
• Subspaces
  A subspace of a vector space is a set of vectors (including the zero vector) that satisfies two requirements: if u and w are vectors in the subspace and c is any scalar, then (i) u + w is in the subspace and (ii) cw is in the subspace.
  Some subspaces of R^3: L, any line through (0, 0, 0), e.g., the x axis; P, any plane through (0, 0, 0), e.g., the xy plane; R^3, the whole space; Z, the zero vector.
• The Column Space of A
  The column space of a matrix A consists of all linear combinations of its columns. The linear combinations are vectors that can be written as Ax. Ax = b is solvable if and only if b is in the column space of A.
  Let A be an m × n matrix: the columns of A have m components, so the columns of A live in R^m, and the column space of A is a subspace of R^m. The column space of A is denoted by R(A), where R stands for range.
• The Nullspace of A
  The nullspace of A consists of all solutions to Ax = 0. The nullspace of A is denoted by N(A).
  Example (elimination):
  \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 8 \\ 3 & 6 & 11 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 2 \\ 0 & 0 & 2 \end{pmatrix} \to \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}
  The pivot variables are x_1 and x_3. The free variable is x_2 (column 2 has no pivot). The number of pivots is called the rank of A.
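  A quick check of this example in R (a sketch; qr()$rank counts the pivots, and MASS::Null(t(A)) returns an orthonormal basis for N(A)):
    library(MASS)
    A <- matrix(c(1, 2, 3,
                  2, 4, 8,
                  3, 6, 11), nrow = 3, byrow = TRUE)
    qr(A)$rank       # 2 pivots, so the rank is 2
    Null(t(A))       # one basis vector, proportional to (-2, 1, 0)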
• Linear Independence
  A sequence of vectors v_1, v_2, ..., v_n is linearly independent if the only linear combination that gives the zero vector is 0v_1 + 0v_2 + ... + 0v_n. The columns of A are independent if the only solution to Ax = 0 is the zero vector; equivalently, elimination produces no free variables, the rank of A is equal to n, and the nullspace N(A) contains only the zero vector.
  A set of vectors spans a space if their linear combinations fill the space. A basis for a vector space is a sequence of vectors that (i) are linearly independent and (ii) span the space. The dimension of a vector space is the number of vectors in every basis.
• Sample Variance and Covariance
  Sample mean of the elements of a vector x of length m:
  \bar{x} = \frac{1}{m} \sum_{i=1}^{m} x_i
  Sample variance of the elements of x:
  \mathrm{Var}(x) = \frac{1}{m-1} \sum_{i=1}^{m} (x_i - \bar{x})^2
  Let y be a vector of length m. Sample covariance of x and y:
  \mathrm{Cov}(x, y) = \frac{1}{m-1} \sum_{i=1}^{m} (x_i - \bar{x})(y_i - \bar{y})
• Sample Variance
  Let e be a column vector of m ones. Then
  \frac{e^T x}{m} = \frac{1}{m} \sum_{i=1}^{m} x_i = \bar{x}
  where \bar{x} is a scalar. Let e\bar{x} be a column vector repeating \bar{x} m times, and let \tilde{x} = x - e\bar{x}; the i-th element of \tilde{x} is \tilde{x}_i = x_i - \bar{x}.
  Take another look at the sample variance:
  \mathrm{Var}(x) = \frac{1}{m-1} \sum_{i=1}^{m} (x_i - \bar{x})^2 = \frac{1}{m-1} \sum_{i=1}^{m} \tilde{x}_i^2 = \frac{\tilde{x}^T \tilde{x}}{m-1} = \frac{\|\tilde{x}\|^2}{m-1}
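  This identity is easy to verify numerically (a minimal R sketch with made-up data):
    x <- c(2, 4, 6, 8, 10)
    m <- length(x)
    e <- rep(1, m)
    xbar <- drop(crossprod(e, x)) / m     # e^T x / m
    xtilde <- x - e * xbar                # centered vector x~
    drop(crossprod(xtilde)) / (m - 1)     # x~^T x~ / (m - 1)
    var(x)                                # same value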
• Sample Covariance
  A similar result holds for the sample covariance. Let \tilde{y} = y - e\bar{y}, where \bar{y} = e^T y / m. The sample covariance becomes
  \mathrm{Cov}(x, y) = \frac{1}{m-1} \sum_{i=1}^{m} (x_i - \bar{x})(y_i - \bar{y}) = \frac{1}{m-1} \sum_{i=1}^{m} \tilde{x}_i \tilde{y}_i = \frac{\tilde{x}^T \tilde{y}}{m-1}
  Observe that Var(x) = Cov(x, x): proceed with the covariance and treat the variance as a special case.
• Variance-Covariance Matrix
  Suppose x and y are the columns of a matrix R:
  R = \begin{pmatrix} | & | \\ x & y \\ | & | \end{pmatrix}, \qquad \tilde{R} = \begin{pmatrix} | & | \\ \tilde{x} & \tilde{y} \\ | & | \end{pmatrix}
  The sample variance-covariance matrix is
  \mathrm{Cov}(R) = \begin{pmatrix} \mathrm{Cov}(x,x) & \mathrm{Cov}(x,y) \\ \mathrm{Cov}(y,x) & \mathrm{Cov}(y,y) \end{pmatrix} = \frac{1}{m-1}\begin{pmatrix} \tilde{x}^T\tilde{x} & \tilde{x}^T\tilde{y} \\ \tilde{y}^T\tilde{x} & \tilde{y}^T\tilde{y} \end{pmatrix} = \frac{\tilde{R}^T\tilde{R}}{m-1}
  The sample variance-covariance matrix is symmetric.
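  A numerical check that \tilde{R}^T\tilde{R}/(m-1) matches R's built-in cov() (a sketch with simulated data):
    set.seed(1)
    R <- matrix(rnorm(20), ncol = 2)                 # two columns, x and y
    m <- nrow(R)
    Rtilde <- sweep(R, 2, colMeans(R))               # subtract each column mean
    all.equal(crossprod(Rtilde) / (m - 1), cov(R))   # TRUE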
• Computing Covariance Matrices
  Take a closer look at how \tilde{x} was computed:
  \tilde{x} = x - e\bar{x} = x - e\,\frac{e^T x}{m} = x - \frac{ee^T}{m}x = \Big(I - \frac{ee^T}{m}\Big)x = Ax
  What is A? The outer product ee^T is an m × m matrix and I is the m × m identity matrix, so A is an m × m matrix. Premultiplication by A turns x into \tilde{x}. Can think of matrix multiplication as one matrix acting on another.
• Computing Covariance Matrices
  Next, consider what happens when we premultiply R by A. Think of R in block structure where each column is a block:
  AR = A\begin{pmatrix} | & | \\ x & y \\ | & | \end{pmatrix} = \begin{pmatrix} | & | \\ Ax & Ay \\ | & | \end{pmatrix} = \begin{pmatrix} | & | \\ \tilde{x} & \tilde{y} \\ | & | \end{pmatrix} = \tilde{R}
  The expression for the variance-covariance matrix no longer needs \tilde{R}:
  \mathrm{Cov}(R) = \frac{1}{m-1}\tilde{R}^T\tilde{R} = \frac{1}{m-1}(AR)^T(AR)
  Since R has 2 columns, Cov(R) is a 2 × 2 matrix. In general, R may have n columns, so Cov(R) is an n × n matrix; we still use the same m × m matrix A.
• Computing Covariance Matrices
  Take another look at the formula for the variance-covariance matrix. By the rule for the transpose of a product,
  \mathrm{Cov}(R) = \frac{1}{m-1}(AR)^T(AR) = \frac{1}{m-1}R^T A^T A R
  Consider the product A^T A. First note that
  A^T = \Big(I - \frac{ee^T}{m}\Big)^T = I^T - \frac{(ee^T)^T}{m} = I - \frac{ee^T}{m} = A
  so A is symmetric and A^T A = \Big(I - \frac{ee^T}{m}\Big)\Big(I - \frac{ee^T}{m}\Big).
• Computing Covariance Matrices
  Continuing,
  A^T A = \Big(I - \frac{ee^T}{m}\Big)\Big(I - \frac{ee^T}{m}\Big) = I^2 - 2\,\frac{ee^T}{m} + \frac{e(e^T e)e^T}{m^2} = I - 2\,\frac{ee^T}{m} + \frac{e\,m\,e^T}{m^2} = I - \frac{ee^T}{m} = A
  so A^T A = A^2 = A. A matrix satisfying A^2 = A is called idempotent.
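  A numerical check of the centering matrix (an R sketch with simulated data):
    set.seed(1)
    m <- 10
    R <- matrix(rnorm(2 * m), ncol = 2)
    e <- rep(1, m)
    A <- diag(m) - tcrossprod(e) / m             # A = I - ee^T / m
    all.equal(A %*% A, A)                        # idempotent: TRUE
    all.equal(t(R) %*% A %*% R / (m - 1), cov(R),
              check.attributes = FALSE)          # R^T A R / (m - 1) = Cov(R): TRUE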
• Computing Covariance Matrices
  Can simplify the expression for the sample variance-covariance matrix:
  \mathrm{Cov}(R) = \frac{1}{m-1}R^T A^T A R = \frac{1}{m-1}R^T A R
  where R^T is n × m, A is m × m, and R is m × n. How to order the operations:
  R^T A R = R^T\Big(I - \frac{ee^T}{m}\Big)R = R^T R - \frac{1}{m}R^T e\,(e^T R) = R^T R - \frac{1}{m}R^T e\,M_1 = R^T\Big(R - \frac{1}{m}M_2\Big)
  where M_1 = e^T R is 1 × n and M_2 = eM_1 is m × n; dividing by m − 1 gives Cov(R). Ordering the products this way avoids ever forming the m × m matrix A.
• Orthogonal Matrices
  Two vectors q, w ∈ R^m are orthogonal if their inner product is zero: ⟨q, w⟩ = 0. Consider a set of m nonzero vectors {q_1, ..., q_m} where q_j ∈ R^m, and assume that the vectors {q_1, ..., q_m} are pairwise orthogonal. Let \tilde{q}_j = q_j / \|q_j\| be a unit vector in the same direction as q_j; the vectors {\tilde{q}_1, ..., \tilde{q}_m} are orthonormal.
  Let Q be a matrix with columns {\tilde{q}_j} and consider the product Q^T Q: its (i, j) entry is \tilde{q}_i^T \tilde{q}_j, which equals 1 when i = j and 0 otherwise, so Q^T Q = I.
• Orthogonal Matrices
  A square matrix Q is orthogonal if Q^T Q = I and QQ^T = I. Orthogonal matrices represent rotations and reflections. Example:
  Q = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
  rotates a vector in the xy plane through the angle θ, and
  Q^T Q = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} \cos^2\theta + \sin^2\theta & -\cos\theta\sin\theta + \sin\theta\cos\theta \\ -\sin\theta\cos\theta + \cos\theta\sin\theta & \sin^2\theta + \cos^2\theta \end{pmatrix} = I
• Properties of Orthogonal Matrices
  The definition Q^T Q = QQ^T = I implies Q^{-1} = Q^T. For the rotation example,
  Q^T = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} \cos(-\theta) & -\sin(-\theta) \\ \sin(-\theta) & \cos(-\theta) \end{pmatrix}
  since cosine is an even function and sine is an odd function.
  Multiplication by an orthogonal matrix Q preserves dot products:
  (Qx)\cdot(Qy) = (Qx)^T(Qy) = x^T Q^T Q y = x^T I y = x^T y = x\cdot y
  Multiplication by an orthogonal matrix Q leaves lengths unchanged: \|Qx\| = \|x\|.
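  These properties are easy to confirm numerically (an R sketch; the angle is arbitrary):
    theta <- pi / 6
    Q <- matrix(c(cos(theta), sin(theta),
                  -sin(theta), cos(theta)), nrow = 2)   # rotation through theta
    all.equal(t(Q) %*% Q, diag(2))                      # Q^T Q = I
    all.equal(solve(Q), t(Q))                           # Q^{-1} = Q^T
    x <- c(3, 4)
    sqrt(sum((Q %*% x)^2))                              # length of Qx ...
    sqrt(sum(x^2))                                      # ... equals length of x (5)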
• Singular Value Factorization
  So far: if Q is orthogonal then Q^{-1} = Q^T, and if D is diagonal then D^{-1} is diagonal,
  D = \begin{pmatrix} d_1 & & 0 \\ & \ddots & \\ 0 & & d_m \end{pmatrix} \quad\Rightarrow\quad D^{-1} = \begin{pmatrix} 1/d_1 & & 0 \\ & \ddots & \\ 0 & & 1/d_m \end{pmatrix}
  Orthogonal and diagonal matrices have nice properties. Wouldn't it be nice if any matrix could be expressed as a product of diagonal and orthogonal matrices...
• Singular Value Factorization
  Every m × n matrix A can be factored into A = UΣV^T, where
  U is an m × m orthogonal matrix whose columns are the left singular vectors of A;
  Σ is an m × n diagonal matrix containing the singular values of A, with the convention σ_1 ≥ σ_2 ≥ ··· ≥ σ_n ≥ 0;
  V is an n × n orthogonal matrix whose columns contain the right singular vectors of A.
  (The slide shows the picture for m > n.)
• Multiplication
  (Figure, from Strang's chapter on linear transformations: every invertible 2 × 2 matrix transforms the unit circle into an ellipse; U and V are rotations and reflections, Σ is a stretching matrix.) Applying A = UΣV^T to the right singular vectors gives Av_1 = σ_1 u_1 and Av_2 = σ_2 u_2.
• Multiplication
  Visualize Ax = UΣV^T x as three steps: x := V^T x (a rotation), then x := Σx (stretch the x coordinate by 1.5 and the y coordinate by 2), then x := Ux (another rotation). In the slide's picture the two rotations are through 45° and 22.5°, and the starting vector is
  x := \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}
• Example
  (Figure: scatter plot of BAC returns (%) against C returns (%), drawn with eqscplot, with the right singular vectors added as arrows.)
  > load("R.RData")
  > library(MASS)
  > eqscplot(R)
  > svdR <- svd(R)
  > U <- svdR$u
  > S <- diag(svdR$d)
  > V <- svdR$v
  > all.equal(U %*% S %*% t(V), R)
  [1] TRUE
  > arrows(0, 0, V[1,1], V[2,1])
  > arrows(0, 0, V[1,2], V[2,2])
• Example (continued)
  (Figure: the same scatter plot with the singular vectors scaled by their singular values and by 1/sqrt(m − 1), drawn as arrows.)
  > u <- V[, 1] * S[1, 1]
  > u <- u / sqrt(m - 1)
  > arrows(0, 0, u[1], u[2])
  > w <- V[, 2] * S[2, 2]
  > w <- w / sqrt(m - 1)
  > arrows(0, 0, w[1], w[2])
• Eigenvalues and Eigenvectors
  Let R be an m × n matrix and let \tilde{R} = \Big(I - \frac{ee^T}{m}\Big)R. Let \tilde{R} = UΣV^T be the singular value factorization of \tilde{R}. Recall that
  \mathrm{Cov}(R) = \frac{1}{m-1}\tilde{R}^T\tilde{R} = \frac{1}{m-1}(UΣV^T)^T(UΣV^T) = \frac{1}{m-1}VΣ^T U^T UΣV^T = \frac{1}{m-1}VΣ^TΣV^T
• Eigenvalues and Eigenvectors
  Remember that Σ is a diagonal m × n matrix, so Σ^TΣ is an n × n diagonal matrix with σ_1^2 ≥ ··· ≥ σ_n^2 along the diagonal. Let
  Λ = \frac{1}{m-1}Σ^TΣ = \begin{pmatrix} λ_1 = \frac{σ_1^2}{m-1} & & \\ & \ddots & \\ & & λ_n = \frac{σ_n^2}{m-1} \end{pmatrix}
• Eigenvalues and Eigenvectors
  Substitute Λ into the expression for the covariance matrix of R:
  \mathrm{Cov}(R) = \frac{1}{m-1}VΣ^TΣV^T = VΛV^T
  Let e_j be a unit vector in the j-th coordinate direction and multiply a right singular vector v_j by Cov(R):
  \mathrm{Cov}(R)\,v_j = VΛV^T v_j = VΛ\,(V^T v_j) = VΛ\,e_j
  Recall that a matrix times a vector is a linear combination of the columns:
  Av = v_1 a_1 + \cdots + v_n a_n \quad\Rightarrow\quad Λ e_j = λ_j e_j
• Eigenvalues and Eigenvectors
  Substituting Λe_j = λ_j e_j,
  \mathrm{Cov}(R)\,v_j = VΛV^T v_j = VΛ e_j = V(λ_j e_j) = λ_j Ve_j = λ_j v_j
  In summary: v_j is a right singular vector of \tilde{R}; Cov(R) = \tilde{R}^T\tilde{R}/(m-1); Cov(R) v_j = λ_j v_j, so Cov(R) v_j has the same direction as v_j with its length scaled by the factor λ_j.
  In general: let A be a square matrix and consider the product Ax. Certain special vectors x are in the same direction as Ax; these vectors are called eigenvectors. Equation: Ax = λx; the number λ is the eigenvalue.
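  The relationship λ_j = σ_j^2/(m − 1) can be checked directly in R (a sketch using simulated returns rather than the BAC/C data):
    set.seed(1)
    m <- 100
    R <- matrix(rnorm(2 * m), ncol = 2)
    Rtilde <- sweep(R, 2, colMeans(R))     # centered columns
    sigma <- svd(Rtilde)$d                 # singular values of R~
    sigma^2 / (m - 1)                      # lambda_j = sigma_j^2 / (m - 1)
    eigen(cov(R))$values                   # the same numbers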
• Diagonalizing a Matrix
  Suppose an n × n matrix A has n linearly independent eigenvectors, and let S be a matrix whose columns are the n eigenvectors of A. Then
  S^{-1}AS = Λ = \begin{pmatrix} λ_1 & & \\ & \ddots & \\ & & λ_n \end{pmatrix}
  and the matrix A is diagonalized. Useful representations of a diagonalized matrix: AS = SΛ, S^{-1}AS = Λ, A = SΛS^{-1}.
  Diagonalization requires that A have n linearly independent eigenvectors. Side note: invertibility requires nonzero eigenvalues.
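  A minimal numerical illustration (the 2 × 2 matrix is made up; eigen() returns the eigenvectors as the columns of S):
    A <- matrix(c(2, 1,
                  1, 2), nrow = 2)                 # eigenvalues 3 and 1
    eig <- eigen(A)
    S <- eig$vectors
    Lambda <- diag(eig$values)
    all.equal(solve(S) %*% A %*% S, Lambda)        # S^{-1} A S = Lambda
    all.equal(S %*% Lambda %*% solve(S), A)        # A = S Lambda S^{-1}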
• The Spectral Theorem
  Returning to the motivating example: let A = \tilde{R}^T\tilde{R}, where \tilde{R} is an m × n matrix; A is symmetric.
  Spectral Theorem: every symmetric matrix A = A^T has the factorization QΛQ^T with real diagonal Λ and orthogonal matrix Q:
  A = QΛQ^{-1} = QΛQ^T \quad\text{with}\quad Q^{-1} = Q^T
  Caveat: a nonsymmetric matrix can easily produce λ and x that are complex.
• Positive Definite Matrices
  The symmetric matrix A is positive definite if x^T Ax > 0 for every nonzero vector x. The 2 × 2 case:
  x^T A x = \begin{pmatrix} x_1 & x_2 \end{pmatrix}\begin{pmatrix} a & b \\ b & c \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = a x_1^2 + 2b x_1 x_2 + c x_2^2 > 0
  The scalar value x^T Ax is a quadratic function of x_1 and x_2:
  f(x_1, x_2) = a x_1^2 + 2b x_1 x_2 + c x_2^2
  f has a minimum of 0 at (0, 0) and is positive everywhere else. In the 1 × 1 case a is a positive number; in the 2 × 2 case A is a positive definite matrix.
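  One common numerical check for positive definiteness is that all eigenvalues of the symmetric matrix are strictly positive; a sketch in R (the matrix and test vector are made up):
    A <- matrix(c(4, 1,
                  1, 3), nrow = 2)
    all(eigen(A, symmetric = TRUE)$values > 0)   # TRUE: A is positive definite
    x <- c(-3, 5)                                # any nonzero vector
    drop(t(x) %*% A %*% x)                       # quadratic form x^T A x is positive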
• R Example
  (Figure: the BAC vs. C returns scatter plot with the scaled eigenvectors drawn as arrows.)
  > eigR <- eigen(var(R))
  > S <- eigR$vectors
  > lambda <- eigR$values
  > u <- sqrt(lambda[1]) * S[,1]
  > arrows(0, 0, u[1], u[2])
  > w <- sqrt(lambda[2]) * S[,2]
  > arrows(0, 0, w[1], w[2])
• Least Squares
  (Figure: Citigroup returns vs. S&P 500 returns, monthly, 2010, with the fitted line \hat{y} = \hat{α} + \hat{β}x.)
  Given a set of m points (x_i, y_i), we want to find the best-fit line \hat{y} = \hat{α} + \hat{β}x. Criterion: \sum_{i=1}^{m}(y_i - \hat{y}_i)^2 should be minimum. Choose \hat{α} and \hat{β} so that
  \sum_{i=1}^{m}\big(y_i - (α + βx_i)\big)^2
  is minimized when α = \hat{α} and β = \hat{β}.
• Least Squares
  What does the column picture look like? Let y = (y_1, y_2, ..., y_m), let x = (x_1, x_2, ..., x_m), and let e be a column vector of m ones. Can write \tilde{y} as a linear combination:
  \tilde{y} = \begin{pmatrix} \tilde{y}_1 \\ \vdots \\ \tilde{y}_m \end{pmatrix} = α\begin{pmatrix} 1 \\ \vdots \\ 1 \end{pmatrix} + β\begin{pmatrix} x_1 \\ \vdots \\ x_m \end{pmatrix} = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_m \end{pmatrix}\begin{pmatrix} α \\ β \end{pmatrix} = Xβ
  Want to minimize
  \sum_{i=1}^{m}(y_i - \tilde{y}_i)^2 = \|y - \tilde{y}\|^2 = \|y - Xβ\|^2
• QR Factorization
  Let A be an m × n matrix with linearly independent columns. Full QR factorization: A can be written as the product A = QR of an m × m orthogonal matrix Q and an m × n upper triangular matrix R (upper triangular means r_{ij} = 0 when i > j).
  Want to minimize \|y - Xβ\|^2 = \|y - QRβ\|^2. Recall that an orthogonal transformation leaves vector lengths unchanged, so
  \|y - Xβ\|^2 = \|y - QRβ\|^2 = \|Q^T(y - QRβ)\|^2 = \|Q^T y - Rβ\|^2
• Least Squares
  Let u = Q^T y. Then
  u - Rβ = \begin{pmatrix} u_1 \\ u_2 \\ u_3 \\ \vdots \end{pmatrix} - \begin{pmatrix} r_{11} & r_{12} \\ 0 & r_{22} \\ 0 & 0 \\ \vdots & \vdots \end{pmatrix}\begin{pmatrix} α \\ β \end{pmatrix} = \begin{pmatrix} u_1 - (r_{11}α + r_{12}β) \\ u_2 - r_{22}β \\ u_3 \\ \vdots \end{pmatrix}
  α and β affect only the first n elements of the vector. Want to minimize
  \|u - Rβ\|^2 = \big(u_1 - (r_{11}α + r_{12}β)\big)^2 + \big(u_2 - r_{22}β\big)^2 + \sum_{i=n+1}^{m} u_i^2
• Least Squares
  Can find \hat{α} and \hat{β} by solving the linear system \tilde{R}\hat{β} = \tilde{u}:
  \begin{pmatrix} r_{11} & r_{12} \\ 0 & r_{22} \end{pmatrix}\begin{pmatrix} \hat{α} \\ \hat{β} \end{pmatrix} = \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}
  where \tilde{R} is the first n rows of R and \tilde{u} is the first n elements of u. The system is already upper triangular; solve it using back substitution.
• R Example
  First, get the data:
  > library(quantmod)
  > getSymbols(c("C", "^GSPC"))
  > citi <- c(coredata(monthlyReturn(C["2010"])))
  > sp500 <- c(coredata(monthlyReturn(GSPC["2010"])))
  The x variable is sp500; bind a column of ones to get the matrix X:
  > X <- cbind(1, sp500)
  Compute the QR factorization of X and extract the Q and R matrices:
  > qrX <- qr(X)
  > Q <- qr.Q(qrX, complete = TRUE)
  > R <- qr.R(qrX, complete = TRUE)
• R Example
  (Figure: the Citigroup vs. S&P 500 scatter plot with the fitted line \hat{y} = \hat{α} + \hat{β}x.)
  Compute u = Q^T y:
  > u <- t(Q) %*% citi
  Solve for \hat{α} and \hat{β}:
  > backsolve(R[1:2, 1:2], u[1:2])
  [1] 0.01708494 1.33208984
  Compare with the built-in least squares fitting function:
  > coef(lsfit(sp500, citi))
   Intercept          X
  0.01708494 1.33208984
• http://computational-finance.uw.edu