Linear Algebra Review
n-dimensional vector
• An n-dimensional vector v is denoted as follows:
  v = (x1, x2, . . . , xn)T   (a column vector with components x1, . . . , xn)
• The transpose vT is denoted as follows:
  vT = (x1, x2, . . . , xn)   (a row vector)
Inner (or dot) product
• Given vT = (x1, x2, . . . , xn) and wT = (y1, y2, . . . , yn),
their dot product is defined as follows:
  v . w = vTw = x1y1 + x2y2 + . . . + xnyn
The result is a scalar.
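A tiny numerical check of the dot product (an illustration added here, not from the slides; numpy is assumed):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])

# v . w = x1*y1 + x2*y2 + ... + xn*yn  (a scalar)
print(np.dot(v, w))    # 1*4 + 2*5 + 3*6 = 32.0
print(v @ w)           # same result with the @ operator
```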
Orthogonal / Orthonormal vectors
• A set of vectors x1, x2, . . . , xn is orthogonal if
  xi . xj = 0 for all i ≠ j
• A set of vectors x1, x2, . . . , xn is orthonormal if it is
orthogonal and every vector has unit length, i.e., xk . xk = 1 for all k
Linear combinations
• A vector v is a linear combination of the
vectors v1, ..., vk if:
  v = c1v1 + c2v2 + . . . + ckvk
where c1, ..., ck are constants.
Example: any vector in R3 can be expressed
as a linear combination of the unit vectors
i = (1, 0, 0), j = (0, 1, 0), and k = (0, 0, 1)
Space spanning
• A set of vectors S = (v1, v2, . . . , vk) spans some
space W if every vector v in W can be written
as a linear combination of the vectors in S
Example: the unit vectors i, j, and k span R3
Linear dependence
• A set of vectors v1, ..., vk is linearly dependent
if at least one of them (e.g., vj) can be written
as a linear combination of the rest:
  vj = c1v1 + . . . + cj-1vj-1 + cj+1vj+1 + . . . + ckvk
(i.e., vj does not appear on the right side of the above equation)
Linear independence
• A set of vectors v1, ..., vk is linearly independent
if no vector vj can be represented as a linear
combination of the remaining vectors; equivalently,
  c1v1 + c2v2 + . . . + ckvk = 0 only when c1 = c2 = . . . = ck = 0
Example: v1 = (1, 0) and v2 = (0, 1) are linearly independent,
since c1v1 + c2v2 = 0 forces c1 = c2 = 0.
Vector basis
• A set of vectors v1, ..., vk forms a basis in
some vector space W if:
(1) (v1, ..., vk) span W
(2) (v1, ..., vk) are linearly independent
Some standard bases:
  R2: (1, 0), (0, 1)
  R3: (1, 0, 0), (0, 1, 0), (0, 0, 1)
  Rn: e1 = (1, 0, . . . , 0), e2 = (0, 1, . . . , 0), . . . , en = (0, 0, . . . , 1)
Orthogonal vector basis
• Basis vectors might not be orthogonal.
• Any set of basis vectors (v1, ..., vk) can be
transformed to an orthogonal basis using the
Gram-Schmidt orthogonalization algorithm.
• Normalizing the basis vectors to “unit” length will
yield an orthonormal basis.
• Orthonormal bases are more useful in practice since they simplify
calculations.
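Below is a minimal sketch of Gram-Schmidt in numpy (an illustration added here, not part of the original slides; the function name and tolerance are arbitrary choices):

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Return an orthonormal basis for the span of `vectors` (modified Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float)
        # subtract the projection onto each basis vector found so far
        for u in basis:
            w = w - np.dot(w, u) * u
        norm = np.linalg.norm(w)
        if norm > tol:               # skip (near-)linearly-dependent vectors
            basis.append(w / norm)   # normalize to unit length -> orthonormal
    return np.array(basis)

# Two non-orthogonal vectors in R3
B = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(np.round(B @ B.T, 6))   # ~identity: the rows are orthonormal
```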
Vector Expansion/Projection
• Suppose v1, v2, . . . , vn is an orthogonal basis
of W; then any v є W can be represented in
this basis as follows:
  v = x1v1 + x2v2 + . . . + xnvn   (vector expansion or projection)
• The coefficients xi of the expansion can be computed as follows:
  xi = (v . vi) / (vi . vi)   (coefficients of expansion or projection)
Note: if the basis is orthonormal, then vi . vi = 1 and xi = v . vi
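A small numerical illustration of the expansion coefficients (added here, numpy assumed):

```python
import numpy as np

# Expand v in an orthogonal (here even orthonormal) basis of R3.
v = np.array([2.0, -1.0, 3.0])
basis = [np.array([1.0, 0.0, 0.0]),
         np.array([0.0, 1.0, 0.0]),
         np.array([0.0, 0.0, 1.0])]

# xi = (v . vi) / (vi . vi); the denominator is 1 for an orthonormal basis
coeffs = [np.dot(v, vi) / np.dot(vi, vi) for vi in basis]
reconstructed = sum(c * vi for c, vi in zip(coeffs, basis))
print(coeffs)                         # [2.0, -1.0, 3.0]
print(np.allclose(reconstructed, v))  # True
```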
Vector basis (cont’d)
• Why do we care about sets of basis vectors?
– Given a set of basis vectors, each vector can be
represented (i.e., projected) “uniquely” in this basis.
• Do vector spaces have a unique vector
basis?
– No, simply translate/rotate the basis vectors to obtain a
new basis!
– Some sets of basis vectors are preferred over others,
though.
– We will see this when we discuss Principal Components
Analysis (PCA).
Matrix Operations
• Matrix addition/subtraction
– Add/Subtract corresponding elements.
– Matrices must be of same size.
• Matrix multiplication
  C = AB, where A is m x n and B is q x p
  Condition: n = q; the product C is m x p
  cij = ai1b1j + ai2b2j + . . . + ainbnj
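A quick dimension check of the multiplication rule (added illustration, numpy assumed):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3  (m x n)
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])           # 3 x 2  (q x p); n = q, so AB is defined

C = A @ B                        # 2 x 2  (m x p)
print(C)                         # [[ 4  5]
                                 #  [10 11]]
```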
Diagonal Matrices
• A diagonal matrix has non-zero entries only on its main
diagonal; all off-diagonal entries are zero.
Special case: Identity matrix
• The identity matrix I is a diagonal matrix whose diagonal
entries are all 1; it satisfies AI = IA = A.
Matrix Transpose
• The transpose AT of an m x n matrix A is the n x m matrix
obtained by interchanging rows and columns: (AT)ij = aji
• Property: (AB)T = BTAT
Symmetric Matrices
• A matrix A is symmetric if A = AT, i.e., aij = aji for all i, j.
Example: every diagonal matrix is symmetric; so is any matrix whose
off-diagonal entries mirror each other across the main diagonal.
Determinants
2 x 2:  det(A) = a11a22 - a12a21
3 x 3:  det(A) = a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)
n x n:  det(A) = a11C11 + a21C21 + . . . + an1Cn1   (expanded along 1st column)
        det(A) = a1kC1k + a2kC2k + . . . + ankCnk   (expanded along kth column)
        where Cij is the cofactor of aij
Properties: det(AB) = det(A)det(B), det(AT) = det(A),
and det(A) = 0 if and only if A is singular.
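In practice determinants are computed numerically rather than by cofactor expansion; a small check (added here, numpy assumed):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(np.linalg.det(A))          # 2*3 - 1*1 = 5.0

B = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # second row is 2x the first
print(np.linalg.det(B))          # ~0: B is singular
```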
Matrix Inverse
• The inverse of a matrix A, denoted as A-1, has the
property:
A A-1 = A-1A = I
• A-1 exists only if det(A) ≠ 0 (i.e., A is non-singular)
• Definitions
– Singular matrix: A-1 does not exist
– Ill-conditioned matrix: A is “close” to being singular
Matrix Inverse (cont’d)
• Properties of the inverse:
  (A-1)-1 = A
  (AB)-1 = B-1A-1
  (AT)-1 = (A-1)T
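A short numerical check of the inverse and of (ill-)conditioning (added illustration, numpy assumed):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])        # det = 24 - 14 = 10, so A is invertible

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))   # True: A A^-1 = I

# An ill-conditioned matrix is "close" to singular; a large condition
# number signals that inverting it amplifies numerical error.
print(np.linalg.cond(A))
```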
Matrix trace
• The trace of an n x n matrix A is the sum of its diagonal elements:
  tr(A) = a11 + a22 + . . . + ann
Properties: tr(A + B) = tr(A) + tr(B), tr(AB) = tr(BA), tr(AT) = tr(A)
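A quick check of the trace properties (added illustration, numpy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

print(np.trace(A))                                   # 1 + 4 = 5.0
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))  # True: tr(AB) = tr(BA)
```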
Rank of matrix
• Defined as the size of the largest square sub-matrix
of A that has a non-zero determinant.
Example: a matrix whose largest square sub-matrix with a
non-zero determinant is 3 x 3 has rank 3.
Rank of matrix (cont’d)
• Alternatively, it can be defined as the maximum
number of linearly independent columns (or
rows) of A.
Example: a matrix with 4 columns in which one column is a linear
combination of the others has at most 3 linearly independent
columns, i.e., its rank is not 4.
Rank of matrix (cont’d)
• Useful properties:
  rank(A) ≤ min(m, n) for an m x n matrix A
  rank(AB) ≤ min(rank(A), rank(B))
  an n x n matrix A is invertible if and only if rank(A) = n
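Numerically, the rank can be obtained directly (added illustration, numpy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # 2x the first row
              [0.0, 1.0, 1.0]])

# Only 2 linearly independent rows/columns, so the rank is 2, not 3.
print(np.linalg.matrix_rank(A))   # 2
```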
Eigenvalues and Eigenvectors
• The vector v is an eigenvector of matrix A and
λ is an eigenvalue of A if:
  Av = λv   (assume v is non-zero)
Geometric interpretation: the linear transformation
implied by A cannot change the direction of the
eigenvectors v, only their magnitude (scaled by λ).
Computing λ and v
• To compute the eigenvalues λ of a matrix A,
find the roots of the characteristic polynomial:
  det(A - λI) = 0
• The eigenvectors can then be computed by solving
  (A - λI)v = 0 for each eigenvalue λ.
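A numerical sketch of the eigenvalue computation (added here, numpy assumed; the slides' original example is not reproduced):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# Roots of det(A - lambda*I) = 0 and the corresponding eigenvectors
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                            # [2. 3.]

# Each column of eigvecs is an eigenvector: A v = lambda v
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))    # True
```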
Properties of λ and v
• Eigenvalues and eigenvectors are only
defined for square matrices.
• Eigenvectors are not unique (e.g., if v is an
eigenvector, so is kv) 
• Suppose λ1, λ2, ..., λn are the eigenvalues of A; then:
  det(A) = λ1 λ2 . . . λn   and   tr(A) = λ1 + λ2 + . . . + λn
Matrix diagonalization
• Given an n x n matrix A, find P such that:
P-1AP=Λ where Λ is diagonal
• Solution: Set P = [v1 v2 . . . vn], where v1, v2, . . . , vn
are the eigenvectors of A; the diagonal entries of Λ are the
eigenvalues of A:
  P-1AP = Λ = diag(λ1, λ2, . . . , λn)
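A small numerical check of the diagonalization (added illustration, numpy assumed):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors of A
Lambda = np.linalg.inv(P) @ A @ P    # P^-1 A P

print(np.round(Lambda, 6))           # (numerically) diagonal
print(eigvals)                       # 5.0 and 2.0 (order may vary)
```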
Matrix diagonalization (cont’d)
Example: for a diagonalizable matrix A, forming P from its
eigenvectors and computing P-1AP yields the diagonal matrix Λ
of eigenvalues (the numerical sketch above illustrates this).
• If A is diagonalizable, then the corresponding
eigenvectors v1, v2, . . . , vn form a basis in Rn
• If A is also symmetric, its eigenvalues are real
and the corresponding eigenvectors are
orthogonal.
Matrix diagonalization (cont’d)
• An n x n matrix A is diagonalizable iff
rank(P)=n, where P-1AP=Λ.
– i.e., A has n linearly independent eigenvectors.
• Theorem: If the eigenvalues of A are all
distinct, then the corresponding eigenvectors
are linearly independent (i.e., A is
diagonalizable).
Are all n x n matrices
diagonalizable?
Matrix decomposition
• If A is diagonalizable, then A can be
decomposed as follows:
  A = PΛP-1
Matrix decomposition (cont’d)
• Matrix decomposition can be simplified in
the case of symmetric matrices (i.e.,
orthogonal eigenvectors). With the eigenvectors
normalized to unit length, P is orthogonal:
  P-1 = PT
  A = PDPT = λ1v1v1T + λ2v2v2T + . . . + λnvnvnT
where D is the diagonal matrix of eigenvalues.
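A short check of the symmetric case (added illustration, numpy assumed; eigh is the routine intended for symmetric matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric

# eigh: real eigenvalues and an orthonormal eigenvector matrix P
eigvals, P = np.linalg.eigh(A)
D = np.diag(eigvals)

print(np.allclose(P.T, np.linalg.inv(P)))   # True: P^-1 = P^T
print(np.allclose(P @ D @ P.T, A))          # True: A = P D P^T
```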
