Machine Learning for Data Mining
Linear Algebra Review
Andres Mendez-Vazquez
May 14, 2015
1 / 50
Outline
1 Introduction
What is a Vector?
2 Vector Spaces
Definition
Linear Independence and Basis of Vector Spaces
Norm of a Vector
Inner Product
Matrices
Trace and Determinant
Matrix Decomposition
Singular Value Decomposition
2 / 50
What is a Vector?
An ordered tuple of numbers

x = (x1, x2, ..., xn)^T

Expressing a magnitude and a direction
Magnitude
Direction
4 / 50
Vector Spaces
Definition
A vector is an element of a vector space
Vector Space V
It is a set that contains all linear combinations of its elements:
1 If x, y ∈ V then x + y ∈ V .
2 If x ∈ V then αx ∈ V for any scalar α.
3 There exists 0 ∈ V such that x + 0 = x for any x ∈ V .
A subspace
It is a subset of a vector space that is also a vector space
6 / 50
Classic Example
Euclidean Space R3
7 / 50
Span
Definition
The span of any set of vectors {x1, x2, ..., xn} is the set of all their
linear combinations:
span (x1, x2, ..., xn) = {α1x1 + α2x2 + ... + αnxn : α1, ..., αn scalars}
What Examples can you Imagine?
Give it a shot!!!
8 / 50
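As a quick illustration (not part of the original slides), a linear combination of two vectors in R^3 with hypothetical coefficients gives one element of their span:

```python
import numpy as np

# Two vectors in R^3 and one element of their span.
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])

# Any choice of scalars alpha1, alpha2 yields a vector in span(x1, x2):
alpha1, alpha2 = 2.0, -3.0
v = alpha1 * x1 + alpha2 * x2

# v lies in the xy-plane (the span of x1 and x2): its third coordinate is 0.
print(v)  # [ 2. -3.  0.]
```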
Subspaces of Rn
A line through the origin in Rn
A plane through the origin in Rn
9 / 50
Linear Independence and Basis of Vector Spaces
Fact 1
A vector x is linearly independent of a set of vectors {x1, x2, ..., xn} if it
does not lie in their span.
Fact 2
A set of vectors is linearly independent if every vector is linearly
independent of the rest.
The Rest
1 A basis of a vector space V is a linearly independent set of vectors
whose span is equal to V .
2 If the basis has d vectors then the vector space V has dimensionality
d.
11 / 50
Norm of a Vector
Definition
A norm ‖x‖ measures the magnitude of a vector.
Properties
1 Homogeneity: ‖αx‖ = |α| ‖x‖.
2 Triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖.
3 Point separation: ‖x‖ = 0 if and only if x = 0.
Examples
1 Manhattan or 1-norm: ‖x‖_1 = ∑_{i=1}^{d} |x_i|.
2 Euclidean or 2-norm: ‖x‖_2 = (∑_{i=1}^{d} x_i^2)^{1/2}.
13 / 50
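A small sketch (not from the slides): both norms computed from their definitions and cross-checked against `numpy.linalg.norm`.

```python
import numpy as np

x = np.array([3.0, -4.0])

norm1 = np.sum(np.abs(x))        # Manhattan: sum of |x_i|
norm2 = np.sqrt(np.sum(x ** 2))  # Euclidean: sqrt of sum of x_i^2

# The hand-computed values agree with numpy's built-in norms:
assert norm1 == np.linalg.norm(x, ord=1)  # 7.0
assert norm2 == np.linalg.norm(x, ord=2)  # 5.0
```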
Examples
Example 1-norm and 2-norm
14 / 50
Inner Product
Definition
The inner product between u and v
⟨u, v⟩ = ∑_{i=1}^{n} u_i v_i
It measures the (signed) length of the projection of one vector onto the
other, scaled by the other's norm
Remark: It is related to the Euclidean norm: ⟨u, u⟩ = ‖u‖_2^2.
16 / 50
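A minimal numerical sketch (vectors chosen for illustration only) of the definition and the norm relation above:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

# <u, v> = sum_i u_i v_i
ip = np.dot(u, v)  # 1*2 + 2*0 + 2*1 = 4.0

# <u, u> equals the squared Euclidean norm ||u||_2^2 = 1 + 4 + 4 = 9
assert np.isclose(np.dot(u, u), np.linalg.norm(u) ** 2)
```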
Properties
Meaning
The inner product is a measure of correlation between two vectors, scaled
by the norms of the vectors
If ⟨u, v⟩ > 0, u and v are aligned (the angle between them is less than 90°)
17 / 50
Definitions involving the norm
Orthonormal
The vectors in an orthonormal basis have unit Euclidean norm and are
mutually orthogonal.
To express a vector x in an orthonormal basis
For example, given x = α1b1 + α2b2
⟨x, b1⟩ = ⟨α1b1 + α2b2, b1⟩
        = α1 ⟨b1, b1⟩ + α2 ⟨b2, b1⟩
        = α1 + 0
Likewise, ⟨x, b2⟩ = α2
21 / 50
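The derivation above can be sketched numerically. The basis and coefficients here are hypothetical choices for illustration:

```python
import numpy as np

# A (trivial) orthonormal basis of R^2.
b1 = np.array([1.0, 0.0])
b2 = np.array([0.0, 1.0])

# Build x = a1*b1 + a2*b2, then recover the coefficients as <x, bi>.
a1, a2 = 3.0, -1.5
x = a1 * b1 + a2 * b2

assert np.isclose(np.dot(x, b1), a1)  # <x, b1> = a1
assert np.isclose(np.dot(x, b2), a2)  # <x, b2> = a2
```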
Linear Operator
Definition
A linear operator L : U → V is a map from a vector space U to another
vector space V that satisfies:
L (u1 + u2) = L (u1) + L (u2) and L (αu) = αL (u)
Something Notable
If the dimensions n of U and m of V are finite, L can be represented by an
m × n matrix:
A = [ a11 a12 · · · a1n ]
    [ a21 a22 · · · a2n ]
    [       · · ·       ]
    [ am1 am2 · · · amn ]
23 / 50
Thus, the product of linear operators
The product of two linear operators can be seen as the multiplication
of two matrices: the (i, j) entry of AB is
(AB)_{ij} = ∑_{k=1}^{n} a_{ik} b_{kj}
Note: if A is m × n and B is n × p, then AB is m × p.
24 / 50
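A sketch of the shape rule and the defining sum, with small matrices chosen only for illustration:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)   # a 2 x 3 matrix
B = np.arange(12.0).reshape(3, 4)  # a 3 x 4 matrix
C = A @ B                          # product is 2 x 4

assert C.shape == (2, 4)
# One entry checked against (AB)_{ij} = sum_k a_{ik} b_{kj}:
assert np.isclose(C[1, 2], np.sum(A[1, :] * B[:, 2]))
```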
Transpose of a Matrix
The transpose of a matrix is obtained by flipping the rows and
columns: (A^T)_{ij} = a_{ji}, so A^T is n × m when A is m × n
With the following properties
(A^T)^T = A
(A + B)^T = A^T + B^T
(AB)^T = B^T A^T
Not only that, we can write the inner product as
⟨u, v⟩ = u^T v
25 / 50
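These identities can be checked numerically on random matrices (a sketch, not part of the original slides):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))
u = rng.standard_normal(3)
v = rng.standard_normal(3)

assert np.allclose(A.T.T, A)               # (A^T)^T = A
assert np.allclose((A + B).T, A.T + B.T)   # (A + B)^T = A^T + B^T
assert np.allclose((A @ C).T, C.T @ A.T)   # (AC)^T = C^T A^T
assert np.isclose(np.dot(u, v), u.T @ v)   # <u, v> = u^T v
```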
As always, we have the identity operator
The identity operator in matrix multiplication is defined as
I = [ 1 0 · · · 0 ]
    [ 0 1 · · · 0 ]
    [    · · ·    ]
    [ 0 0 · · · 1 ]
With properties
For any matrix A, AI = A.
I is the identity operator for the matrix product.
26 / 50
Column Space, Row Space and Rank
Let A be an m × n matrix
We have the following spaces...
Column space
Span of the columns of A.
Linear subspace of Rm.
Row space
Span of the rows of A.
Linear subspace of Rn.
27 / 50
Important facts
Something Notable
The column and row space of any matrix have the same dimension.
The rank
The dimension is the rank of the matrix.
28 / 50
Range and Null Space
Range
Set of vectors equal to Au for some u ∈ Rn:
Range (A) = {x | x = Au for some u ∈ Rn}
It is a linear subspace of Rm and also called the column space of A.
Null Space
We have the following definition
Null Space (A) = {u | Au = 0}
It is a linear subspace of Rn.
29 / 50
Important fact
Something Notable
Every vector in the null space is orthogonal to the rows of A.
The null space and row space of a matrix are orthogonal.
30 / 50
Range and Column Space
We have another interpretation of the matrix-vector product
Au = (A1 A2 · · · An) (u1, u2, ..., un)^T = u1A1 + u2A2 + · · · + unAn
Thus
The result is a linear combination of the columns of A.
Actually, the range is the column space.
31 / 50
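A small numerical sketch of this column-combination view (the matrix and vector are arbitrary illustration choices):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
u = np.array([10.0, -1.0])

# Au as u1*A1 + u2*A2, a combination of the columns of A.
combo = u[0] * A[:, 0] + u[1] * A[:, 1]
assert np.allclose(A @ u, combo)
```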
Matrix Inverse
Something Notable
For an n × n matrix A: rank + dim(null space) = n.
If dim(null space) = 0, then A is full rank.
In this case, the action of the matrix is invertible.
The inversion is also linear and consequently can be represented by
another matrix A^{-1}.
A^{-1} is the only matrix such that A^{-1}A = AA^{-1} = I.
32 / 50
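A sketch with a small full-rank matrix (chosen for illustration), checking rank-nullity and the defining property of the inverse:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
n = A.shape[0]

rank = np.linalg.matrix_rank(A)
assert rank == n  # dim(null space) = n - rank = 0, so A is full rank

Ainv = np.linalg.inv(A)
assert np.allclose(Ainv @ A, np.eye(n))  # A^{-1} A = I
assert np.allclose(A @ Ainv, np.eye(n))  # A A^{-1} = I
```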
Orthogonal Matrices
Definition
An orthogonal matrix U satisfies U^T U = I.
Properties
U has orthonormal columns.
In addition
Applying an orthogonal matrix to two vectors does not change their inner
product:
⟨Uu, Uv⟩ = (Uu)^T Uv
         = u^T U^T Uv
         = u^T v
         = ⟨u, v⟩
33 / 50
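A sketch using a 2D rotation matrix (the classic example on the next slide) to verify both the definition and the inner-product invariance:

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(U.T @ U, np.eye(2))  # U^T U = I

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
# <Uu, Uv> = <u, v>
assert np.isclose(np.dot(U @ u, U @ v), np.dot(u, v))
```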
Example
A classic one
Matrices representing rotations are orthogonal.
34 / 50
Trace and Determinant
Definition (Trace)
The trace is the sum of the diagonal elements of a square matrix.
Definition (Determinant)
The determinant of a square matrix A, denoted by |A|, is defined (by
cofactor expansion along any row i) as
det (A) = ∑_{j=1}^{n} (−1)^{i+j} a_{ij} M_{ij}
where M_{ij} is the determinant of the matrix A without row i and column j.
36 / 50
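A minimal check (matrix chosen for illustration) that the trace and determinant match their definitions; for a 2 × 2 matrix the cofactor expansion reduces to ad − bc:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

assert np.isclose(np.trace(A), 1.0 + 4.0)                    # sum of diagonal
assert np.isclose(np.linalg.det(A), 1.0 * 4.0 - 2.0 * 3.0)   # ad - bc = -2
```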
Special Case
For a 2 × 2 matrix A = [ a b ; c d ]
|A| = ad − bc
The absolute value of |A| is the area of the parallelogram given by the
rows of A
37 / 50
Properties of the Determinant
Basic Properties
|A| = |A^T|
|AB| = |A| |B|
|A| = 0 if and only if A is not invertible
If A is invertible, then |A^{-1}| = 1 / |A|.
38 / 50
Eigenvalues and Eigenvectors
Eigenvalues
An eigenvalue λ of a square matrix A satisfies:
Au = λu
for some nonzero vector u, which we call an eigenvector.
Properties
Geometrically, the operator A expands (|λ| > 1) or contracts (|λ| < 1)
eigenvectors, but does not rotate them.
Null Space relation
If u is an eigenvector of A, it is in the null space of A − λI, which is
consequently not invertible.
40 / 50
More properties
Given the previous relation
The eigenvalues of A are the roots of the equation |A − λI| = 0
Remark: We do not calculate the eigenvalues this way
Something Notable
Eigenvalues and eigenvectors can be complex valued, even if all the entries
of A are real.
41 / 50
Eigendecomposition of a Matrix
Given
Let A be an n × n square matrix with n linearly independent eigenvectors
p1, p2, ..., pn and eigenvalues λ1, λ2, ..., λn
We define the matrices
P = (p1 p2 · · · pn)
Λ = [ λ1 0 · · · 0 ]
    [ 0 λ2 · · · 0 ]
    [     · · ·    ]
    [ 0 0 · · · λn ]
42 / 50
Properties
We have that A satisfies
AP = PΛ
In addition
P is full rank.
Thus, inverting it yields the eigendecomposition
A = PΛP^{-1}
43 / 50
Properties of the Eigendecomposition
We have that
Not all matrices are diagonalizable (i.e., have an eigendecomposition).
Example: [ 1 1 ; 0 1 ]
Trace (A) = Trace (Λ) = ∑_{i=1}^{n} λ_i
|A| = |Λ| = ∏_{i=1}^{n} λ_i
The rank of A is equal to the number of nonzero eigenvalues.
If λ is a nonzero eigenvalue of A, 1/λ is an eigenvalue of A^{-1} with the
same eigenvector.
The eigendecomposition allows us to compute matrix powers efficiently:
A^m = (PΛP^{-1})^m = PΛP^{-1} PΛP^{-1} · · · PΛP^{-1} = PΛ^m P^{-1}
44 / 50
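These properties can be verified numerically on a small symmetric matrix (chosen so the eigenvalues are real; the example is illustrative, not from the slides):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
Lam = np.diag(eigvals)

assert np.allclose(A, P @ Lam @ np.linalg.inv(P))        # A = P Lambda P^{-1}
assert np.isclose(np.trace(A), np.sum(eigvals))          # trace = sum of eigenvalues
assert np.isclose(np.linalg.det(A), np.prod(eigvals))    # det = product of eigenvalues

# Fast matrix power via the eigendecomposition: A^m = P Lambda^m P^{-1}
m = 5
assert np.allclose(np.linalg.matrix_power(A, m),
                   P @ np.diag(eigvals ** m) @ np.linalg.inv(P))
```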
Eigendecomposition of a Symmetric Matrix
When A is symmetric, we have
If A = A^T then A is symmetric.
The eigenvalues of symmetric matrices are real.
The eigenvectors of symmetric matrices can be chosen orthonormal.
Consequently, the eigendecomposition becomes A = UΛU^T for Λ
real and U orthogonal.
The eigenvectors of A are an orthonormal basis for the column space
and row space.
45 / 50
We can see the action of a symmetric matrix on a vector u
as...
We can decompose the action Au = UΛU^T u as
Projection of u onto the column space of A (multiplication by U^T ).
Scaling of each coefficient ⟨U_i, u⟩ by the corresponding eigenvalue
(multiplication by Λ).
Linear combination of the eigenvectors scaled by the resulting
coefficients (multiplication by U).
Final equation
Au = ∑_{i=1}^{n} λ_i ⟨U_i, u⟩ U_i
It would be great to generalize this to all matrices!!!
46 / 50
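The three-step decomposition above can be sketched with `numpy.linalg.eigh`, which for a symmetric matrix returns real eigenvalues and orthonormal eigenvectors (the matrix and vector here are illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, U = np.linalg.eigh(A)  # real eigenvalues, orthonormal eigenvectors

assert np.allclose(U.T @ U, np.eye(2))         # U is orthogonal
assert np.allclose(A, U @ np.diag(lam) @ U.T)  # A = U Lambda U^T

u = np.array([1.0, -2.0])
# Au = sum_i lambda_i <U_i, u> U_i
spectral = sum(lam[i] * np.dot(U[:, i], u) * U[:, i] for i in range(2))
assert np.allclose(A @ u, spectral)
```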
Singular Value Decomposition
Every matrix has a singular value decomposition
A = UΣV^T
Where
The columns of U are an orthonormal basis for the column space.
The columns of V are an orthonormal basis for the row space.
Σ is diagonal and the entries on its diagonal σ_i = Σ_ii are nonnegative
real numbers, called the singular values of A.
The action of A on a vector u can be decomposed into
Au = ∑_i σ_i ⟨V_i, u⟩ U_i
48 / 50
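A sketch (matrix chosen for illustration) of the decomposition and the spectral form of the action above, using `numpy.linalg.svd`:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert np.allclose(A, U @ np.diag(s) @ Vt)  # A = U Sigma V^T
assert np.all(s >= 0)                        # singular values are nonnegative

u = np.array([1.0, 1.0, 1.0])
# Au = sum_i sigma_i <V_i, u> U_i  (rows of Vt are the columns of V)
action = sum(s[i] * np.dot(Vt[i, :], u) * U[:, i] for i in range(len(s)))
assert np.allclose(A @ u, action)
```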
Properties of the Singular Value Decomposition
First
The eigenvalues of the symmetric matrix A^T A are equal to the squares of
the singular values of A:
A^T A = VΣ^T U^T UΣV^T = VΣ^2 V^T
Second
The rank of a matrix is equal to the number of nonzero singular values.
Third
The largest singular value σ_1 is the solution to the optimization problem:
σ_1 = max_{x ≠ 0} ||Ax||_2 / ||x||_2
49 / 50
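All three properties can be checked numerically; a sketch using a rank-deficient example matrix (chosen here for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # row 2 = 2 * row 1, so rank(A) = 2
              [0.0, 1.0, 1.0]])
s = np.linalg.svd(A, compute_uv=False)  # singular values, descending

# First: eigenvalues of AᵀA are the squared singular values.
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
print(np.allclose(eig, s**2))                          # True

# Second: rank = number of nonzero singular values.
print(np.sum(s > 1e-10) == np.linalg.matrix_rank(A))   # True

# Third: σ₁ is the spectral norm, max of ||Ax||₂ / ||x||₂.
print(np.isclose(s[0], np.linalg.norm(A, 2)))          # True
```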
Properties of the Singular Value Decomposition
Remark
It can be verified that the largest singular value satisfies the properties of a
norm; it is called the spectral norm of the matrix.
Finally
In statistics, analyzing data with the singular value decomposition is called
Principal Component Analysis.
50 / 50
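To make the final remark concrete, here is a minimal PCA sketch via the SVD (the synthetic data and the choice of two components are assumptions for illustration): center the data, take its SVD, and project onto the leading right singular vectors.

```python
import numpy as np

# Synthetic data with very different variance along each axis.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.diag([3.0, 1.0, 0.1])

Xc = X - X.mean(axis=0)              # center each column
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal directions; s**2 / (n - 1) are the
# variances explained. Project onto the top 2 components:
scores = Xc @ Vt[:2].T
print(scores.shape)                  # (100, 2)
```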
Properties of the Singular Value Decomposition
Remark
It can be verified that the largest singular value satisfies the properties of a
norm, it is called the spectral norm of the matrix.
Finally
In statistics analyzing data with the singular value decomposition is called
Principal Component Analysis.
50 / 50

More Related Content

What's hot

Logistic regression in Machine Learning
Logistic regression in Machine LearningLogistic regression in Machine Learning
Logistic regression in Machine Learning
Kuppusamy P
 
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Edureka!
 
Support Vector Machine ppt presentation
Support Vector Machine ppt presentationSupport Vector Machine ppt presentation
Support Vector Machine ppt presentation
AyanaRukasar
 
Linear models for classification
Linear models for classificationLinear models for classification
Linear models for classification
Sung Yub Kim
 
Performance Metrics for Machine Learning Algorithms
Performance Metrics for Machine Learning AlgorithmsPerformance Metrics for Machine Learning Algorithms
Performance Metrics for Machine Learning Algorithms
Kush Kulshrestha
 
K mean-clustering algorithm
K mean-clustering algorithmK mean-clustering algorithm
K mean-clustering algorithm
parry prabhu
 
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Simplilearn
 
Linear Regression vs Logistic Regression | Edureka
Linear Regression vs Logistic Regression | EdurekaLinear Regression vs Logistic Regression | Edureka
Linear Regression vs Logistic Regression | Edureka
Edureka!
 
Geometric algorithms
Geometric algorithmsGeometric algorithms
Geometric algorithms
Ganesh Solanke
 
Random forest algorithm
Random forest algorithmRandom forest algorithm
Random forest algorithm
Rashid Ansari
 
Support vector machine-SVM's
Support vector machine-SVM'sSupport vector machine-SVM's
Support vector machine-SVM's
Anudeep Chowdary Kamepalli
 
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Simplilearn
 
Random forest
Random forestRandom forest
Random forestUjjawal
 
Machine learning and linear regression programming
Machine learning and linear regression programmingMachine learning and linear regression programming
Machine learning and linear regression programming
Soumya Mukherjee
 
Decision Tree Algorithm With Example | Decision Tree In Machine Learning | Da...
Decision Tree Algorithm With Example | Decision Tree In Machine Learning | Da...Decision Tree Algorithm With Example | Decision Tree In Machine Learning | Da...
Decision Tree Algorithm With Example | Decision Tree In Machine Learning | Da...
Simplilearn
 
Data Analysis: Evaluation Metrics for Supervised Learning Models of Machine L...
Data Analysis: Evaluation Metrics for Supervised Learning Models of Machine L...Data Analysis: Evaluation Metrics for Supervised Learning Models of Machine L...
Data Analysis: Evaluation Metrics for Supervised Learning Models of Machine L...
Md. Main Uddin Rony
 
Support vector machine
Support vector machineSupport vector machine
Support vector machine
Rishabh Gupta
 
Introduction to Statistical Machine Learning
Introduction to Statistical Machine LearningIntroduction to Statistical Machine Learning
Introduction to Statistical Machine Learning
mahutte
 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)
EdutechLearners
 
Support vector machines (svm)
Support vector machines (svm)Support vector machines (svm)
Support vector machines (svm)
Sharayu Patil
 

What's hot (20)

Logistic regression in Machine Learning
Logistic regression in Machine LearningLogistic regression in Machine Learning
Logistic regression in Machine Learning
 
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
Logistic Regression in Python | Logistic Regression Example | Machine Learnin...
 
Support Vector Machine ppt presentation
Support Vector Machine ppt presentationSupport Vector Machine ppt presentation
Support Vector Machine ppt presentation
 
Linear models for classification
Linear models for classificationLinear models for classification
Linear models for classification
 
Performance Metrics for Machine Learning Algorithms
Performance Metrics for Machine Learning AlgorithmsPerformance Metrics for Machine Learning Algorithms
Performance Metrics for Machine Learning Algorithms
 
K mean-clustering algorithm
K mean-clustering algorithmK mean-clustering algorithm
K mean-clustering algorithm
 
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
Machine Learning Tutorial Part - 1 | Machine Learning Tutorial For Beginners ...
 
Linear Regression vs Logistic Regression | Edureka
Linear Regression vs Logistic Regression | EdurekaLinear Regression vs Logistic Regression | Edureka
Linear Regression vs Logistic Regression | Edureka
 
Geometric algorithms
Geometric algorithmsGeometric algorithms
Geometric algorithms
 
Random forest algorithm
Random forest algorithmRandom forest algorithm
Random forest algorithm
 
Support vector machine-SVM's
Support vector machine-SVM'sSupport vector machine-SVM's
Support vector machine-SVM's
 
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
Convolutional Neural Network - CNN | How CNN Works | Deep Learning Course | S...
 
Random forest
Random forestRandom forest
Random forest
 
Machine learning and linear regression programming
Machine learning and linear regression programmingMachine learning and linear regression programming
Machine learning and linear regression programming
 
Decision Tree Algorithm With Example | Decision Tree In Machine Learning | Da...
Decision Tree Algorithm With Example | Decision Tree In Machine Learning | Da...Decision Tree Algorithm With Example | Decision Tree In Machine Learning | Da...
Decision Tree Algorithm With Example | Decision Tree In Machine Learning | Da...
 
Data Analysis: Evaluation Metrics for Supervised Learning Models of Machine L...
Data Analysis: Evaluation Metrics for Supervised Learning Models of Machine L...Data Analysis: Evaluation Metrics for Supervised Learning Models of Machine L...
Data Analysis: Evaluation Metrics for Supervised Learning Models of Machine L...
 
Support vector machine
Support vector machineSupport vector machine
Support vector machine
 
Introduction to Statistical Machine Learning
Introduction to Statistical Machine LearningIntroduction to Statistical Machine Learning
Introduction to Statistical Machine Learning
 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)
 
Support vector machines (svm)
Support vector machines (svm)Support vector machines (svm)
Support vector machines (svm)
 

Viewers also liked

Adaptive short term forecasting
Adaptive short term forecastingAdaptive short term forecasting
Adaptive short term forecasting
Alex
 
Machine Learning with Apache Flink at Stockholm Machine Learning Group
Machine Learning with Apache Flink at Stockholm Machine Learning GroupMachine Learning with Apache Flink at Stockholm Machine Learning Group
Machine Learning with Apache Flink at Stockholm Machine Learning Group
Till Rohrmann
 
Linear Algebra and Matrix
Linear Algebra and MatrixLinear Algebra and Matrix
Linear Algebra and Matrixitutor
 
Cs221 linear algebra
Cs221 linear algebraCs221 linear algebra
Cs221 linear algebradarwinrlo
 
Applications of linear algebra
Applications of linear algebraApplications of linear algebra
Applications of linear algebra
Prerak Trivedi
 
Applications of Linear Algebra in Computer Sciences
Applications of Linear Algebra in Computer SciencesApplications of Linear Algebra in Computer Sciences
Applications of Linear Algebra in Computer Sciences
Amir Sharif Chishti
 
Introduction to Big Data/Machine Learning
Introduction to Big Data/Machine LearningIntroduction to Big Data/Machine Learning
Introduction to Big Data/Machine Learning
Lars Marius Garshol
 

Viewers also liked (7)

Adaptive short term forecasting
Adaptive short term forecastingAdaptive short term forecasting
Adaptive short term forecasting
 
Machine Learning with Apache Flink at Stockholm Machine Learning Group
Machine Learning with Apache Flink at Stockholm Machine Learning GroupMachine Learning with Apache Flink at Stockholm Machine Learning Group
Machine Learning with Apache Flink at Stockholm Machine Learning Group
 
Linear Algebra and Matrix
Linear Algebra and MatrixLinear Algebra and Matrix
Linear Algebra and Matrix
 
Cs221 linear algebra
Cs221 linear algebraCs221 linear algebra
Cs221 linear algebra
 
Applications of linear algebra
Applications of linear algebraApplications of linear algebra
Applications of linear algebra
 
Applications of Linear Algebra in Computer Sciences
Applications of Linear Algebra in Computer SciencesApplications of Linear Algebra in Computer Sciences
Applications of Linear Algebra in Computer Sciences
 
Introduction to Big Data/Machine Learning
Introduction to Big Data/Machine LearningIntroduction to Big Data/Machine Learning
Introduction to Big Data/Machine Learning
 

Similar to 03 Machine Learning Linear Algebra

Ch_12 Review of Matrices and Vectors (PPT).pdf
Ch_12 Review of Matrices and Vectors (PPT).pdfCh_12 Review of Matrices and Vectors (PPT).pdf
Ch_12 Review of Matrices and Vectors (PPT).pdf
Mohammed Faizan
 
Vector space interpretation_of_random_variables
Vector space interpretation_of_random_variablesVector space interpretation_of_random_variables
Vector space interpretation_of_random_variables
Gopi Saiteja
 
2 vectors notes
2 vectors notes2 vectors notes
2 vectors notes
Vinh Nguyen Xuan
 
Vector space
Vector spaceVector space
Vector space
Mehedi Hasan Raju
 
Calculus Homework Help
Calculus Homework HelpCalculus Homework Help
Calculus Homework Help
Maths Assignment Help
 
150490106037
150490106037150490106037
150490106037
Harekrishna Jariwala
 
LinearAlgebraReview.ppt
LinearAlgebraReview.pptLinearAlgebraReview.ppt
LinearAlgebraReview.ppt
ShobhitTyagi46
 
21EC33 BSP Module 1.pdf
21EC33 BSP Module 1.pdf21EC33 BSP Module 1.pdf
21EC33 BSP Module 1.pdf
Ravikiran A
 
Vcla ppt ch=vector space
Vcla ppt ch=vector spaceVcla ppt ch=vector space
Vcla ppt ch=vector space
Mahendra Patel
 
Lecture_note2.pdf
Lecture_note2.pdfLecture_note2.pdf
Lecture_note2.pdf
EssaAlMadhagi
 
Properties of fuzzy inner product spaces
Properties of fuzzy inner product spacesProperties of fuzzy inner product spaces
Properties of fuzzy inner product spaces
ijfls
 
Mathematical Foundations for Machine Learning and Data Mining
Mathematical Foundations for Machine Learning and Data MiningMathematical Foundations for Machine Learning and Data Mining
Mathematical Foundations for Machine Learning and Data Mining
MadhavRao65
 
course slides of Support-Vector-Machine.pdf
course slides of Support-Vector-Machine.pdfcourse slides of Support-Vector-Machine.pdf
course slides of Support-Vector-Machine.pdf
onurenginar1
 
Math for Intelligent Systems - 01 Linear Algebra 01 Vector Spaces
Math for Intelligent Systems - 01 Linear Algebra 01  Vector SpacesMath for Intelligent Systems - 01 Linear Algebra 01  Vector Spaces
Math for Intelligent Systems - 01 Linear Algebra 01 Vector Spaces
Andres Mendez-Vazquez
 
Linear Algebra Assignment Help
Linear Algebra Assignment HelpLinear Algebra Assignment Help
Linear Algebra Assignment Help
Maths Assignment Help
 
Vector Space.pptx
Vector Space.pptxVector Space.pptx
Vector Space.pptx
Dr Ashok Tiwari
 
Chapter 4: Vector Spaces - Part 2/Slides By Pearson
Chapter 4: Vector Spaces - Part 2/Slides By PearsonChapter 4: Vector Spaces - Part 2/Slides By Pearson
Chapter 4: Vector Spaces - Part 2/Slides By Pearson
Chaimae Baroudi
 
Vector Space & Sub Space Presentation
Vector Space & Sub Space PresentationVector Space & Sub Space Presentation
Vector Space & Sub Space Presentation
SufianMehmood2
 
Linear Algebra for Competitive Exams
Linear Algebra for Competitive ExamsLinear Algebra for Competitive Exams
Linear Algebra for Competitive Exams
Kendrika Academy
 

Similar to 03 Machine Learning Linear Algebra (20)

Ch_12 Review of Matrices and Vectors (PPT).pdf
Ch_12 Review of Matrices and Vectors (PPT).pdfCh_12 Review of Matrices and Vectors (PPT).pdf
Ch_12 Review of Matrices and Vectors (PPT).pdf
 
Vector space interpretation_of_random_variables
Vector space interpretation_of_random_variablesVector space interpretation_of_random_variables
Vector space interpretation_of_random_variables
 
2 vectors notes
2 vectors notes2 vectors notes
2 vectors notes
 
Vector space
Vector spaceVector space
Vector space
 
Calculus Homework Help
Calculus Homework HelpCalculus Homework Help
Calculus Homework Help
 
150490106037
150490106037150490106037
150490106037
 
LinearAlgebraReview.ppt
LinearAlgebraReview.pptLinearAlgebraReview.ppt
LinearAlgebraReview.ppt
 
21EC33 BSP Module 1.pdf
21EC33 BSP Module 1.pdf21EC33 BSP Module 1.pdf
21EC33 BSP Module 1.pdf
 
Vcla ppt ch=vector space
Vcla ppt ch=vector spaceVcla ppt ch=vector space
Vcla ppt ch=vector space
 
Elhabian lda09
Elhabian lda09Elhabian lda09
Elhabian lda09
 
Lecture_note2.pdf
Lecture_note2.pdfLecture_note2.pdf
Lecture_note2.pdf
 
Properties of fuzzy inner product spaces
Properties of fuzzy inner product spacesProperties of fuzzy inner product spaces
Properties of fuzzy inner product spaces
 
Mathematical Foundations for Machine Learning and Data Mining
Mathematical Foundations for Machine Learning and Data MiningMathematical Foundations for Machine Learning and Data Mining
Mathematical Foundations for Machine Learning and Data Mining
 
course slides of Support-Vector-Machine.pdf
course slides of Support-Vector-Machine.pdfcourse slides of Support-Vector-Machine.pdf
course slides of Support-Vector-Machine.pdf
 
Math for Intelligent Systems - 01 Linear Algebra 01 Vector Spaces
Math for Intelligent Systems - 01 Linear Algebra 01  Vector SpacesMath for Intelligent Systems - 01 Linear Algebra 01  Vector Spaces
Math for Intelligent Systems - 01 Linear Algebra 01 Vector Spaces
 
Linear Algebra Assignment Help
Linear Algebra Assignment HelpLinear Algebra Assignment Help
Linear Algebra Assignment Help
 
Vector Space.pptx
Vector Space.pptxVector Space.pptx
Vector Space.pptx
 
Chapter 4: Vector Spaces - Part 2/Slides By Pearson
Chapter 4: Vector Spaces - Part 2/Slides By PearsonChapter 4: Vector Spaces - Part 2/Slides By Pearson
Chapter 4: Vector Spaces - Part 2/Slides By Pearson
 
Vector Space & Sub Space Presentation
Vector Space & Sub Space PresentationVector Space & Sub Space Presentation
Vector Space & Sub Space Presentation
 
Linear Algebra for Competitive Exams
Linear Algebra for Competitive ExamsLinear Algebra for Competitive Exams
Linear Algebra for Competitive Exams
 

More from Andres Mendez-Vazquez

2.03 bayesian estimation
2.03 bayesian estimation2.03 bayesian estimation
2.03 bayesian estimation
Andres Mendez-Vazquez
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
Andres Mendez-Vazquez
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
Andres Mendez-Vazquez
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
Andres Mendez-Vazquez
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
Andres Mendez-Vazquez
 
01.01 vector spaces
01.01 vector spaces01.01 vector spaces
01.01 vector spaces
Andres Mendez-Vazquez
 
06 recurrent neural_networks
06 recurrent neural_networks06 recurrent neural_networks
06 recurrent neural_networks
Andres Mendez-Vazquez
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
Andres Mendez-Vazquez
 
Zetta global
Zetta globalZetta global
Zetta global
Andres Mendez-Vazquez
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
Andres Mendez-Vazquez
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
Andres Mendez-Vazquez
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
Andres Mendez-Vazquez
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
Andres Mendez-Vazquez
 
Ideas 09 22_2018
Ideas 09 22_2018Ideas 09 22_2018
Ideas 09 22_2018
Andres Mendez-Vazquez
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
Andres Mendez-Vazquez
 
Analysis of Algorithms Syllabus
Analysis of Algorithms  SyllabusAnalysis of Algorithms  Syllabus
Analysis of Algorithms Syllabus
Andres Mendez-Vazquez
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
Andres Mendez-Vazquez
 
18.1 combining models
18.1 combining models18.1 combining models
18.1 combining models
Andres Mendez-Vazquez
 
17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension
Andres Mendez-Vazquez
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
Andres Mendez-Vazquez
 

More from Andres Mendez-Vazquez (20)

2.03 bayesian estimation
2.03 bayesian estimation2.03 bayesian estimation
2.03 bayesian estimation
 
05 linear transformations
05 linear transformations05 linear transformations
05 linear transformations
 
01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors01.04 orthonormal basis_eigen_vectors
01.04 orthonormal basis_eigen_vectors
 
01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues01.03 squared matrices_and_other_issues
01.03 squared matrices_and_other_issues
 
01.02 linear equations
01.02 linear equations01.02 linear equations
01.02 linear equations
 
01.01 vector spaces
01.01 vector spaces01.01 vector spaces
01.01 vector spaces
 
06 recurrent neural_networks
06 recurrent neural_networks06 recurrent neural_networks
06 recurrent neural_networks
 
05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation05 backpropagation automatic_differentiation
05 backpropagation automatic_differentiation
 
Zetta global
Zetta globalZetta global
Zetta global
 
01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning01 Introduction to Neural Networks and Deep Learning
01 Introduction to Neural Networks and Deep Learning
 
25 introduction reinforcement_learning
25 introduction reinforcement_learning25 introduction reinforcement_learning
25 introduction reinforcement_learning
 
Neural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning SyllabusNeural Networks and Deep Learning Syllabus
Neural Networks and Deep Learning Syllabus
 
Introduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabusIntroduction to artificial_intelligence_syllabus
Introduction to artificial_intelligence_syllabus
 
Ideas 09 22_2018
Ideas 09 22_2018Ideas 09 22_2018
Ideas 09 22_2018
 
Ideas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data SciencesIdeas about a Bachelor in Machine Learning/Data Sciences
Ideas about a Bachelor in Machine Learning/Data Sciences
 
Analysis of Algorithms Syllabus
Analysis of Algorithms  SyllabusAnalysis of Algorithms  Syllabus
Analysis of Algorithms Syllabus
 
20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations20 k-means, k-center, k-meoids and variations
20 k-means, k-center, k-meoids and variations
 
18.1 combining models
18.1 combining models18.1 combining models
18.1 combining models
 
17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension17 vapnik chervonenkis dimension
17 vapnik chervonenkis dimension
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
 

Recently uploaded

在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
obonagu
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
R&R Consult
 
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Dr.Costas Sachpazis
 
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdfWater Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation & Control
 
WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234WATER CRISIS and its solutions-pptx 1234
WATER CRISIS and its solutions-pptx 1234
AafreenAbuthahir2
 
Architectural Portfolio Sean Lockwood
Architectural Portfolio Sean LockwoodArchitectural Portfolio Sean Lockwood
Architectural Portfolio Sean Lockwood
seandesed
 
weather web application report.pdf
weather web application report.pdfweather web application report.pdf
weather web application report.pdf
Pratik Pawar
 
Student information management system project report ii.pdf
Student information management system project report ii.pdfStudent information management system project report ii.pdf
Student information management system project report ii.pdf
Kamal Acharya
 
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)
MdTanvirMahtab2
 
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdfHybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
Hybrid optimization of pumped hydro system and solar- Engr. Abdul-Azeez.pdf
fxintegritypublishin
 
Automobile Management System Project Report.pdf
Automobile Management System Project Report.pdfAutomobile Management System Project Report.pdf
Automobile Management System Project Report.pdf
Kamal Acharya
 
Forklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella PartsForklift Classes Overview by Intella Parts
Forklift Classes Overview by Intella Parts
Intella Parts
 
Vaccine management system project report documentation..pdf
Vaccine management system project report documentation..pdfVaccine management system project report documentation..pdf
Vaccine management system project report documentation..pdf
Kamal Acharya
 
MCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdfMCQ Soil mechanics questions (Soil shear strength).pdf
MCQ Soil mechanics questions (Soil shear strength).pdf
Osamah Alsalih
 
ASME IX(9) 2007 Full Version .pdf
ASME IX(9)  2007 Full Version       .pdfASME IX(9)  2007 Full Version       .pdf
ASME IX(9) 2007 Full Version .pdf
AhmedHussein950959
 
power quality voltage fluctuation UNIT - I.pptx
power quality voltage fluctuation UNIT - I.pptxpower quality voltage fluctuation UNIT - I.pptx
power quality voltage fluctuation UNIT - I.pptx
ViniHema
 
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdfTop 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Top 10 Oil and Gas Projects in Saudi Arabia 2024.pdf
Teleport Manpower Consultant
 
Railway Signalling Principles Edition 3.pdf
Railway Signalling Principles Edition 3.pdfRailway Signalling Principles Edition 3.pdf
Railway Signalling Principles Edition 3.pdf
TeeVichai
 
Standard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - NeometrixStandard Reomte Control Interface - Neometrix
Standard Reomte Control Interface - Neometrix
Neometrix_Engineering_Pvt_Ltd
 
DESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docxDESIGN A COTTON SEED SEPARATION MACHINE.docx
DESIGN A COTTON SEED SEPARATION MACHINE.docx
FluxPrime1
 

Recently uploaded (20)

在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
在线办理(ANU毕业证书)澳洲国立大学毕业证录取通知书一模一样
 
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptxCFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
CFD Simulation of By-pass Flow in a HRSG module by R&R Consult.pptx
 
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...
 
Water Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdfWater Industry Process Automation and Control Monthly - May 2024.pdf
Water Industry Process Automation and Control Monthly - May 2024.pdf
 
WATER CRISIS and its solutions-pptx 1234

03 Machine Learning Linear Algebra

  • 1. Machine Learning for Data Mining Linear Algebra Review Andres Mendez-Vazquez May 14, 2015 1 / 50
  • 2. Outline 1 Introduction What is a Vector? 2 Vector Spaces Definition Linear Independence and Basis of Vector Spaces Norm of a Vector Inner Product Matrices Trace and Determinant Matrix Decomposition Singular Value Decomposition 2 / 50
  • 4. What is a Vector? An ordered tuple of numbers, x = (x_1, x_2, ..., x_n)^T, expressing a magnitude and a direction. 4 / 50
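A minimal sketch of the magnitude/direction split in NumPy (the library choice and variable names are mine, not from the slides):

```python
import numpy as np

# a vector is just an ordered tuple of numbers
x = np.array([3.0, 4.0])

# its magnitude is the Euclidean length
magnitude = np.linalg.norm(x)      # 5.0 for (3, 4)

# its direction is the unit vector obtained by rescaling
direction = x / magnitude          # has norm 1
```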
  • 7. Vector Spaces Definition A vector is an element of a vector space. Vector Space V It is a set that contains all linear combinations of its elements: 1 If x, y ∈ V then x + y ∈ V. 2 If x ∈ V then αx ∈ V for any scalar α. 3 There exists 0 ∈ V such that x + 0 = x for any x ∈ V. A subspace It is a subset of a vector space that is also a vector space. 6 / 50
  • 13. Span Definition The span of any set of vectors {x_1, x_2, ..., x_n} is the set of all their linear combinations: span(x_1, x_2, ..., x_n) = {α_1 x_1 + α_2 x_2 + ... + α_n x_n : α_i scalars}. What Examples can you Imagine? Give it a shot!!! 8 / 50
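One way to test span membership numerically is a least-squares fit: a vector lies in the span exactly when the best linear combination reproduces it with zero residual. A sketch in NumPy (the helper `in_span` and the example vectors are mine):

```python
import numpy as np

# span of two vectors in R^3 (a plane through the origin)
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
B = np.column_stack([x1, x2])   # columns span the plane

def in_span(v, B, tol=1e-10):
    """v is in the span of B's columns iff B @ alpha reproduces v."""
    alpha, *_ = np.linalg.lstsq(B, v, rcond=None)
    return np.linalg.norm(B @ alpha - v) < tol

inside = in_span(np.array([2.0, -3.0, 0.0]), B)   # equals 2*x1 - 3*x2
outside = in_span(np.array([0.0, 0.0, 1.0]), B)   # sticks out of the plane
```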
  • 15. Subspaces of R^n A line through the origin in R^n. A plane through the origin in R^n. 9 / 50
  • 18. Linear Independence and Basis of Vector Spaces Fact 1 A vector x is linearly independent of a set of vectors {x_1, x_2, ..., x_n} if it does not lie in their span. Fact 2 A set of vectors is linearly independent if every vector is linearly independent of the rest. The Rest 1 A basis of a vector space V is a linearly independent set of vectors whose span is equal to V. 2 If the basis has d vectors then the vector space V has dimensionality d. 11 / 50
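In practice, linear independence can be checked by stacking the vectors as columns and comparing the matrix rank to the number of vectors. A sketch (the example vectors are mine):

```python
import numpy as np

# stack candidate vectors as columns; they are linearly independent
# exactly when the rank equals the number of vectors
V = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0],
                     [1.0, 1.0, 2.0]])   # third column = first + second

rank = np.linalg.matrix_rank(V)          # only two independent directions
independent = rank == V.shape[1]         # False for this set
```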
  • 22. Norm of a Vector Definition A norm ‖·‖ measures the magnitude of a vector. Properties 1 Homogeneity: ‖αx‖ = |α| ‖x‖. 2 Triangle inequality: ‖x + y‖ ≤ ‖x‖ + ‖y‖. 3 Point Separation: ‖x‖ = 0 if and only if x = 0. Examples 1 Manhattan or 1-norm: ‖x‖_1 = Σ_{i=1}^d |x_i|. 2 Euclidean or 2-norm: ‖x‖_2 = sqrt(Σ_{i=1}^d x_i²). 13 / 50
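The two example norms, computed from their definitions and checked against the library routine (a sketch; the example vectors are mine):

```python
import numpy as np

x = np.array([3.0, -4.0])
y = np.array([1.0, 2.0])

one_norm = np.abs(x).sum()             # Manhattan norm: |3| + |-4| = 7
two_norm = np.sqrt((x ** 2).sum())     # Euclidean norm: sqrt(9 + 16) = 5

# the same values via np.linalg.norm
assert one_norm == np.linalg.norm(x, 1)
assert two_norm == np.linalg.norm(x, 2)

# the triangle inequality holds for this pair
assert np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y)
```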
  • 28. Examples Example 1-norm and 2-norm 14 / 50
  • 30. Inner Product Definition The inner product between u and v is ⟨u, v⟩ = Σ_{i=1}^n u_i v_i. It is the projection of one vector onto the other one. Remark: It is related to the Euclidean norm: ⟨u, u⟩ = ‖u‖_2². 16 / 50
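A quick numerical check of the definition and its relation to the Euclidean norm (a sketch; the example vectors are mine):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

inner = np.dot(u, v)     # 1*4 + 2*(-1) + 3*2 = 8

# relation to the Euclidean norm: <u, u> = ||u||_2^2
assert np.isclose(np.dot(u, u), np.linalg.norm(u) ** 2)

# the scalar projection of u onto v is built from the inner product
proj_len = inner / np.linalg.norm(v)
```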
  • 32. Properties Meaning The inner product is a measure of correlation between two vectors, scaled by the norms of the vectors: if u · v > 0, u and v are aligned. 17 / 50
  • 37. Definitions involving the norm Orthonormal The vectors in an orthonormal basis have unit Euclidean norm and are mutually orthogonal. To express a vector x in an orthonormal basis For example, given x = α_1 b_1 + α_2 b_2: ⟨x, b_1⟩ = ⟨α_1 b_1 + α_2 b_2, b_1⟩ = α_1 ⟨b_1, b_1⟩ + α_2 ⟨b_2, b_1⟩ = α_1 + 0. Likewise, ⟨x, b_2⟩ = α_2. 21 / 50
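The coefficient-recovery trick above can be verified directly: build x with known coefficients in an orthonormal basis, then recover them with inner products. A sketch (the particular basis, a 45-degree rotation of the standard one, is my choice):

```python
import numpy as np

# an orthonormal basis of R^2
b1 = np.array([1.0, 1.0]) / np.sqrt(2)
b2 = np.array([-1.0, 1.0]) / np.sqrt(2)

# build x with known coefficients ...
alpha1, alpha2 = 3.0, -2.0
x = alpha1 * b1 + alpha2 * b2

# ... and recover them with inner products, as in the derivation
a1 = np.dot(x, b1)
a2 = np.dot(x, b2)
```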
  • 41. Linear Operator Definition A linear operator L : U → V is a map from a vector space U to another vector space V that satisfies L(αu_1 + βu_2) = αL(u_1) + βL(u_2). Something Notable If the dimension n of U and m of V are finite, L can be represented by an m × n matrix A = (a_{ij}) with rows i = 1, ..., m and columns j = 1, ..., n. 23 / 50
  • 43. Thus, product of The product of two linear operators can be seen as the multiplication of two matrices: if A is m × n and B is n × p, then AB is the m × p matrix with entries (AB)_{ij} = Σ_{k=1}^n a_{ik} b_{kj}. 24 / 50
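The entry formula can be spelled out as a triple loop and checked against NumPy's built-in product (a sketch; the example matrices are mine):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])       # m x n = 3 x 2
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])  # n x p = 2 x 3

# entry (i, j) of AB is the sum over k of a_ik * b_kj
m, n = A.shape
_, p = B.shape
C = np.zeros((m, p))
for i in range(m):
    for j in range(p):
        for k in range(n):
            C[i, j] += A[i, k] * B[k, j]
```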
  • 45. Transpose of a Matrix The transpose of a matrix is obtained by flipping the rows and columns: (A^T)_{ij} = a_{ji}. It has the following properties: (A^T)^T = A, (A + B)^T = A^T + B^T, (AB)^T = B^T A^T. Not only that, we can write the inner product as ⟨u, v⟩ = u^T v. 25 / 50
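The transpose identities are easy to sanity-check on random matrices (a sketch; the shapes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))
u = rng.standard_normal(5)
v = rng.standard_normal(5)

ok_double = np.allclose(A.T.T, A)                 # (A^T)^T = A
ok_product = np.allclose((A @ B).T, B.T @ A.T)    # (AB)^T = B^T A^T
ok_inner = np.isclose(np.dot(u, v), u.T @ v)      # <u, v> = u^T v
```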
  • 50. As always, we have the identity operator The identity operator in matrix multiplication is the matrix I with ones on the diagonal and zeros elsewhere. With properties For any matrix A, AI = A and IA = A. I is the identity for the matrix product. 26 / 50
  • 53. Column Space, Row Space and Rank Let A be an m × n matrix We have the following spaces... Column space Span of the columns of A. Linear subspace of Rm. Row space Span of the rows of A. Linear subspace of Rn. 27 / 50
  • 57. Important facts Something Notable The column and row space of any matrix have the same dimension. The rank This common dimension is called the rank of the matrix. 28 / 50
  • 59. Range and Null Space Range Set of vectors equal to Au for some u ∈ R^n: Range(A) = {x | x = Au for some u ∈ R^n}. It is a linear subspace of R^m and also called the column space of A. Null Space We have the following definition: Null Space(A) = {u | Au = 0}. It is a linear subspace of R^n. 29 / 50
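A concrete check of both definitions on a small matrix with a known null-space vector (a sketch; the example matrix and vectors are mine):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # 2 x 3, rank 2, 1-dimensional null space

# u = (1, -2, 1) satisfies Au = 0, so it lies in the null space
u = np.array([1.0, -2.0, 1.0])
in_null = np.allclose(A @ u, 0.0)

# every null-space vector is orthogonal to the rows of A
orth_rows = np.isclose(np.dot(A[0], u), 0.0) and np.isclose(np.dot(A[1], u), 0.0)

# A @ v always lands in the range (column space), a subspace of R^2 here
v = np.array([1.0, 1.0, 1.0])
image = A @ v                      # a combination of the columns of A
```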
  • 63. Important fact Something Notable Every vector in the null space is orthogonal to the rows of A. The null space and row space of a matrix are orthogonal. 30 / 50
  • 65. Range and Column Space We have another interpretation of the matrix-vector product: Au = (A_1 A_2 · · · A_n)(u_1, u_2, ..., u_n)^T = u_1 A_1 + u_2 A_2 + · · · + u_n A_n, where A_i denotes the i-th column of A. Thus The result is a linear combination of the columns of A. Actually, the range is the column space. 31 / 50
  • 68. Matrix Inverse Something Notable For an n × n matrix A: rank + dim(null space) = n. If dim(null space) = 0 then A is full rank. In this case, the action of the matrix is invertible. The inversion is also linear and consequently can be represented by another matrix A⁻¹. A⁻¹ is the only matrix such that A⁻¹A = AA⁻¹ = I. 32 / 50
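The rank-nullity count and the defining property of the inverse, checked numerically (a sketch; the example matrix is mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

n = A.shape[0]
rank = np.linalg.matrix_rank(A)
nullity = n - rank             # rank + dim(null space) = n

# nullity 0 means full rank, so A is invertible
A_inv = np.linalg.inv(A)
```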
  • 73. Orthogonal Matrices Definition An orthogonal matrix U satisfies U^T U = I. Properties U has orthonormal columns. In addition Applying an orthogonal matrix to two vectors does not change their inner product: ⟨Uu, Uv⟩ = (Uu)^T (Uv) = u^T U^T U v = u^T v = ⟨u, v⟩. 33 / 50
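Using the classic example that follows (a rotation matrix), both properties can be checked directly (a sketch; the angle and test vectors are arbitrary):

```python
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation matrix

orthogonal = np.allclose(U.T @ U, np.eye(2))      # U^T U = I

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])

# rotating both vectors leaves their inner product unchanged
preserved = np.isclose(np.dot(U @ u, U @ v), np.dot(u, v))
```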
  • 76. Example A classic one Matrices representing rotations are orthogonal. 34 / 50
  • 78. Trace and Determinant Definition (Trace) The trace is the sum of the diagonal elements of a square matrix. Definition (Determinant) The determinant of a square matrix A, denoted by |A|, is defined (expanding along any fixed row i) as det(A) = Σ_{j=1}^n (−1)^{i+j} a_{ij} M_{ij}, where M_{ij} is the determinant of the matrix A without row i and column j. 36 / 50
  • 80. Special Case For a 2 × 2 matrix A = [a b; c d], |A| = ad − bc. The absolute value of |A| is the area of the parallelogram given by the rows of A. 37 / 50
  • 82. Properties of the Determinant Basic Properties |A| = |A^T|. |AB| = |A| |B|. |A| = 0 if and only if A is not invertible. If A is invertible, then |A⁻¹| = 1 / |A|. 38 / 50
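These determinant properties can all be checked on a small example (a sketch; the matrices are mine):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 5.0]])   # |A| = 1*5 - 2*3 = -1
B = np.array([[2.0, 0.0],
              [1.0, 3.0]])   # |B| = 6

detA = np.linalg.det(A)

same_as_transpose = np.isclose(detA, np.linalg.det(A.T))      # |A| = |A^T|
product_rule = np.isclose(np.linalg.det(A @ B),
                          detA * np.linalg.det(B))            # |AB| = |A||B|
inverse_rule = np.isclose(np.linalg.det(np.linalg.inv(A)),
                          1.0 / detA)                         # |A^-1| = 1/|A|
```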
  • 87. Eigenvalues and Eigenvectors Eigenvalues An eigenvalue λ of a square matrix A satisfies Au = λu for some nonzero vector u, which we call an eigenvector. Properties Geometrically the operator A expands (when |λ| > 1) or contracts (when |λ| < 1) eigenvectors, but does not rotate them. Null Space relation If u is an eigenvector of A, it is in the null space of A − λI, which is consequently not invertible. 40 / 50
  • 90. More properties Given the previous relation The eigenvalues of A are the roots of the equation |A − λI| = 0 Remark: We do not calculate the eigenvalues this way Something Notable Eigenvalues and eigenvectors can be complex valued, even if all the entries of A are real. 41 / 50
  • 92. Eigendecomposition of a Matrix Given Let A be an n × n square matrix with n linearly independent eigenvectors p_1, p_2, ..., p_n and eigenvalues λ_1, λ_2, ..., λ_n. We define the matrices P = (p_1 p_2 · · · p_n) and Λ = diag(λ_1, λ_2, ..., λ_n). 42 / 50
  • 94. Properties We have that A satisfies AP = PΛ In addition P is full rank. Thus, inverting it yields the eigendecomposition A = PΛP−1 43 / 50
  • 97. Properties of the Eigendecomposition We have that Not all matrices have an eigendecomposition (are diagonalizable); a standard counterexample is [1 1; 0 1]. Trace(A) = Trace(Λ) = Σ_{i=1}^n λ_i. |A| = |Λ| = Π_{i=1}^n λ_i. The rank of A is equal to the number of nonzero eigenvalues. If λ is a nonzero eigenvalue of A, 1/λ is an eigenvalue of A⁻¹ with the same eigenvector. The eigendecomposition allows us to compute matrix powers efficiently: A^m = (PΛP⁻¹)^m = PΛP⁻¹PΛP⁻¹ · · · PΛP⁻¹ = PΛ^m P⁻¹. 44 / 50
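The matrix-power trick and the trace/determinant identities, verified on a small diagonalizable matrix (a sketch; the example matrix is mine):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # symmetric, hence diagonalizable

lam, P = np.linalg.eig(A)    # eigenvalues and eigenvector matrix

# A^5 via the eigendecomposition: only the diagonal gets powered
A5 = P @ np.diag(lam ** 5) @ np.linalg.inv(P)

trace_ok = np.isclose(np.trace(A), lam.sum())       # trace = sum of eigenvalues
det_ok = np.isclose(np.linalg.det(A), lam.prod())   # det = product of eigenvalues
```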
  • 103. Eigendecomposition of a Symmetric Matrix When A is symmetric, we have If A = A^T then A is symmetric. The eigenvalues of symmetric matrices are real. The eigenvectors of symmetric matrices can be chosen orthonormal. Consequently, the eigendecomposition becomes A = UΛU^T for Λ real and U orthogonal. The eigenvectors of A are an orthonormal basis for the column space and row space. 45 / 50
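For symmetric matrices, `np.linalg.eigh` returns exactly this factorization, and the per-eigenvector action described on the next slide can be reproduced term by term (a sketch; the example matrix and vector are mine):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric: A = A^T

lam, U = np.linalg.eigh(A)   # real eigenvalues, orthonormal eigenvectors

U_orthogonal = np.allclose(U.T @ U, np.eye(2))
reconstructed = np.allclose(U @ np.diag(lam) @ U.T, A)   # A = U Lambda U^T

# the action of A on u, written as sum_i lambda_i <U_i, u> U_i
u = np.array([1.0, -1.0])
Au = sum(lam[i] * np.dot(U[:, i], u) * U[:, i] for i in range(2))
```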
  • 108. We can see the action of a symmetric matrix on a vector u as... We can decompose the action Au = UΛUT u as Projection of u onto the column space of A (Multiplication by UT ). Scaling of each coefficient Ui, u by the corresponding eigenvalue (Multiplication by Λ). Linear combination of the eigenvectors scaled by the resulting coefficient (Multiplication by U). Final equation Au = n i=1 λi Ui, u Ui It would be great to generalize this to all matrices!!! 46 / 50
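The three-step action (project, scale, recombine) can be traced explicitly. A sketch under the same assumptions as before (random symmetric matrix, illustrative seed):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
A = B + B.T                      # symmetric matrix
lam, U = np.linalg.eigh(A)       # A = U diag(lam) U^T
u = rng.standard_normal(3)

# Step 1: project u onto the eigenvector basis (multiply by U^T)
coeffs = U.T @ u
# Step 2: scale each coefficient by its eigenvalue (multiply by Lambda)
scaled = lam * coeffs
# Step 3: recombine the eigenvectors (multiply by U)
Au_steps = U @ scaled

# Equivalent sum form: Au = sum_i lam_i <U_i, u> U_i
Au_sum = sum(lam[i] * (U[:, i] @ u) * U[:, i] for i in range(3))

assert np.allclose(Au_steps, A @ u)
assert np.allclose(Au_sum, A @ u)
```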
  • 113. Outline 1 Introduction What is a Vector? 2 Vector Spaces Definition Linear Independence and Basis of Vector Spaces Norm of a Vector Inner Product Matrices Trace and Determinant Matrix Decomposition Singular Value Decomposition 47 / 50
  • 114. Singular Value Decomposition. Every matrix has a singular value decomposition A = UΣV^T, where: the columns of U are an orthonormal basis for the column space; the columns of V are an orthonormal basis for the row space; Σ is diagonal, and the entries σ_i = Σ_ii on its diagonal are nonnegative real numbers, called the singular values of A. The action of A on a vector u can be decomposed into Au = ∑_{i=1}^{n} σ_i ⟨V_i, u⟩ U_i. 48 / 50
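Unlike the eigendecomposition, the SVD exists for any matrix, including rectangular ones. A minimal sketch (the 4×3 shape and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))  # works even for non-square matrices

# Full SVD: U (4x4) and V (3x3) are orthogonal; s holds the singular values
U, s, Vt = np.linalg.svd(A)
V = Vt.T

# Singular values are nonnegative and returned in decreasing order
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)

# Action on a vector: Au = sum_i sigma_i <V_i, u> U_i
u = rng.standard_normal(3)
Au = sum(s[i] * (V[:, i] @ u) * U[:, i] for i in range(len(s)))
assert np.allclose(Au, A @ u)
```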
  • 118. Properties of the Singular Value Decomposition. First: the eigenvalues of the symmetric matrix A^T A are equal to the squares of the singular values of A: A^T A = (UΣV^T)^T (UΣV^T) = VΣ^T U^T UΣV^T = VΣ²V^T. Second: the rank of a matrix is equal to the number of nonzero singular values. Third: the largest singular value σ_1 is the solution to the optimization problem σ_1 = max_{x ≠ 0} ‖Ax‖₂ / ‖x‖₂. 49 / 50
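All three properties can be verified numerically. A sketch, assuming an arbitrary random 5×3 matrix (the third property is checked only on random samples, which bound but do not attain the maximum exactly):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3))
s = np.linalg.svd(A, compute_uv=False)   # singular values, decreasing

# First: eigenvalues of A^T A equal the squared singular values
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(eig, s**2)

# Second: rank equals the number of nonzero singular values
assert np.linalg.matrix_rank(A) == np.sum(s > 1e-10)

# Third: sigma_1 bounds ||Ax||_2 / ||x||_2 for every nonzero x
ratios = [np.linalg.norm(A @ x) / np.linalg.norm(x)
          for x in rng.standard_normal((1000, 3))]
assert max(ratios) <= s[0] + 1e-9
```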
  • 121. Properties of the Singular Value Decomposition. Remark: it can be verified that the largest singular value satisfies the properties of a norm; it is called the spectral norm of the matrix. Finally: in statistics, analyzing data with the singular value decomposition is called Principal Component Analysis. 50 / 50
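To make the PCA connection concrete, here is a minimal sketch of PCA via the SVD of centered data (the toy data, shapes, and variable names are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy data: 100 samples, 3 features with very different variances
X = rng.standard_normal((100, 3)) @ np.diag([3.0, 1.0, 0.1])

Xc = X - X.mean(axis=0)            # PCA requires centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Rows of Vt are the principal directions; project onto the top k
k = 2
scores = Xc @ Vt[:k].T             # principal component scores

# Squared singular values give the variance explained per component
explained_var = s**2 / (len(X) - 1)
```

Projecting onto the first k right singular vectors keeps the directions of largest variance, which is exactly the principal component projection.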