Introduction to Linear Algebra
Mark Goldman
Emily Mackevicius
Outline
1. Matrix arithmetic
2. Matrix properties
3. Eigenvectors & eigenvalues
-BREAK-
4. Examples (on blackboard)
5. Recap, additional matrix properties, SVD
Part 1: Matrix Arithmetic
(w/applications to neural networks)
Matrix addition
Scalar times vector
Product of 2 Vectors: three ways to multiply
• Element-by-element
• Inner product
• Outer product
Element-by-element product
(Hadamard product)
• Element-wise multiplication (.* in MATLAB)
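A minimal NumPy sketch of the element-by-element product (MATLAB's `.*` is NumPy's `*`):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Hadamard (element-by-element) product: multiply matching entries
hadamard = a * b
```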
Multiplication:
Dot product (inner product)
(1 × N) · (N × 1) → (1 × 1)
• MATLAB: ‘inner matrix dimensions must agree’; the outer dimensions give the size of the resulting matrix
Dot product geometric intuition:
“Overlap” of 2 vectors
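A short NumPy sketch of the dot product and its "overlap" interpretation, using the vectors (3,1) and (0,2) from the slides:

```python
import numpy as np

u = np.array([3.0, 1.0])
v = np.array([0.0, 2.0])

# Inner product: (1 x N)(N x 1) -> scalar
dot = u @ v

# Geometric "overlap": u . v = |u| |v| cos(theta)
cos_theta = dot / (np.linalg.norm(u) * np.linalg.norm(v))
```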
Example: linear feed-forward network
[Figure: input neurons with firing rates r1, r2, …, rn projecting to an output neuron; later builds add the output neuron's firing rate]
Example: linear feed-forward network
• Insight: for a given input (L2) magnitude, the response is maximized when the input is parallel to the weight vector
• Receptive fields can also be thought of this way
[Figure: input neurons' firing rates r1, …, rn; output neuron's firing rate]
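A small NumPy sketch of the insight above; the weight vector `w` and the input rates here are made-up illustrative values, not from the slides:

```python
import numpy as np

w = np.array([1.0, 2.0, 2.0])        # illustrative weight vector onto the output neuron

r_parallel = 0.5 * w                 # input rates parallel to w
r_other = np.array([1.5, 0.0, 0.0])  # a different direction, rescaled below
r_other *= np.linalg.norm(r_parallel) / np.linalg.norm(r_other)  # same L2 magnitude

# Output firing rate = dot product of the weights with the input rates
y_parallel = w @ r_parallel
y_other = w @ r_other
# For a fixed input magnitude, the parallel input gives the largest response
```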
Multiplication: Outer product
(N × 1) · (1 × M) → (N × M)
• Note: each column or each row is a multiple of the others
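A NumPy sketch of the note above (the particular vectors are illustrative): every outer product has rank 1, because each row is a multiple of every other row.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])   # N x 1
v = np.array([4.0, 5.0])        # 1 x M

outer = np.outer(u, v)          # N x M

# Each row is a multiple of v and each column is a multiple of u,
# so the outer product always has rank 1
rank = np.linalg.matrix_rank(outer)
```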
Matrix times a vector
y = Wx, where y is (M × 1), W is (M × N), and x is (N × 1)
Matrix times a vector:
inner product interpretation
• Rule: the ith element of y is the dot product of
the ith row of W with x
Matrix times a vector:
outer product interpretation
• The product is a weighted sum of the columns
of W, weighted by the entries of x
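Both interpretations, sketched in NumPy with a matrix whose columns are (3,1) and (0,2), consistent with the slide example:

```python
import numpy as np

W = np.array([[3.0, 0.0],
              [1.0, 2.0]])   # columns (3,1) and (0,2)
x = np.array([1.0, 2.0])

# Inner product view: the ith element of y is (ith row of W) . x
y_inner = np.array([W[i] @ x for i in range(W.shape[0])])

# Outer product view: y is a weighted sum of the columns of W
y_outer = x[0] * W[:, 0] + x[1] * W[:, 1]
```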
Example of the outer product method
[Build: the columns of M are (3,1) and (0,2); scaling (0,2) by 2 gives (0,4), and the weighted sum is (3,1) + (0,4) = (3,5)]
• Note: different combinations of the columns of M can give you any vector in the plane (we say the columns of M “span” the plane)
Rank of a Matrix
• Are there special matrices whose columns don’t span the full plane?
• Example: a matrix with columns (1,2) and (-2,-4)
• You can only get vectors along the (1,2) direction (i.e. outputs live in 1 dimension, so we call the matrix rank 1)
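The rank-1 example in NumPy: the second column is -2 times the first, so every output lies along the (1,2) direction.

```python
import numpy as np

M = np.array([[1.0, -2.0],
              [2.0, -4.0]])   # columns (1,2) and (-2,-4)

rank = np.linalg.matrix_rank(M)

# Any output M @ x lies along (1,2); the input (5,1) here is arbitrary
y = M @ np.array([5.0, 1.0])
```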
Example: 2-layer linear network
• Wij is the connection
strength (weight) onto
neuron yi from neuron xj.
Example: 2-layer linear network:
inner product point of view
• What is the response of cell yi of the second layer?
• The response is the dot
product of the ith row of
W with the vector x
Example: 2-layer linear network:
outer product point of view
• How does cell xj contribute to the pattern of firing
of layer 2?
[Figure: the contribution of xj to the network output is xj times the jth column of W (shown for the 1st column)]
Product of 2 Matrices
(N × P) · (P × M) → (N × M)
• MATLAB: ‘inner matrix dimensions must agree’
• Note: Matrix multiplication doesn’t (generally) commute: AB ≠ BA
Matrix times Matrix:
by inner products
• Cij is the inner product of the ith row of A with the jth column of B
Matrix times Matrix:
by outer products
• C is a sum of outer products of the columns of A with the rows of B
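A NumPy check of the outer product view of matrix multiplication (the entries of A and B are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # N x P
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])   # P x M

# C as a sum of outer products: (kth column of A) times (kth row of B)
C = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))
```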
Part 2: Matrix Properties
• (A few) special matrices
• Matrix transformations & the determinant
• Matrices & systems of algebraic equations
Special matrices: diagonal matrix
• This acts like scalar multiplication
Special matrices: identity matrix
• Ix = x for all x
Special matrices: inverse matrix
• M⁻¹M = MM⁻¹ = I
• Does the inverse always exist?
How does a matrix transform a square?
[Build: the unit square with sides (1,0) and (0,1) maps to the parallelogram with sides (3,1) and (0,2)]
Geometric definition of the determinant:
How does a matrix transform a square?
• The determinant is the factor by which the matrix scales the area of the unit square (signed: negative if the orientation flips)
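The geometric picture, checked numerically: the matrix sending (1,0) → (3,1) and (0,1) → (0,2) scales the unit square's area by its determinant.

```python
import numpy as np

# Columns are the images of (1,0) and (0,1)
M = np.array([[3.0, 0.0],
              [1.0, 2.0]])

# det(M) = area of the parallelogram with sides (3,1) and (0,2)
area = np.linalg.det(M)
```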
Example: solve the algebraic equation Mx = b
• If det(M) ≠ 0, M is invertible and the unique solution is x = M⁻¹b
Example of an underdetermined system
• Some non-zero x are sent to 0 (the set of all x with Mx = 0 is called the “nullspace” of M)
• This is because det(M) = 0, so M is not invertible. (If det(M) isn’t 0, the only solution of Mx = 0 is x = 0)
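A nullspace sketch in NumPy; the slides' particular matrix isn't printed here, so this reuses the rank-1 example with columns (1,2) and (-2,-4):

```python
import numpy as np

M = np.array([[1.0, -2.0],
              [2.0, -4.0]])

det = np.linalg.det(M)      # 0, so M is not invertible

# (2, 1) is a non-zero vector in the nullspace: M @ (2, 1) = 0
x = np.array([2.0, 1.0])
Mx = M @ x
```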
Part 3: Eigenvectors & eigenvalues
What do matrices do to vectors?
[Build: M maps the vector (2,1) to (3,5)]
• The new vector is: 1) rotated 2) scaled
Are there any special vectors
that only get scaled?
• Try (1,1): M(1,1) = (3,3)
• For this special vector, multiplying by M is like multiplying by a scalar.
• (1,1) is called an eigenvector of M
• 3 (the scaling factor) is called the eigenvalue associated with this eigenvector
Are there any other eigenvectors?
• Yes! The easiest way to find them is with MATLAB’s
eig command.
• Exercise: verify that (-1.5, 1) is also an
eigenvector of M.
• Note: eigenvectors are only defined up to a
scale factor.
– Conventions are either to make e’s unit vectors, or
make one of the elements 1
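The same computation in NumPy (MATLAB's `eig` corresponds to `np.linalg.eig`). The matrix below is a reconstruction consistent with the slide facts M(1,1) = (3,3) and (-1.5,1) an eigenvector, so treat it as an assumption:

```python
import numpy as np

M = np.array([[0.0, 3.0],
              [2.0, 1.0]])   # reconstructed from the slide examples (assumption)

evals, evecs = np.linalg.eig(M)   # MATLAB: [E, L] = eig(M)

# Verify the exercise: (-1.5, 1) is an eigenvector (M @ v is a multiple of v)
v = np.array([-1.5, 1.0])
Mv = M @ v
```
Note that `eig` returns unit-norm eigenvectors, one of the two conventions mentioned above.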
Step back:
Eigenvectors obey this equation
• Me = λe, i.e. (M − λI)e = 0, which has non-zero solutions e exactly when det(M − λI) = 0
• This is called the characteristic equation for λ
• In general, for an N x N matrix, there are N
eigenvectors
BREAK
Part 4: Examples (on blackboard)
• Principal Components Analysis (PCA)
• Single, linear differential equation
• Coupled differential equations
Part 5: Recap & Additional useful stuff
• Matrix diagonalization recap:
transforming between original & eigenvector coordinates
• More special matrices & matrix properties
• Singular Value Decomposition (SVD)
Coupled differential equations
• Calculate the eigenvectors and eigenvalues.
– Eigenvalues have typical form:
• The corresponding eigenvector component has
dynamics:
Practical program for approaching
equations coupled through a term Mx
• Step 1: Find the eigenvalues and eigenvectors of M (eig(M) in MATLAB).
• Step 2: Decompose x into its eigenvector components.
• Step 3: Stretch/scale each eigenvector component.
• Step 4: (solve for c and) transform back to original coordinates.
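The four steps, sketched in NumPy for dx/dt = Mx (the coupling matrix M and initial condition x0 here are illustrative assumptions):

```python
import numpy as np

M = np.array([[0.0, 3.0],
              [2.0, 1.0]])    # illustrative coupling matrix
x0 = np.array([1.0, 0.0])     # initial condition
t = 0.5

# Step 1: eigenvalues and eigenvectors of M (eig(M) in MATLAB)
lam, E = np.linalg.eig(M)

# Step 2: decompose x0 into eigenvector components, x0 = E @ c
c = np.linalg.solve(E, x0)

# Step 3: each eigenvector component evolves as c_i * exp(lam_i * t)
c_t = c * np.exp(lam * t)

# Step 4: transform back to the original coordinates
x_t = E @ c_t
```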
Putting it all together…
Step 2: Transform
into eigencoordinates
Step 3: Scale by λi
along the ith
eigencoordinate
Step 4: Transform
back to original
coordinate system
Putting it all together…
Left eigenvectors
• The rows of E⁻¹ are called the left eigenvectors
because they satisfy E⁻¹M = ΛE⁻¹.
• Together with the eigenvalues, they determine how x is
decomposed into each of its eigenvector components.
• M = EΛE⁻¹, where Λ is the matrix in the eigencoordinate system and M is the original matrix
Putting it all together…
• Note: M and Lambda look very different.
Q: Are there any properties that are preserved between them?
A: Yes, 2 very important ones:
Matrix in eigencoordinate
system
Original Matrix
Trace and Determinant
2.
1.
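A quick NumPy check of both invariants (reusing the illustrative 2 × 2 matrix from earlier):

```python
import numpy as np

M = np.array([[0.0, 3.0],
              [2.0, 1.0]])
lam = np.linalg.eigvals(M)

trace_M = np.trace(M)          # equals the sum of the eigenvalues
det_M = np.linalg.det(M)       # equals the product of the eigenvalues
```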
Special Matrices: Normal matrix
• Normal matrix: all eigenvectors are orthogonal
 Can transform to eigencoordinates (“change basis”) with a
simple rotation* of the coordinate axes
 A normal matrix’s eigenvector matrix E is a *generalized
rotation (unitary or orthogonal) matrix, defined by EᵀE = EEᵀ = I
(*note: generalized means one can also do reflections of the eigenvectors through a line/plane)
Special Matrices: Normal matrix
• Eigenvector decomposition in this case: M = EΛEᵀ (since E⁻¹ = Eᵀ)
• Left and right eigenvectors are identical!
Special Matrices
• Symmetric Matrix: M = Mᵀ
• e.g. Covariance matrices, Hopfield network
• Properties:
– Eigenvalues are real
– Eigenvectors are orthogonal (i.e. it’s a normal matrix)
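Both properties checked in NumPy on a small covariance-like symmetric matrix (the entries are illustrative); `eigh` is the symmetric/Hermitian eigensolver:

```python
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric: S == S.T

lam, E = np.linalg.eigh(S)           # real eigenvalues, orthonormal eigenvectors

# Eigenvectors are orthogonal (E^T E = I) and S = E diag(lam) E^T
reconstructed = E @ np.diag(lam) @ E.T
```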
SVD: Decomposes matrix into outer products
(e.g. of a neural/spatial mode and a temporal mode)
[Figure: an N × T data matrix with rows n = 1, …, N (e.g. neurons) and columns t = 1, …, T (e.g. time points)]
SVD: Decomposes matrix into outer products
(e.g. of a neural/spatial mode and a temporal mode)
• M = USVᵀ: the columns of U are eigenvectors of MMᵀ; the rows of Vᵀ are eigenvectors of MᵀM
• Note: the non-zero eigenvalues are the same for MᵀM and MMᵀ (the squared singular values)
• Thus, SVD pairs “spatial” patterns with associated
“temporal” profiles through the outer product
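An SVD sketch in NumPy on a random "neurons × time" matrix (the data is synthetic, purely for illustration): the matrix is rebuilt as a sum of outer products, and the eigenvalues of MMᵀ match the squared singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 6))   # e.g. N = 4 neurons by T = 6 time points

U, s, Vt = np.linalg.svd(M, full_matrices=False)

# M as a sum of outer products: s_i * (ith column of U) x (ith row of Vt),
# pairing each "spatial" mode with its "temporal" profile
M_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
```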
The End

More Related Content

Similar to LinearAlgebra_2016updatedFromwiki.ppt

Linear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear AlgebraLinear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear AlgebraMUHAMMADUSMAN93058
 
Basic MATLAB-Presentation.pptx
Basic MATLAB-Presentation.pptxBasic MATLAB-Presentation.pptx
Basic MATLAB-Presentation.pptxPremanandS3
 
machine learning.pptx
machine learning.pptxmachine learning.pptx
machine learning.pptxAbdusSadik
 
Synthesis of Linear and Non-Separable Planar Array Patterns
Synthesis of Linear and Non-Separable Planar Array PatternsSynthesis of Linear and Non-Separable Planar Array Patterns
Synthesis of Linear and Non-Separable Planar Array PatternsMichael Buckley
 
Tensor representations in signal processing and machine learning (tutorial ta...
Tensor representations in signal processing and machine learning (tutorial ta...Tensor representations in signal processing and machine learning (tutorial ta...
Tensor representations in signal processing and machine learning (tutorial ta...Tatsuya Yokota
 
Eigen values and Eigen vectors ppt world
Eigen values and Eigen vectors ppt worldEigen values and Eigen vectors ppt world
Eigen values and Eigen vectors ppt worldraykoustav145
 
Support Vector Machines- SVM
Support Vector Machines- SVMSupport Vector Machines- SVM
Support Vector Machines- SVMCarlo Carandang
 
Introduction to Matlab - Basic Functions
Introduction to Matlab - Basic FunctionsIntroduction to Matlab - Basic Functions
Introduction to Matlab - Basic Functionsjoellivz
 
super vector machines algorithms using deep
super vector machines algorithms using deepsuper vector machines algorithms using deep
super vector machines algorithms using deepKNaveenKumarECE
 
Matrix Algebra : Mathematics for Business
Matrix Algebra : Mathematics for BusinessMatrix Algebra : Mathematics for Business
Matrix Algebra : Mathematics for BusinessKhan Tanjeel Ahmed
 
Brief review on matrix Algebra for mathematical economics
Brief review on matrix Algebra for mathematical economicsBrief review on matrix Algebra for mathematical economics
Brief review on matrix Algebra for mathematical economicsfelekephiliphos3
 
Engineering Numerical Analysis-Introduction.pdf
Engineering Numerical Analysis-Introduction.pdfEngineering Numerical Analysis-Introduction.pdf
Engineering Numerical Analysis-Introduction.pdfssuseraae901
 
Principles of soft computing-Associative memory networks
Principles of soft computing-Associative memory networksPrinciples of soft computing-Associative memory networks
Principles of soft computing-Associative memory networksSivagowry Shathesh
 

Similar to LinearAlgebra_2016updatedFromwiki.ppt (20)

Linear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear AlgebraLinear Algebra Presentation including basic of linear Algebra
Linear Algebra Presentation including basic of linear Algebra
 
Basic MATLAB-Presentation.pptx
Basic MATLAB-Presentation.pptxBasic MATLAB-Presentation.pptx
Basic MATLAB-Presentation.pptx
 
Mini_Project
Mini_ProjectMini_Project
Mini_Project
 
Data Mining Lecture_9.pptx
Data Mining Lecture_9.pptxData Mining Lecture_9.pptx
Data Mining Lecture_9.pptx
 
machine learning.pptx
machine learning.pptxmachine learning.pptx
machine learning.pptx
 
MATLAB & Image Processing
MATLAB & Image ProcessingMATLAB & Image Processing
MATLAB & Image Processing
 
Synthesis of Linear and Non-Separable Planar Array Patterns
Synthesis of Linear and Non-Separable Planar Array PatternsSynthesis of Linear and Non-Separable Planar Array Patterns
Synthesis of Linear and Non-Separable Planar Array Patterns
 
Tensor representations in signal processing and machine learning (tutorial ta...
Tensor representations in signal processing and machine learning (tutorial ta...Tensor representations in signal processing and machine learning (tutorial ta...
Tensor representations in signal processing and machine learning (tutorial ta...
 
Eigen values and Eigen vectors ppt world
Eigen values and Eigen vectors ppt worldEigen values and Eigen vectors ppt world
Eigen values and Eigen vectors ppt world
 
Matlab introduction
Matlab introductionMatlab introduction
Matlab introduction
 
Support Vector Machines- SVM
Support Vector Machines- SVMSupport Vector Machines- SVM
Support Vector Machines- SVM
 
Introduction to Matlab - Basic Functions
Introduction to Matlab - Basic FunctionsIntroduction to Matlab - Basic Functions
Introduction to Matlab - Basic Functions
 
super vector machines algorithms using deep
super vector machines algorithms using deepsuper vector machines algorithms using deep
super vector machines algorithms using deep
 
Matrix Algebra : Mathematics for Business
Matrix Algebra : Mathematics for BusinessMatrix Algebra : Mathematics for Business
Matrix Algebra : Mathematics for Business
 
Chap8 slides
Chap8 slidesChap8 slides
Chap8 slides
 
Oed chapter 1
Oed chapter 1Oed chapter 1
Oed chapter 1
 
Brief review on matrix Algebra for mathematical economics
Brief review on matrix Algebra for mathematical economicsBrief review on matrix Algebra for mathematical economics
Brief review on matrix Algebra for mathematical economics
 
Engineering Numerical Analysis-Introduction.pdf
Engineering Numerical Analysis-Introduction.pdfEngineering Numerical Analysis-Introduction.pdf
Engineering Numerical Analysis-Introduction.pdf
 
Lesson 6
Lesson 6Lesson 6
Lesson 6
 
Principles of soft computing-Associative memory networks
Principles of soft computing-Associative memory networksPrinciples of soft computing-Associative memory networks
Principles of soft computing-Associative memory networks
 

Recently uploaded

Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxOH TEIK BIN
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxiammrhaywood
 
Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........LeaCamillePacle
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17Celine George
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Celine George
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfUjwalaBharambe
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Celine George
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementmkooblal
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxEyham Joco
 
ROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationAadityaSharma884161
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Mark Reed
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfSpandanaRallapalli
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 

Recently uploaded (20)

Solving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptxSolving Puzzles Benefits Everyone (English).pptx
Solving Puzzles Benefits Everyone (English).pptx
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptxECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
ECONOMIC CONTEXT - PAPER 1 Q3: NEWSPAPERS.pptx
 
Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........Atmosphere science 7 quarter 4 .........
Atmosphere science 7 quarter 4 .........
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17Field Attribute Index Feature in Odoo 17
Field Attribute Index Feature in Odoo 17
 
Raw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptxRaw materials used in Herbal Cosmetics.pptx
Raw materials used in Herbal Cosmetics.pptx
 
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdfFraming an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
Framing an Appropriate Research Question 6b9b26d93da94caf993c038d9efcdedb.pdf
 
OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...OS-operating systems- ch04 (Threads) ...
OS-operating systems- ch04 (Threads) ...
 
Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17Computed Fields and api Depends in the Odoo 17
Computed Fields and api Depends in the Odoo 17
 
Hierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of managementHierarchy of management that covers different levels of management
Hierarchy of management that covers different levels of management
 
Types of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptxTypes of Journalistic Writing Grade 8.pptx
Types of Journalistic Writing Grade 8.pptx
 
ROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint PresentationROOT CAUSE ANALYSIS PowerPoint Presentation
ROOT CAUSE ANALYSIS PowerPoint Presentation
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)Influencing policy (training slides from Fast Track Impact)
Influencing policy (training slides from Fast Track Impact)
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERP
 
Rapple "Scholarly Communications and the Sustainable Development Goals"
Rapple "Scholarly Communications and the Sustainable Development Goals"Rapple "Scholarly Communications and the Sustainable Development Goals"
Rapple "Scholarly Communications and the Sustainable Development Goals"
 
ACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdfACC 2024 Chronicles. Cardiology. Exam.pdf
ACC 2024 Chronicles. Cardiology. Exam.pdf
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 

LinearAlgebra_2016updatedFromwiki.ppt

  • 1. Introduction to Linear Algebra Mark Goldman Emily Mackevicius
  • 2. Outline 1. Matrix arithmetic 2. Matrix properties 3. Eigenvectors & eigenvalues -BREAK- 4. Examples (on blackboard) 5. Recap, additional matrix properties, SVD
  • 3. Part 1: Matrix Arithmetic (w/applications to neural networks)
  • 8. Product of 2 Vectors • Element-by-element • Inner product • Outer product Three ways to multiply
  • 9. Element-by-element product (Hadamard product) • Element-wise multiplication (.* in MATLAB)
  • 14. Multiplication: Dot product (inner product) 1 X N N X 1 1 X 1 • MATLAB: ‘inner matrix dimensions must agree’ Outer dimensions give size of resulting matrix
  • 15. Dot product geometric intuition: “Overlap” of 2 vectors
  • 16. Example: linear feed-forward network Input neurons’ Firing rates r1 r2 rn ri
  • 17. Example: linear feed-forward network Input neurons’ Firing rates r1 r2 rn ri
  • 18. Example: linear feed-forward network Input neurons’ Firing rates Output neuron’s firing rate r1 r2 rn ri
  • 19. Example: linear feed-forward network • Insight: for a given input (L2) magnitude, the response is maximized when the input is parallel to the weight vector • Receptive fields also can be thought of this way Input neurons’ Firing rates Output neuron’s firing rate r1 r2 rn ri
  • 27. Multiplication: Outer product • Note: each column or each row is a multiple of the others
  • 28. Matrix times a vector
  • 29. Matrix times a vector
  • 30. Matrix times a vector M X 1 M X N N X 1
  • 31. Matrix times a vector: inner product interpretation • Rule: the ith element of y is the dot product of the ith row of W with x
  • 32. Matrix times a vector: inner product interpretation • Rule: the ith element of y is the dot product of the ith row of W with x
  • 33. Matrix times a vector: inner product interpretation • Rule: the ith element of y is the dot product of the ith row of W with x
  • 34. Matrix times a vector: inner product interpretation • Rule: the ith element of y is the dot product of the ith row of W with x
  • 35. Matrix times a vector: inner product interpretation • Rule: the ith element of y is the dot product of the ith row of W with x
  • 36. Matrix times a vector: outer product interpretation • The product is a weighted sum of the columns of W, weighted by the entries of x
  • 37. Matrix times a vector: outer product interpretation • The product is a weighted sum of the columns of W, weighted by the entries of x
  • 38. Matrix times a vector: outer product interpretation • The product is a weighted sum of the columns of W, weighted by the entries of x
  • 39. Matrix times a vector: outer product interpretation • The product is a weighted sum of the columns of W, weighted by the entries of x
  • 40. Example of the outer product method
  • 41. Example of the outer product method (3,1) (0,2)
  • 42. Example of the outer product method (3,1) (0,4)
  • 43. Example of the outer product method (3,5) • Note: different combinations of the columns of M can give you any vector in the plane (we say the columns of M “span” the plane)
  • 44. Rank of a Matrix • Are there special matrices whose columns don’t span the full plane?
  • 45. Rank of a Matrix • Are there special matrices whose columns don’t span the full plane? (1,2) (-2, -4) • You can only get vectors along the (1,2) direction (i.e. outputs live in 1 dimension, so we call the matrix rank 1)
  • 46. Example: 2-layer linear network • Wij is the connection strength (weight) onto neuron yi from neuron xj.
  • 47. Example: 2-layer linear network: inner product point of view • What is the response of cell yi of the second layer? • The response is the dot product of the ith row of W with the vector x
  • 48. Example: 2-layer linear network: outer product point of view • How does cell xj contribute to the pattern of firing of layer 2? Contribution of xj to network output 1st column of W
  • 49. Product of 2 Matrices • MATLAB: ‘inner matrix dimensions must agree’ • Note: Matrix multiplication doesn’t (generally) commute, AB  BA N X P P X M N X M
  • 50. Matrix times Matrix: by inner products • Cij is the inner product of the ith row with the jth column
  • 51. Matrix times Matrix: by inner products • Cij is the inner product of the ith row with the jth column
  • 52. Matrix times Matrix: by inner products • Cij is the inner product of the ith row with the jth column
  • 53. Matrix times Matrix: by inner products • Cij is the inner product of the ith row of A with the jth column of B
  • 54. Matrix times Matrix: by outer products
  • 55. Matrix times Matrix: by outer products
  • 56. Matrix times Matrix: by outer products
  • 57. Matrix times Matrix: by outer products • C is a sum of outer products of the columns of A with the rows of B
  • 58. Part 2: Matrix Properties • (A few) special matrices • Matrix transformations & the determinant • Matrices & systems of algebraic equations
  • 59. Special matrices: diagonal matrix • This acts like scalar multiplication
  • 60. Special matrices: identity matrix for all
  • 61. Special matrices: inverse matrix • Does the inverse always exist?
  • 62. How does a matrix transform a square? (1,0) (0,1)
  • 63. How does a matrix transform a square? (1,0) (0,1)
  • 64. How does a matrix transform a square? (3,1) (0,2) (1,0) (0,1)
  • 65. Geometric definition of the determinant: How does a matrix transform a square? (1,0) (0,1)
  • 66. Example: solve the algebraic equation
  • 67. Example: solve the algebraic equation
  • 68. Example: solve the algebraic equation 
  • 69. • Some non-zero vectors are sent to 0 Example of an underdetermined system
  • 70. • Some non-zero vectors are sent to 0 Example of an underdetermined system
  • 71. Example of an underdetermined system 
  • 72. Example of an underdetermined system • Some non-zero x are sent to 0 (the set of all x with Mx=0 are called the “nullspace” of M) • This is because det(M)=0 so M is not invertible. (If det(M) isn’t 0, the only solution is x = 0) 
  • 73. Part 3: Eigenvectors & eigenvalues
  • 74. What do matrices do to vectors? (3,1) (0,2) (2,1)
  • 76. What do matrices do to vectors? (3,5) • The new vector is: 1) rotated 2) scaled (2,1)
  • 77. Are there any special vectors that only get scaled?
  • 78. Are there any special vectors that only get scaled? Try (1,1)
  • 79. Are there any special vectors that only get scaled? = (1,1)
  • 80. Are there any special vectors that only get scaled? = (3,3) = (1,1)
  • 81. Are there any special vectors that only get scaled? • For this special vector, multiplying by M is like multiplying by a scalar. • (1,1) is called an eigenvector of M • 3 (the scaling factor) is called the eigenvalue associated with this eigenvector = (1,1) = (3,3)
  • 82. Are there any other eigenvectors? • Yes! The easiest way to find them is with MATLAB’s eig command. • Exercise: verify that (-1.5, 1) is also an eigenvector of M. • Note: eigenvectors are only defined up to a scale factor. – Conventions are either to make e’s unit vectors, or to make one of the elements 1
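The exercise can be verified directly, assuming the matrix M = [[0,3],[2,1]] implied by the earlier slides' mappings (1,0) to (0,2) and (0,1) to (3,1):

```python
# Verify the slides' eigenvectors numerically.

def matvec(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1],
            M[1][0]*x[0] + M[1][1]*x[1]]

M = [[0, 3],
     [2, 1]]

print(matvec(M, [1, 1]))      # [3, 3]  =  3 * (1, 1):    eigenvalue 3
print(matvec(M, [-1.5, 1]))   # [3.0, -2.0] = -2 * (-1.5, 1): eigenvalue -2
```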
  • 86. Step back: eigenvectors obey the equation Me = λe, i.e. det(M − λI) = 0 • This is called the characteristic equation for λ • In general, for an N x N matrix, there are N eigenvalues (counted with multiplicity), each with an associated eigenvector
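For a 2x2 matrix the characteristic equation is just a quadratic, λ² − trace(M)·λ + det(M) = 0, so it can be solved by hand; the example matrix is the one assumed from the earlier slides:

```python
# Solve the 2x2 characteristic equation det(M - lam*I) = 0 directly:
# lam^2 - trace(M)*lam + det(M) = 0.
import math

M = [[0, 3],
     [2, 1]]
tr  = M[0][0] + M[1][1]                    # 1
det = M[0][0]*M[1][1] - M[0][1]*M[1][0]    # -6

disc = tr*tr - 4*det                       # 25
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2
print(lam1, lam2)                          # 3.0 -2.0
```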
  • 87. BREAK
  • 88. Part 4: Examples (on blackboard) • Principal Components Analysis (PCA) • Single, linear differential equation • Coupled differential equations
  • 89. Part 5: Recap & Additional useful stuff • Matrix diagonalization recap: transforming between original & eigenvector coordinates • More special matrices & matrix properties • Singular Value Decomposition (SVD)
  • 90. Coupled differential equations dx/dt = Mx • Calculate the eigenvectors and eigenvalues. – Eigenvalues have typical form: λ = a + iω • The corresponding eigenvector component has dynamics: c(t) = c(0) e^(λt) = c(0) e^(at) (cos ωt + i sin ωt)
  • 91. Practical program for approaching equations coupled through a term Mx • Step 1: Find the eigenvalues and eigenvectors of M (eig(M) in MATLAB). • Step 2: Decompose x into its eigenvector components • Step 3: Stretch/scale each eigenvalue component • Step 4: (solve for c and) transform back to original coordinates.
  • 97. Putting it all together… • Where (step 1): M = E Λ E⁻¹ • MATLAB: [E, L] = eig(M)
  • 98. Putting it all together… • Step 2: Transform into eigencoordinates: c = E⁻¹ x • Step 3: Scale by λᵢ along the i-th eigencoordinate • Step 4: Transform back to the original coordinate system: x = E c
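The four steps can be sketched end-to-end for dx/dt = Mx. This is a minimal sketch using the 2x2 example assumed from the earlier slides, M = [[0,3],[2,1]] with eigenpairs (3, (1,1)) and (-2, (-1.5,1)); in MATLAB step 1 would come from [E, L] = eig(M), while here the eigenpairs are hardcoded:

```python
# Solve dx/dt = M x via x(t) = E diag(exp(lam_i t)) E^{-1} x(0).
import math

lams = [3.0, -2.0]
E    = [[1.0, -1.5],          # columns are the eigenvectors
        [1.0,  1.0]]

detE = E[0][0]*E[1][1] - E[0][1]*E[1][0]          # 2.5
Einv = [[ E[1][1]/detE, -E[0][1]/detE],
        [-E[1][0]/detE,  E[0][0]/detE]]

def solve(x0, t):
    # Step 2: decompose x0 into eigencoordinates, c = E^{-1} x0
    c = [Einv[i][0]*x0[0] + Einv[i][1]*x0[1] for i in range(2)]
    # Step 3: each eigencomponent evolves as c_i * exp(lam_i * t)
    c = [c[i] * math.exp(lams[i]*t) for i in range(2)]
    # Step 4: transform back, x(t) = E c
    return [E[i][0]*c[0] + E[i][1]*c[1] for i in range(2)]

print(solve([1.0, 1.0], 0.0))   # t = 0 recovers x0 (up to float rounding)
```

Starting on the eigenvector (1,1), the solution stays on that line and grows as e^(3t), which is the point of the decomposition.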
  • 99. Left eigenvectors • The rows of E⁻¹ are called the left eigenvectors because they satisfy E⁻¹ M = Λ E⁻¹. • Together with the eigenvalues, they determine how x is decomposed into each of its eigenvector components.
  • 101. Original matrix M vs. the matrix Λ in the eigencoordinate system • Note: M and Λ look very different. Q: Are there any properties that are preserved between them? A: Yes, 2 very important ones: 1. Trace(M) = Trace(Λ) = Σᵢ λᵢ 2. det(M) = det(Λ) = Πᵢ λᵢ
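Both invariants are easy to check on the assumed running example M = [[0,3],[2,1]], whose eigenvalues are 3 and -2:

```python
# trace(M) = sum of eigenvalues; det(M) = product of eigenvalues.

M = [[0, 3],
     [2, 1]]
lams = [3, -2]

trace = M[0][0] + M[1][1]
det   = M[0][0]*M[1][1] - M[0][1]*M[1][0]

print(trace, sum(lams))        # 1 1
print(det, lams[0]*lams[1])    # -6 -6
```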
  • 102. Special Matrices: Normal matrix • Normal matrix: all eigenvectors are orthogonal → Can transform to eigencoordinates (“change basis”) with a simple rotation* of the coordinate axes → A normal matrix’s eigenvector matrix E is a *generalized rotation (unitary or orthonormal) matrix (*note: “generalized” means one can also do reflections of the eigenvectors through a line/plane)
  • 103. Special Matrices: Normal matrix • E is a rotation (unitary or orthogonal) matrix, defined by: Eᵀ E = E Eᵀ = I, i.e. E⁻¹ = Eᵀ
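A sketch of the defining property EᵀE = I, using a hypothetical symmetric (hence normal) matrix S = [[2,1],[1,2]] whose unit eigenvectors are (1,1)/√2 (eigenvalue 3) and (1,-1)/√2 (eigenvalue 1):

```python
# The eigenvector matrix of a normal matrix is orthogonal: E^T E = I.
import math

s = 1 / math.sqrt(2)
E = [[s,  s],
     [s, -s]]            # columns are orthonormal eigenvectors

# Compute E^T E; it should be the identity (up to float rounding):
EtE = [[sum(E[k][i]*E[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(EtE)   # approximately [[1, 0], [0, 1]]
```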
  • 104. Special Matrices: Normal matrix • Eigenvector decomposition in this case: M = E Λ Eᵀ = Σᵢ λᵢ eᵢ eᵢᵀ • Left and right eigenvectors are identical!
  • 105. Special Matrices • Symmetric matrix (Mᵀ = M): • e.g. covariance matrices, Hopfield network • Properties: – Eigenvalues are real – Eigenvectors are orthogonal (i.e. it’s a normal matrix)
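For a real symmetric 2x2 matrix [[a,b],[b,d]] the characteristic equation's discriminant is (a-d)² + 4b² ≥ 0, so the eigenvalues are always real; a sketch with hypothetical values:

```python
# Real eigenvalues and orthogonal eigenvectors of a symmetric 2x2 matrix.
import math

a, b, d = 2.0, 1.0, 2.0          # symmetric matrix [[2,1],[1,2]]
disc = (a - d)**2 + 4*b*b        # 4.0 >= 0  ->  real eigenvalues
lam1 = ((a + d) + math.sqrt(disc)) / 2
lam2 = ((a + d) - math.sqrt(disc)) / 2
print(lam1, lam2)                # 3.0 1.0

# The eigenvectors (1,1) and (1,-1) are orthogonal:
print(1*1 + 1*(-1))              # 0
```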
  • 106. SVD: Decomposes matrix into outer products (e.g. of a neural/spatial mode and a temporal mode) [Figure: data matrix M with rows indexed by neurons n = 1 … N and columns by time points t = 1 … T]
  • 108. SVD: Decomposes matrix into outer products (e.g. of a neural/spatial mode and a temporal mode) • Rows of Vᵀ are eigenvectors of MᵀM • Columns of U are eigenvectors of MMᵀ • Note: the (nonzero) eigenvalues are the same for MᵀM and MMᵀ
  • 109. SVD: Decomposes matrix into outer products (e.g. of a neural/spatial mode and a temporal mode) • Rows of Vᵀ are eigenvectors of MᵀM • Columns of U are eigenvectors of MMᵀ • Thus, SVD pairs “spatial” patterns with associated “temporal” profiles through the outer product
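The claim that MᵀM and MMᵀ share eigenvalues can be checked for a 2x2 example: two 2x2 matrices have the same characteristic polynomial exactly when their traces and determinants agree. M is the matrix assumed from the earlier slides:

```python
# SVD consistency check: M^T M and M M^T share the same eigenvalues.

def matmul2(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M  = [[0, 3], [2, 1]]
Mt = [[M[j][i] for j in range(2)] for i in range(2)]   # transpose

MtM = matmul2(Mt, M)
MMt = matmul2(M, Mt)

tr  = lambda A: A[0][0] + A[1][1]
det = lambda A: A[0][0]*A[1][1] - A[0][1]*A[1][0]

print(tr(MtM), tr(MMt))      # 14 14: equal traces
print(det(MtM), det(MMt))    # 36 36: equal determinants (= det(M)^2)
```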

Editor's Notes

  1. We’ll see implications of this feature shortly… (motivation for “rank 1”/1-dimensional matrix) and also underlying components of SVD
  2. Such rank 1 matrices can be written as an outer product, e.g. in this case as (1 2)’ * (1 -2)
  3. Note: this way of matrix multiplying, i.e. breaking into outer products is key to Singular Value Decomposition (SVD) which I’ll discuss at end of lecture; note that each outer product is rank 1 since each column (or row) is a multiple of every other column (or row)
  4. For scalars, there is no inverse if the scalar’s value is 0. For matrices, the corresponding quantity that has to be nonzero for the inverse to exist is called the determinant, which we’ll show next corresponds geometrically (up to a sign) to an area (or, in higher dimensions, volume).
  5. Note, potentially a bit confusing: the vector (1,0) got mapped into (0,2) and (0,1) got mapped into (3,1): this relates to fact that determinant is negative (i.e. in some sense, orientation of parallelogram flipped)
  6. If above determinant is zero, then inverse doesn’t exist and can’t conclude that x=0 by simply multiplying both sides of the equation by the inverse.
  8. Step 4: (solve for c and) transform back to original coordinates.
  9. Notes: 1) Sometimes left eigenvectors are called adjoint eigenvectors. 2) Later we’ll show that in some cases the right and left eigenvectors are the same (normal matrices). It is easily shown that they obey the equation e_left M = λ e_left, i.e. they are analogous to the usual (right) eigenvectors except that they multiply M from the *left* (and they must be written as row vectors to do this). Proof (can do on board): M = E Λ E⁻¹, so E⁻¹ M = Λ E⁻¹. 3) The left eigenvectors of Mᵀ are the same as the right eigenvectors of M. Proof: M E = E Λ implies Eᵀ Mᵀ = Λ Eᵀ (i.e. the rows of Eᵀ are the left eigenvectors of Mᵀ, and are identical to the columns of E). 4) Even though eigenvectors are not necessarily orthogonal, left & right eigenvectors corresponding to distinct eigenvalues are orthogonal. Proof: follows directly from E⁻¹ E = I.
  10. Key feature: if decomposing a general vector into components in an orthonormal coordinate system, then do this through simple orthogonal projection along the orthonormal axes of the coordinate system. This can be done through a dot product (as noted near the beginning of this tutorial…). Since transforming into the eigenvector coordinate system is accomplished by a dot product with the rows of E^(-1), we see that this operation is simply a dot product with the eigenvectors when we have E^(-1) = E transpose. Note: reflections could technically be through any axis, because one can then rotate more to make up for which axis one reflected through. Note that performing reflection changes the sign of the determinant (e.g. from =1 to -1). [Aside/not relevant here but for cultural interest: Unitary or orthonormal matrices that are restricted to have positive determinant (= +1) are called “special unitary” or “special orthonormal” matrices.] Also note that the absolute value of the determinant equaling 1 for unitary/orthonormal matrices reflects the earlier comment that determinants give the area of the transformation of a unit square, and a rotation matrix is area-preserving.
  11. Proof that eigenvalues are real: Suppose Av = λv, where λ is potentially complex, A = Aᵀ (ᵀ denotes transpose), and A is real-valued. Consider v*ᵀAv, where * denotes complex conjugation (NOT multiplication): v*ᵀ(Av) = λ v*ᵀv = λ|v|², but also v*ᵀAv = (Aᵀv*)ᵀv = (Av*)ᵀv = λ*|v|². Thus λ = λ*, so λ is real. Proof that eigenvectors are orthogonal: now consider two eigenvectors v and w with distinct eigenvalues, and evaluate vᵀAw in the two ways above to see that vᵀw must equal zero. Note: covariance matrices have the further restriction that eigenvalues are non-negative; this results from the fact that covariance matrices are of the form XXᵀ.
  12. Note 1: Each neural mode has an associated temporal mode, and that M can be written as a linear sum of outer products. Note 2: Alternatively, the time course of each spatial mode (i.e. the strength of the spatial mode at each point in time) is given by the associated temporal eigenvector times the singular value. Sometimes people refer to each temporal eigenvector scaled by its singular value as the temporal ‘component’ to distinguish it from the spatial ‘mode’. Note 3: remember that outer products are 1d (i.e. rank 1) Note 4: if you are centering your data, you can subtract off the mean of *either* your rows *or* your columns, but you cannot do both (i.e. first subtract off mean of rows, then subtract off means of resulting columns) or you will lose a dimension. More generally, if you think of your individual data points as occupying either the rows or columns of M, then you want to do the operation that corresponds to subtracting the means of the data points, i.e. “centering the data”; if you do the other mean subtraction, you will see that your data loses a dimension. Try this with a 2x3 matrix, plotting it with the interpretation of its being 3 two-dimensional data points to see what happens if you subtract off the “wrong mean”.