CHAPTER 8
UNSUPERVISED LEARNING:
PRINCIPAL-COMPONENTS ANALYSIS (PCA)
CSC445: Neural Networks
Prof. Dr. Mostafa Gadal-Haqq M. Mostafa
Computer Science Department
Faculty of Computer & Information Sciences
AIN SHAMS UNIVERSITY
Credits: Some slides are taken from presentations on PCA by:
1. Barnabás Póczos University of Alberta
2. Jieping Ye, http://www.public.asu.edu/~jye02
Outline
 Introduction
 Tasks of Unsupervised Learning
 What is Data Reduction?
 Why do we need to Reduce Data Dimensionality?
 Clustering and Data Reduction
 The PCA Computation
 Computer Experiment
Unsupervised Learning
 In unsupervised learning, the requirement is to discover
significant patterns, or features, of the input data
through the use of unlabeled examples.
 That is, the network operates according to the rule:
“Learn from examples without a teacher”
What is feature reduction?
 Feature reduction refers to the mapping of the original high-dimensional data
onto a lower-dimensional space.
 The criterion for feature reduction depends on the problem setting:
 Unsupervised setting: minimize the information loss
 Supervised setting: maximize the class discrimination
 Given a set of data points of $p$ variables: $x_1, x_2, \dots, x_n \in \mathbb{R}^p$
 Compute the linear transformation (projection)
$$G \in \mathbb{R}^{p \times d}: \quad x \in \mathbb{R}^p \;\mapsto\; y = G^T x \in \mathbb{R}^d \qquad (d \ll p)$$
High Dimensional Data
Examples: gene expression data, face images, handwritten digits
Why feature reduction?
 Most machine learning and data mining
techniques may not be effective for high-
dimensional data
 Curse of Dimensionality
 Query accuracy and efficiency degrade rapidly as the
dimension increases.
 The intrinsic dimension may be small.
 For example, the number of genes responsible for a certain
type of disease may be small.
Why feature reduction?
 Visualization: projection of high-dimensional data
onto 2D or 3D.
 Data compression: efficient storage and retrieval.
 Noise removal: positive effect on query accuracy.
What is Principal Component Analysis?
 Principal component analysis (PCA)
 Reduce the dimensionality of a data set by finding a new set of
variables, smaller than the original set of variables
 Retains most of the sample's information.
 Useful for the compression and classification of data.
 By information we mean the variation present in the
sample, given by the correlations between the original
variables.
 The new variables, called principal components (PCs), are
uncorrelated, and are ordered by the fraction of the total
information each retains.
principal components (PCs)
• the 1st PC, $z_1$, is a minimum-distance fit to a line in the $X$ space
• the 2nd PC, $z_2$, is a minimum-distance fit to a line in the plane
perpendicular to the 1st PC
PCs are a series of linear least squares fits to a sample,
each orthogonal to all the previous ones.
Algebraic definition of PCs
Given a sample of $n$ observations on a vector of $p$ variables
$$x_1, x_2, \dots, x_n \in \mathbb{R}^p,$$
define the first principal component of the sample
by the linear transformation
$$z_1 = a_1^T x_j = \sum_{i=1}^{p} a_{i1}\, x_{ij}, \qquad j = 1, 2, \dots, n,$$
where the vectors
$$a_1 = (a_{11}, a_{21}, \dots, a_{p1})^T, \qquad x_j = (x_{1j}, x_{2j}, \dots, x_{pj})^T,$$
and $a_1$ is chosen such that $\mathrm{var}[z_1]$ is maximum.
Algebraic Derivation of the PCA
To find $a_1$, first note that
$$\mathrm{var}[z_1] = E[(z_1 - \bar{z}_1)^2]
= \frac{1}{n}\sum_{i=1}^{n}\big(a_1^T x_i - a_1^T \bar{x}\big)^2
= \frac{1}{n}\sum_{i=1}^{n} a_1^T (x_i - \bar{x})(x_i - \bar{x})^T a_1
= a_1^T S\, a_1,$$
where
$$S = \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})(x_i - \bar{x})^T$$
is the covariance matrix and $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the mean.
In the following, we assume the data is centered: $\bar{x} = 0$.
Algebraic derivation of PCs
Assume $\bar{x} = 0$. Form the matrix
$$X = [x_1, x_2, \dots, x_n] \in \mathbb{R}^{p \times n},$$
then
$$S = \frac{1}{n} X X^T.$$
Obtain the eigenvectors of $S$ by computing the SVD of $X$:
$$X = U \Sigma V^T.$$
Principal Component Analysis
PCA: an orthogonal projection of the data onto a lower-dimensional linear
space that...
 maximizes the variance of the projected data (purple line)
 minimizes the mean squared distance between each
data point and its projection (sum of blue lines)
Principal Components Analysis
Idea:
 Given data points in a d-dimensional space,
project them into a lower-dimensional space while preserving as much
information as possible
 E.g., find the best planar approximation to 3-D data
 E.g., find the best 12-D approximation to $10^4$-D data
 In particular, choose the projection that
minimizes the squared error
in reconstructing the original data
The Principal Components
 Vectors originating from the center of mass
 Principal component #1 points
in the direction of the largest variance.
 Each subsequent principal component…
 is orthogonal to the previous ones, and
 points in the direction of the largest
variance of the residual subspace
2D Gaussian dataset
1st PCA axis
2nd PCA axis
PCA algorithm I (sequential)
Given the centered data {x_1, …, x_m}, compute the principal vectors:
1st PCA vector:
$$w_1 = \arg\max_{\|w\|=1} \frac{1}{m}\sum_{i=1}^{m} \big(w^T x_i\big)^2$$
We maximize the variance of the projection of $x$.
kth PCA vector:
$$w_k = \arg\max_{\|w\|=1} \frac{1}{m}\sum_{i=1}^{m} \Big[ w^T \Big( x_i - \sum_{j=1}^{k-1} w_j w_j^T x_i \Big) \Big]^2$$
We maximize the variance of the projection in the residual subspace.
PCA reconstruction (with two components):
$$x' = w_1 (w_1^T x) + w_2 (w_2^T x)$$
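As a concrete illustration of this sequential scheme, here is a minimal NumPy sketch (my own illustration, not code from the lecture): each principal vector is found by power iteration on the current residual data, and the data are then deflated before the next component is extracted.

```python
import numpy as np

def sequential_pca(X, k, n_iter=200):
    """Sequential PCA sketch. X: m x p centered data (rows = samples); returns k unit principal vectors."""
    R = X.copy()                       # residual data, deflated after each component
    W = []
    for _ in range(k):
        w = np.random.randn(R.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(n_iter):        # power iteration on R^T R (proportional to the residual covariance)
            w = R.T @ (R @ w)
            w /= np.linalg.norm(w)
        W.append(w)
        R = R - np.outer(R @ w, w)     # project out the direction just found
    return np.array(W)                 # k x p; row i approximates w_i (up to sign)
```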
PCA algorithm II
(sample covariance matrix)
 Given data {x_1, …, x_m}, compute the covariance matrix
$$\Sigma = \frac{1}{m}\sum_{i=1}^{m} (x_i - \bar{x})(x_i - \bar{x})^T, \qquad \text{where } \bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$$
 PCA basis vectors = the eigenvectors of $\Sigma$
 Larger eigenvalue $\Rightarrow$ more important eigenvector
PCA algorithm II
PCA algorithm(X, k): top k eigenvalues/eigenvectors
% X = N × m data matrix,
% … each data point x_i = column vector, i = 1..m
• $\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$
• X ← subtract mean $\bar{x}$ from each column vector x_i in X
• Σ ← X Xᵀ … covariance matrix of X
• { λ_i, u_i }_{i=1..N} = eigenvectors/eigenvalues of Σ, with λ_1 ≥ λ_2 ≥ … ≥ λ_N
• Return { λ_i, u_i }_{i=1..k}   % top k principal components
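The pseudocode above translates almost line for line into NumPy; the sketch below is my own rendering of it (the function and variable names are assumptions, not part of the slides).

```python
import numpy as np

def pca_eig(X, k):
    """X: N x m data matrix, one data point per column. Returns the top-k eigenvalues and eigenvectors."""
    x_bar = X.mean(axis=1, keepdims=True)     # mean vector
    Xc = X - x_bar                            # subtract the mean from each column
    Sigma = Xc @ Xc.T                         # covariance matrix of X (N x N, up to a 1/m factor)
    evals, evecs = np.linalg.eigh(Sigma)      # eigh: eigen-decomposition of a symmetric matrix
    order = np.argsort(evals)[::-1][:k]       # sort eigenvalues in decreasing order, keep the top k
    return evals[order], evecs[:, order]      # top-k principal components as columns
```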
PCA algorithm III
(SVD of the data matrix)
Singular Value Decomposition of the centered data matrix X:
$$X_{\text{features} \times \text{samples}} = U S V^T$$
(Figure: block structure of $U$, $S$, $V^T$; the leading singular values/vectors carry the significant structure, the trailing ones mostly noise.)
PCA algorithm III
 Columns of U
 the principal vectors, { u^(1), …, u^(k) }
 orthogonal and of unit norm, so UᵀU = I
 Can reconstruct the data using linear combinations of
{ u^(1), …, u^(k) }
 Matrix S
 Diagonal
 Shows the importance of each eigenvector
 Columns of Vᵀ
 The coefficients for reconstructing the samples
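Algorithm III is essentially one library call; here is a hedged NumPy sketch (my own naming) of extracting the principal vectors and a rank-k reconstruction from the SVD of the centered data matrix:

```python
import numpy as np

def pca_svd(X, k):
    """X: features x samples, already centered. Returns U_k, singular values, coefficients, reconstruction."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X = U S V^T
    U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]        # keep the k most significant components
    X_hat = U_k @ np.diag(s_k) @ Vt_k                  # rank-k reconstruction of the data
    return U_k, s_k, Vt_k, X_hat
```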
Face recognition
Challenge: Facial Recognition
 Want to identify specific person, based on facial image
 Robust to glasses, lighting,…
 Can’t just use the given 256 x 256 pixels
Applying PCA: Eigenfaces
 Example data set: Images of faces
 Famous Eigenface approach
[Turk & Pentland], [Sirovich & Kirby]
 Each face x is …
 256 × 256 values (luminance at each location)
 $x \in \mathbb{R}^{256 \cdot 256}$ (view as a 64K-dim vector)
 Form X = [ x_1, …, x_m ], the centered data matrix
(one 256×256 real-valued face per column, m faces)
 Compute Σ = XXᵀ
 Problem: Σ is 64K × 64K … HUGE!!!
Method A: Build a PCA subspace for each person and check
which subspace can reconstruct the test image the best
Method B: Build one PCA database for the whole dataset and
then classify based on the weights.
Computational Complexity
 Suppose m instances, each of size N
 Eigenfaces: m = 500 faces, each of size N = 64K
 Given the N × N covariance matrix Σ, we can compute
 all N eigenvectors/eigenvalues in O(N³)
 the first k eigenvectors/eigenvalues in O(k N²)
 But if N = 64K, this is EXPENSIVE!
A Clever Workaround
 Note that m << 64K
 Use L = XᵀX (an m × m matrix) instead of Σ = XXᵀ
 If v is an eigenvector of L,
then Xv is an eigenvector of Σ
Proof:
$$L v = \lambda v \;\Rightarrow\; X^T X v = \lambda v \;\Rightarrow\; X (X^T X v) = X(\lambda v) = \lambda\, X v \;\Rightarrow\; (X X^T)(X v) = \lambda (X v) \;\Rightarrow\; \Sigma (X v) = \lambda (X v)$$
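A sketch of this workaround in NumPy (my own illustration under the slide's assumptions, with X the N × m centered face matrix and m << N):

```python
import numpy as np

def eigenfaces_small(X, k):
    """X: N x m centered data matrix (N pixels, m faces). Returns the top-k eigenfaces as columns."""
    L = X.T @ X                          # m x m matrix instead of the huge N x N covariance
    evals, V = np.linalg.eigh(L)
    order = np.argsort(evals)[::-1][:k]
    evals, V = evals[order], V[:, order]
    U = X @ V                            # each column X v is an eigenvector of X X^T
    U /= np.linalg.norm(U, axis=0)       # rescale the eigenfaces to unit norm
    return evals, U
```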
Principal Components (Method B)
Reconstructing… (Method B)
 … faster if we train with…
 only people w/out glasses
 same lighting conditions
Shortcomings
 Requires carefully controlled data:
 All faces centered in frame
 Same size
 Some sensitivity to angle
 Alternative:
 “Learn” one set of PCA vectors for each angle
 Use the one with lowest error
 Method is completely knowledge free
 (sometimes this is good!)
 Doesn’t know that faces are wrapped around 3D objects
(heads)
 Makes no effort to preserve class distinctions
Facial expression recognition
Happiness subspace (method A)
Disgust subspace (method A)
Facial Expression Recognition Movies (method A)
Image Compression
Original Image
• Divide the original 372x492 image into patches:
• Each patch is an instance that contains 12x12 pixels on a grid
• View each as a 144-D vector
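A possible NumPy sketch of this patch-based compression experiment (the dimensions follow the slide; the helper function and its arguments are my own assumptions):

```python
import numpy as np

def compress_image_pca(img, patch=12, d=16):
    """img: 2-D array whose sides are multiples of `patch` (e.g. 372 x 492). Keeps d of the 144 components."""
    H, W = img.shape
    # cut the image into non-overlapping patch x patch blocks, one 144-D row per block
    blocks = (img.reshape(H // patch, patch, W // patch, patch)
                 .transpose(0, 2, 1, 3)
                 .reshape(-1, patch * patch))
    mean = blocks.mean(axis=0)
    U, s, Vt = np.linalg.svd(blocks - mean, full_matrices=False)
    codes = (blocks - mean) @ Vt[:d].T            # d numbers stored per patch
    recon = codes @ Vt[:d] + mean                 # reconstruct each patch from its d-D code
    return (recon.reshape(H // patch, W // patch, patch, patch)
                 .transpose(0, 2, 1, 3)
                 .reshape(H, W))
```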
L2 error and PCA dim
PCA compression: 144D → 60D
PCA compression: 144D → 16D
16 most important eigenvectors
PCA compression: 144D → 6D
6 most important eigenvectors
3 most important eigenvectors
PCA compression: 144D → 3D
3 most important eigenvectors
PCA compression: 144D → 1D
60 most important eigenvectors
Looks like the discrete cosine bases of JPG!...
2D Discrete Cosine Basis
http://en.wikipedia.org/wiki/Discrete_cosine_transform
Noise Filtering
Noise Filtering, Auto-Encoder…
(Figure: the noisy image x is projected onto the PCA basis U and reconstructed as x′.)
Noisy image
Denoised image
using 15 PCA components
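A plausible NumPy sketch of that denoising step (the slide only reports the result; this projection-and-reconstruction with 15 components is my own illustration):

```python
import numpy as np

def pca_denoise(patches, k=15):
    """patches: m x D matrix of noisy patches (one per row). Project onto the top-k PCs and reconstruct."""
    mean = patches.mean(axis=0)
    U, s, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    W = Vt[:k]                                   # top-k principal directions (rows)
    return (patches - mean) @ W.T @ W + mean     # x' = x_bar + W^T W (x - x_bar), row by row
```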
PCA Shortcomings
PCA, a Problematic Data Set
PCA doesn’t know labels!
PCA vs Fisher Linear Discriminant
 PCA maximizes variance,
independent of class (magenta line)
 FLD attempts to separate the classes (green line)
PCA, a Problematic Data Set
PCA cannot capture NON-LINEAR structure!
PCA Conclusions
 PCA
 finds orthonormal basis for data
 Sorts dimensions in order of “importance”
 Discards low-significance dimensions
 Uses:
 Get compact description
 Ignore noise
 Improve classification (hopefully)
 Not magic:
 Doesn’t know class labels
 Can only capture linear variations
 One of many tricks to reduce dimensionality!
Applications of PCA
 Eigenfaces for recognition. Turk and Pentland.
1991.
 Principal Component Analysis for clustering gene
expression data. Yeung and Ruzzo. 2001.
 Probabilistic Disease Classification of Expression-
Dependent Proteomic Data from Mass
Spectrometry of Human Serum. Lilien. 2003.
PCA for image compression
d=1 d=2 d=4 d=8
d=16 d=32 d=64 d=100
Original
Image