CHAPTER 8
UNSUPERVISED LEARNING:
PRINCIPAL-COMPONENTS ANALYSIS (PCA)
CSC445: Neural Networks
Prof. Dr. Mostafa Gadal-Haqq M. Mostafa
Computer Science Department
Faculty of Computer & Information Sciences
AIN SHAMS UNIVERSITY
Credits: Some Slides are taken from presentations on PCA by :
1. Barnabás Póczos University of Alberta
2. Jieping Ye, http://www.public.asu.edu/~jye02
Outline
 Introduction
 Tasks of Unsupervised Learning
 What is Data Reduction?
 Why do we need to Reduce Data Dimensionality?
 Clustering and Data Reduction
 The PCA Computation
 Computer Experiment
Unsupervised Learning
 In unsupervised learning, the requirement is to discover
significant patterns, or features, of the input data
through the use of unlabeled examples.
 That is, the network operates according to the rule:
“Learn from examples without a teacher”
What is feature reduction?
 Feature reduction refers to the mapping of the original high-dimensional data
onto a lower-dimensional space.
 Criterion for feature reduction can be different based on different problem settings.
 Unsupervised setting: minimize the information loss
 Supervised setting: maximize the class discrimination
 Given a set of data points $x_1, x_2, \ldots, x_n$ of p variables, compute the linear transformation (projection)
$G \in \mathbb{R}^{p \times d}: \quad x \in \mathbb{R}^p \;\mapsto\; y = G^T x \in \mathbb{R}^d \qquad (d < p)$
High-Dimensional Data
Examples: gene expression data, face images, handwritten digits.
Why feature reduction?
 Most machine learning and data mining techniques may not be effective for high-dimensional data
 Curse of Dimensionality
 Query accuracy and efficiency degrade rapidly as the
dimension increases.
 The intrinsic dimension may be small.
 For example, the number of genes responsible for a certain
type of disease may be small.
Why feature reduction?
 Visualization: projection of high-dimensional data
onto 2D or 3D.
 Data compression: efficient storage and retrieval.
 Noise removal: positive effect on query accuracy.
What is Principal Component Analysis?
 Principal component analysis (PCA)
 Reduce the dimensionality of a data set by finding a new set of
variables, smaller than the original set of variables
 Retains most of the sample's information.
 Useful for the compression and classification of data.
 By information we mean the variation present in the
sample, given by the correlations between the original
variables.
 The new variables, called principal components (PCs), are
uncorrelated, and are ordered by the fraction of the total
information each retains.
Principal components (PCs)
[Figure: a data cloud with its principal axes $z_1$ and $z_2$]
• the 1st PC is a minimum-distance fit to a line in X space
• the 2nd PC is a minimum-distance fit to a line in the plane perpendicular to the 1st PC
PCs are a series of linear least squares fits to a sample,
each orthogonal to all the previous.
Algebraic definition of PCs
Given a sample of n observations $x_1, x_2, \ldots, x_n \in \mathbb{R}^p$ on a vector of p variables,
define the first principal component of the sample by the linear transformation
$z_{1j} = a_1^T x_j = \sum_{i=1}^{p} a_{i1} x_{ij}, \qquad j = 1, 2, \ldots, n,$
where the vector $a_1 = (a_{11}, a_{21}, \ldots, a_{p1})^T$ and $x_j = (x_{1j}, x_{2j}, \ldots, x_{pj})^T$,
and $a_1$ is chosen such that $\mathrm{var}[z_1]$ is maximum.
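To make the objective concrete, here is a minimal NumPy sketch (my own illustration, not part of the lecture) that evaluates $\mathrm{var}[z_1]$ for a candidate unit-length direction $a_1$; PCA chooses the $a_1$ that maximizes this number.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 200))          # p = 5 variables, n = 200 observations (columns)
X = X - X.mean(axis=1, keepdims=True)      # center the sample

a1 = rng.standard_normal(5)
a1 /= np.linalg.norm(a1)                   # candidate unit-length direction
z1 = a1 @ X                                # scores z_1j = a1^T x_j, one per observation
print(z1.var())                            # the variance PCA maximizes over a1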
Algebraic Derivation of the PCA
To find $a_1$, first note that
$\mathrm{var}[z_1] = E[z_1^2] - E[z_1]^2 = \frac{1}{n}\sum_{i=1}^{n}\big(a_1^T x_i - a_1^T \bar{x}\big)^2 = \frac{1}{n}\sum_{i=1}^{n} a_1^T (x_i - \bar{x})(x_i - \bar{x})^T a_1 = a_1^T S a_1,$
where
$S = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^T$
is the covariance matrix and $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the mean.
In the following, we assume the data is centered: $\bar{x} = 0$.
Algebraic derivation of PCs
Assume the data are centered, $\bar{x} = 0$, and form the matrix
$X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{p \times n};$
then
$S = \frac{1}{n} X X^T.$
Obtain the eigenvectors of S by computing the SVD of X:
$X = U \Sigma V^T$
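A small NumPy check of this relationship (a sketch, assuming samples are stored as columns): the left singular vectors of the centered X are eigenvectors of S, with eigenvalues $\sigma_i^2 / n$.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 200))
X = X - X.mean(axis=1, keepdims=True)      # centered data, one sample per column

U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
S = (X @ X.T) / X.shape[1]                 # covariance matrix S = (1/n) X X^T

eigvals = sigma**2 / X.shape[1]            # eigenvalues of S
print(np.allclose(S @ U[:, 0], eigvals[0] * U[:, 0]))   # True: u_1 is an eigenvector of S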
Principal Component Analysis
PCA: orthogonal projection of the data onto a lower-dimensional linear space that...
 maximizes the variance of the projected data (purple line)
 minimizes the mean squared distance between each data point and its projection (sum of the blue lines)
Principal Components Analysis
Idea:
 Given data points in a d-dimensional space,
project into lower dimensional space while preserving as much
information as possible
 E.g., find the best planar approximation to 3D data
 E.g., find the best 12-D approximation to $10^4$-D data
 In particular, choose projection that
minimizes squared error
in reconstructing original data
 Vectors originating from the center of mass
 Principal component #1 points
in the direction of the largest variance.
 Each subsequent principal component…
 is orthogonal to the previous ones, and
 points in the directions of the largest
variance of the residual subspace
The Principal Components
[Figures: a 2D Gaussian dataset, its 1st PCA axis, and its 2nd PCA axis]
PCA algorithm I (sequential)
Given the centered data {x1, …, xm}, compute the principal vectors:
1st PCA vector: we maximize the variance of the projection of x,
$w_1 = \arg\max_{\|w\|=1} \frac{1}{m}\sum_{i=1}^{m} (w^T x_i)^2$
kth PCA vector: we maximize the variance of the projection in the residual subspace,
$w_k = \arg\max_{\|w\|=1} \frac{1}{m}\sum_{i=1}^{m} \Big[ w^T \Big( x_i - \sum_{j=1}^{k-1} w_j w_j^T x_i \Big) \Big]^2$
PCA reconstruction: $x' = w_1(w_1^T x) + w_2(w_2^T x)$
[Figure: x, its components $w_1(w_1^T x)$ and $w_2(w_2^T x)$ along $w_1$ and $w_2$, and the reconstruction x']
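A minimal sketch of this sequential scheme, assuming NumPy; it uses power iteration on the residual data to find each $w_k$ in turn, which is one way (not necessarily the lecture's) to solve the arg-max above.

import numpy as np

def sequential_pca(X, k, iters=500):
    """X: p x m centered data, one sample per column. Returns a p x k matrix of principal vectors."""
    R = X.copy()                              # residual data
    p = X.shape[0]
    W = np.zeros((p, k))
    rng = np.random.default_rng(0)
    for j in range(k):
        w = rng.standard_normal(p)
        for _ in range(iters):                # power iteration on R R^T
            w = R @ (R.T @ w)
            w /= np.linalg.norm(w)
        W[:, j] = w
        R = R - np.outer(w, w) @ R            # deflate: remove the variance already explained by w
    return W

For centered data, the columns of sequential_pca(X, k) should agree (up to sign) with the top-k eigenvectors of the sample covariance matrix.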
PCA algorithm II
(sample covariance matrix)
 Given data {x1, …, xm}, compute the covariance matrix
$\Sigma = \frac{1}{m}\sum_{i=1}^{m} (x_i - \bar{x})(x_i - \bar{x})^T, \qquad \text{where} \quad \bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$
 PCA basis vectors = the eigenvectors of $\Sigma$
 Larger eigenvalue → more important eigenvector
PCA algorithm II
PCA algorithm(X, k): top k eigenvalues/eigenvectors
% X = N × m data matrix,
% … each data point xi = column vector, i = 1..m
• $\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i$
• X ← subtract the mean $\bar{x}$ from each column vector xi in X
• $\Sigma = X X^T$ … covariance matrix of X
• { $\lambda_i$, $u_i$ }i=1..N = eigenvalues/eigenvectors of $\Sigma$, sorted so that $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_N$
• Return { $\lambda_i$, $u_i$ }i=1..k % top k principal components
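A minimal NumPy rendering of this pseudocode (a sketch; function and variable names are mine):

import numpy as np

def pca_eig(X, k):
    """X: N x m data matrix, one data point per column. Returns the top-k (eigenvalues, eigenvectors)."""
    x_mean = X.mean(axis=1, keepdims=True)
    Xc = X - x_mean                            # subtract the mean from each column
    Sigma = Xc @ Xc.T                          # covariance matrix of X (N x N, unnormalized as in the pseudocode)
    eigvals, eigvecs = np.linalg.eigh(Sigma)   # eigh returns eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]      # sort descending and keep the top k
    return eigvals[order], eigvecs[:, order]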
PCA algorithm III
(SVD of the data matrix)
Singular Value Decomposition of the centered data matrix X:
$X_{\text{features} \times \text{samples}} = U S V^T$
[Figure: X factored as U S V^T; only a few singular values are significant, the rest correspond to noise]
PCA algorithm III
 Columns of U
 the principal vectors, { u(1), …, u(k) }
 orthogonal and of unit norm, so $U^T U = I$
 Can reconstruct the data using linear combinations of { u(1), …, u(k) }
 Matrix S
 Diagonal
 Shows the importance of each eigenvector
 Columns of $V^T$
 The coefficients for reconstructing the samples
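A sketch of algorithm III with NumPy's SVD, including a rank-k reconstruction built from the top components (the data here are synthetic placeholders):

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))            # features x samples
Xc = X - X.mean(axis=1, keepdims=True)        # center the data

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # reconstruction from the k most significant components
print(np.linalg.norm(Xc - X_k))               # reconstruction error shrinks as k grows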
Face recognition
Challenge: Facial Recognition
 Want to identify specific person, based on facial image
 Robust to glasses, lighting,…
 Can’t just use the given 256 x 256 pixels
Applying PCA: Eigenfaces
 Example data set: Images of faces
 Famous Eigenface approach
[Turk & Pentland], [Sirovich & Kirby]
 Each face x is …
 256 × 256 values (luminance at each location)
 $x \in \mathbb{R}^{256 \cdot 256}$ (view as a 64K-dimensional vector)
 Form X = [ x1 , …, xm ], the centered data matrix
 Compute $\Sigma = X X^T$
 Problem: $\Sigma$ is 64K × 64K … HUGE!!!
[Figure: X = [x1, …, xm], m faces, each a 256 × 256 array of real values]
Method A: Build a PCA subspace for each person and check
which subspace can reconstruct the test image the best
Method B: Build one PCA database for the whole dataset and
then classify based on the weights.
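A rough NumPy sketch of Method B (my own reading of the slide: build one eigenface basis for the whole training set, then classify a test face by nearest neighbor on its projection weights):

import numpy as np

def fit_eigenfaces(faces, k):
    """faces: N x m matrix, one vectorized training face per column. Returns the mean face and top-k basis."""
    mean = faces.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, U[:, :k]

def classify(test_face, mean, U_k, faces, labels):
    """Project the test face onto the eigenface basis and return the label of the nearest training face."""
    train_w = U_k.T @ (faces - mean)                 # k weights per training face
    w = U_k.T @ (test_face.reshape(-1, 1) - mean)    # k weights for the test face
    dists = np.linalg.norm(train_w - w, axis=0)
    return labels[np.argmin(dists)]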
Computational Complexity
 Suppose m instances, each of size N
 Eigenfaces: m = 500 faces, each of size N = 64K
 Given the N × N covariance matrix $\Sigma$, can compute
 all N eigenvectors/eigenvalues in $O(N^3)$
 first k eigenvectors/eigenvalues in $O(k N^2)$
 But if N = 64K, EXPENSIVE!
A Clever Workaround
 Note that m << 64K
 Use the m × m matrix $L = X^T X$ instead of $\Sigma = X X^T$
 If v is an eigenvector of L, then Xv is an eigenvector of $\Sigma$
Proof: $L v = \gamma v$
$X^T X v = \gamma v$
$X (X^T X v) = X(\gamma v) = \gamma\, X v$
$(X X^T)\, X v = \gamma\, (X v)$
$\Sigma\, (X v) = \gamma\, (X v)$
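A minimal sketch of the workaround in NumPy (the rescaling of Xv to unit length is implicit in the proof; near-zero eigenvalues should be discarded in practice):

import numpy as np

rng = np.random.default_rng(0)
N, m = 4096, 50                           # N pixels per face, m << N faces
X = rng.standard_normal((N, m))
X = X - X.mean(axis=1, keepdims=True)     # centered data matrix

L = X.T @ X                               # small m x m matrix instead of the huge N x N Sigma
gammas, V = np.linalg.eigh(L)             # eigenvectors v of L
U = X @ V                                 # columns X v are eigenvectors of Sigma = X X^T
U /= np.linalg.norm(U, axis=0)            # rescale each eigenvector to unit length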
Principal Components (Method B)
Reconstructing… (Method B)
 … faster if train with…
 only people w/out glasses
 same lighting conditions
Shortcomings
 Requires carefully controlled data:
 All faces centered in frame
 Same size
 Some sensitivity to angle
 Alternative:
 “Learn” one set of PCA vectors for each angle
 Use the one with lowest error
 Method is completely knowledge free
 (sometimes this is good!)
 Doesn’t know that faces are wrapped around 3D objects
(heads)
 Makes no effort to preserve class distinctions
Facial expression recognition
Happiness subspace (method A)
Disgust subspace (method A)
Facial Expression Recognition Movies (method A)
Image Compression
Original Image
• Divide the original 372 × 492 image into patches:
• Each patch is an instance that contains 12 × 12 pixels on a grid
• View each as a 144-D vector
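A sketch of this patch-based compression in NumPy, assuming a grayscale image array whose sides are divisible by 12 (the non-overlapping patch layout is my reading of the slide):

import numpy as np

def compress_patches(img, k):
    """img: H x W grayscale array (H, W divisible by 12). PCA-compress each 12x12 patch to k coefficients."""
    H, W = img.shape
    patches = (img.reshape(H // 12, 12, W // 12, 12)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, 144).T)           # 144 x num_patches, one patch per column
    mean = patches.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    codes = U[:, :k].T @ (patches - mean)         # k numbers stored per patch instead of 144
    return U[:, :k] @ codes + mean                # reconstructed 144-D patches

Setting k = 60, 16, 6, 3, or 1 corresponds to the compression levels shown on the following slides.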
L2 error and PCA dim
PCA compression: 144D → 60D
PCA compression: 144D → 16D
16 most important eigenvectors
PCA compression: 144D → 6D
6 most important eigenvectors
3 most important eigenvectors
PCA compression: 144D → 3D
3 most important eigenvectors
PCA compression: 144D → 1D
60 most important eigenvectors
Looks like the discrete cosine bases of JPEG!...
2D Discrete Cosine Basis
http://en.wikipedia.org/wiki/Discrete_cosine_transform
Noise Filtering
Noise Filtering, Auto-Encoder…
[Figure: x encoded as $U^T x$ and reconstructed as $x' = U U^T x$]
Noisy image
Denoised image using 15 PCA components
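A minimal sketch of PCA denoising in this spirit, assuming NumPy: learn the top principal components from the collection of noisy images and reconstruct each image from only those components (15 of them, as on the slide).

import numpy as np

def pca_denoise(images, k=15):
    """images: N x m matrix, one vectorized noisy image per column. Returns the denoised images."""
    mean = images.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(images - mean, full_matrices=False)
    U_k = U[:, :k]
    return U_k @ (U_k.T @ (images - mean)) + mean   # keep only the top-k components; the rest is treated as noise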
PCA Shortcomings
PCA, a Problematic Data Set
PCA doesn’t know labels!
PCA vs Fisher Linear Discriminant
 PCA maximizes variance,
independent of class
 magenta
 FLD attempts to separate classes
 green line
PCA, a Problematic Data Set
PCA cannot capture NON-LINEAR structure!
PCA Conclusions
 PCA
 Finds an orthonormal basis for the data
 Sorts dimensions in order of “importance”
 Discards low-significance dimensions
 Uses:
 Get compact description
 Ignore noise
 Improve classification (hopefully)
 Not magic:
 Doesn’t know class labels
 Can only capture linear variations
 One of many tricks to reduce dimensionality!
Applications of PCA
 Eigenfaces for recognition. Turk and Pentland.
1991.
 Principal Component Analysis for clustering gene
expression data. Yeung and Ruzzo. 2001.
 Probabilistic Disease Classification of Expression-
Dependent Proteomic Data from Mass
Spectrometry of Human Serum. Lilien. 2003.
PCA for image compression
d=1 d=2 d=4 d=8
d=16 d=32 d=64 d=100
Original
Image