![Principal Component Analysis Well Explained With an Example in MATLAB](https://image.slidesharecdn.com/publish-150507052828-lva1-app6892/85/Principal-Component-Analysis-With-Simple-Matlab-Example-1-320.jpg)

Each column of X below is one feature vector with four dimensions, and there are 8 feature vectors, so here dimensions < number of feature vectors:

```matlab
X = [1 2 4 3 5 9 4 2;
     5 4 7 4 3 2 1 3;
     3 2 4 5 6 2 1 2;
     4 1 3 2 2 1 3 4];

[vecs,val] = eigs(X*X',2); % X*X' is 4x4 (4x8 * 8x4); eigs(X*X',2) keeps only
                           % the two eigenvectors with the largest eigenvalues.
% So instead of storing 4 numbers per feature vector, you store 2 weights.
wt1 = vecs'*X(:,1);         % weights of the first feature vector in the new basis
reconstructed1 = vecs*wt1;  % approximation of the original X(:,1)
wt2 = vecs'*X(:,2);
reconstructed2 = vecs*wt2;  % approximation of X(:,2)
```
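The same projection can be applied to all eight columns at once; the short sketch below (the names W, Xrec, and err are mine, not from the slides) measures how close the rank-2 reconstruction comes to the original:

```matlab
% Continuing the example above: project every column of X in one step
% and compute the relative error of the rank-2 reconstruction.
W    = vecs'*X;       % 2x8 matrix of weights (2 per feature vector)
Xrec = vecs*W;        % 4x8 rank-2 approximation of X
err  = norm(X - Xrec,'fro')/norm(X,'fro'); % relative reconstruction error
disp(err)
```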
For the opposite case, suppose you have 4 feature vectors and each one has 8 dimensions, as shown below. Taking eigenvectors of X*X' directly would mean decomposing an 8x8 matrix, so the eigendecomposition is done on the smaller 4x4 matrix X'*X instead; this is the trick used by Turk and Pentland. If (X'*X)*v = lambda*v, then (X*X')*(X*v) = lambda*(X*v), so multiplying X by the eigenvectors of X'*X yields eigenvectors of X*X':

```matlab
X = [1 5 3 4;
     2 4 2 1;
     4 7 4 3;
     3 4 5 2;
     5 3 6 2;
     9 2 2 1;
     4 1 1 3;
     2 3 2 4];

[vecs,val] = eigs(X'*X,2); % X'*X is only 4x4, while X*X' would be 8x8
ef = X*vecs;               % columns of X*vecs are (unnormalized) eigenvectors of X*X'
for i = 1:size(ef,2)
    ef(:,i) = ef(:,i)./norm(ef(:,i)); % normalize each eigenvector to unit length
end
```
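As a quick sanity check (my own addition, not part of the slides), the normalized columns of ef should match the leading eigenvectors computed directly from the 8x8 matrix up to sign, and they can be used for projection and reconstruction exactly like vecs in the first example:

```matlab
% Verify the trick against the direct eigenvectors of the 8x8 matrix.
[direct,~] = eigs(X*X',2);
disp(abs(abs(direct'*ef) - eye(2))) % near zero => same vectors, up to sign

% ef now plays the role of vecs from the first example:
w1 = ef'*X(:,1);   % 2 weights for the first 8-dimensional feature vector
r1 = ef*w1;        % rank-2 approximation of X(:,1)
```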

PCA reduces the dimensionality of data by projecting it onto a lower-dimensional space spanned by the leading eigenvectors of the data's scatter matrix (the covariance matrix, up to mean-centering and scaling). The document demonstrates this on two example datasets, showing how each original vector can be approximated from its projection onto the first two principal components. When the vectors have more dimensions than there are vectors, the eigendecomposition is performed on the smaller matrix X'*X and its eigenvectors are mapped back through X, rather than decomposing the large matrix X*X' directly.
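For reference, here is a minimal sketch of textbook PCA on the first dataset, including the mean-centering step that the slides skip, using only base MATLAB:

```matlab
% Textbook PCA with mean-centering (columns of X are observations).
X  = [1 2 4 3 5 9 4 2;
      5 4 7 4 3 2 1 3;
      3 2 4 5 6 2 1 2;
      4 1 3 2 2 1 3 4];
mu = mean(X,2);              % per-dimension mean
Xc = X - mu;                 % center the data (implicit expansion, R2016b+)
[vecs,val] = eigs(Xc*Xc',2); % leading eigenvectors of the scatter matrix
W    = vecs'*Xc;             % weights of each centered column
Xrec = vecs*W + mu;          % reconstruction adds the mean back
```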
