International Association of Scientific Innovation and Research (IASIR)
(An Association Unifying the Sciences, Engineering, and Applied Research)
International Journal of Engineering, Business and Enterprise
Applications (IJEBEA)
www.iasir.net
ISSN (Print): 2279-0020
ISSN (Online): 2279-0039
An Efficient Face Recognition Technique Using PCA and Artificial
Neural Network
1 KARTHIK G, 2 SATEESH KUMAR H C
Department of Telecommunication Engineering, Dayananda Sagar College of Engineering,
VTU, Bangalore, India
Abstract: Face recognition is one of the biometric tools for authentication and verification, and it has both research and practical relevance. Face recognition not only makes it virtually impossible for hackers to steal one's "password", but also increases user-friendliness in human-computer interaction. Facial recognition technology (FRT) has emerged as an attractive solution to address many contemporary needs for identification and the verification of identity claims. A facial recognition based verification system can further be deemed a computer application for automatically identifying or verifying a person in a digital image. The two common approaches employed for face recognition are the analytic (local features based) and holistic (global features based) approaches, both with acceptable success rates. In this paper, we present a hybrid-features-based face recognition technique using principal component analysis. PCA is used to compute the global features, while the local features are computed from the central moments, eigenvectors and standard deviations of the nose, mouth and eye segments of the human face, which serve as the decision support entities of an artificial neural network.
Keywords: face recognition; analytic approach; holistic approach; hybrid features; artificial neural network; central moment; eigenvectors; standard deviation
I. Introduction
Biometrics refers to the science of analyzing human body characteristics for security purposes. The word biometrics is derived from the Greek words bios (life) and metrikos (measure). Biometric identification has become more popular of late owing to the current security requirements of society in the fields of information, business, military and e-commerce. For our use, biometrics refers to technologies for measuring and analyzing a person's physiological or behavioral characteristics. These characteristics are unique to individuals and hence can be used to verify or identify a person. In general, biometric systems process raw data in order to extract a template which is easier to process and store, yet carries most of the information needed. Face recognition is a nonintrusive method, and facial images are the most common biometric characteristic used by humans for personal recognition. Human faces are complex objects with features that can vary over time. Nevertheless, humans have a natural ability to recognize faces and identify a person almost instantly, and this natural recognition ability extends beyond faces. In a Human Robot Interface [3] or Human Computer Interface (HCI), machines must be trained to recognize, identify and differentiate human faces. There is thus a need to simulate recognition artificially in our attempts to create intelligent autonomous machines. Recently, face recognition has been attracting much attention in the network multimedia information access community. Basically, any face recognition system can be depicted by the following block diagram.
Fig. 1: Basic blocks of a face recognition system (Pre-processing Unit → Feature Extraction → Training and Testing)
1) Pre-processing Unit: In the initial phase, the image captured in true-colour format is converted to a grayscale image, resized to a predefined standard size, and noise is removed. Further, Histogram Equalization (HE) and the Discrete Wavelet Transform (DWT) are carried out for illumination normalization and expression normalization, respectively [4]; a sketch of this unit is given after the list.
2) Feature Extraction: In this phase, facial features are extracted using edge detection techniques, the Principal Component Analysis (PCA) technique, Discrete Cosine Transform (DCT) coefficients, DWT coefficients, or a fusion of different techniques [5].
3) Training and Testing: Here, Euclidean Distance (ED), Hamming Distance, Support Vector Machines (SVM), neural networks [6] or Random Forests (RF) [7] may be used for training, followed by testing the new (test) images for recognition.
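As an illustration of the pre-processing unit described above, the following is a minimal sketch (not the authors' implementation), assuming OpenCV, PyWavelets and NumPy are available; the file name and the 100 × 100 target size are placeholders.

```python
import cv2
import numpy as np
import pywt

def preprocess_face(path, size=(100, 100)):
    """Grayscale conversion, resizing, denoising, histogram equalization
    and a DWT approximation band, as in the pre-processing unit above."""
    img = cv2.imread(path)                          # true-colour input
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # colour -> grayscale
    gray = cv2.resize(gray, size)                   # predefined standard size
    gray = cv2.fastNlMeansDenoising(gray)           # noise removal
    gray = cv2.equalizeHist(gray)                   # illumination normalization (HE)
    # DWT: keep the low-frequency approximation band, which damps the
    # effect of local (expression) variations.
    approx, _ = pywt.dwt2(gray.astype(np.float32), 'haar')
    return approx

features = preprocess_face('subject01_pose01.jpg')  # hypothetical file name
```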
The popular approaches for face recognition are based either on the location and shape of facial attributes such as the eyes, eyebrows, nose, lips and chin, and their spatial relationships, or on an overall analysis of the face image that represents a face as a weighted combination of a number of canonical faces. In the former approach the local attributes of the face are considered in training and testing, while the latter approach relies on information derived from the whole image. The local-features-based approach demands a vast collection of database images for effective training, thus increasing the computation time. The global technique works well with frontal-view face images, but it is sensitive to translation, rotation and pose changes. Since these two approaches individually do not give a complete representation of the facial image, a hybrid-features-based face recognition system using principal component analysis and an artificial neural network is designed.
This paper presents a face recognition method using PCA that extracts the geometrical features of biometric characteristics of the face, such as the eyes, nose and mouth, together with an overall analysis of the whole face. After the pre-processing stage, segments of the eyes, nose and mouth are extracted from the faces in the database. These blocks are then resized and the training features are computed. These facial features reduce the dimensionality by gathering the essential information while removing the redundancies present in each segment. Besides, the global features of the whole image are also computed. These specially designed features are then used as decision support entities of the classifier system configured using the artificial neural network.
II. Principal Component Analysis
Features of the face images are extracted using PCA in the proposed methodology. PCA is a dimensionality reduction method that retains the majority of the variations present in the data set. It captures the variations in the dataset and uses this information to encode the face images. It computes the feature vectors for different face points and forms a column matrix of these vectors. The PCA algorithm steps are shown in Fig. 2.
Fig. 2: Feature extraction using PCA by computing the eigenface images
PCA projects the data along the directions where the variations in the data are maximum. The algorithm is as follows:
 Assume the m sample images contained in the database are B1, B2, B3, ..., Bm.
 Calculate the average image Ø as Ø = (1/m) Σ_{l=1}^{m} B_l, where each image is a column vector of the same size.
 Mean-center each image, O_i = B_i − Ø, and form the matrix B = [O1 O2 O3 ... Om]; the covariance matrix is then computed as C = BᵀB.
 Calculate the eigenvalues of the covariance matrix C and keep only the k largest eigenvalues for dimensionality reduction, where λ_k = (1/m) Σ_{n=1}^{m} (u_kᵀ O_n)².
 The eigenfaces are the eigenvectors u_k of the covariance matrix C corresponding to the largest eigenvalues.
 All the centered images are projected into face space on the eigenface basis to compute the projections of the face images as feature vectors: w_i = Uᵀ O_i = Uᵀ (B_i − Ø), where 1 ≤ i ≤ m.
The PCA method captures the maximum variations in the data while converting it from the high-dimensional image space to a low-dimensional feature space. These extracted projections of the face images are then passed to the artificial neural network for training and testing.
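For concreteness, a minimal NumPy sketch of these steps is given below. It is a re-derivation of the standard eigenface procedure under stated assumptions, not the authors' code; images are assumed to be pre-processed and flattened into columns of equal length, and k is a placeholder for the number of retained eigenfaces.

```python
import numpy as np

def eigenfaces(images, k):
    """images: (d, m) array whose columns B1..Bm are flattened face images.
    Returns the mean face, the top-k eigenfaces and the (k, m) projections."""
    d, m = images.shape
    mean = images.mean(axis=1, keepdims=True)      # average image (Ø)
    O = images - mean                              # centered images O_i = B_i - Ø
    # Diagonalize the small m x m matrix OᵀO instead of the huge d x d covariance;
    # its eigenvectors v map to eigenfaces u = O v (the standard eigenface trick).
    small = O.T @ O / m
    vals, vecs = np.linalg.eigh(small)             # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]             # keep the k largest
    U = O @ vecs[:, order]                         # eigenfaces in image space
    U /= np.linalg.norm(U, axis=0, keepdims=True)  # unit-norm eigenfaces
    W = U.T @ O                                    # feature vectors w_i = Uᵀ O_i
    return mean, U, W
```

Since the number of training images m is usually far smaller than the image dimension d, diagonalizing the m × m matrix is much cheaper than forming the full d × d covariance; the retained projections W are the feature vectors passed on to the neural network.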
III. Eigenvector with Highest Eigen Value
An eigenvector of a matrix is a vector which, when multiplied by the matrix, yields a scalar multiple of itself. This scalar is the corresponding eigenvalue of the eigenvector. The relationship can be described by the equation:
M × u = λ × u, where u is an eigenvector of the matrix M and λ is the corresponding eigenvalue.
Eigenvectors possess the following properties:
• They can be determined only for square matrices.
• An n × n matrix has at most n linearly independent eigenvectors (with corresponding eigenvalues).
• For a symmetric matrix, such as the covariance matrix C, the eigenvectors are mutually perpendicular, i.e. at right angles to each other.
The traditional motivation for selecting the eigenvectors with the largest eigenvalues is that the eigenvalues represent the amount of variance along a particular eigenvector. By selecting the eigenvectors with the largest eigenvalues, one selects the dimensions along which the gallery images vary the most. Since the eigenvectors are ordered high to low by the amount of variance found between images along each eigenvector, the last eigenvectors capture the smallest amounts of variance. Often the assumption is made that noise is associated with the lower-valued eigenvalues, where smaller amounts of variation are found among the images.
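The relation M × u = λ × u and the ordering of eigenvectors by eigenvalue can be checked directly in NumPy; the small symmetric matrix below is only an illustrative example, not data from the paper.

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [1.0, 3.0]])                    # symmetric, like a covariance matrix

vals, vecs = np.linalg.eigh(M)                # eigenvalues ascending, eigenvectors in columns
order = np.argsort(vals)[::-1]                # largest eigenvalue (most variance) first
vals, vecs = vals[order], vecs[:, order]

u, lam = vecs[:, 0], vals[0]
assert np.allclose(M @ u, lam * u)            # M x u = lambda x u
assert np.allclose(vecs.T @ vecs, np.eye(2))  # eigenvectors of a symmetric matrix are orthogonal
```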
IV. Artificial Neural Network
Artificial neural networks, commonly referred to as "neural networks", have been motivated right from their inception by the recognition that the brain computes in an entirely different way from the conventional digital computer. A neural network is built from neurons, which are the basic information-processing units. These compute a weighted linear combination of their inputs and pass the sum through an activation function of sigmoid type, which acts as a switch and either propagates a given input activation further or suppresses it. In order to be able to detect faces, a neural network must first be trained for this task.
There are different types of ANNs; among them are Kohonen networks, radial basis function networks and the multilayered perceptron. The multilayered feed-forward neural network is shown in Figure 3. It consists of three layers, namely the input layer, the hidden layer and the output layer. These layers of processing elements make independent computations on the data and pass it to the next layer. The computation of the processing elements is based on a weighted sum of the inputs. The output is then compared with the target value, and the mean square error is calculated and propagated back to the hidden layer to tune its weights. This process is iterated for each layer in order to minimize the error by continually tuning the weights of each layer; hence it is known as back propagation. The iteration is carried out until the error falls below a threshold level.
Fig. 3: Multilayered feed-forward network configuration
In a face recognition system that uses an ANN, the configuration works in the following stages:
Input to the Feed-Forward Network: Here the parameters are selected to perform the required neural network operation, i.e. the number of input, hidden and output layers. The input neurons receive the data from the training set of face images.
Back Propagation and Weight Adjustment: The input layer passes the data to the hidden layer. The hidden layer processes the data further and then passes it to the output layer. The output layer compares the result with the target value and obtains the error signals. These errors are sent back for weight adjustment of each layer so as to minimize the error, as shown in Fig. 4.
Fig. 4: Back propagation in a multilayered ANN
Mathematical Operation: A mathematical function is applied to the output signal. The function can be the log-sigmoid, a threshold function or the hyperbolic tangent function. If the output values of the function match the output values for the tested face, the face is recognized. Hence, the neural network produces a response to inputs that resemble the training data.
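The following is a compact NumPy sketch of the training loop described above (feed-forward pass, mean-square error, back-propagation of the error to adjust the weights of each layer). It is an illustrative one-hidden-layer network under stated assumptions, not the authors' configuration; the hidden-layer size, learning rate and stopping tolerance are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, T, hidden=20, lr=0.1, max_iter=5000, tol=1e-3):
    """X: (n, d) feature vectors (e.g. eigenface projections), T: (n, c) targets."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    c = T.shape[1]
    W1 = rng.normal(scale=0.1, size=(d, hidden))   # input -> hidden weights
    W2 = rng.normal(scale=0.1, size=(hidden, c))   # hidden -> output weights
    for _ in range(max_iter):
        H = sigmoid(X @ W1)                        # hidden activations
        Y = sigmoid(H @ W2)                        # network output
        E = T - Y                                  # error signal
        if np.mean(E ** 2) < tol:                  # stop when MSE falls below threshold
            break
        dY = E * Y * (1 - Y)                       # output-layer delta (sigmoid derivative)
        dH = (dY @ W2.T) * H * (1 - H)             # error propagated back to hidden layer
        W2 += lr * H.T @ dY                        # weight adjustment, output layer
        W1 += lr * X.T @ dH                        # weight adjustment, hidden layer
    return W1, W2
```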
V. IMPLEMENTATION PROCESS
In this work, the PCA technique is used to extract the features of the face images. PCA extracts the variations in the features of the face images, which carry the most information in a reduced number of dimensions.
Fig. 5: Basic blocks for face recognition
The extracted features are used to compute the eigenfaces. These eigenfaces are taken as input to the artificial neural network for training. For testing, the eigenface projection of the test image is provided as input to the trained neural network, which finds the best match, using a threshold value to reject unknown face images.
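Putting the pieces together, a hedged sketch of this train/test flow is shown below. It reuses the eigenfaces helper sketched in Section II and scikit-learn's MLPClassifier as a stand-in for the ANN; the rejection threshold of 0.5 is a placeholder, not a value reported by the authors.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_recognizer(train_images, labels, k=50):
    """train_images: (d, m) columns of pre-processed faces; labels: length-m identities."""
    mean, U, W = eigenfaces(train_images, k)       # from the PCA sketch above
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000)
    clf.fit(W.T, labels)                           # one eigenface projection per row
    return mean, U, clf

def recognize(face, mean, U, clf, reject_threshold=0.5):
    """Project a flattened test face and classify it, rejecting unknown faces."""
    w = U.T @ (face.reshape(-1, 1) - mean)         # project onto the eigenface basis
    probs = clf.predict_proba(w.T)[0]
    best = np.argmax(probs)
    if probs[best] < reject_threshold:             # below threshold: unknown face
        return None
    return clf.classes_[best]
```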
VI. RESULTS
We applied each feature extraction method with an artificial neural network to the SDUMLA-HMT face database. We extracted PCA feature vectors with an application program coded in Matlab 7.0. Tests were performed on a PC with an Intel Pentium D 2.8 GHz CPU and 1024 MB of RAM. In this study, the standard SDUMLA-HMT database images (10 poses for each of 40 people) were converted into JPEG format without changing their size. For both feature extraction methods, a total of six training sets were composed with varying pose counts (from 1 to 6) for each person, the remaining poses being chosen as the test set. Our training sets include 40, 80, 120, 160, 200 and 240 images according to the chosen pose count. For each person, poses with the same indices are chosen for the corresponding set. The results obtained are tabulated in Table 1, which shows that the proposed technique is more efficient than a face recognition technique using PCA and a Support Vector Machine (SVM).
Table 1: Improvement over the face recognition system using PCA and SVM

Type of technique | Recognition rate (%) | Error rate on the evaluation set (%) | Half total error rate on the evaluation set (%) | Verification rate at 1% FAR (%) | 1% FAR on the test set equals (%)
PCA and SVM       | 85.52 | 6.64 | 6.12 | 50.00 | 54.14
PCA and ANN       | 92.99 | 3.03 | 2.70 | 95.49 | 94.95
Fig. 6: Recognition rate
Fig. 7: Error rate
Fig. 8: Verification rate
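For reference, the recognition rate and the verification rate at 1% FAR reported in Table 1 and Figs. 6-8 can be computed along the following lines. This is a generic sketch of the metric definitions, assuming arrays of predictions, true labels, and genuine/impostor match scores; it is not the authors' evaluation script.

```python
import numpy as np

def recognition_rate(predicted, true_labels):
    """Fraction of test faces assigned the correct identity, in %."""
    return 100.0 * np.mean(np.asarray(predicted) == np.asarray(true_labels))

def verification_rate_at_far(genuine_scores, impostor_scores, far=0.01):
    """Pick the score threshold that accepts the requested fraction of impostor
    comparisons (the false accept rate), then report the fraction of genuine
    comparisons accepted at that threshold, in %."""
    impostor_scores = np.sort(impostor_scores)[::-1]
    idx = max(int(far * len(impostor_scores)) - 1, 0)
    threshold = impostor_scores[idx]
    return 100.0 * np.mean(np.asarray(genuine_scores) >= threshold)
```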
VII. CONCLUSION
In this paper, a new face recognition method is presented. The method is a combination of PCA and an artificial neural network, used to construct an efficient face recognition system with a high recognition rate. The proposed method consists of the following parts: image pre-processing, which includes histogram equalization, normalization and mean centering; dimension reduction using PCA, in which the main features that are important for representing the face images are extracted; and an artificial neural network that is trained on the database. The ANN takes the feature vector as input and learns a complex mapping for classification. Simulation results on the SDUMLA-HMT face dataset demonstrate the ability of the proposed method for optimal feature extraction. Hence it is concluded that this method achieves a recognition rate of more than 90%.
REFERENCES
[1] K. Ramesha and K. B. Raja, "Dual transform based feature extraction for face recognition," International Journal of Computer Science Issues, 2011, vol. VIII, no. 5, pp. 115-120.
[2] Khashman, “Intelligent face recognition: local versus global pattern averaging”, Lecture Notes in Artificial Intelligence,4304,
Springer-Verlag, 2006, pp. 956 – 961.
[3] Abbas, M. I. Khalil, S. Abdel-Hay and H. M. Fahmy,“Expression and illumination invariant preprocessing technique for face
recognition,” Proceedings of the International Conference on Computer Engineering and System, 2008, pp. 59-64.
[4] S. Ranawade, “Face recognition and verification using artificial neural network,” International Journal of Computer Applications,
2010, vol. I, no. 14, pp. 21-25.
[5] Bernd Heisele, Y Purdy Ho, and Tomaso Poggio, “Face recognition with support vector machines: global versus component-
based approach”, in Proc. 8th International Conference on Computer Vision, pages 688–694, 2001.
[6] Peter Belhumeur, J. Hespanha and David Kriegman, “Eigenfaces versus Fisherfaces: Recognition using class specific linear
projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 1997, vol. XIX, no. 7, pp.711-720.
[7] K. Ramesha , K. B. Raja, K. R. Venugopal and L. M. Patnaik, “Feature extraction based face recognition, gender and age
classification,” International Journal on Computer Science and Engineering, 2010, vol. II, no. 01S, pp. 14-23.
[8] Albert Montillo and Haibin Ling, “Age regression from faces using random forests,” Proceedings of the IEEE International
Conference on Image Processing, 2009, pp.2465-2468.
[9] H. Murase and S. K. Nayar, “Visual learning and recognition of 3-D objects from appearance,” Journal of Computer Vision, vol.
XIV, 1995, pp. 5-24.
[10] M. Turk and A. P. Pentland, “Face recognition using Eigenfaces,” Proceedings of IEEE Conference on Computer Vision and
Pattern Recognition, 1991, pp. 586-591.
Author Profile
Mr. Karthik G received the B.E. degree in Electronics and Communication from Visvesvaraya Technological University in 2011 and is currently pursuing the M.Tech. degree in Digital Communication and Networking (2014) from Visvesvaraya Technological University, Department of Telecommunication Engineering, Dayananda Sagar College of Engineering, Bangalore, India. His main interests are networking and digital communications.
Mr. Sateesh Kumar H C is an Associate Professor in the Department of Telecommunication Engineering, Dayananda Sagar College of Engineering, Bangalore. He obtained his B.E. degree in Electronics Engineering from Bangalore University and his Master's degree in Bio-Medical Instrumentation from Mysore University. He is currently pursuing a Ph.D. in the area of image segmentation under the guidance of Dr. K. B. Raja, Associate Professor, Department of Electronics and Communication Engineering, University Visvesvaraya College of Engineering, Bangalore. His areas of interest are signal processing and communication engineering.
More Related Content

What's hot

IRJET- Comparison and Simulation based Analysis of an Optimized Block Mat...
IRJET-  	  Comparison and Simulation based Analysis of an Optimized Block Mat...IRJET-  	  Comparison and Simulation based Analysis of an Optimized Block Mat...
IRJET- Comparison and Simulation based Analysis of an Optimized Block Mat...IRJET Journal
 
Ijarcet vol-2-issue-3-938-941
Ijarcet vol-2-issue-3-938-941Ijarcet vol-2-issue-3-938-941
Ijarcet vol-2-issue-3-938-941Editor IJARCET
 
Paper id 21201419
Paper id 21201419Paper id 21201419
Paper id 21201419IJRAT
 
Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...eSAT Publishing House
 
Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...eSAT Journals
 
Quality assessment for online iris
Quality assessment for online irisQuality assessment for online iris
Quality assessment for online iriscsandit
 
COMPUTER VISION PERFORMANCE AND IMAGE QUALITY METRICS: A RECIPROCAL RELATION
COMPUTER VISION PERFORMANCE AND IMAGE QUALITY METRICS: A RECIPROCAL RELATION COMPUTER VISION PERFORMANCE AND IMAGE QUALITY METRICS: A RECIPROCAL RELATION
COMPUTER VISION PERFORMANCE AND IMAGE QUALITY METRICS: A RECIPROCAL RELATION csandit
 
22 29 dec16 8nov16 13272 28268-1-ed(edit)
22 29 dec16 8nov16 13272 28268-1-ed(edit)22 29 dec16 8nov16 13272 28268-1-ed(edit)
22 29 dec16 8nov16 13272 28268-1-ed(edit)IAESIJEECS
 
A review on image enhancement techniques
A review on image enhancement techniquesA review on image enhancement techniques
A review on image enhancement techniquesIJEACS
 
Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...
Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...
Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...IRJET Journal
 
Fingerprint Image Compression using Sparse Representation and Enhancement wit...
Fingerprint Image Compression using Sparse Representation and Enhancement wit...Fingerprint Image Compression using Sparse Representation and Enhancement wit...
Fingerprint Image Compression using Sparse Representation and Enhancement wit...Editor IJCATR
 
Property based fusion for multifocus images
Property based fusion for multifocus imagesProperty based fusion for multifocus images
Property based fusion for multifocus imagesIAEME Publication
 
IRJET- Image De-Blurring using Blind De-Convolution Algorithm
IRJET-  	  Image De-Blurring using Blind De-Convolution AlgorithmIRJET-  	  Image De-Blurring using Blind De-Convolution Algorithm
IRJET- Image De-Blurring using Blind De-Convolution AlgorithmIRJET Journal
 
Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Performance Comparison of Face Recognition Using DCT Against Face Recognition...Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Performance Comparison of Face Recognition Using DCT Against Face Recognition...CSCJournals
 

What's hot (16)

IRJET- Comparison and Simulation based Analysis of an Optimized Block Mat...
IRJET-  	  Comparison and Simulation based Analysis of an Optimized Block Mat...IRJET-  	  Comparison and Simulation based Analysis of an Optimized Block Mat...
IRJET- Comparison and Simulation based Analysis of an Optimized Block Mat...
 
Ijarcet vol-2-issue-3-938-941
Ijarcet vol-2-issue-3-938-941Ijarcet vol-2-issue-3-938-941
Ijarcet vol-2-issue-3-938-941
 
Paper id 21201419
Paper id 21201419Paper id 21201419
Paper id 21201419
 
Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...
 
Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...Fpga implementation of image segmentation by using edge detection based on so...
Fpga implementation of image segmentation by using edge detection based on so...
 
Quality assessment for online iris
Quality assessment for online irisQuality assessment for online iris
Quality assessment for online iris
 
COMPUTER VISION PERFORMANCE AND IMAGE QUALITY METRICS: A RECIPROCAL RELATION
COMPUTER VISION PERFORMANCE AND IMAGE QUALITY METRICS: A RECIPROCAL RELATION COMPUTER VISION PERFORMANCE AND IMAGE QUALITY METRICS: A RECIPROCAL RELATION
COMPUTER VISION PERFORMANCE AND IMAGE QUALITY METRICS: A RECIPROCAL RELATION
 
22 29 dec16 8nov16 13272 28268-1-ed(edit)
22 29 dec16 8nov16 13272 28268-1-ed(edit)22 29 dec16 8nov16 13272 28268-1-ed(edit)
22 29 dec16 8nov16 13272 28268-1-ed(edit)
 
366 369
366 369366 369
366 369
 
A review on image enhancement techniques
A review on image enhancement techniquesA review on image enhancement techniques
A review on image enhancement techniques
 
Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...
Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...
Analysis of Digital Image Forgery Detection using Adaptive Over-Segmentation ...
 
Fingerprint Image Compression using Sparse Representation and Enhancement wit...
Fingerprint Image Compression using Sparse Representation and Enhancement wit...Fingerprint Image Compression using Sparse Representation and Enhancement wit...
Fingerprint Image Compression using Sparse Representation and Enhancement wit...
 
Property based fusion for multifocus images
Property based fusion for multifocus imagesProperty based fusion for multifocus images
Property based fusion for multifocus images
 
IRJET- Image De-Blurring using Blind De-Convolution Algorithm
IRJET-  	  Image De-Blurring using Blind De-Convolution AlgorithmIRJET-  	  Image De-Blurring using Blind De-Convolution Algorithm
IRJET- Image De-Blurring using Blind De-Convolution Algorithm
 
Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Performance Comparison of Face Recognition Using DCT Against Face Recognition...Performance Comparison of Face Recognition Using DCT Against Face Recognition...
Performance Comparison of Face Recognition Using DCT Against Face Recognition...
 
N010226872
N010226872N010226872
N010226872
 

Viewers also liked (8)

Ijebea14 270
Ijebea14 270Ijebea14 270
Ijebea14 270
 
Ijebea14 267
Ijebea14 267Ijebea14 267
Ijebea14 267
 
Ijebea14 271
Ijebea14 271Ijebea14 271
Ijebea14 271
 
Ijebea14 278
Ijebea14 278Ijebea14 278
Ijebea14 278
 
Ijebea14 285
Ijebea14 285Ijebea14 285
Ijebea14 285
 
Ijebea14 287
Ijebea14 287Ijebea14 287
Ijebea14 287
 
Ijebea14 277
Ijebea14 277Ijebea14 277
Ijebea14 277
 
Ijebea14 272
Ijebea14 272Ijebea14 272
Ijebea14 272
 

Similar to Ijebea14 276

Application of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisApplication of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisIAEME Publication
 
APPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGE
APPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGEAPPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGE
APPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGEcscpconf
 
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...MangaiK4
 
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...MangaiK4
 
IRJET - Simulation of Colour Image Processing Techniques on VHDL
IRJET - Simulation of Colour Image Processing Techniques on VHDLIRJET - Simulation of Colour Image Processing Techniques on VHDL
IRJET - Simulation of Colour Image Processing Techniques on VHDLIRJET Journal
 
Implementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen FacesImplementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen FacesIJERA Editor
 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Editor IJARCET
 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Editor IJARCET
 
Real time voting system using face recognition for different expressions and ...
Real time voting system using face recognition for different expressions and ...Real time voting system using face recognition for different expressions and ...
Real time voting system using face recognition for different expressions and ...eSAT Publishing House
 
IRJET - A Review on Face Recognition using Deep Learning Algorithm
IRJET -  	  A Review on Face Recognition using Deep Learning AlgorithmIRJET -  	  A Review on Face Recognition using Deep Learning Algorithm
IRJET - A Review on Face Recognition using Deep Learning AlgorithmIRJET Journal
 
A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...IJMER
 
Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Innovative Analytic and Holistic Combined Face Recognition and Verification M...Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Innovative Analytic and Holistic Combined Face Recognition and Verification M...ijbuiiir1
 
A REVIEW ON BRAIN TUMOR DETECTION FOR HIGHER ACCURACY USING DEEP NEURAL NETWO...
A REVIEW ON BRAIN TUMOR DETECTION FOR HIGHER ACCURACY USING DEEP NEURAL NETWO...A REVIEW ON BRAIN TUMOR DETECTION FOR HIGHER ACCURACY USING DEEP NEURAL NETWO...
A REVIEW ON BRAIN TUMOR DETECTION FOR HIGHER ACCURACY USING DEEP NEURAL NETWO...IRJET Journal
 
A survey on content based medical image retrieval for
A survey on content based medical image retrieval forA survey on content based medical image retrieval for
A survey on content based medical image retrieval foreSAT Publishing House
 
Efficient Small Template Iris Recognition System Using Wavelet Transform
Efficient Small Template Iris Recognition System Using Wavelet TransformEfficient Small Template Iris Recognition System Using Wavelet Transform
Efficient Small Template Iris Recognition System Using Wavelet TransformCSCJournals
 
A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati...
A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati...A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati...
A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati...inventy
 
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...IOSR Journals
 

Similar to Ijebea14 276 (20)

Application of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysisApplication of gaussian filter with principal component analysis
Application of gaussian filter with principal component analysis
 
APPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGE
APPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGEAPPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGE
APPLICATION OF IMAGE FUSION FOR ENHANCING THE QUALITY OF AN IMAGE
 
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
 
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput...
 
IRJET - Simulation of Colour Image Processing Techniques on VHDL
IRJET - Simulation of Colour Image Processing Techniques on VHDLIRJET - Simulation of Colour Image Processing Techniques on VHDL
IRJET - Simulation of Colour Image Processing Techniques on VHDL
 
Implementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen FacesImplementation of Face Recognition in Cloud Vision Using Eigen Faces
Implementation of Face Recognition in Cloud Vision Using Eigen Faces
 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113
 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113
 
G0333946
G0333946G0333946
G0333946
 
Real time voting system using face recognition for different expressions and ...
Real time voting system using face recognition for different expressions and ...Real time voting system using face recognition for different expressions and ...
Real time voting system using face recognition for different expressions and ...
 
IRJET - A Review on Face Recognition using Deep Learning Algorithm
IRJET -  	  A Review on Face Recognition using Deep Learning AlgorithmIRJET -  	  A Review on Face Recognition using Deep Learning Algorithm
IRJET - A Review on Face Recognition using Deep Learning Algorithm
 
Ck36515520
Ck36515520Ck36515520
Ck36515520
 
A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...A novel approach for performance parameter estimation of face recognition bas...
A novel approach for performance parameter estimation of face recognition bas...
 
Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Innovative Analytic and Holistic Combined Face Recognition and Verification M...Innovative Analytic and Holistic Combined Face Recognition and Verification M...
Innovative Analytic and Holistic Combined Face Recognition and Verification M...
 
A REVIEW ON BRAIN TUMOR DETECTION FOR HIGHER ACCURACY USING DEEP NEURAL NETWO...
A REVIEW ON BRAIN TUMOR DETECTION FOR HIGHER ACCURACY USING DEEP NEURAL NETWO...A REVIEW ON BRAIN TUMOR DETECTION FOR HIGHER ACCURACY USING DEEP NEURAL NETWO...
A REVIEW ON BRAIN TUMOR DETECTION FOR HIGHER ACCURACY USING DEEP NEURAL NETWO...
 
E0543135
E0543135E0543135
E0543135
 
A survey on content based medical image retrieval for
A survey on content based medical image retrieval forA survey on content based medical image retrieval for
A survey on content based medical image retrieval for
 
Efficient Small Template Iris Recognition System Using Wavelet Transform
Efficient Small Template Iris Recognition System Using Wavelet TransformEfficient Small Template Iris Recognition System Using Wavelet Transform
Efficient Small Template Iris Recognition System Using Wavelet Transform
 
A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati...
A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati...A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati...
A Dualistic Sub-Image Histogram Equalization Based Enhancement and Segmentati...
 
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...Feature Level Fusion Based Bimodal Biometric Using  Transformation Domine Tec...
Feature Level Fusion Based Bimodal Biometric Using Transformation Domine Tec...
 

More from Iasir Journals (20)

ijetcas14 650
ijetcas14 650ijetcas14 650
ijetcas14 650
 
Ijetcas14 648
Ijetcas14 648Ijetcas14 648
Ijetcas14 648
 
Ijetcas14 647
Ijetcas14 647Ijetcas14 647
Ijetcas14 647
 
Ijetcas14 643
Ijetcas14 643Ijetcas14 643
Ijetcas14 643
 
Ijetcas14 641
Ijetcas14 641Ijetcas14 641
Ijetcas14 641
 
Ijetcas14 639
Ijetcas14 639Ijetcas14 639
Ijetcas14 639
 
Ijetcas14 632
Ijetcas14 632Ijetcas14 632
Ijetcas14 632
 
Ijetcas14 624
Ijetcas14 624Ijetcas14 624
Ijetcas14 624
 
Ijetcas14 619
Ijetcas14 619Ijetcas14 619
Ijetcas14 619
 
Ijetcas14 615
Ijetcas14 615Ijetcas14 615
Ijetcas14 615
 
Ijetcas14 608
Ijetcas14 608Ijetcas14 608
Ijetcas14 608
 
Ijetcas14 605
Ijetcas14 605Ijetcas14 605
Ijetcas14 605
 
Ijetcas14 604
Ijetcas14 604Ijetcas14 604
Ijetcas14 604
 
Ijetcas14 598
Ijetcas14 598Ijetcas14 598
Ijetcas14 598
 
Ijetcas14 594
Ijetcas14 594Ijetcas14 594
Ijetcas14 594
 
Ijetcas14 593
Ijetcas14 593Ijetcas14 593
Ijetcas14 593
 
Ijetcas14 591
Ijetcas14 591Ijetcas14 591
Ijetcas14 591
 
Ijetcas14 589
Ijetcas14 589Ijetcas14 589
Ijetcas14 589
 
Ijetcas14 585
Ijetcas14 585Ijetcas14 585
Ijetcas14 585
 
Ijetcas14 584
Ijetcas14 584Ijetcas14 584
Ijetcas14 584
 

Recently uploaded

1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppCeline George
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 

Recently uploaded (20)

1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 

Ijebea14 276

  • 1. International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) International Journal of Engineering, Business and Enterprise Applications (IJEBEA) www.iasir.net IJEBEA 14-276; © 2014, IJEBEA All Rights Reserved Page 132 ISSN (Print): 2279-0020 ISSN (Online): 2279-0039 An Efficient Face Recognition Technique Using PCA and Artificial Neural Network 1 KARTHIK G, 2 SATEESH KUMAR H C Department of Telecommunication Engineering, Dayananda Sagar College of Engineering, VTU, Bangalore, India Abstract: Face recognition is one of the biometric tool for authentication and verification. It is having both research and practical relevance. . Face recognition, it not only makes hackers virtually impossible to steal one's "password", but also increases the userfriendliness in human-computer interaction. Facial recognition technology (FRT) has emerged as an attractive solution to address many contemporary needs for identification and the verification of identity claims. A facial recognition based verification system can further be deemed a computer application for automatically identifying or verifying a person in a digital image.The two common approaches employed for face recognition are analytic (local features based) and holistic (global features based) approaches with acceptable success rates. In this paper, we present a hybrid features based face recognition technique using principal component analysis technique.Principal component analysisis used to compute global feature while the local feature are computed configuring the central moment and Eigen vectors and the standard deviation of the nose,mouth and eyes segments of the human face as the decision support entities of Artificial neural network. Keywords: face recognition; analytic approach; holistic approach; hybrid features; artificial neural network; central moment; eigen vectors; standard deviation I. Introduction Biometrics refers to a science of analyzing human body parts for security purposes. The word biometrics is derived from the Greek words bios (life) and metrikos (measure).Biometric identification is becoming more popular of late owing to the current security requirements of society in the field of information, business, military, e-commerce and etc. For our use, biometrics refers to technologies for measuring and analyzing a person's physiological or behavioral characteristics. These characteristics are unique to individuals hence can be used to verify or identify a person. In general, biometric systems process raw data in order to extract a template which is easier to process and store, but carries most of the information needed. Face recognition is a nonintrusive method, and facial images are the most common biometric characteristics used by humans to make a personal recognition. Human faces are complex objects with features that can vary over time. However, we humans have a natural ability to recognize faces and Indentify person at the spur of the second. Of course, our natural recognition ability extends beyond face recognition too. In Human Robot Interface [3] or Human Computer Interface (HCI), the machines are to be trained to recognize and identify and differentiate the human faces. There is thus a need to simulate recognition artificially in our attempts to create intelligent autonomous machines. Recently face recognition is attracting much attention in the society of network multimedia information access. 
Basically, any face recognition system can be depicted by the following block diagram. Fig. 1: Basic blocks of a face recognition system. Pre-processing Feature Extraction Training and Unit Testing 1) Pre-processing Unit: In the initial phase, the image captured in the true colour format is converted to gray scale image and resized to a predefined standard and noise is removed. Further Histogram Equalization (HE) and Discrete Wavelet Transform (DWT) are carried out for illumination normalization and expression normalization respectively [4]. 2) Feature Extraction: In this phase, facial features are extracted using Edge Detection Techniques, Principal Component Analysis (PCA) Technique, Discrete Cosine Transform (DCT) coefficients, DWT coefficients or fusion of different techniques [5]. 3) Training and Testing: Here, Euclidean Distance (ED), Hamming Distance, Support Vector Machine (SVM), Neural Network [6] and Random Forest (RF) [7] may be used for training followed by testing the new images or the test images for recognition.
  • 2. Karthik G et al., International Journal of Engineering, Business and Enterprise Applications, 8(2), March-May., 2014, pp. 132-137 IJEBEA 14-276; © 2014, IJEBEA All Rights Reserved Page 133 The popular approaches for face recognition are based either on the location and shape of facial attributes such as the eyes, eyebrows, nose, lips and chin, and their spatial relationships, or the overall analysis of the face image that represents a face as a weighted combination of a number of canonical faces. In the former approach the local attributes of the face are considered in training and testing while the latter approach reckons in the information derived from the whole.The local features based approach demand a vast collection of database images for effective training thus increasing the computation time. The global technique works well with frontal view face images but they are sensitive to translation, rotation and pose changes. Since, these two approaches do not give a complete representation of the facial image, hybrid features based face recognition system using principal component analysis and artificial neural network is designed. This paper presents the face recognition method using PCA that extracts the geometrical features of the biometrical characteristic of the face such as eyes, nose, and mouth and the overall analysis of the whole face. After the pre-processing stage, segments of the eyes, nose and mouth are extracted from the faces of the database. These blocks are then resized and the training features are computed. These facial features reduce the dimensionality by gathering the essential information while removing all redundancies present in the segment. Besides, the global features of the total image are also computed. These specially designed features are then used as decision support entities of the classifier system configured using the Artificial neural network. II. Principle Component Analysis Features of the face images are extracted using PCA in this purposed methodology. PCA is dimensionality reduction method and retain the majority of the variations present in the data set. It capture the variations the dataset and use this information to encode the face images. It computes the feature vectors for different face points and forms a column matrix of these vectors. PCA algorithm steps are shown in Fig 2. Fig. 2: Features Extraction using PCA by computing the Eigenface Images PCA projects the data along the directions where variations in the data are maximum. The algorithm is follows as:  Assume the m sample images contained in the database as B1, B2, B3………Bm.  Calculate the average image, Ø, as: Ø= ∑ Bl /M, where 1< L<M, each image will be a column vector the same size.  The covariance matrix is computed as by C = BT B where B = [O1 O2 O3….Om].  Calculate the eigenvalues of the covariance matrix C and keep only k largest eigenvalues for dimensionality reduction as λk = ∑m n=1(UK T On).  Eigenfaces are the eigenvectors UK of the covariance matrix C corresponding to the largest eigenvalues.
  • 3. Karthik G et al., International Journal of Engineering, Business and Enterprise Applications, 8(2), March-May., 2014, pp. 132-137 IJEBEA 14-276; © 2014, IJEBEA All Rights Reserved Page 134  All the centered images are projected into face space on eigenface basis to compute the projections of the face images as feature vectors as: w = UT O = UT (Bi - Ø), where 1< i<m. PCA method computes the maximum variations in data with converting it from high dimensional image space to low dimensional image space. These extracted projections of face images are further processed to Artificial Neural Networks for training and testing purposes. III. Eigenvector with Highest Eigen Value An eigenvector of a matrix is a vector such that, if multiplied with the matrix, the result is always an integer multiple of that vector. This integer value is the corresponding Eigenvalue of the eigenvector. This relationship can be described by the equation: M × u = × u, where u is an eigenvector of the matrix M is the matrix and is the corresponding Eigenvalue. Eigenvectors possess following properties: • They can be determined only for square matrices. • There are n eigenvectors (and corresponding Eigenvalues) in an n × n matrix. • All eigenvectors are perpendicular, i.e. at right angle with each other. The traditional motivation for selecting the Eigenvectors with the largest Eigenvalues is that the Eigenvalues represent the amount of variance along a particular Eigenvector. By selecting the Eigenvectors with the largest Eigenvalues, one selects the dimensions along which the gallery images vary the most. Since the Eigenvectors are ordered high to low by the amount of variance found between images along each Eigenvector, the last Eigenvectors find the smallest amounts of variance. Often the assumption is made that noise is associated with the lower valued Eigen values where smaller amounts of variation are found among the images . IV. Artificial Neural Network Artificial neural networks, commonly referred to as “neural networks”, has been motivated right from its inception by the recognition that the brain computes in an entirely different way from the conventional digital computer . A neural network is built-up on neurons, which are the basic information treating units. These do a weighted linear combination of their inputs and pass the sum through an activation function of sigmoid type, which acts as a switch and propagates a given input activation further or suppresses it. In order to be able to detect faces, a neural network must first be trained to handle this task. There are differ types of ANN. Some of them are Kohonen networks, Radial Basis Function and Multilayered Perceptron. The multilayered feed forward neural network is shown in Figure 3. It consist of three layers namely input layer, hidden layer and output layer. These layers of processing elements make independent computation of data and pass it to another layer. The computation of processing elements is completed based on weighted sum of the inputs. The output is then compared with the target value and calculation of the mean square error is carried out which is processed back to the hidden layer to tune its weights. Iteration of this process occurs for each layer in order to minimize the error by continually tuning the weight of each layer. Hence, it is known as the back propagation. The iteration process carried out till the error falls below the threshold level. Fig. 
3: Multilayered feed-forward network configuration In face recognition system that uses ANN, the configuration works in the following frames:- Input to Feed Forward Network: - Here the parameters are selected to perform required Neural Networks operation i.e. the number of input layers, hidden layers and output layers. These input neurons receives the data from the training set of face images.
  • 4. Karthik G et al., International Journal of Engineering, Business and Enterprise Applications, 8(2), March-May., 2014, pp. 132-137 IJEBEA 14-276; © 2014, IJEBEA All Rights Reserved Page 135 Back Propagation and weight Adjustment: - The input layer processes the data to the hidden layer. The hidden layer computes the data further and then passes it to the output layer. Output layer compares it with that of target value and obtain the error signals. These errors are sent back for weight adjustments of each layer to minimize the error as shown in Fig. 4. Fig. 4: Back Propagation of multilayered ANN Mathematical Operation: - It performs the mathematical function on the output signal. The functions can be log-sigmoid, threshold function and Tangent hyperbolic function. If the output values of the function are same as that of the output values of the Tested face, the face is detected. Hence, the Neural Networks provides the response to the input which is alike as the training data. V. IMPLEMENTATION PROCESS In this work, PCA technique is used to extract the features of the face images. PCA extracts the variations in the features of face images which contains the highest information with decomposed dimensions. Fig5. Basic blocks for Face Recognition Extracted features calculate the eigenfaces.These eigenfaces are taken as input to the Artificial Neural Networks for training the neural networks. For testing basis, the eigenface of the tested image is provided as input to the neural networks that are trained and it finds the best match considering the threshold value for rejecting the unknown face images. VI. RESULTS We applied each feature extraction method with Artificial neural network on the SDUMLA-HMT face database. We ex-tracted PCA feature vectors with an application program coded using Matlab 7.0.Tests were done on a PC with Intel Pentium D 2.8-GHZ CPU and 1024-MB RAM. In this study, standard SDUMLA-HMT database images (10 poses for each of 40 people) were converted into JPEG image format without changing their size. For both feature extraction methods a total of six train-ing sets were composed that include varying pose counts (from 1 to 6) for each person and remaining poses are chosen as the test set. Our training sets include 40, 80, 120, 160, 200 and 240 images according to chosen pose count. For each person, poses with the same indices are chosen for the corresponding set. The results obtained are tabulated in Table 1 which shows that the proposed technique is more efficient than face recognition technique using PCA and Support Vector Machine(SVM) Table1: Improvement from the Face Recognition System using PCA and SVM Type of technique Recognition rate (in %) Error rate on the evaluation set (in %) Half total error rate on the evaluation set (in %) Verification rate at 1% FAR (in %): 1% FAR on the test set equals (in %) PCA and SVM 85.52 6.64 6.12 50.00 54.14 PCA and ANN 92.99 3.03 2.70 95.49 94.95
Fig. 6: Recognition rate
Fig. 7: Error rate
Fig. 8: Verification rate

VII. CONCLUSION
In this paper, a new face recognition method is presented as a combination of PCA and an Artificial Neural Network, and these algorithms are used to construct an efficient face recognition method with a high recognition rate. The proposed method consists of the following parts: image preprocessing, which includes histogram equalization, normalization and mean centering; dimension reduction using PCA, in which the main features
that are important for representing the face images are extracted; and an artificial neural network algorithm that is employed to train on the database. The ANN takes the feature vector as input and learns a complex mapping for classification. Simulation results on the SDUMLA-HMT face dataset demonstrate the ability of the proposed method to extract effective features, and the method achieves a recognition rate of more than 90%.

REFERENCES
[1] K. Ramesha and K. B. Raja, "Dual transform based feature extraction for face recognition," International Journal of Computer Science Issues, vol. 8, no. 5, pp. 115-120, 2011.
[2] Khashman, "Intelligent face recognition: local versus global pattern averaging," Lecture Notes in Artificial Intelligence, vol. 4304, Springer-Verlag, pp. 956-961, 2006.
[3] Abbas, M. I. Khalil, S. Abdel-Hay and H. M. Fahmy, "Expression and illumination invariant preprocessing technique for face recognition," Proceedings of the International Conference on Computer Engineering and Systems, pp. 59-64, 2008.
[4] S. Ranawade, "Face recognition and verification using artificial neural network," International Journal of Computer Applications, vol. 1, no. 14, pp. 21-25, 2010.
[5] B. Heisele, P. Ho and T. Poggio, "Face recognition with support vector machines: global versus component-based approach," Proceedings of the 8th International Conference on Computer Vision, pp. 688-694, 2001.
[6] P. Belhumeur, J. Hespanha and D. Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.
[7] K. Ramesha, K. B. Raja, K. R. Venugopal and L. M. Patnaik, "Feature extraction based face recognition, gender and age classification," International Journal on Computer Science and Engineering, vol. 2, no. 01S, pp. 14-23, 2010.
[8] A. Montillo and H. Ling, "Age regression from faces using random forests," Proceedings of the IEEE International Conference on Image Processing, pp. 2465-2468, 2009.
[9] H. Murase and S. K. Nayar, "Visual learning and recognition of 3-D objects from appearance," Journal of Computer Vision, vol. 14, pp. 5-24, 1995.
[10] M. Turk and A. Pentland, "Face recognition using eigenfaces," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586-591, 1991.
[11] P. Belhumeur, J. Hespanha and D. Kriegman, "Eigenfaces versus Fisherfaces: recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.

Author Profile
Mr. Karthik G received the B.E. degree in Electronics and Communication from Visvesvaraya Technological University in 2011 and is currently pursuing the M.Tech. degree in Digital Communication and Networking (2014) under Visvesvaraya Technological University at the Department of Telecommunication Engineering, Dayananda Sagar College of Engineering, Bangalore, India. His main interests are in networking and digital communications.
Mr. Sateesh Kumar H C is an Associate Professor in the Department of Telecommunication Engineering, Dayananda Sagar College of Engineering, Bangalore. He obtained his B.E. degree in Electronics Engineering from Bangalore University and his Master's degree, with a specialization in Bio-Medical Instrumentation, from Mysore University. He is currently pursuing a Ph.D. in the area of image segmentation under the guidance of Dr. K. B. Raja, Associate Professor, Department of Electronics and Communication Engineering, University Visvesvaraya College of Engineering, Bangalore. His areas of interest are in the fields of Signal Processing and Communication Engineering.