Mandeepkaur Sandhu and Ajay Kumar Dogra / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 2, Issue 4, July-August 2012, pp. 1322-1328

Metamorphism using Warping and Vectorization Method for Image: Review

Mandeepkaur Sandhu and Ajay Kumar Dogra
Department of Computer Science Engineering, Beant College of Engineering and Management, Gurdaspur

Abstract:
Face recognition presents a challenging problem in the field of image analysis and computer vision, and as such has received a great deal of attention over the last few years because of its many applications in various domains. This paper gives an overview of some of the well-known methods, of the applications of this technology, and of some of the difficulties plaguing current systems with regard to this task. It also mentions some of the most recent algorithms developed for this purpose and attempts to give an idea of the state of the art of face recognition technology.

Keywords: Face recognition, biometric, algorithms

INTRODUCTION:
Interest in face recognition is moving toward uncontrolled or moderately controlled environments, in that either the probe or gallery images, or both, are assumed to be acquired under uncontrolled conditions. Also of interest are more robust similarity measures or, in general, techniques to determine whether two facial images correctly match, i.e., whether they belong to the same person. An important real-life application of interest is automated surveillance, where the objective is to recognize and track people who are on a watch list. In this open-world application the system is tasked to recognize a small set of people while rejecting everyone else as not being one of the wanted people. Traditionally, algorithms have been developed for closed-world applications, where the assumption is that the probe image and its closest image in the gallery belong to the same person. The information obtained by this search is then used to do recognition.

Currently, the technology is used by police, forensic scientists, governments, private companies, the military, and casinos. The police use facial recognition for identification of criminals. Companies use it for securing access to restricted areas. Casinos use facial recognition to eliminate cheaters and dishonest money counters. Finally, in the United States, nearly half of the states use computerized identity verification, while the National Center for Missing and Exploited Children uses the technique to find missing children on the Internet. In Mexico, a voter database was compiled to prevent vote fraud. Facial recognition technology can also be used in a number of other places, such as airports, government buildings, and ATMs (automatic teller machines), and to secure computers and mobile phones.

Computerized facial recognition is based on capturing an image of a face, extracting features, comparing it to images in a database, and identifying matches. As the computer cannot see the same way a human eye can, it needs to convert images into numbers representing the various features of a face. The sets of numbers representing one face are then compared with the numbers representing another face. The quality of the computer recognition system depends on the quality of the image and on the mathematical algorithms used to convert a picture into numbers. Important factors for image quality are light, background, and the position of the head. Pictures can be taken of still or moving subjects. Still subjects are photographed, for example, by the police (mug shots) or by specially placed security cameras (access control). However, the most challenging application is the ability to use images captured by surveillance cameras (shopping malls, train stations, ATMs) or closed-circuit television (CCTV). In many cases the subject in those images is moving fast, and the light and the position of the head are not optimal.

Face Recognition Challenges
• Physical appearance
• Acquisition geometry
• Imaging conditions
• Compression artifacts

1. Face Detection
• Face detection task: to identify and locate human faces in an image regardless of their position, scale, in-plane rotation, orientation, pose (out-of-plane rotation), and illumination.
• The first step for any automatic face recognition system.
• Face detection methods:

Knowledge-based: Encode human knowledge of what constitutes a typical face (usually, the relationship between facial features).

Feature invariant approaches: Aim to find structural features of a face that exist even when the pose, viewpoint, or lighting conditions vary.
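Whatever model scores a candidate region (knowledge-based rules, invariant features, templates, or a learned appearance model), the detection task defined above, locating faces regardless of position and scale, is commonly organized as a multi-scale sliding-window scan. The following is a minimal, method-agnostic NumPy sketch; the window size, stride, pyramid factor, and the stub "bright window" classifier are illustrative assumptions, not taken from this paper:

```python
import numpy as np

def sliding_window_detect(image, classify, win=8, stride=4):
    """Scan a gray-level image with a fixed-size window at several
    scales; return (scale, row, col) for every window the supplied
    classifier accepts. `classify` stands in for any face model."""
    hits = []
    scale_img = image
    scale = 1.0
    while min(scale_img.shape) >= win:
        h, w = scale_img.shape
        for r in range(0, h - win + 1, stride):
            for c in range(0, w - win + 1, stride):
                if classify(scale_img[r:r + win, c:c + win]):
                    hits.append((scale, r, c))
        # downscale by 2 (nearest neighbour) so larger faces
        # fit the same fixed-size window on the next pass
        scale_img = scale_img[::2, ::2]
        scale *= 2.0
    return hits

# stub classifier: a "face" is simply a bright window
img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0
hits = sliding_window_detect(img, lambda w: w.mean() > 0.9)
```

A real detector would replace the stub with one of the methods described in this section; the scanning scaffold stays the same.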
Template matching methods: Several standard patterns are stored to describe the face as a whole or the facial features separately. The most direct method used for face recognition is matching between the test images and a set of training images based on measuring the correlation; the similarity is obtained by normalized cross-correlation.

Appearance-based methods: The models (or templates) are learned from a set of training images which capture the representative variability of facial appearance.

2. Face Recognition
In general, face recognition systems proceed by identifying the face in the scene, thus estimating and normalizing for translation, scale, and in-plane rotation. Many approaches to finding faces are based on weak models of the human face that model face shape in terms of facial texture. Once a prospective face has been localized, the approaches to face recognition divide into two categories:
• Face appearance
• Face geometry

Framework of face recognition (figure): face detection (find the face region and the positions of facial features), then face recognition (1. face region, 2. facial features), then identification of the person.

Face Recognition: Advantages
• Photos of faces are widely used in passports and driver's licenses, where the possession authentication protocol is augmented with a photo for manual inspection purposes; there is wide public acceptance of this biometric identifier.
• Face recognition systems are the least intrusive from a biometric sampling point of view, requiring no contact, nor even the awareness of the subject.
• The biometric works, or at least works in theory, with legacy photograph databases, videotape, or other image sources.
• Face recognition can, at least in theory, be used for screening of unwanted individuals in a crowd, in real time.
• It is a fairly good biometric identifier for small-scale verification applications.

Face Recognition: Disadvantages
• A face needs to be well lighted by controlled light sources in automated face authentication systems. This is only the first challenge in a long list of technical challenges associated with robust face authentication.
• Face currently is a poor biometric for use in a pure identification protocol.
• An obvious circumvention method is disguise.
• There is some criminal association with face identifiers, since this biometric has long been used by law enforcement agencies ('mugshots').

3. Face Recognition Algorithms
3.1 Principal Component Analysis (PCA)
Derived from the Karhunen-Loève transformation. Given an s-dimensional vector representation of each face in a training set of images, Principal Component Analysis (PCA) tends to find a t-dimensional subspace whose basis vectors correspond to the maximum-variance directions in the original image space. This new subspace is normally of lower dimension (t << s). If the image elements are considered as random variables, the PCA basis vectors are defined as eigenvectors of the scatter matrix.
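As an illustration of the subspace projection just described, the following NumPy sketch computes a t-dimensional eigenbasis of mean-centred face vectors via SVD and matches a probe face by nearest neighbour in that subspace. The array shapes, the choice t = 3, and the Euclidean matcher are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def eigenfaces(train, t):
    """Project s-dimensional face vectors onto the t leading
    principal components (t << s).

    train: (n, s) array, one flattened face image per row.
    Returns (mean, basis, projections), where basis is (t, s)."""
    mean = train.mean(axis=0)
    centered = train - mean                    # mean-centre the data
    # Right singular vectors of the centred data are the eigenvectors
    # of the scatter matrix, ordered by decreasing variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:t]                             # top-t basis vectors
    return mean, basis, centered @ basis.T     # coefficients per face

def match(probe, mean, basis, projections):
    """Nearest neighbour in the subspace (Euclidean distance)."""
    coeffs = (probe - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(projections - coeffs, axis=1)))

# toy usage: 6 random "faces" of 64 pixels each, 3-dimensional subspace
rng = np.random.default_rng(0)
faces = rng.normal(size=(6, 64))
mean, basis, proj = eigenfaces(faces, t=3)
best = match(faces[2], mean, basis, proj)  # a training face matches itself
```

The SVD route avoids forming the s-by-s scatter matrix explicitly, which matters when s (the pixel count) is much larger than the number of training faces.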
Principal component analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each following component in turn has the highest variance possible under the constraint that it be orthogonal to (i.e., uncorrelated with) the preceding components. Principal components are guaranteed to be independent only if the data set is jointly normally distributed. PCA is sensitive to the relative scaling of the original variables. Depending on the field of application, it is also named the discrete Karhunen-Loève transform (KLT), the Hotelling transform, or proper orthogonal decomposition (POD).

PCA was invented in 1901 by Karl Pearson [6]. Now it is mostly used as a tool in exploratory data analysis and for building predictive models. PCA can be done by eigenvalue decomposition of a data covariance (or correlation) matrix or by singular value decomposition of a data matrix, usually after mean-centering (and normalizing or using Z-scores) the data matrix for each attribute [7]. The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score) [7].

PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way which best describes the variance in the data. If a multivariate dataset is visualized as a set of coordinates in a high-dimensional data space (one axis per variable), PCA can supply the user with a lower-dimensional picture, a "shadow" of this object when viewed from its (in some sense) most informative viewpoint. This is done by using only the first few principal components, so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis; factor analysis, however, typically incorporates more domain-specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix.

Fig. 1: Representation of the PCA method.

3.2 Independent Component Analysis (ICA)
Independent Component Analysis (ICA) minimizes both second-order and higher-order dependencies in the input data and attempts to find the basis along which the data (when projected onto it) are statistically independent. Bartlett et al. provided two architectures of ICA for the face recognition task: Architecture I, statistically independent basis images, and Architecture II, a factorial code representation. ICA is a computational method from statistics and signal processing and a special case of blind source separation: it seeks to separate a multivariate signal into additive subcomponents, supposing the mutual statistical independence of the non-Gaussian source signals. The general framework of ICA was introduced in the early 1980s (Hérault and Ans 1984; Ans, Hérault and Jutten 1985; Hérault, Jutten and Ans 1985), but was most clearly stated by Pierre Comon in 1994 (Comon 1994). For a good text, see Hyvärinen, Karhunen and Oja (2001). ICA finds the independent components (also called factors, latent variables, or sources) by maximizing the statistical independence of the estimated components. We may choose one of many ways to define independence, and this choice governs the form of the ICA algorithm. The two broadest definitions of independence for ICA are:
1) Minimization of mutual information
2) Maximization of non-Gaussianity

The minimization-of-mutual-information (MMI) family of ICA algorithms uses measures such as the Kullback-Leibler divergence and maximum entropy. The non-Gaussianity family of ICA algorithms, motivated by the central limit theorem, uses kurtosis and negentropy. Typical algorithms for ICA use centering, whitening (usually with the eigenvalue decomposition), and dimensionality reduction as preprocessing steps in order to simplify and reduce the complexity of the problem for the actual iterative algorithm; whitening and dimension reduction can be achieved with principal component analysis or singular value decomposition [2]. Whitening ensures that all dimensions are treated equally a priori before the algorithm is run. Algorithms for ICA include infomax, FastICA, and JADE, but there are many others. In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, or the proper scaling (including sign) of the source signals. ICA is important to blind signal separation and has many practical applications. It is closely related to (or is even a special case of) the search for a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), while the code components are statistically independent.

Comparison of PCA and ICA
PCA:
– Focuses on uncorrelated and Gaussian components
– Second-order statistics
– Orthogonal transformation
ICA:
– Focuses on independent and non-Gaussian components
– Higher-order statistics
– Non-orthogonal transformation

3.3 Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis (LDA) finds the vectors in the underlying space that best discriminate among classes. For all samples of all classes, the between-class scatter matrix SB and the within-class scatter matrix SW are defined. The goal is to maximize SB while minimizing SW,
in other words, to maximize the ratio det(SB)/det(SW). This ratio is maximized when the column vectors of the projection matrix are the eigenvectors of SW^(-1) SB.

Linear discriminant analysis (LDA) and the related Fisher's linear discriminant are used for dimensionality reduction before later classification. LDA is closely linked to ANOVA (analysis of variance) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements [10][3]. In the other two methods, however, the dependent variable is a numerical quantity, while for LDA it is a categorical variable (i.e., the class label). Logistic regression and probit regression are more similar to LDA, as they also explain a categorical variable. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method. LDA is also closely related to principal component analysis (PCA) and factor analysis in that they all look for linear combinations of variables which best explain the data [3]. LDA explicitly attempts to model the difference between the classes of data; PCA, on the other hand, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made. LDA works when the measurements made on the independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis [7][8].

Basic steps of the LDA algorithm [3]
LDA uses the PCA subspace as input data, i.e., the matrix V obtained from PCA. The advantage is cutting the eigenvectors in matrix V that are not important for face recognition (this significantly improves computing performance). LDA considers between-class and also within-class correspondence of the data: the training images create a class for each subject, i.e., one class = one subject (all of his/her training images).

1. Determine the LDA subspace (i.e., the discriminating line in Fig. 2) from the training data. Calculate the within-class scatter matrix:

   SW = Σ_{i=1}^{C} Si,   Si = Σ_{x ∈ Xi} (x − mi)(x − mi)^T

where mi is the mean of the images in class i and C is the number of classes.

2. Calculate the between-class scatter matrix:

   SB = Σ_{i=1}^{C} ni (mi − m)(mi − m)^T

where ni is the number of images in class i, mi is the mean of the images in that class, and m is the mean of all the images.

3. Solve the generalized eigenvalue problem:

   SB V = Λ SW V

3.4 Hidden Markov Models (HMM)
Hidden Markov Models (HMM) are a set of statistical models used to characterize the statistical properties of a signal. An HMM consists of two interrelated processes:
(1) an underlying, unobservable Markov chain with a finite number of states, a state transition probability matrix, and an initial state probability distribution, and
(2) a set of probability density functions associated with each state.

A Hidden Markov Model is a finite state machine which has some fixed number of states. It provides a probabilistic framework for modeling a time series of multivariate observations. Hidden Markov models were introduced at the beginning of the 1970s as a tool in speech recognition. This model, based on statistical methods, has become progressively popular in the last several years due to its strong mathematical structure and theoretical basis for use in a wide range of applications [1]. A hidden Markov model is a doubly stochastic process, with an underlying stochastic process that is not observable (hence the word hidden) but that can be observed through another stochastic process that creates the sequence of observations. The hidden process consists of a set of states connected to each other by transitions with associated probabilities, while the observed process consists of a set of outputs or observations, each of which may be emitted by each state according to some probability density function (pdf). Depending on the nature of this pdf, several HMM classes can be distinguished; the model is called discrete if the observations are naturally discrete or are quantized using vector quantization [11].

3.5 Face recognition using line edge map (LEM)
This algorithm is a technique based on line edge maps (LEM) to accomplish face recognition; in addition, it proposes a line matching technique to make this task possible. In contrast with other algorithms, LEM uses physiological features of human faces to solve the problem; it mainly uses the mouth, nose, and eyes as the most characteristic ones. In order to measure the similarity of human faces, the face images are first converted into gray-level pictures.
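The gray-level images are then reduced to binary edge maps. A minimal NumPy sketch of this preprocessing step with the Sobel operator is given below; the 3x3 kernels and the threshold are standard textbook choices rather than values from the paper, and a full LEM system would additionally thin the edges and fit line segments to them:

```python
import numpy as np

def sobel_edge_map(gray, thresh):
    """Binary edge map of a gray-level image via the Sobel operator.

    gray: 2-D float array; thresh: gradient-magnitude cutoff.
    Minimal sketch: no smoothing and no edge thinning/linking."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):              # 3x3 correlation over the valid region
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)          # gradient magnitude
    return (mag > thresh).astype(np.uint8)

# toy usage: a bright square on a dark background yields edges on its border
img = np.zeros((10, 10))
img[3:7, 3:7] = 1.0
edges = sobel_edge_map(img, thresh=1.0)
```

Only the border of the square survives the threshold; uniform regions, inside or outside, produce zero gradient and are discarded, which is what makes the representation insensitive to global illumination changes.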
The images are then encoded into binary edge maps using the Sobel edge detection algorithm. This system is closely related to the way human beings perceive other people's faces, as has been stated in many psychological studies. The main advantage of line edge maps is their low sensitivity to illumination changes, because a LEM is an intermediate-level image representation derived from the low-level edge map representation [9]. The algorithm has another important advantage, namely low memory requirements, because of the compact data representation used. In [13] there is an example of a face line edge map; it can be observed that it keeps the face features but at a very compressed level.

4. Face Recognition Techniques
Taranpreet Singh Ruprah has presented a face recognition system using PCA with neural networks in the context of face verification and face recognition, using photometric normalization for comparison. The experimental results compare a neural network classifier with Euclidean distance rules using PCA for overall verification performance. For recognition, the Euclidean distance classifier gives the highest accuracy using the original face image; thus, applying histogram equalization techniques to the face image does not have much impact on the performance of the system if conducted under a controlled environment [17].

Byongjoo Oh proposes a face recognition algorithm and presents results of a performance test on the ORL database. The PCA algorithm is first tried as a reference; then a PCA+MLNN algorithm was proposed and tested in the same environment. The PCA+MLNN algorithm shows better performance and resulted in a 95.29% recognition rate. The introduction of MLNN enhances the classification performance and provides relatively robust performance under variation of light. However, this result does not guarantee that the PCA+MLNN algorithm is always better than PCA [12].

Jan Mazanec and others' experiments with the FERET database imply that LDA+ldaSoft generally achieves the highest maximum recognition rate. In their experiments they show that LDA alone is not suitable for practical use: at certain parameter settings LDA produced the worst recognition rates from among all experiments. Experiments with their proposed methods PCA+SVM and LDA+SVM produced a better maximum recognition rate than the traditional PCA and LDA methods, and the combination LDA+SVM produced more consistent results than LDA alone. Altogether they made more than 300 tests and achieved maximum recognition rates near 100% (LDA+SVM once actually 100%) [8].

Chengjun Liu and Harry Wechsler address the relative usefulness of independent component analysis for face recognition. Their sensitivity analysis suggests that for enhanced performance ICA should be carried out in a compressed and whitened space where most of the characteristic information of the original data is preserved and the small trailing eigenvalues are discarded. The dimensionality of the compressed subspace is decided based on the eigenvalue spectrum of the training data. Their discriminant analysis shows that the ICA criterion, when carried out in the properly compressed and whitened space, performs better than the eigenfaces and Fisherfaces methods for face recognition [4].

5. Conclusions
Face recognition is a challenging problem in the field of image analysis and computer vision that has received a great deal of attention over the last few years because of its many applications in various domains. Research has been conducted vigorously in this area for the past four decades or so, and though huge progress has been made and encouraging results have been obtained, current face recognition systems have reached a certain degree of maturity only when operating under constrained conditions; they are far from achieving the ideal of being able to perform adequately in all the various situations that are commonly encountered by applications utilizing these techniques in practical life.

6. References
[1] A. El-Yacoubi, M. Gilloux, R. Sabourin and C.Y. Suen, An HMM-Based Approach for Off-Line Unconstrained Handwritten Word Modeling and Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(8), 1999.
[2] A. Hyvärinen, J. Karhunen and E. Oja, Independent Component Analysis, Signal Processing, 36(3), 2001, 287–314.
[3] B. Kamgar-Parsi and B. Kamgar-Parsi, Methods of Facial Recognition, US Patent 7,684,595, 2010.
[4] C. Liu and H. Wechsler, Comparative Assessment of Independent Component Analysis (ICA) for Face Recognition, Second International Conference on Audio- and Video-based Biometric Person Authentication (AVBPA'99), Washington D.C., USA, 1999.
[5] H. Abdi and L.J. Williams, Principal component analysis, Wiley Interdisciplinary Reviews: Computational Statistics, 2, 2010, 433–459.
[6] H. Abdi, Discriminant correspondence analysis, in N.J. Salkind (Ed.), Encyclopedia of Measurement and Statistics, Thousand Oaks (CA): Sage, 2007, 270–275.
[7] J.H. Friedman, Regularized Discriminant Analysis, Journal of the American Statistical Association, (405), 165–175.
[8] J. Mazanec, M. Melišek, M. Oravec and J. Pavlovičová, Support vector machines, PCA and LDA in face recognition, Journal of Electrical Engineering, 59(4), 2008, 203–209.
[9] K. Pearson, On Lines and Planes of Closest Fit to Systems of Points in Space, Philosophical Magazine, 2(6), 559–572.
[10] M. Ahdesmäki and K. Strimmer, Feature selection in omics prediction problems using cat scores and false non-discovery rate control, Annals of Applied Statistics, 4(1), 2010, 503–519.
[11] Md. R. Hassan and B. Nath, Stock Market Forecasting Using Hidden Markov Model: A New Approach, Proceedings of the 5th International Conference on Intelligent Systems Design and Applications (ISDA'05), IEEE, 2005.
[12] B. Oh, Comparison of Face Recognition Techniques: PCA and Neural Networks with PCA, Journal of Korean Institute of Information Technology, 66(3).
[13] P.J.A. Shaw, Multivariate Statistics for the Environmental Sciences, Hodder-Arnold, ISBN 0-340-80763-6, 2003.
[14] R.A. Fisher, The Use of Multiple Measurements in Taxonomic Problems, Annals of Eugenics, 7(2), 179–188.
[15] R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification (2nd ed.), Wiley Interscience, ISBN 0-471-05669-3, 2000.
[16] Signal Processing Institute, Swiss Federal Institute of Technology, http://scgwww.epfl.ch
[17] Taranpreet Singh Ruprah, Face Recognition Based on PCA Algorithm, Special Issue of International Journal of Computer Science & Informatics (IJCSI), ISSN (PRINT): 2231–5292, Vol. II, Issue 1.
[18] W.S. Yambor, Analysis of PCA-Based and Fisher Discriminant-Based Image Recognition Algorithms.
[19] Y. Gao and M.K.H. Leung, Face recognition using line edge map, IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(6), 2002, 764–779.