B. Vijay, A. Nagabhushana Rao / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 3, May-Jun 2013, pp. 971-979

Analysis of Face Recognition: A Case Study on Feature Selection and Feature Normalization

B. Vijay (1), A. Nagabhushana Rao (2)
1,2 Assistant Professor, Dept. of CSE, AITAM, Tekkali, Andhra Pradesh, INDIA.

Abstract
This paper presents the effects of feature selection and feature normalization on a face recognition scheme. From the local features extracted using the block-based discrete cosine transform (DCT), three feature sets are derived. These local feature vectors are normalized in two different ways: by making them unit norm, and by dividing each coefficient by its standard deviation learned from the training set. This work uses local information obtained through the block-based DCT. The main idea is to mitigate the effects of expression, illumination and occlusion variations by performing local analysis and by fusing the outputs of the extracted local features at the feature level and at the decision level.

Keywords: Image Processing, DCT, Face Recognition

I. INTRODUCTION
Face recognition has been an active research area over the last 30 years. It has been studied by scientists from different areas of the psychophysical sciences and from different areas of the computer sciences. Psychologists and neuroscientists mainly deal with the human perception side of the topic, whereas engineers working on machine recognition of human faces deal with its computational aspects. Face recognition has applications mainly in the fields of biometrics, access control, law enforcement, and security and surveillance systems. Biometrics refers to methods for automatically verifying or identifying individuals using their physiological or behavioral characteristics [1-4].
A. Human Face Recognition
When building artificial face recognition systems, scientists try to understand the architecture of the human face recognition system. Focusing on the methodology of the human system may be useful for understanding the basic problem. However, the human face recognition system utilizes more than the 2-D image data that a machine recognition system typically uses. The human system draws on data obtained from some or all of the senses: visual, auditory, tactile, etc. All these data are used either individually or collectively for storing and remembering faces. In many cases, the surroundings also play an important role. It is hard for a machine recognition system to handle so much data and their combinations. However, it is also hard for a human to remember many faces, due to storage limitations. A key potential advantage of a machine system is its memory capacity [1-4], whereas for the human face recognition system the important feature is its parallel processing capacity. The question of which features humans use for face recognition has been studied, and it has been argued that both global and local features are used. It is harder for humans to recognize faces that they consider neither "attractive" nor "unattractive". The low spatial frequency components are used to determine the sex of the individual, whereas the high frequency components are used to identify the individual: the low frequencies give a global description, while the high frequencies carry the finer details needed in identification. Both holistic and feature information are important for the human face recognition system, and studies suggest that global descriptions may serve as a front end for better feature-based perception [1-4]. If dominant features are present, such as big ears or a small nose, holistic descriptions may not be used.
Also, recent studies show that an inverted face (i.e., all intensity values subtracted from 255 to obtain the inverse image in gray scale) is much harder to recognize than a normal face. Hair, eyes, mouth and face outline have been determined to be more important than the nose for perceiving and remembering faces. It has also been found that the upper part of the face is more useful than the lower part for recognition. Aesthetic attributes (e.g., beauty, attractiveness, pleasantness) also play an important role: the more attractive the faces, the more easily they are remembered. For humans, photographic negatives of faces are difficult to recognize, although there has been little study of why this is so. A study on the direction of illumination showed the importance of top lighting: it is easier for humans to recognize faces illuminated from top to bottom than faces illuminated from bottom to top. According to neurophysicists, the analysis of facial expressions is done in parallel with face recognition in the human system. Some prosopagnosic patients, who have difficulty identifying familiar faces, can nevertheless recognize facial expressions of emotion. Patients who suffer from organic brain
syndrome do poorly at expression analysis but perform face recognition quite well.

B. Machine Recognition of Faces
Although human face recognition was expected to serve as a reference for machine recognition of faces, research on machine recognition has developed independently of studies on human face recognition. During the 1970s, typical pattern classification techniques, which use measurements between features in faces or face profiles, were used. During the 1980s, work on face recognition remained nearly stable. Since the early 1990s, research interest in machine recognition of faces has grown tremendously. The reasons may be:
- an increase in emphasis on civilian/commercial research projects,
- studies on neural network classifiers with emphasis on real-time computation and adaptation,
- the availability of real-time hardware,
- the growing need for surveillance applications.

a. Statistical Approaches
Statistical methods include template matching based systems, where the training and test images are matched by measuring the correlation between them. They also include projection-based methods such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). In fact, projection-based systems arose from the shortcomings of straightforward template matching based approaches, namely trying to carry out the required classification task in a space of extremely high dimensionality.

Template Matching: Brunelli and Poggio [3] suggest that the optimal strategy for face recognition is holistic and corresponds to template matching. In their study, they compared a geometric feature based technique with a template matching based system.
In the simplest form of template matching, the image (as 2-D intensity values) is compared with a single template representing the whole face using a distance metric. Although recognition by matching raw images has been successful under limited circumstances, it suffers from the usual shortcomings of straightforward correlation-based approaches, such as sensitivity to face orientation, size, variable lighting conditions, and noise. The reason for this vulnerability of direct matching methods lies in their attempt to carry out the required classification in a space of extremely high dimensionality. To overcome the curse of dimensionality, the connectionist equivalent of data compression methods is employed first. However, it has been successfully argued that the resulting feature dimensions do not necessarily retain the structure needed for classification, and that more general and powerful methods for feature extraction, such as projection-based systems, are required. The basic idea behind projection-based systems is to construct low-dimensional projections of a high-dimensional point cloud by maximizing an objective function such as the deviation from normality.

b. Face Detection and Recognition by PCA
The eigenface method of Turk and Pentland [5] is one of the main methods applied in the literature, and is based on the Karhunen-Loeve expansion. Their study was motivated by the earlier work of Sirovich and Kirby [6], [7]. It applies Principal Component Analysis to human faces: it treats the face images as 2-D data and classifies them by projecting them onto the eigenface space, which is composed of eigenvectors obtained from the variance of the face images. Eigenface recognition derives its name from the German prefix "eigen", meaning "own" or "individual". The eigenface method is considered the first working facial recognition technology. When the method was first proposed by Turk and Pentland [5], they worked on the image as a whole.
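The eigenface projection described above can be illustrated with a minimal sketch (assuming numpy; function and variable names are hypothetical). Note that Turk and Pentland used a nearest-mean classifier, whereas for brevity this sketch matches a probe to the nearest single gallery projection, which coincides with nearest-mean when each subject has one gallery image:

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """Eigenface training in the spirit of Turk and Pentland [5]:
    flatten each face image to a vector, compute the mean face, and
    take the top principal directions of the centered data as the
    eigenfaces (here via SVD of the data matrix)."""
    X = faces.reshape(len(faces), -1).astype(float)
    mean_face = X.mean(axis=0)
    A = X - mean_face
    _, _, Vt = np.linalg.svd(A, full_matrices=False)  # rows of Vt are principal axes
    return mean_face, Vt[:n_components]

def project(image, mean_face, eigenfaces):
    """Project a face image onto the eigenface space."""
    return eigenfaces @ (image.ravel().astype(float) - mean_face)

def recognize(image, gallery_weights, mean_face, eigenfaces):
    """Classify by the nearest projection in eigenface space."""
    w = project(image, mean_face, eigenfaces)
    dists = np.linalg.norm(gallery_weights - w, axis=1)
    return int(np.argmin(dists))

# Toy usage: four random 16x16 "faces" as a stand-in gallery.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, size=(4, 16, 16))
mean_face, eigenfaces = train_eigenfaces(faces, n_components=3)
weights = np.array([project(f, mean_face, eigenfaces) for f in faces])
```

A gallery face projects exactly onto its own stored weights, so it is recovered with zero distance; robustness to illumination and scale depends on the training set, as the text discusses.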
Also, they used a nearest-mean classifier to classify the face images. Using the observation that the projections of a face image and of a non-face image are quite different, a method of detecting faces in an image is obtained. They applied the method to a database of 2500 face images of 16 subjects, digitized at all combinations of 3 head orientations, 3 head sizes and 3 lighting conditions. They conducted several experiments to test the robustness of their approach to illumination changes, variations in size, head orientation, and differences between training and test conditions. They reported that the system was fairly robust to illumination changes, but degrades quickly as the scale changes [5]. This can be explained by the correlation between images obtained under different illumination conditions; the correlation between face images at different scales is rather low. The eigenface approach works well as long as the test image is similar to the training images used for obtaining the eigenfaces.

C. Face Recognition by LDA
Etemad and Chellappa [8] proposed a method applying Linear/Fisher Discriminant Analysis to the face recognition process. LDA is carried out via scatter matrix analysis. The aim is to find the optimal projection that maximizes the between-class scatter of the face data and minimizes the within-class scatter. As in the case of PCA, where the eigenfaces are calculated by eigenvalue analysis, the projections of LDA are calculated by the generalized eigenvalue equation.

Subspace LDA: An alternative method, which combines PCA and LDA, has been studied. This method consists of two steps: the face image is projected into
the eigenface space, which is constructed by PCA, and then the eigenface-space projected vectors are projected into the LDA classification space to construct a linear classifier. In this method, the choice of the number of eigenfaces used in the first step is critical; the choice enables the system to generate class-separable features via LDA from the eigenface space representation. The generalization/overfitting problem can be solved in this manner. In these studies, a weighted distance metric guided by the LDA eigenvalues was also employed to improve the performance of the subspace LDA method.

II. BACKGROUND INFORMATION
Most research on face recognition falls into two main categories: feature-based and holistic. Feature-based approaches to face recognition rely on the detection and characterization of individual facial features and their geometrical relationships. Such features generally include the eyes, nose, and mouth. The detection of faces and their features prior to performing verification or recognition makes these approaches robust to positional variations of the faces in the input image. Holistic or global approaches to face recognition, on the other hand, involve encoding the entire facial image; thus, they assume that all faces are constrained to particular positions, orientations, and scales [10-14].

Feature-based approaches were more predominant in the early attempts at automating the process of face recognition. Some of this early work involved the use of very simple image processing techniques (such as edge detection, signatures, and so on) for detecting faces and their features. In this approach an edge map was first extracted from an input image and then matched to a large oval template, with possible variations in position and size.
The presence of a face was then confirmed by searching for edges at estimated locations of certain features like the eyes and mouth [10-14].

A successful holistic approach to face recognition uses the Karhunen-Loeve transform (KLT). This transform exhibits pattern recognition properties that were largely overlooked initially because of the complexity involved in its computation. But the KLT does not achieve adequate robustness against variations in face orientation, position, and illumination, which is why it is usually accompanied by further processing to improve its performance. An alternative holistic method for face recognition is considered here and compared to the popular KLT-based approach. The basic idea is to use the discrete cosine transform (DCT) as a means of feature extraction for later face classification. The DCT is computed for a cropped version of an input image containing a face, and only a small subset of the coefficients is maintained as a feature vector. To improve performance, various normalization techniques are invoked prior to recognition to account for small perturbations in facial geometry and illumination. The main merit of the DCT is its relationship to the KLT: of the deterministic discrete transforms, the DCT best approaches the KLT. Thus, it is expected that it too will exhibit desirable pattern recognition capabilities [15-21].

A. Transformation Based Systems
Podilchuk and Zhang proposed a method which finds the feature vectors using the DCT to detect the critical areas of the face. The system is based on matching the image to a map of invariant facial attributes associated with specific areas of the face. This technique is quite robust, since it relies on global operations over a whole region of the face. A codebook of feature vectors or code words is determined for each subject from the training set. They examine recognition performance based on feature selection, the number of features (codebook size) and feature dimensionality. For this feature selection, we have several block-based transformations.
Among these, block-based DCT coefficients produce good low-dimensional feature vectors with high recognition performance. The main merit of the DCT is its relationship to the KLT. The KLT is an optimal transform based on various performance criteria, but the DCT becomes more valuable than the KLT in face recognition because of its computational speed. The KLT is not only more computationally intensive; it must also be redefined every time the statistics of its input signals change. Therefore, in the context of face recognition, the eigenvectors of the KLT (the eigenfaces) should ideally be recomputed every time a new face is added to the training set of known faces [15-21].

B. Karhunen-Loeve Transform
The Karhunen-Loeve Transform (KLT) is a rotation transformation that aligns the data with the eigenvectors and decorrelates the input image data. The transformed image may make evident features not discernible in the original data, or alternatively may preserve the essential information content of the image, for a given application, with a reduced number of transformed dimensions. The KLT develops a new coordinate system in the multispectral vector space, in which the data can be represented without correlation, as defined by:

Y = Gx … (1)

where Y is the new coordinate system, G is a linear transformation of the original coordinates (the transposed matrix of eigenvectors of the pixel data's covariance in x space), and x is the original coordinate system. From the equation for Y, we can obtain the principal components and choose the first principal component from this transformation, which gives the best representation of the input image.
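Equation (1) can be illustrated with a short sketch (assuming numpy; names are illustrative): G is built from the eigenvectors of the data covariance, and Y = Gx is applied to each mean-centered sample.

```python
import numpy as np

def klt(X):
    """KLT of row-vector data X (n_samples x n_features).

    Returns the transform matrix G (eigenvectors of the covariance,
    sorted by decreasing eigenvalue) and the projected data, i.e.
    Y = Gx of Eq. (1) applied to each mean-centered sample."""
    Xc = X - X.mean(axis=0)            # center the data
    cov = np.cov(Xc, rowvar=False)     # covariance of the data in x space
    vals, vecs = np.linalg.eigh(cov)   # symmetric eigendecomposition
    order = np.argsort(vals)[::-1]     # largest variance first
    G = vecs[:, order].T               # rows of G are the principal axes
    Y = Xc @ G.T                       # Y = G x for every sample
    return G, Y

# Toy data with one dominant direction: the first principal
# component should capture most of the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1)) @ np.array([[3.0, 1.0, 0.5]]) \
    + rng.normal(scale=0.1, size=(100, 3))
G, Y = klt(X)
```

Since G is orthogonal, the transform is the pure rotation the text describes, and the variances of the columns of Y are the sorted eigenvalues of the covariance.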
The KLT, a linear transform, takes its basis functions from the statistics of the signal. The KLT has been researched extensively for use in recognition systems because it is an optimal transform in the sense of energy compaction, i.e., it places as much energy as possible in as few coefficients as possible. The KLT is also called Principal Component Analysis (PCA), and is sometimes referred to in the literature as the Singular Value Decomposition (SVD). The transform is generally not separable, and thus the full matrix multiplication must be performed.

C. Quantisation
DCT-based image compression relies on two techniques to reduce the data required to represent the image. The first is quantization of the image's DCT coefficients; the second is entropy coding of the quantized coefficients. Quantization is the process of reducing the number of possible values of a quantity, thereby reducing the number of bits needed to represent it. In lossy image compression, the transformation decomposes the image into uncorrelated parts projected onto the orthogonal basis of the transformation. These bases are represented by eigenvectors, which are independent and orthogonal in nature. Taking the inverse of the transformed values retrieves the actual image data. For compression of the image, the independence of the transformed coefficients is exploited: truncating some of these coefficients does not affect the others. This truncation of the transformed coefficients is the lossy step involved in compression, known as quantization. The DCT is therefore perfectly reconstructing only when all the coefficients are calculated and stored to their full resolution. For high compression, the DCT coefficients are normalized by different scales, according to the quantization matrix [22].
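The quantization step can be sketched as follows (assuming numpy; the quantization matrix below is a made-up illustrative table with coarser steps toward high frequencies, not the standard JPEG luminance table):

```python
import numpy as np

# Illustrative 8x8 quantization matrix: step sizes grow toward the
# high-frequency (lower-right) corner, in the style of JPEG tables.
Q = 8 + 4 * (np.arange(8)[:, None] + np.arange(8)[None, :])

def quantize(coeffs, Q):
    """Normalize each DCT coefficient by its step size and round --
    the lossy truncation described in the text."""
    return np.round(coeffs / Q).astype(int)

def dequantize(qcoeffs, Q):
    """Approximate the original coefficients by multiplying back."""
    return qcoeffs * Q

coeffs = np.linspace(-400, 400, 64).reshape(8, 8)   # stand-in DCT block
restored = dequantize(quantize(coeffs, Q), Q)
# The reconstruction error of each coefficient is at most half a step,
# so larger steps for high frequencies discard more detail.
```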
Vector quantization (VQ) is mainly used for reducing or compressing image data. The application of VQ to images for compression started around 1975 with Hilbert, mainly for the coding of multispectral Landsat imagery [15-21].

D. Coding
After the DCT coefficients have been quantized, the DC coefficients are DPCM coded and then entropy coded along with the AC coefficients. The quantized AC and DC coefficient values are entropy coded in the same way, but because of the long runs in the AC coefficients, an additional run-length process is applied to them to reduce their redundancy. The quantized coefficients are all rearranged in a zigzag order. A run in this zigzag order is described by a RUN-SIZE symbol. RUN is a count of how many zeros occurred before the quantized coefficient, and the SIZE symbol is used in the same way as for the DC coefficients, but on their AC counterparts. The two symbols are combined to form a RUN-SIZE symbol, which is then entropy coded. Additional bits are also transmitted to specify the exact value of the quantized coefficient. A size of zero in the AC coefficients is used to indicate that the rest of the 8 × 8 block is zeros (End of Block, or EOB) [1].

Huffman Coding: Huffman coding is an efficient source-coding algorithm for source symbols that are not equally probable. A variable-length encoding algorithm was suggested by Huffman in 1952, based on the source symbol probabilities P(xi), i = 1, 2, …, L. The algorithm is optimal in the sense that the average number of bits required to represent the source symbols is a minimum, provided the prefix condition is met [15-21]. The steps of the Huffman coding algorithm are given below:
- Arrange the source symbols in increasing order of their probabilities.
- Take the bottom two symbols and tie them together.
Add the probabilities of the two symbols and write the sum on the combined node. Label the two branches with a '1' and a '0'.
- Treat this sum of probabilities as a new probability associated with a new symbol. Again pick the two smallest probabilities and tie them together to form a new probability. Each time we combine two symbols we reduce the total number of symbols by one. Whenever we tie together two probabilities (nodes), we label the two branches with a '0' and a '1'.
- Continue the procedure until only one node is left (its probability should be one if your addition is correct). This completes the construction of the Huffman tree.
- To find the prefix codeword for any symbol, follow the branches from the final node back to the symbol, reading out the labels on the branches along the way. This is the codeword for the symbol.

Huffman Decoding: The Huffman code is an instantaneous, uniquely decodable block code. It is a block code because each source symbol is mapped into a fixed sequence of code symbols. It is instantaneous because each codeword in a string of code symbols can be decoded without referencing succeeding symbols; that is, in any given Huffman code, no codeword is a prefix of any other codeword. And it is uniquely decodable because a string of code symbols can be decoded in only one way. Thus any string of Huffman-encoded symbols can be decoded by examining the individual symbols of the string in a left-to-right manner. Because we are using an instantaneous, uniquely decodable block code, there is no need to insert delimiters between the encoded pixels. For example, consider the 19-bit string
1010000111011011111, which can be decoded uniquely as x1 x3 x2 x4 x1 x1 x7 [7]. A left-to-right scan of the string reveals the first valid codeword, which corresponds to x1; continuing in this manner, we obtain the completely decoded sequence x1 x3 x2 x4 x1 x1 x7.

E. Comparison with the KLT
The KLT completely decorrelates a signal in the transform domain, minimizes MSE in data compression, contains the most energy (variance) in the fewest number of transform coefficients, and minimizes the total representation entropy of the input sequence. All of these properties, particularly the first two, are extremely useful in pattern recognition applications. The computation of the KLT essentially involves determining the eigenvectors of a covariance matrix of a set of training sequences (images, in the case of face recognition). As for the computational complexity of the DCT and the KLT, it is evident from the above overview that the KLT requires significant processing during training, since its basis set is data-dependent. This overhead in computation, albeit occurring in a non-time-critical off-line training process, is avoided with the DCT. As for online feature extraction, the KLT of an N × N image can be computed in O(M′N²) time, where M′ is the number of KLT basis vectors. In comparison, the DCT of the same image can be computed in O(N² log₂ N) time because of its relation to the discrete Fourier transform, which can be implemented efficiently using the fast Fourier transform. This means that the DCT can be computationally more efficient than the KLT, depending on the size of the KLT basis set.

It is thus concluded that the discrete cosine transform is very well suited to application in face recognition.
Because of the similarity of its basis functions to those of the KLT, the DCT exhibits striking feature extraction and data compression capabilities. In fact, coupled with these, the ease and speed of computation of the DCT may even favor it over the KLT in face recognition [15-21].

III. SYSTEM STRUCTURE
In the local appearance-based face representation approach, a detected and normalized face image is divided into blocks of 8x8 pixels. On each 8x8 block, the DCT is performed. The obtained DCT coefficients are ordered using zigzag scanning, and from the ordered coefficients, M of them are selected according to the feature selection strategy, resulting in an M-dimensional local feature vector [15-21]. Finally, the DCT coefficients extracted from each block are concatenated to construct the feature vector. This is shown in Figs. 1 and 2.

Fig. 1 Embedding Local Appearance-Based Face Representation

A. Algorithm Explanation
The Discrete Cosine Transform (DCT): A DCT expresses a sequence of finitely many data points in terms of a sum of cosine functions. DCTs are important to numerous applications in science and engineering. The use of cosine rather than sine functions is critical in these applications: for compression, it turns out that cosine functions are much more efficient. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. This type of frequency transform is orthogonal, real, and separable, and algorithms for its computation have proved to be computationally efficient. The DCT is a widely used frequency transform because it closely approximates the optimal KLT while not suffering from the drawbacks of applying the KLT. Since the KLT is constructed from the eigenvalues and the corresponding eigenvectors of the data to be transformed, it is signal dependent and there is no general algorithm for its fast computation.
The DCT does not suffer from these drawbacks, due to its data-independent basis functions and several algorithms for fast implementation. A discrete cosine transform (DCT) is a sinusoidal unitary transform. The DCT has been used in digital signal and image processing, and particularly in transform coding systems for data compression/decompression [15-21].
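The block-based extraction described at the start of Section III (8x8 blocks, 2-D DCT, zigzag scan, first M coefficients per block) can be sketched as follows (a minimal sketch assuming numpy and scipy; the helper and parameter names are illustrative, and M = 10 is an arbitrary choice):

```python
import numpy as np
from scipy.fftpack import dct

def block_dct_features(image, block=8, M=10):
    """Split the image into block x block tiles, apply a 2-D DCT-II to
    each tile, zigzag-scan the coefficients, keep the first M per tile,
    and concatenate into one feature vector."""
    h, w = image.shape
    # Zigzag order: traverse anti-diagonals, alternating direction.
    idx = sorted(((r, c) for r in range(block) for c in range(block)),
                 key=lambda rc: (rc[0] + rc[1],
                                 rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    features = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            tile = image[r:r + block, c:c + block].astype(float)
            # 2-D DCT-II via the row-column decomposition.
            coeffs = dct(dct(tile, axis=0, norm='ortho'), axis=1, norm='ortho')
            zz = np.array([coeffs[i, j] for i, j in idx])
            features.append(zz[:M])
    return np.concatenate(features)

# Toy usage: a 32x32 "face" yields (32/8)^2 = 16 blocks of 10 coefficients.
face = np.random.default_rng(1).integers(0, 256, size=(32, 32))
vec = block_dct_features(face)
```

The zigzag scan places the DC coefficient first and increasingly high frequencies after it, so keeping the first M coefficients retains the low-frequency content of each block.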
Fig. 2 System Architecture

This type of frequency transform is real, orthogonal, and separable, and algorithms for its computation have proved to be computationally efficient. In fact, the DCT has been employed as the main processing tool for data compression/decompression in international image and video coding standards. Present digital signal and image processing applications involve only the even types of DCT; the discussion here is therefore restricted to the four even types.

In the following, N is assumed to be an integer power of 2, i.e., N = 2^m. A subscript of a matrix denotes its order, while a superscript denotes the version number. The four even types of DCT in matrix form are defined as:

DCT-I:   [C_{N+1}^I]_{n,k}  = sqrt(2/N) ε_n ε_k cos(πnk/N),              n, k = 0, 1, …, N
DCT-II:  [C_N^II]_{n,k}  = sqrt(2/N) ε_k cos(π(2n+1)k/(2N)),             n, k = 0, 1, …, N−1
DCT-III: [C_N^III]_{n,k} = sqrt(2/N) ε_n cos(π(2k+1)n/(2N)),             n, k = 0, 1, …, N−1
DCT-IV:  [C_N^IV]_{n,k}  = sqrt(2/N) cos(π(2n+1)(2k+1)/(4N)),            n, k = 0, 1, …, N−1

where ε_p = 1/√2 for p = 0 or p = N, and ε_p = 1 otherwise.

[Fig. 2 shows the runtime matching flow: database images and the probe each undergo geometry normalization, illumination normalization, and DCT-based feature extraction; the probe features are then compared against the stored database features by a quick search to produce a match.]
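As a sanity check of the definitions above, the DCT-II matrix can be generated directly from its formula and verified to be orthogonal (a sketch assuming numpy):

```python
import numpy as np

def dct2_matrix(N):
    """Build the N x N DCT-II matrix C with entries
    C[n, k] = sqrt(2/N) * eps_k * cos(pi * (2n + 1) * k / (2N)),
    following the definition in the text; rows are indexed by n,
    columns by k."""
    n = np.arange(N)[:, None]
    k = np.arange(N)[None, :]
    eps = np.where(k == 0, 1 / np.sqrt(2), 1.0)
    return np.sqrt(2 / N) * eps * np.cos(np.pi * (2 * n + 1) * k / (2 * N))

C = dct2_matrix(8)
```

Orthogonality (C^T C = I) confirms that the DCT-III, the transpose of the DCT-II, is its inverse, which is why the pair serves as the forward/inverse transform in coding standards.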
DCT matrices are real and orthogonal. All DCTs are separable transforms: the multi-dimensional transform can be decomposed into a successive application of one-dimensional (1-D) transforms in the appropriate directions. The DCT is a widely used frequency transform because it closely approximates the optimal KLT while not suffering from the drawbacks of applying the KLT. This close approximation is based upon the asymptotic equivalence of the family of DCTs to the KLT for a first-order stationary Markov process, in terms of the transform size and the inter-element correlation coefficient. Recall that the KLT is an optimal transform for data compression in a statistical sense because it decorrelates a signal in the transform domain, packs the most information into a few coefficients, and minimizes the mean-square error between the reconstructed and original signal compared to any other transform. However, since the KLT is constructed from the eigenvalues and corresponding eigenvectors of a covariance matrix of the data to be transformed, it is signal-dependent, and there is no general algorithm for its fast computation [15-21]. The DCT does not suffer from these drawbacks, due to its data-independent basis functions and several fast algorithms.

The DCT provides a good trade-off between energy packing ability and computational complexity. The energy packing property of the DCT is superior to that of any other unitary transform; this is important because such transforms pack the most information into the fewest coefficients and yield the smallest reconstruction errors. DCT basis images are image-independent, as opposed to those of the optimal KLT, which is data-dependent.
Another benefit of the DCT, compared to other image-independent transforms, is that it has been implemented in a single integrated circuit.

The performance of the DCT-II is closest to the statistically optimal KLT based on a number of performance criteria, including energy packing efficiency, variance distribution, rate distortion, residual correlation, and possessing the maximum reducible bits. Furthermore, the DCT-II is characterized by superiority in bandwidth compression (redundancy reduction) for a wide range of signals and by the existence of fast algorithms for its implementation. As this is the case, the DCT-II and its inverse, the DCT-III, have been employed in international image/video coding standards: JPEG for compression of still images, MPEG for compression of video including HDTV (High Definition Television), H.261 for compression in telephony and teleconferencing, and H.263 for visual communication over telephone lines.

One-dimensional DCT: The one-dimensional DCT-II (1-D DCT) converts a spatial-domain waveform into its constituent frequency components, represented by a set of coefficients. The process of reconstructing the spatial-domain samples from these coefficients, the one-dimensional DCT-III, is called the Inverse Discrete Cosine Transform (1-D IDCT). The 1-D DCT has most often been used in applying the two-dimensional DCT (2-D DCT) by employing the row-column decomposition, which is also more suitable for hardware implementation.

Two-dimensional DCT: The Discrete Cosine Transform is one of many transforms that take their input and transform it into a linear combination of weighted basis functions; these basis functions are commonly frequency components. The 2-D DCT of an N x M image f(x, y) is computed as a 1-D DCT applied twice, once in the x direction and again in the y direction. In computing the 2-D DCT, this factoring reduces the problem to applying a series of 1-D DCT computations.
The two interchangeable steps in calculating the 2-D DCT are:
- Apply the 1-D DCT (vertically) to the columns.
- Apply the 1-D DCT (horizontally) to the result of step 1.

In most compression schemes such as JPEG and MPEG, typically an 8x8 or 16x16 subimage, or block, of pixels is used in applying the 2-D DCT (8x8 is the optimal trade-off between compression efficiency and computational complexity). The DCT helps separate the image into parts (or spectral sub-bands) of differing importance with respect to the image's visual quality. The DCT transforms the input into a linear combination of weighted basis functions; these basis functions are the frequency components of the input data. For most images, much of the signal energy lies at low frequencies, which are relocated to the upper-left corner of the DCT. Conversely, the lower-right values of the DCT array represent higher frequencies and turn out to be smaller in magnitude, especially as u and v approach the subimage width and height, respectively [15-21]. Here we show the basic algorithm:
1. Maintain a database with images in JPEG format.
2. Select a probe image for testing.
3. Convert the probe image and database images by applying normalization techniques.
4. Process of normalization:
   i. The probe and database images are converted into grayscale images.
   ii. The converted images are resized to 8 x 8 or 16 x 16.
   iii. Feature vector values are derived for the resized images.
   iv. The DCT technique is applied to the feature vector values.
   v. Finally, the covariance is computed for both the database image and the probe.
5. Subtract the covariance and feature vector values of the probe and database image.
6. If the output value is zero or negative, the image is recognized; otherwise, the image is not recognized.

IV. CASE STUDY
Here we study some of the results with different images.

Result objectives: Both images are the same, so the image is recognized; here we enter the angle value 0. S = -1.8084e+003. The output is zero or negative, so the object is recognized.
Conclusion: Both images are the same size and at the same angle. When the angles are equal, the result is a pass and the image is recognized. Shown in Fig 3.
Fig 3 Analysis 1

Result objectives: Here we take the same image with equal sizes, but at different angles. Here we enter the angle value 8; S = 6.8758e+003. The output is positive, so the object is not recognized.
Conclusion: Both images are the same size but at different angles. If the angles are equal, the result is a pass; otherwise it fails. The image is not recognized because the value is positive. Shown in Fig 4.
Fig 4 Analysis 2

Result objectives: Different images are not matched. Enter the input image pic5.jpg and the database image pic1.jpg. S = 4.8084e+003. The two images are not the same.
Conclusion: The input and database images are not the same, so they are not matched. Shown in Fig 5.
Fig 5 Analysis 3

Result objectives: Both images are matched. Enter the input image pic5.jpg and the database image pic5.jpg. S = 0. The output is zero or negative, so the object is recognized; both images are the same.
Conclusion: The input and database images are the same. Shown in Fig 6.
Fig 6 Analysis 4

V. CONCLUSION AND FUTURE WORK
We have presented the effects of feature selection and feature normalization on the performance of a local appearance-based face recognition scheme. Here, three different feature sets and two normalization techniques were used, and the effects of distance metrics on performance were analyzed. This approach was based on the discrete cosine transform, and experimental evidence was presented to confirm its usefulness and robustness. The mathematical relationship between the discrete cosine transform (DCT) and the Karhunen-Loeve transform (KLT) explains the near-optimal performance of the former. Experimentally, the DCT was shown to perform very well in face recognition, just like the KLT. Face normalization techniques were also incorporated in the face recognition system discussed here. A perfect alignment between a gallery image and a probe image is required. The method can also be extended to pose-invariant face recognition, centered on modeling the joint appearance of gallery and probe images across pose, which does not require facial landmarks to be detected.

REFERENCES
[1]. R. Gottumukkal, V.K. Asari, "An Improved Face Recognition Technique Based on Modular PCA Approach", Pattern Recognition Letters, 25(4), 2004.
[2]. B. Heisele et al., "Face Recognition: Component-Based versus Global Approaches", Computer Vision and Image Understanding, 91:6-21, 2003.
[3]. T. Kim et al., "Component-Based LDA Face Description for Image Retrieval and MPEG-7 Standardisation", Image and Vision Computing, 23(7):631-642, 2005.
[4]. A. Pentland, B. Moghaddam, T. Starner and M. Turk, "View-Based and Modular Eigenspaces for Face Recognition", Proceedings of IEEE CVPR, pp. 84-91, 1994.
[5]. Z. Pan and H. Bolouri, "High Speed Face Recognition Based on Discrete Cosine Transforms and Neural Networks", Technical Report, University of Hertfordshire, UK, 1999.
[6]. Z. M. Hafed and M. D. Levine, "Face Recognition Using the Discrete Cosine Transform", International Journal of Computer Vision, 43(3), 2001.
[7]. C. Sanderson and K. K. Paliwal, "Features for Robust Face-Based Identity Verification", Signal Processing, 83(5), 2003.
[8]. T. Sim, S. Baker, and M. Bsat, "The CMU Pose, Illumination, and Expression (PIE) Database", Proc. of Intl. Conf. on Automatic Face and Gesture Recognition, 2002.
[9]. W. L. Scott, "Block-Level Discrete Cosine Transform Coefficients for Autonomic Face Recognition", PhD Thesis, Louisiana State University, USA, May 2003.
[10]. A. Nefian, "A Hidden Markov Model-Based Approach for Face Detection and Recognition", PhD Thesis, Georgia Institute of Technology, 1999.
[11]. H.K. Ekenel, R. Stiefelhagen, "A Generic Face (FRGC), Arlington, VA, USA, March 2006.
[12]. H.K. Ekenel, R. Stiefelhagen, "A Generic Face Representation Approach for Local Appearance Based Face Verification", CVPR IEEE Workshop on FRGC Experiments, 2005.
[13]. H.K. Ekenel, A. Pnevmatikakis, "Video-Based Face Recognition Evaluation in the CHIL Project - Run 1", 7th Intl. Conf. on Automatic Face and Gesture Recognition (FG 2006), Southampton, UK, April 2006.
[14]. A.M. Martinez, R. Benavente, "The AR Face Database", CVC Technical Report #24, 1998.
[15]. M. Bartlett et al., "Face Recognition by Independent Component Analysis", IEEE Trans. on Neural Networks, 13(6):1450-1454, 2002.
[16]. K.I. Kim, K. Jung, H.J. Kim, "Face Recognition Using Kernel Principal Component Analysis", IEEE Signal Processing Letters, 9(2):40-42, 2002.
[17]. C. Yeo, Y. H. Tan and Z. Li, "Low-Complexity Mode-Dependent KLT for Block-Based Intra Coding", IEEE ICIP 2011, 2011.
[18]. H. Seyedarabi, A. Aghagolzadeh and S. Khanmohammadi, "Recognition of Six Basic Facial Expressions by Feature-Points Tracking Using RBF Neural Network and Fuzzy Inference System", IEEE Int. Conf. on Multimedia and Expo (ICME), pp. 1219-1222, 2004.
[19]. H. Seyedarabi, A. Aghagolzadeh and S. Khanmohammadi, "Facial Expression Animation and Lip Tracking Using Facial Characteristic Points and Deformable Model", Transactions on Engineering, Computer and Technology, vol. 1, pp. 416-419, Dec 2004.
[20]. D. Vukadinovic and M. Pantic, "Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boosted Classifiers", Proceedings of IEEE International Conference on Systems, Man and Cybernetics, pp. 1692-1698, 2005.
[21]. F. Mokhayeri, M.-R. Akbarzadeh-T, "A Novel Facial Feature Extraction Method Based on ICM Network for Affective Recognition", The 2011 International Joint Conference on Neural Networks (IJCNN), pp. 1988-199, 2011.
[22]. H.K. Ekenel, R. Stiefelhagen, "Local Appearance Based Face Recognition Using Discrete Cosine Transform", EUSIPCO 2005, Antalya, Turkey, 2005.
