The International Journal of Engineering and Science (IJES)



The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists and educators to publish their original research results, to exchange new ideas, and to disseminate information on innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal are blind peer-reviewed. Only original articles will be published.


The International Journal of Engineering and Science (IJES) ||Volume|| 1 ||Issue|| 2 ||Pages|| 215-220 ||2012|| ISSN: 2319 – 1813, ISBN: 2319 – 1805

Face Recognition using DCT-DWT Interleaved Coefficient Vectors with NN and SVM Classifier

1 Divya Nayak (M. Tech.), 2 Mr. Sumit Sharma
1,2 Shri Ram Institute of Technology, Jabalpur (M.P.)

Abstract
Face recognition applications are continuously gaining demand because they are required in person authentication, access control and surveillance systems. Researchers are continuously working to make such systems more accurate and faster. As part of that research, this paper presents a face recognition system which uses interleaved DWT and DCT components for feature vector formation, and tests both a Support Vector Machine and an ANN for classification. The proposed technique is implemented and comprehensively analyzed to test its efficiency, and the two classification methods are compared.

Keywords: Face Recognition, Support Vector Machine (SVM), Artificial Neural Network (ANN).

Date of Submission: 06 December 2012. Date of Publication: 25 December 2012.

I Introduction
Increasing security demands are forcing scientists and researchers to develop more advanced security systems; one of them is the biometric security system. This system is particularly preferred because of its proven natural uniqueness, and because the user has no need to carry additional devices such as cards, remotes, etc. Biometric security systems refer to the identification of humans by their characteristics or traits. Biometrics is used in computer science as a form of identification and access control [1]. It is also used to identify individuals in groups that are under surveillance. One biometric identification method uses the face of the person; this method has several applications, from online (person surveillance) to offline (scanned image identification). A face recognition system has an advantage over other biometric methods in that the face can be detected from a much greater distance, without the need for special sensors or scanning devices.

Several methods have been proposed so far for face recognition systems, using different feature extraction techniques, training approaches or classification approaches to improve the efficiency of the system. The rest of the paper is arranged as follows: the second section presents a short review of the work done so far, the third section presents the details of the technical terms used in the algorithm, and the fourth section presents the proposed algorithm, followed by analysis and conclusion in the following sections.

II Related Work
This section presents some of the most relevant recent work by other researchers. Ignas Kukenys and Brendan McCane [2] describe a component-based face detector using support vector machine classifiers. They present current results and outline plans for the future work required to achieve sufficient speed and accuracy to use SVM classifiers in an online face recognition system. They used a straightforward approach in implementing an SVM classifier with a Gaussian kernel that detects eyes in grayscale images, a first step towards a component-based face detector. Details on the design of an iterative bootstrapping process are provided, and they show which training parameter values tend to give the best results. Jixiong Wang (jameswang) [3] presented a detailed report on using support vector machines, on the application of different kernel functions (linear, polynomial, radial basis and sigmoid kernels), and on multiclass classification methods and parameter optimization. Face recognition in the presence of pose changes remains a largely unsolved problem. Severe pose changes, resulting in dramatically different appearances, are one of the main difficulties; one solution approach is presented by Antony Lam and Christian R. Shelton [4], who present a support vector machine (SVM) based system that learns the relations between corresponding local regions of the face in different poses, as well as a simple SVM based system for automatic alignment of faces in differing poses. Jennifer Huang, Volker Blanz and Bernd Heisele [5] present an approach to pose and illumination invariant face recognition that combines two recent advances in the computer vision field: component-based recognition and 3D morphable models.
In preliminary experiments they show the potential of their approach regarding pose and illumination invariance. A global versus component-based approach for face recognition with Support Vector Machines is presented by Bernd Heisele, Purdy Ho and Tomaso Poggio [6]. They present a component-based method and two global methods for face recognition and evaluate them with respect to robustness against pose changes. In the component system they first locate facial components, extract them and combine them into a single feature vector which is classified by a Support Vector Machine (SVM). The two global systems recognize faces by classifying a single feature vector consisting of the gray values of the whole face image. The component system clearly outperformed both global systems on all tests. Yongmin Li, Shaogang Gong, Jamie Sherrah and Heather Liddell [7] presented face detection across multiple views (beyond the frontal view, owing to the significant non-linear variation caused by rotation in depth, self-occlusion and self-shadowing). In their approach the view sphere is separated into several small segments, and a face detector is constructed on each segment. They explicitly estimate the pose of an image regardless of whether or not it is a face. A pose estimator is constructed using Support Vector Regression, and the pose information is used to choose the appropriate face detector to determine if the image is a face. With this pose-estimation based method, considerable computational efficiency is achieved. Meanwhile, the detection accuracy is also improved, since each detector is constructed on a small range of views. A rectangle-features based method is presented by Qiong Wang, Jingyu Yang and Wankou Yang [8]: an efficient approach to accurate face detection in still gray-level images. Characteristics of intensity and symmetry in the eye region are used as robust cues to find possible eye pairs. Three rectangle features are developed to measure the intensity relations and symmetry. The image patches corresponding to the eye-pair-like regions that have been found are considered to be face candidates, and all the candidates are then verified using an SVM. Finally, all the faces in the image are detected.

III Discrete Wavelet Transform (DWT)
In numerical analysis and functional analysis, a discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location information (location in time).

The DWT of a signal x is calculated by passing it through a series of filters. First the samples are passed through a low-pass filter with impulse response g, resulting in a convolution of the two:

  y[n] = (x * g)[n] = Σ_k x[k] g[n − k]

The signal is also decomposed simultaneously using a high-pass filter h. The outputs give the detail coefficients (from the high-pass filter) and the approximation coefficients (from the low-pass filter). It is important that the two filters are related to each other; such a pair is known as a quadrature mirror filter. Since half the frequencies of the signal have now been removed, half the samples can be discarded according to Nyquist's rule, so the filter outputs are subsampled by 2 (in Mallat's notation, which is also the common one, the naming is the opposite: g denotes the high-pass and h the low-pass filter):

  y_low[n] = Σ_k x[k] g[2n − k]
  y_high[n] = Σ_k x[k] h[2n − k]

This decomposition has halved the time resolution, since only half of each filter output characterizes the signal; however, each output has half the frequency band of the input, so the frequency resolution has been doubled.

The DWT and IDWT for a one-dimensional signal can also be described in the form of two-channel tree-structured filter banks. The DWT and IDWT for a two-dimensional image x[m,n] can be similarly defined by implementing the DWT and IDWT for each dimension m and n separately, DWT_n[DWT_m[x[m,n]]], which is shown in Figure 1.

Figure 1: DWT for two-dimensional images [9]

An image can thus be decomposed into a pyramidal structure, shown in Figure 2, with various bands of information: the low-low frequency band LL, the low-high frequency band LH, the high-low frequency band HL and the high-high frequency band HH.
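The separable row/column scheme of Figure 1 can be sketched in code. The following is a minimal sketch assuming the Haar filter pair (the simplest quadrature mirror pair; the paper does not state which wavelet family it uses), with `haar_dwt2` as an illustrative name:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar DWT: filter and downsample along one axis,
    then the other. Returns the four sub-bands LL, LH, HL, HH.
    x must have even height and width."""
    x = np.asarray(x, dtype=float)

    def analyze(rows):
        # Haar low-pass (scaled average) and high-pass (scaled difference)
        # on each pair of samples: filtering plus dyadic downsampling in one step.
        lo = (rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2)
        hi = (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)
        return lo, hi

    # DWT along n (within each row) ...
    L, H = analyze(x)
    # ... then along m (same step applied to the transposed outputs).
    LL, LH = (b.T for b in analyze(L.T))
    HL, HH = (b.T for b in analyze(H.T))
    return LL, LH, HL, HH
```

One level of this decomposition produces the LL, LH, HL and HH bands of Figure 2; repeating it on the LL band yields the pyramid.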
Figure 2: Pyramidal structure

IV Discrete Cosine Transform (DCT)
A discrete cosine transform (DCT) expresses a sequence of finitely many data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in science and engineering, from lossy compression of audio (e.g. MP3) and images (e.g. JPEG), where small high-frequency components can be discarded, to spectral methods for the numerical solution of partial differential equations.

Formally, the discrete cosine transform is a linear, invertible function f: R^N → R^N (where R denotes the set of real numbers), or equivalently an invertible N × N square matrix. There are several variants of the DCT with slightly modified definitions. The N real numbers x_0, ..., x_{N−1} are transformed into the N real numbers X_0, ..., X_{N−1} according to one of these formulas. The 2D DCT of an N × N block x[m,n] is given by

  X[u,v] = α(u) α(v) Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} x[m,n] cos[π(2m+1)u / (2N)] cos[π(2n+1)v / (2N)]

where α(0) = √(1/N) and α(k) = √(2/N) for k > 0.

5. Support Vector Machine (SVM)
Support Vector Machines (SVMs) have developed from Statistical Learning Theory [6]. They have been widely applied to fields such as character, handwritten-digit and text recognition, and more recently to satellite image classification. SVMs, like ANNs and other nonparametric classifiers, have a reputation for being robust. SVMs function by nonlinearly projecting the training data in the input space to a feature space of higher dimension by use of a kernel function. This results in a linearly separable dataset that can be separated by a linear classifier. This process enables the classification of datasets which are usually nonlinearly separable in the input space. The functions used to project the data from input space to feature space are called kernels (or kernel machines); examples include polynomial, Gaussian (more commonly referred to as radial basis function) and quadratic functions. By their nature SVMs are intrinsically binary classifiers; however, there are strategies by which they can be adapted to multiclass tasks, and in our case such a multiclass strategy is needed.

5.1 SVM classification
Let x_i ∈ R^m be a feature vector or a set of input variables and let y_i ∈ {+1, −1} be a corresponding class label, where m is the dimension of the feature vector. In linearly separable cases a separating hyper-plane satisfies [8]

  y_i (w · x_i + b) ≥ 1,  i = 1, ..., n,   (1)

where the hyper-plane is denoted by a vector of weights w and a bias term b. The optimal separating hyper-plane, when classes have equal loss functions, maximizes the margin between the hyper-plane and the closest samples of the classes. The margin is given by

  margin = 2 / ||w||.   (2)

Figure 3: Maximum-margin hyper-plane and margins for an SVM trained with samples from two classes. Samples on the margin are called the support vectors.
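A toy numeric check of constraint (1) and margin (2); the samples, w and b below are purely illustrative (not from the paper):

```python
import numpy as np

# Two linearly separable classes in R^2 (hand-picked toy data).
X = np.array([[1.5, 1.5], [3.0, 3.0],    # class +1
              [0.5, 0.5], [-1.0, 0.0]])  # class -1
y = np.array([1, 1, -1, -1])

# A candidate separating hyper-plane w.x + b = 0.
w = np.array([1.0, 1.0])
b = -2.0

# Constraint (1): every sample must satisfy y_i (w.x_i + b) >= 1.
functional_margins = y * (X @ w + b)
assert (functional_margins >= 1).all()

# Margin (2): the gap between the two supporting hyper-planes is 2/||w||.
# The closest points, (1.5, 1.5) and (0.5, 0.5), attain equality in (1),
# so they are the support vectors of this toy hyper-plane.
margin = 2 / np.linalg.norm(w)
print(margin)  # ~1.414
```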
The optimal separating hyper-plane can now be found by maximizing (2) subject to (1). The solution can be found using the method of Lagrange multipliers. The objective is now to minimize the Lagrangian

  L(w, b, α) = ½ ||w||² − Σ_i α_i [y_i (w · x_i + b) − 1],   (3)

which requires that the partial derivatives with respect to w and b be zero. In (3), the α_i are nonnegative Lagrange multipliers. The partial derivatives propagate to the constraints w = Σ_i α_i y_i x_i and Σ_i α_i y_i = 0. Substituting w into (3) gives the dual form

  L(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j (x_i · x_j),   (4)

which is no longer an explicit function of w or b. The optimal hyper-plane can be found by maximizing (4) subject to Σ_i α_i y_i = 0, with all Lagrange multipliers nonnegative. In the solution of the Lagrangian, all data points with nonzero (and nonnegative) Lagrange multipliers are called support vectors (SV).

However, in most real-world situations classes are not linearly separable and it is not possible to find a linear hyper-plane that satisfies (1) for all i = 1, ..., n. In these cases a classification problem can be made linearly separable by using a nonlinear mapping into a feature space where the classes are linearly separable. The condition for perfect classification can now be written as

  y_i (w · Φ(x_i) + b) ≥ 1,  i = 1, ..., n,   (5)

where Φ is the mapping into the feature space. Note that the feature mapping may change the dimension of the feature vector. The problem now is how to find a suitable mapping Φ to the space where the classes are linearly separable. It turns out that the mapping need not be known explicitly, as can be seen by writing the Lagrangian of (5) in the dual form

  L(α) = Σ_i α_i − ½ Σ_i Σ_j α_i α_j y_i y_j (Φ(x_i) · Φ(x_j))   (6)

and replacing the inner product in (6) with a suitable kernel function K(x_i, x_j). This form arises from the same procedure as in the linearly separable case, that is, writing the Lagrangian of (5), solving the partial derivatives, and substituting them back into the Lagrangian. Using this kernel trick, we can remove the explicit calculation of the mapping Φ and need only solve the Lagrangian (6) in dual form, where the inner product has been replaced by the kernel function, in nonlinearly separable cases.

Often a hyper-plane that separates the training data perfectly would be very complex and would not generalize well to external data, since data generally include some noise and outliers. Therefore, we should allow some violation of (1) and (5). This is done with nonnegative slack variables ζ_i:

  y_i (w · Φ(x_i) + b) ≥ 1 − ζ_i,  ζ_i ≥ 0.

The slack variables are adjusted by the regularization constant C, which determines the trade-off between complexity and the generalization properties of the classifier. This limits the Lagrange multipliers in the dual objective function (6) to the range 0 ≤ α_i ≤ C. Any function that is derived from mappings to the feature space satisfies the conditions for a kernel function. The choice of a kernel depends on the problem at hand, because it depends on what we are trying to model.

The SVM approach gives the following advantages over neural networks and other AI methods. SVM training always finds a global minimum, and the simple geometric interpretation of SVMs provides fertile ground for further investigation. Most often Gaussian kernels are used, in which case the resulting SVM corresponds to an RBF network with Gaussian radial basis functions. As the SVM approach "automatically" solves the network complexity problem, the size of the hidden layer is obtained as the result of the QP procedure. Hidden neurons and support vectors correspond to each other, so the center problem of the RBF network is also solved, as the support vectors serve as the basis function centers. Classical learning systems like neural networks suffer from theoretical weaknesses, e.g. back-propagation usually converges only to locally optimal solutions; here SVMs can provide a significant improvement. The absence of local minima marks a major departure from traditional systems such as neural networks. SVMs have also been developed in the reverse order to the development of neural networks (NNs): SVMs evolved from sound theory to implementation and experiments, while NNs followed a more heuristic path, from applications and extensive experimentation to theory.
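The kernel substitution can be checked directly in a small case. For the homogeneous degree-2 polynomial kernel K(x, z) = (x · z)^2 on R^2, the explicit map Φ(x) = (x1^2, √2·x1·x2, x2^2) reproduces the feature-space inner product; this is a standard identity, used here only as an illustration:

```python
import numpy as np

def poly_kernel(x, z):
    """Homogeneous polynomial kernel of degree 2: K(x, z) = (x . z)^2."""
    return float(np.dot(x, z)) ** 2

def phi(x):
    """Explicit feature map into R^3 whose inner product reproduces K."""
    x1, x2 = x
    return np.array([x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

# Kernel value computed in the input space ...
k_direct = poly_kernel(x, z)  # (1*3 + 2*(-1))^2 = 1
# ... equals the inner product in the feature space, so the SVM never
# needs to form phi explicitly.
k_mapped = float(np.dot(phi(x), phi(z)))
assert np.isclose(k_direct, k_mapped)
```

The same argument is what allows the Gaussian kernel to be used even though its feature space is infinite-dimensional.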
V Proposed Algorithm
The proposed algorithm can be described in the following steps.

1. First, divide the image into 8×8 blocks, take the 2D DCT of each block, and select the components in zigzag order.
2. Second, take the 2D DWT of the image blocks and select the zigzag components of the LL band only.
3. The feature vectors are now formed by interleaving these two sets of components (even positions from the DWT and odd positions from the DCT).

Figure 4: The left block represents the DCT components and the right block represents the LL components of the DWT.

According to Figure 4, the feature vector will be formed as [A1, B1, A2, B2, A3, B3, ...].

Table 1: Result for 40 Faces and 10 Samples Each using SVM

Face No.  TP    TN      FP      FN    Accuracy  Precision  Recall  F-Meas.
1         0.6   0.9889  0.0111  0.4   0.95      0.8571     0.6     0.7059
2         0.8   1       0       0.2   0.98      1          0.8     0.8889
3         1     0.9889  0.0111  0     0.99      0.9091     1       0.9524
4         1     0.9889  0.0111  0     0.99      0.9091     1       0.9524
5         0.9   0.9667  0.0333  0.1   0.96      0.75       0.9     0.8182
6         1     0.9444  0.0556  0     0.95      0.6667     1       0.8
7         1     0.9778  0.0222  0     0.98      0.8333     1       0.9091
8         0.9   1       0       0.1   0.99      1          0.9     0.9474
9         0.8   0.9889  0.0111  0.2   0.97      0.8889     0.8     0.8421
10        0.8   0.9889  0.0111  0.2   0.97      0.8889     0.8     0.8421
Average   0.88  0.9833  0.0167  0.12  0.973     0.8703     0.88    0.8658

Table 2: Result for 40 Faces and 10 Samples Each using NN

Face No.  TP    TN      FP      FN    Accuracy  Precision  Recall  F-Meas.
1         0.9   0.9889  0.0111  0.1   0.98      0.9        0.9     0.9
2         0.8   1       0       0.2   0.98      1          0.8     0.8889
3         1     0.9889  0.0111  0     0.99      0.9091     1       0.9524
4         0.8   1       0       0.2   0.98      1          0.8     0.8889
5         0.9   0.9889  0.0111  0.1   0.98      0.9        0.9     0.9
6         0.8   0.9667  0.0333  0.2   0.95      0.7273     0.8     0.7619
7         0.9   0.9889  0.0111  0.1   0.98      0.9        0.9     0.9
8         0.6   0.9778  0.0222  0.4   0.94      0.75       0.6     0.6667
9         1     0.9889  0.0111  0     0.99      0.9091     1       0.9524
10        0.7   1       0       0.3   0.97      1          0.7     0.8235
Average   0.84  0.9889  0.0111  0.16  0.974     0.8995     0.84    0.8635

Table 3: Result for Training Time and Matching Time for ANN

Number of  ANN Training  ANN Matching
Faces      Time (Sec.)   Time (Sec.)
10         0.061151      0.011579
20         0.13404       0.017074
40         0.35419       0.030042
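Steps 1-3 of the proposed algorithm can be sketched as follows. The wavelet choice (Haar), the zigzag implementation and the kept vector length `n_coeffs` are illustrative assumptions (the paper does not fix these details), and `dct2`, `zigzag`, `haar_ll` and `feature_vector` are hypothetical helper names:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block, computed as C @ block @ C.T."""
    n = block.shape[0]
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)  # alpha(0) scaling of the DC row
    return c @ block @ c.T

def zigzag(a):
    """Scan a 2D array in JPEG-style zigzag order (low to high frequency)."""
    h, w = a.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return np.array([a[i, j] for i, j in order])

def haar_ll(block):
    """LL sub-band of a one-level 2D Haar DWT (illustrative wavelet choice)."""
    lo_rows = (block[:, 0::2] + block[:, 1::2]) / np.sqrt(2)
    return (lo_rows[0::2, :] + lo_rows[1::2, :]) / np.sqrt(2)

def feature_vector(block, n_coeffs=8):
    """Interleave zigzag DCT coefficients (odd slots) with zigzag LL-band
    DWT coefficients (even slots), as in Figure 4: [A1, B1, A2, B2, ...]."""
    a = zigzag(dct2(block))[:n_coeffs]     # A1, A2, ... from the DCT
    b = zigzag(haar_ll(block))[:n_coeffs]  # B1, B2, ... from the DWT LL band
    out = np.empty(2 * n_coeffs)
    out[0::2] = a
    out[1::2] = b
    return out
```

Per step 4, one such vector is built for every 8×8 block of every training image of every class before the classifiers are trained.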
4. As in the above steps, these vectors are created for all classes of faces.
5. These vectors are used to train the N(N-1)/2 SVM classifiers (N is the number of classes), as we used the one-against-one method.
6. For detection, the input image vectors are calculated in the same way as during training and then applied to each classifier.
7. Finally, the decision is made on the basis of the majority class returned by the N(N-1)/2 classifiers.

VI Simulation Results
We used the ORL database for testing our algorithm. The ORL database contains 40 different faces with 10 samples of each face. The accuracy of the algorithm is tested for different numbers of faces, samples and vector lengths. The comparison of the proposed method with the Neural Network and PCA based methods shows that the proposed method outperforms the other two.

Table 4: Result for Training Time and Matching Time for SVM

Number of  SVM Training  SVM Matching
Faces      Time (Sec.)   Time (Sec.)
10         0.53541       0.067786
20         2.2608        0.31951
40         9.2771        1.3173

VII Conclusion
This paper presents a mixed DCT-DWT approach for feature extraction; during the classification phase, the Support Vector Machine (SVM) and a Neural Network are tested for robust decisions in the presence of wide facial variations. The experiments that we conducted on the ORL database showed that the SVM method performs better than the ANN in detection accuracy, but in training time and detection time the neural network outperforms the SVM. In future work the two can also be compared using different kernel functions and learning techniques.

References
[1] "Biometrics: Overview", 6 September 2007. Retrieved
[2] Ignas Kukenys and Brendan McCane, "Support Vector Machines for Human Face Detection", NZCSRSC '08, Christchurch, New Zealand.
[3] Jixiong Wang (jameswang), "CSCI 1950F Face Recognition Using Support Vector Machine: Report", Spring 2011, May 24, 2011.
[4] Antony Lam and Christian R. Shelton, "Face Recognition and Alignment using Support Vector Machines", 2008 IE.
[5] Jennifer Huang, Volker Blanz and Bernd Heisele, "Face Recognition Using Component-Based SVM Classification and Morphable Models", SVM 2002, LNCS 2388, pp. 334-341, Springer-Verlag Berlin Heidelberg, 2002.
[6] Bernd Heisele, Purdy Ho and Tomaso Poggio, "Face Recognition with Support Vector Machines: Global versus Component-based Approach", Massachusetts Institute of Technology, Center for Biological and Computational Learning, Cambridge, MA 02142.
[7] Yongmin Li, Shaogang Gong, Jamie Sherrah and Heather Liddell, "Support vector machine based multi-view face detection and recognition", Image and Vision Computing 22 (2004) 413-427.
[8] Qiong Wang, Jingyu Yang, and Wankou Yang, "Face Detection using Rectangle Features and SVM", International Journal of Electrical and Computer Engineering 1:7, 2006.
[9] Nataša Terzija, Markus Repges, Kerstin Luck, and Walter Geisselhardt, "Digital Image Watermarking using Discrete Wavelet Transform: Performance Comparison of Error Correction Codes".