fuzzy LBP for face recognition ppt

Face features are extracted using a fuzzy LBP method and tested with SVM and KNN classifiers.



Transcript

  • 1. A NOVEL LBP FUZZY FEATURE EXTRACTION METHOD FOR FACE RECOGNITION. By ABDULLAH GUBBI, P.A. College of Engineering, Karnataka; MADASU HANMANDLU, Senior IEEE Member, EE Dept., IIT Delhi, New Delhi, India; MOHAMMAD FAZLE AZEEM, Senior IEEE Member, EE Dept., AMU, UP
  • 2. Agenda  Face recognition.  Fuzzy logic.  Local Binary Pattern.  Information set.  K-Nearest Neighbour classifier; Support Vector Machine.  Results and conclusion.
  • 3. Face recognition  Face recognition has received much attention during the past few decades.  Challenges: (i) pose variation (frontal, non-frontal), (ii) occlusion, (iii) image orientation, (iv) illumination conditions and (v) facial expression.  Algorithms: (i) structure-based schemes that use the shape and texture of the face along with 3D depth information; (ii) appearance-based schemes that use holistic texture features.  The Eigenfaces (PCA) and Fisherfaces (LDA) methods are based on the holistic approach.
  • 4. PCA & LDA  The Eigenfaces (PCA) and Fisherfaces (LDA) methods are based on the holistic approach to face recognition.  The Eigenfaces approach is likely to find the wrong components when there is a large variation in illumination, since the data points with the maximum variance over all classes are not necessarily useful for classification.  The Fisherfaces method is computationally expensive.  In view of the shortcomings of the above techniques, we take advantage of information sets, which have been developed by Hanmandlu to enlarge the scope of fuzzy sets.
  • 5. Experiment using Haar wavelet de-noising [4]

    Recognition rate                  | PCA    | LDA    | KPCA   | FA
    ORL database (with noise)         | 66.07% | 86.07% | 49.29% | 85.70%
    Proposed database (without noise) | 66.43% | 89.29% | 51.07% | 86.07%

    where PCA = Principal Component Analysis, LDA = Linear Discriminant Analysis, KPCA = Kernel Principal Component Analysis and FA = Fisher Analysis.
  • 6. Local Binary Patterns  The Local Binary Pattern (LBP) method is widely used in 2D texture analysis. The LBP operator is a non-parametric 3x3 kernel that describes the local spatial structure of an image.  Introduced by Ojala et al. [7].  LBP is defined as an ordered set of binary comparisons of pixel intensities between the centre pixel and its eight surrounding pixels. The decimal value of the resulting 8-bit word (LBP code) gives 2^8 = 256 possible combinations, which are called Local Binary Patterns.
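The 3x3 LBP operator described above can be sketched as follows; the clockwise neighbour ordering and the ">= centre" thresholding convention are one common choice, not necessarily the exact convention used by the authors:

```python
import numpy as np

def lbp_code(window):
    """Basic 3x3 LBP code for a single window.

    Each of the 8 neighbours is compared with the centre pixel;
    a neighbour >= centre contributes a 1-bit, giving an 8-bit
    word with 2**8 = 256 possible patterns.
    """
    window = np.asarray(window, dtype=float)
    center = window[1, 1]
    # Neighbours in clockwise order starting at the top-left pixel.
    neighbours = [window[0, 0], window[0, 1], window[0, 2],
                  window[1, 2], window[2, 2], window[2, 1],
                  window[2, 0], window[1, 0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:
            code |= 1 << bit
    return code
```

For example, a window whose neighbours are all brighter than the centre yields code 255, and one whose neighbours are all darker yields 0.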
  • 7. Fuzzy Logic  Fuzzy Logic (FL) theory is an extension of conventional (crisp) set theory.  It was introduced by Zadeh.  It deals with fuzzy sets having imprecise and uncertain data, and handles the concept of partial truth (truth values between completely true and completely false).  It is used to model the vagueness and ambiguity in complex systems for which there is no mathematical model.  The drawback of fuzzy sets is that they treat the attribute values, which we call information source values, and their membership function values separately in all problems dealt with by fuzzy logic theory.  As an example, consider a set of students graded with the performance of the class topper as the benchmark: a student's individual performance (information source value) is determined by comparing it with that of the topper (µ).
  • 8. Proposed method  Instead of considering the whole image and its gray values, divide the image into non-overlapping 3x3 sub-images (windows); in LBP the windows are overlapping.  Extract local information using the LBP method.  Compute a membership function for each window based on the centre pixel of the window: µ = i_c / Σ_{(x,y) ∈ window} i(x,y), where i_c is the centre pixel of the window and the denominator is the sum of the gray values in the same window.  The information at the central pixel is given by the product of the membership value and the central pixel value, as per the concept of information value: H_c = µ · i_c.
  • 9. Proposed method contd.  This information is modified by a scale factor λ/F_max, with the result that the scaled information is H_S = (λ / F_max) H_c, where λ is a scaling parameter greater than one (λ > 1) and F_max = 0.50.  To take account of the information from the neighbourhood pixels (information sources), we compute the LBP value for the window: L_w = LBP(x_c, y_c).  Taking a clue from communication theory, the information about the neighbourhood pixels is taken as the log of this decimal value: H_N = log L_w.
  • 10. Proposed method contd.  Having the information at the central pixel and at the neighbourhood pixels, the total information is taken as the product of these two types of information: H = H_S H_N.  The contribution of this method is that it eliminates the shortcoming of the LBP approach, which ignores the central pixel value, by accounting for the information of both the central and the neighbourhood pixels.
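The per-window feature of slides 8-10 can be sketched as below. Note this is a minimal reading of the slides: the scale factor is interpreted as λ/F_max (the slide's equation is partly garbled), and the value of λ is illustrative, since the slides only state λ > 1:

```python
import math
import numpy as np

LAMBDA = 2.0   # scaling parameter lambda > 1 (illustrative value; slides only require > 1)
F_MAX = 0.50   # F_max as stated on the slide

def window_feature(window, lbp_value):
    """Information-set feature for one non-overlapping 3x3 window.

    window    -- 3x3 array of gray values
    lbp_value -- decimal LBP code of the same window (must be > 0 for the log)
    """
    window = np.asarray(window, dtype=float)
    i_c = window[1, 1]                 # centre pixel
    mu = i_c / window.sum()            # membership: centre over window sum
    h_c = mu * i_c                     # information at the central pixel
    h_s = (LAMBDA / F_MAX) * h_c       # scaled information H_S (assumed lambda/F_max factor)
    h_n = math.log(lbp_value)          # neighbourhood information H_N = log L_w
    return h_s * h_n                   # total information H = H_S * H_N
```

One feature value is produced per window; concatenating them over all windows of a normalized face image gives the feature vector fed to the classifier.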
  • 11. Support Vector Machine (SVM)  A classifier derived from statistical learning theory by Vapnik et al. in 1992.  SVM became famous when, using images as input, it gave accuracy comparable to neural networks with hand-designed features in a handwriting recognition task.  Currently, SVM is widely used in object detection and recognition, content-based image retrieval, text recognition, biometrics, speech recognition, etc.  It is also used for regression.
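A polynomial-kernel SVM like the "Poly1" classifier in the results can be sketched with scikit-learn; the synthetic two-class data here is a stand-in for the LBP-fuzzy feature vectors, and the scikit-learn dependency is an assumption:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for per-subject feature vectors (9 values per image).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 9))
y_train = np.repeat([0, 1], 20)      # two subjects, 20 images each
X_train[y_train == 1] += 2.0         # shift one class so they are separable

# Degree-1 polynomial kernel corresponds to "Poly1" in the results table.
clf = SVC(kernel="poly", degree=1)
clf.fit(X_train, y_train)
train_acc = clf.score(X_train, y_train)
```

A degree-2 kernel (`degree=2`) would correspond to the "Poly2" column.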
  • 12. KNN Classifier  The k-Nearest Neighbour classifier is among the simplest of all machine learning methods.  It is a non-parametric method for classifying objects: non-parametric in the sense that one need not worry about the underlying structure of the data.  Classification is based on how close the test feature vector is to the training feature vectors in the feature space.  An object is classified by the majority vote of its neighbours; if k = 1, the object is simply assigned to the class of its nearest neighbour.
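The nearest-neighbour rule just described is simple enough to sketch directly; Euclidean distance is assumed, as the slides do not state the metric:

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=1):
    """Classify x by majority vote among its k nearest training vectors.

    train_X -- (n, d) array of training feature vectors
    train_y -- (n,) array of class labels
    x       -- (d,) test feature vector
    """
    dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to each sample
    nearest = np.argsort(dists)[:k]               # indices of the k closest samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority vote
```

With k = 1 this reduces to assigning the class of the single nearest neighbour, matching the slide.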
  • 13. Some of the subjects in the databases  Fig 2. Near-infrared face images of some of the subjects in the CSIST database.  Fig 3. Gray-scale face images of some of the subjects in the ORL database.
  • 14. Experimental Results

    Recognition rates on the ORL database with SVM (Poly1, Poly2) and KNN classifiers:

    Training / testing images | Poly1  | Poly2   | KNN
    7 / 3                     | 96.66% | 96.66%  | 92.5%
    3 / 7                     | 82.55% | 80.35%  | 82.55%
    5 / 5                     | 87.5%  | 87.5%   | 88%
    4 / 6                     | 88.33% | 87.916% | 87.08%

    Recognition rates on the CSIST near-infrared database with SVM and KNN classifiers:

    Training / testing images | Poly1  | Poly2  | KNN
    1 / 3                     | 89.33% | 87.66% | 89.33%
    2 / 2                     | 92%    | 92%    | 91.55%
  • 15. Flow chart of implementation  Start → for each image (train or test): normalize the image; divide it into 3x3 windows; for each window: compute µ and the window sum, pick the centre pixel and compute the information-set feature; store the features in the database → Stop.
  • 16. Conclusion  A novel approach is presented that accounts for the information from both the central pixel and the neighbourhood pixels of a face image while matching a test sample with the training samples in the face recognition process.  The information value is defined as the product of the information source value and its membership function value.  The performance of the proposed approach is compared with that of PCA using two classifiers, KNN and SVM; better results are reported with SVM.  The proposed approach is found to be effective on images with variation in expression, illumination and pose.  Further work needs to be done on changing the membership function values; one way is to employ type-2 membership functions in which one of the parameters is varied.
  • 17. References  [1] L. Wiskott, J.-M. Fellous, N. Krüger, and C. von der Malsburg, "Face recognition by elastic bunch graph matching," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 775-779, 1997.  [2] M. Turk and A. P. Pentland, "Face recognition using eigenfaces," IEEE Conference on Computer Vision and Pattern Recognition, Maui, Hawaii, 1991.  [3] P. N. Belhumeur, J. P. Hespanha and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: recognition using class specific linear projection," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, 1997.  [4] Isra'a Abdul-Ameer, "Wavelet based image de-noising to enhance the face recognition rate," IJCSI International Journal of Computer Science Issues, vol. 10, issue 1, no. 3, January 2013.  [5] M. M. Khan, "Face recognition using sub-holistic PCA," First International Conference on Information and Communication Technologies (ICICT 2005), pp. 152-157, 27-28 Aug. 2005.  [6] M. Hanmandlu, "Information sets and information processing," A Research Report, IIT Delhi, March 2011.  [7] T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on feature distributions," Pattern Recognition, vol. 29, 1996.  [8] Mamta and M. Hanmandlu, "Robust ear based authentication using local principal independent components," Expert Systems with Applications, vol. 40, pp. 6478-6490, 2013.
  • 18. 18 THANK YOU