  • 1. INFAREC Intelligent Face Recognition
  • 2. INFAREC
    • M.S. Computing (I.T.), Spring 2008
    • Project Advisor
    • Dr. Najmi Haider
    • Lecturer, SZABIST
    • Submitted by
    • Shamama Tul Umber Parwaiz
    • 0772122
    • The paper principally deals with the comparison of different methods for face recognition. The research is organized into sections, each describing a different technique.
    • For each technique, a short description of how it accomplishes the described task is given.
    • This report only presents a comparison of existing research.
  • 4. Introduction To INFAREC
  • 5. INFAREC
    • INtelligent FAce RECognition.
    • The face is our primary focus of attention.
    • Role in conveying identity.
    • Face recognition has become an important issue in many applications such as security systems, credit card verification and criminal identification.
    • Comparing different FACE Recognition Algorithms.
  • 6. INFAREC
    • Goals & Objectives.
    • Importance
    • Reason For Choosing INFAREC
  • 7. Why Face Recognition?
    • An application of biometric systems.
    • Highly secure.
    • Highly reliable.
  • 8.
    • For forensic and civilian applications
    • For business environment
    Importance Of Face Recognition
  • 9. Goals & Objectives
    • Implementation of security measures
    • Accurate recognition.
    • Increased performance.
  • 10. Reason For Choosing INFAREC
    • Flexible
    • Self interest
  • 11. Architecture Of INFAREC
  • 12. Architecture Of INFAREC
    • Input Phase.
    • Face Detection Phase.
    • Face Recognition
  • 13. Architecture Of INFAREC
    • Input Phase
  • 14. Architecture Of INFAREC Contd.. Input Phase Recognition Phase Knowledge Representation
  • 15.
    • Pattern Recognition
  • 16. Pattern Recognition
    • Pattern recognition can be defined as the categorization of input data into identifiable classes via the extraction of significant features or attributes of the data from a background of irrelevant detail.
    • A pattern class is a category determined by some given common attributes or features.
    • A pattern is the description of any member of a category representing a pattern class.
    • Supervised pattern recognition is characterized by the fact that the correct classification of every training pattern is known.
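The definitions above can be illustrated with a minimal supervised classifier; this is a sketch with made-up toy data and a simple minimum-distance (nearest-class-mean) rule, not part of the project itself:

```python
import numpy as np

# Hypothetical toy data: each row is a feature vector, labels give the class.
X_train = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.1], [4.8, 5.0]])
y_train = np.array([0, 0, 1, 1])

# Supervised learning: class means are estimated from labeled training patterns.
means = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}

def classify(x):
    """Assign x to the class whose mean is nearest (minimum-distance rule)."""
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

print(classify(np.array([1.1, 1.0])))  # falls in the first pattern class
```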
  • 17. Pattern Recognition
  • 18. Face Recognition
    • Acquisition module
    • Pre-processing module
    • Feature extraction module
    • Classification module
    • Training set
    • Face library or face database
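The modules listed above form a pipeline; the following sketch only illustrates how such stages compose, with stand-in implementations (synthetic image, normalization, flattening, nearest-neighbour match) that are assumptions, not the project's actual modules:

```python
import numpy as np

def acquire():
    """Acquisition module: here, a synthetic 8x8 'image'."""
    return np.arange(64, dtype=float).reshape(8, 8)

def preprocess(img):
    """Pre-processing module: normalize intensities to [0, 1]."""
    return (img - img.min()) / (img.max() - img.min())

def extract_features(img):
    """Feature extraction module: flatten to a feature vector."""
    return img.ravel()

def classify(features, face_library):
    """Classification module: nearest neighbour against the face library."""
    dists = [np.linalg.norm(features - f) for f in face_library]
    return int(np.argmin(dists))

# Toy face library (database): one known face plus a dummy entry.
library = [extract_features(preprocess(acquire())), np.zeros(64)]
result = classify(extract_features(preprocess(acquire())), library)
```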
  • 19. Face Recognition
  • 20. Two Major Approaches in Feature Based Face Recognition
    • First-order features values
    • Second-order features values
  • 21. First-order features values
  • 22. Second-order features values
  • 23.
    • Eigenface
  • 24. Eigenface
    • Eigenfaces are a set of eigenvectors used in the computer vision problem of human face recognition. The approach of using eigenfaces for recognition was developed by Sirovich and Kirby (1987) and used by Matthew Turk and Alex Pentland in face classification. It is considered the first successful example of facial recognition technology
  • 25. Eigen Face Algorithm
  • 26. Eigen Face Algorithm
    • The basic idea of the algorithm is to develop a system that compares not the images themselves, but the feature weights explained before. The algorithm can be reduced to the following simple steps.
    • 1. Acquire a database of face images, calculate the eigenfaces and determine the face space from all of them. This is necessary for further recognitions.
    • 2. When a new image is found, calculate its set of weights.
    • 3. Determine whether the image is a face; to do so, check whether it is close enough to the face space.
    • 4. Finally, determine whether the image corresponds to a known face in the database or not.
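The steps above can be sketched with PCA in NumPy. This is a hedged illustration of the standard eigenfaces recipe (including the Turk-Pentland small-matrix trick), not the project's actual code; the face-space and threshold tests of steps 3-4 are simplified to a nearest-neighbour match:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (M, N) matrix, one flattened face image per row.
    Returns the average face, the top-k eigenfaces, and training weights."""
    psi = faces.mean(axis=0)                  # average face
    phi = faces - psi                         # difference images
    # Turk-Pentland trick: eigenvectors of the small M x M matrix phi phi^T
    # yield the eigenfaces of the large covariance matrix.
    vals, vecs = np.linalg.eigh(phi @ phi.T)
    order = np.argsort(vals)[::-1][:k]        # keep the k largest eigenvalues
    eigenfaces = (phi.T @ vecs[:, order]).T   # back-project to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    weights = phi @ eigenfaces.T              # face-space weights per image
    return psi, eigenfaces, weights

def recognize(image, psi, eigenfaces, weights):
    """Project a new image into face space and return the index of the
    closest training face (nearest neighbour in weight space)."""
    w = (image - psi) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))
```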
  • 27. Eigen Face Algorithm
    • M = the number of face images in the training set
    • Φi = Γi − Ψ is the difference of image i from the average face
  • 28. Eigen Face Algorithm contd ..
    • Ψ = (1/M) Σ Γi is the average face
    • Γi represents the i-th training image
    • Covariance matrix: C = (1/M) Σ Φi Φi^T
  • 29. Eigen Face Algorithm contd..
    • Weight w is calculated as wk = uk^T (Γ − Ψ), where uk is the k-th eigenface
    • The weight vector w is used for recognition
  • 30. Eigen Face Algorithm contd.. Figure 2.1: (a) Sample training-set face images. (b) Average face image of the training set.
  • 31. Eigenfaces vs Feature Based Face Recognition
    • Speed and simplicity
    • Learning capability
    • Face background
    • Scale and orientation
    • Presence of small details
  • 32. Line Edge Map
  • 33. Line Edge Map
    • LEM uses physiological features of human faces to solve the problem; it mainly uses the mouth, nose and eyes as the most characteristic ones.
  • 34. Line Edge Map Algorithm
  • 35. Line Edge Map Algorithm
    • In order to measure the similarity of human faces, the face images are first converted into gray-level pictures.
    • The images are encoded into binary edge maps using the Sobel edge detection algorithm.
    • The main advantage of line edge maps is their low sensitivity to illumination changes, because a LEM is an intermediate-level image representation derived from a low-level edge map representation. [3]
    • The algorithm has another important strength: low memory requirements, due to the kind of data used.
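As an illustration of the encoding step, a Sobel-based binary edge map can be sketched in plain NumPy; the threshold value here is an assumption (real systems tune it per data set), and the loop-based convolution is kept deliberately simple:

```python
import numpy as np

def sobel_edge_map(gray, thresh=1.0):
    """Binary edge map of a gray-level image via the Sobel operator.
    gray: 2-D float array; thresh: gradient-magnitude cutoff (assumed)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)                 # gradient magnitude
    return (mag > thresh).astype(np.uint8) # 1 = edge pixel, 0 = background
```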
  • 36. Line Edge Map Algorithm
  • 37. Line Edge Map Algorithm
    • One of the most important parts of the algorithm is the Line Segment Hausdorff Distance (LHD), used to accomplish an accurate matching of face images.
    • The main strength of this distance measure is that, when measuring the parallel distance, we choose the minimum distance between edges.
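The LHD itself operates on line segments; as a simplified stand-in, the classical point-set Hausdorff distance over edge pixels shows the same minimum-then-worst-case structure mentioned above (a sketch, not the paper's exact measure):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a and b, each (n, 2).
    For every point in one set, take the distance to its *nearest* point in
    the other set (the minimum), then keep the worst case over both sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```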
  • 38. Line Edge Map Algorithm
  • 39. RESULTS
  • 40. RESULTS of Eigen Face
    • The eigenfaces algorithm depends on the threshold used to determine a match for the input image. It was demonstrated that recognition accuracy can reach perfect recognition; however, the number of images rejected as unknown then increases.
    • The results show little change under lighting variations, whereas size changes make accuracy fall very quickly.
  • 41. RESULTS Of LEM
    • Under lighting variations the LEM algorithm kept high levels of correct recognition.
    • The LEM method consistently achieved the highest accuracy compared with eigenfaces and the edge map.
  • 42. RESULTS Comparison
    • The LEM algorithm demonstrated better accuracy than the eigenfaces method under size variations. While eigenfaces struggled to achieve acceptable accuracy, LEM managed to obtain percentages around 90%, which is very good for a face recognition algorithm.
    • In the results from [4] for orientation changes, the LEM algorithm could not beat the eigenfaces method; LEM hardly reached 70% across the different poses.
    • LEM is based on face features, while eigenfaces uses correlation and eigenvectors to do so.
  • 43. C onclusion
  • 44. Research Conclusion
    • The eigenfaces approach excels in its speed and simplicity and delivers good recognition performance under controlled conditions.
    • However, the eigenfaces approach is very sensitive to the face background and head orientation. Illumination and the presence of details are reasonably simple problems for the proposed face recognition system.
  • 45. Research Conclusion
    • LEM, as more recent research, achieves better results under lighting and size variations.
    • It beats the eigenfaces method under size variation, where eigenfaces has its most important weakness.
    • The basis of the algorithms differs: LEM is based on face features, while eigenfaces uses correlation and eigenvectors to do so.
  • 46. Self Conclusion
    • The two algorithms take different approaches to recognizing a face; the merit of one is the drawback of the other, but combining the two results in maximum accuracy in face recognition.
  • 47. Future Enhancements And Recommendations
  • 48. Future Enhancements And Recommendations
    • After conducting this research, the Line Edge Map is found to be the more reliable method. As future directions:
    • Research on a background removal algorithm.
    • Recognition from multiple views involving neural networks.
    • Scanner and camera support.
    • Migration to a client/server architecture.
  • 49. Other Techniques at a Glance
  • 50. Other Techniques at a Glance
    • TECH 1: Humans can recognize familiar faces in very low-resolution images.
    • TECH 2: The ability to tolerate degradations increases with familiarity.
    • TECH 3: High-frequency information by itself is insufficient for good face recognition performance.
    • The nature of processing: Piecemeal versus holistic
    • TECH 4: Facial features are processed holistically.
    • TECH 5: Of the different facial features, eyebrows are among the most important for recognition.
    • TECH 6: The important configural relationships appear to be independent across the width and height dimensions.
    • The nature of cues used: Pigmentation, shape and motion
    • TECH 7: Face-shape appears to be encoded in a slightly caricatured manner.
    • TECH 8: Prolonged face viewing can lead to high level after effects, which suggest prototype-based encoding.
    • TECH 9: Pigmentation cues are at least as important as shape cues.
    • TECH 10: Color cues play a significant role, especially when shape cues are degraded.
    • TECH 11: Contrast polarity inversion dramatically impairs recognition performance, possibly due to compromised ability to use pigmentation cues.
    • TECH 12: Illumination changes influence generalization
  • 51. Other Techniques at a Glance
    • TECH 13: View-generalization appears to be mediated by temporal association.
    • TECH 14: Motion of faces appears to facilitate subsequent recognition.
    • Developmental progression
    • TECH 15: The visual system starts with a rudimentary preference for face-like patterns.
    • TECH 16: The visual system progresses from a piecemeal to a holistic strategy over the first several years of life.
    • Neural underpinnings
    • TECH 17: The human visual system appears to devote specialized neural resources for face perception.
    • TECH 18: Latency of responses to faces in inferotemporal (IT) cortex is about 120 ms, suggesting a largely feed-forward computation.
    • TECH 19: Facial identity and expression might be processed by separate systems.
  • 52. References
  • 53. References
    • [2] Aurélio Campilho, Mohamed Kamel (eds.), “Image Analysis and Recognition: International Conference, ICIAR 2004, Porto, Portugal, September 29 - October 1, 2004”. Berlin; New York: Springer, 2004.
    • [3] Yongsheng Gao, M.K.H. Leung, “Face Recognition Using Line Edge Map”. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, Issue 6, June 2002, pp. 764-779.
    • [4] M.A. Turk, A.P. Pentland, “Face Recognition Using Eigenfaces”. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3-6 June 1991, Maui, Hawaii, USA, pp. 586-591.
    • [5] A. Pentland, T. Choudhury, “Face Recognition for Smart Environments”. Computer, Vol. 33, Issue 2, Feb. 2000, pp. 50-55.
    • [6] O. de Vel, S. Aeberhard, “Line-Based Face Recognition Under Varying Pose”. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 21, Issue 10, Oct. 1999, pp. 1081-1088.
    • [7] W. Zhao, R. Chellappa, A. Rosenfeld, and J. Phillips, “Face Recognition: A Literature Survey”. ACM Computing Surveys, Vol. 35, No. 4, December 2003, pp. 399-458.
    • [8] Face recognition home page: http://www.face-rec.org/
    • [9] Pawan Sinha, Benjamin Balas, Yuri Ostrovsky, and Richard Russell, “Face Recognition by Humans: Nineteen Results All Computer Vision Researchers Should Know About”.
    • [10] W. Zhao, R. Chellappa, and A. Rosenfeld, “Face Recognition: A Literature Survey”. ACM Computing Surveys, Vol. 35, pp. 399-458, December 2003.
    • [11] J.E. Meng, W. Chen and W. Shiqian, “High-Speed Face Recognition Based on Discrete Cosine Transform and RBF Neural Networks”. IEEE Transactions on Neural Networks, Vol. 16, Issue 3, pp. 679-691, May 2005.
    • [12] Karl B. J. Axnick and Kim C. Ng, “Fast Face Recognition”. Intelligent Robotics Research Centre (IRRC), ARC Centre for Perceptive and Intelligent Machines in Complex Environments (PIMCE), Monash University, Melbourne, Australia.
  • 54. References
    • http://attrasoft.com/products/imagefinderdos/#Software
    • http://www.cs.washington.edu/research/imagedatabase/demo/cbir.html
    • http://dbvis.inf.uni-konstanz.de/research/projects/SimSearch/related.html
    • http://elib.cs.berkeley.edu/vision.html
    • http://www.attrasoft.com/imagefinder42/fbi.html
    • http://www10.org/cdrom/posters/p1142/
    • http://portal.acm.org/citation.cfm?id=973276
    • http://archive.nlm.nih.gov/pubs/reports/bosc02/node6.html
    • http://csdl.computer.org/comp/proceedings/iscv/1995/7190/00/71900085abs.htm
  • 55. THANK YOU