Thesis presentation (Presentación tesis)


  1. Robust face recognition using wavelets and neural networks. Ph.D. Rubén Machucho Cadena. Istanbul, Turkey, September 2013
  2. Contents: 1 Introduction (Motivation, Objectives); 2 Methodology (Stage 1: State of the art; Stage 2: Proposed Solution; Stage 3: Implementation and Results); 3 Conclusions
  3-8. Introduction: Automatic face recognition system. In recent years, face recognition has become a popular area of research: it offers a more accurate identification/verification technique than traditional systems, it benefits from increased computing capabilities, and it has a large number of application areas, both governmental (law enforcement, security, immigration) and commercial (missing children/runaways, Internet, e-commerce, gaming industry).
  9-14. Introduction: Biometric systems. Biometric systems are automated, mostly computerized systems that use distinctive physio-biological or behavioural measurements of the human body as a (supposedly) unique indicator of the presence of a particular individual. Face-based biometrics are attractive because face images are easy to acquire, authentication is contactless, and hardware cost is low.
  15-17. Motivation. Despite the progress made in recent years, the face recognition problem has not been completely solved. The need for systems with a higher level of accuracy and robustness remains an open research topic.
  18-22. Objectives. 1. Propose a feature extraction technique based on the discrete wavelet transform. 2. Determine the most suitable wavelet base and decomposition levels for use in face recognition systems. 3. Design a neural network to classify faces. 4. Determine the best parameter configuration for the proposed NN. 5. Compare the proposed net with a backpropagation net.
  23-28. Methodology. Stage 1, State of the art: review of face recognition algorithms that use neural networks and wavelets. Stage 2, Proposed Solution: design of the face recognition system. Stage 3, Implementation and Results: implementation of the proposed system, system experimentation and validation, and conclusions.
  29-33. Stage 1, State of the art: Wavelet theory. The wavelet transform can be successfully applied to the analysis and processing of non-stationary signals, e.g., speech and image processing, data compression, communications, etc. It is able to construct a high-resolution time-frequency representation of the signal. A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases, and then decreases back to zero; it can typically be visualized as a "brief oscillation", like one might see recorded by a seismograph or heart monitor.
  34-37. Stage 1, State of the art: Discrete Wavelet Transform (DWT). Filtering doubles the amount of data relative to the original signal, so downsampling is necessary.
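As a minimal sketch of this filter-and-downsample step (assuming the PyWavelets library, which the slides do not name), a single-level 1D DWT splits a signal into approximation and detail bands, each about half the original length:

    import numpy as np
    import pywt

    signal = np.sin(np.linspace(0, 4 * np.pi, 64))   # toy 1D signal with 64 samples
    cA, cD = pywt.dwt(signal, 'db4')                 # low-pass and high-pass filtering, then downsampling
    print(len(signal), len(cA), len(cD))             # each band is roughly half the input length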
  38-44. Stage 1, State of the art: Bidimensional DWT. Apply the low-pass filter (L) and high-pass filter (H) to the rows and columns of the image. LL: approximations. LH: horizontal details. HL: vertical details. HH: diagonal details.
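A minimal 2D counterpart of the sketch above (PyWavelets assumed), producing the four subbands named on the slide:

    import numpy as np
    import pywt

    image = np.random.rand(80, 80)                   # stand-in for a grayscale face image
    LL, (LH, HL, HH) = pywt.dwt2(image, 'bior1.3')   # one 2D decomposition level
    print(LL.shape, LH.shape, HL.shape, HH.shape)    # approximations plus horizontal, vertical and diagonal details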
  45-47. Stage 1, State of the art: Neural networks. Artificial neural networks are models inspired by animal central nervous systems (in particular the brain) that are capable of machine learning and pattern recognition. Common transfer functions:
     Name | Input/output relation
     Hard limit | a = 0 if n < 0; a = 1 if n >= 0
     Linear | a = n
     Log-sigmoid | a = 1 / (1 + e^(-n))
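These three transfer functions can be sketched directly in Python (NumPy assumed):

    import numpy as np

    def hard_limit(n):
        return np.where(n < 0, 0.0, 1.0)         # a = 0 for n < 0, a = 1 for n >= 0

    def linear(n):
        return n                                 # a = n

    def log_sigmoid(n):
        return 1.0 / (1.0 + np.exp(-n))          # a = 1 / (1 + e^(-n))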
  48-50. Stage 1, State of the art: Neural network architecture. Neural network architecture refers to the organization and arrangement of neurons into layers or groups of neurons.
  51-54. Stage 1, State of the art: Training an artificial neural network. Once a network has been structured for a particular application, it is ready to be trained. To start this process, the initial weights are chosen randomly; then the training, or learning, begins. Supervised training: both the inputs and the desired outputs are provided. The network processes the inputs and compares its resulting outputs against the desired outputs. Errors are then propagated back through the system, causing it to adjust the weights that control the network.
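A minimal sketch of this supervised loop for a single sigmoid layer (NumPy assumed; the data and layer are illustrative, not the thesis's network):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((10, 4))                          # 10 training patterns, 4 features each
    T = rng.integers(0, 2, (10, 1)).astype(float)    # desired outputs (targets)

    W = rng.normal(size=(4, 1))                      # initial weights chosen randomly
    b = np.zeros((1, 1))
    lr = 0.5                                         # learning rate

    for epoch in range(100):
        Y = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # process the inputs (log-sigmoid layer)
        E = T - Y                                    # compare outputs against the desired outputs
        grad = E * Y * (1.0 - Y)                     # error propagated back through the sigmoid
        W += lr * X.T @ grad                         # adjust the weights that control the network
        b += lr * grad.sum(axis=0, keepdims=True)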
  55-57. Stage 1, State of the art: Related work.
     E. Gumus, N. Kilic, A. Sertbas and O. N. Ucan, "Evaluation of face recognition techniques using PCA, wavelets and SVM", 2010. This work uses the wavelet transform and PCA for the feature extraction stage; a distance classifier and Support Vector Machines (SVMs) are used for the classification step. The authors reported a recognition rate above 95 %.
     S. Kakarwal and R. Deshmukh, "Wavelet Transform based Feature Extraction for Face Recognition", 2010. The authors propose the wavelet transform to obtain a set of principal characteristics of each face, and a correlation method for the classification stage. They report good performance with frontal and side-view images.
     M. Mazloom and S. Kasaei, "Face Recognition using Wavelet, PCA, and Neural Networks", 2005. The authors propose a face recognition method that combines wavelets, PCA and a backpropagation neural network. They reported a recognition rate of 90.35 %.
  58. Stage 2, Proposed solution: Proposed system architecture.
  59-61. Stage 2, Proposed solution. Image preprocessing: histogram equalization. Histogram equalization is an image-processing method for contrast adjustment using the image's histogram.
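A minimal sketch with OpenCV (assumed; the slides do not name a library), applied to a synthetic low-contrast image:

    import cv2
    import numpy as np

    gray = np.tile(np.linspace(50, 150, 80, dtype=np.uint8), (80, 1))   # synthetic low-contrast 80 x 80 image
    equalized = cv2.equalizeHist(gray)                                  # spread intensities across the full 0-255 range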
  62-65. Stage 2, Proposed solution. Image preprocessing: face detection and segmentation. The Viola-Jones object detection framework, proposed in 2001 by Paul Viola and Michael Jones, was the first object detection framework to provide competitive object detection rates in real time.
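A minimal detection-and-segmentation sketch using OpenCV's Haar-cascade implementation of Viola-Jones (OpenCV assumed; the input here is a synthetic stand-in, so no face will actually be found):

    import cv2
    import numpy as np

    gray = np.full((200, 180), 128, dtype=np.uint8)   # stand-in sized like a Faces94 image (200 x 180)
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]             # segmented face region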
  66-72. Stage 2, Proposed solution. Image preprocessing: face size normalization. Image interpolation works in two directions and tries to achieve the best approximation of a pixel's colour and intensity based on the values at surrounding pixels. Options: nearest neighbour, bilinear interpolation, bicubic interpolation.
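A minimal size-normalization sketch with the three interpolation options (OpenCV assumed; the 80 x 80 target size follows the later feature-extraction example):

    import cv2
    import numpy as np

    face = np.random.randint(0, 256, (120, 100), dtype=np.uint8)              # stand-in for a segmented face
    nearest  = cv2.resize(face, (80, 80), interpolation=cv2.INTER_NEAREST)    # nearest neighbour
    bilinear = cv2.resize(face, (80, 80), interpolation=cv2.INTER_LINEAR)     # bilinear interpolation
    bicubic  = cv2.resize(face, (80, 80), interpolation=cv2.INTER_CUBIC)      # bicubic interpolation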
  73-76. Stage 2, Proposed solution. Feature extraction (optional): log-polar conversion. Useful for dealing with rotation and scale issues. Log-polar images are based on a polar plane represented by rings and sectors: ξ = log √(x² + y²), η = arctan(y/x).
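A minimal log-polar conversion sketch (OpenCV's warpPolar assumed; not necessarily the mapping implementation used in the thesis):

    import cv2
    import numpy as np

    face = np.random.randint(0, 256, (80, 80), dtype=np.uint8)   # stand-in for a normalized face
    h, w = face.shape
    logpolar = cv2.warpPolar(face, (w, h), (w / 2, h / 2),
                             maxRadius=np.hypot(w / 2, h / 2),
                             flags=cv2.INTER_LINEAR + cv2.WARP_POLAR_LOG)   # rings and sectors become rows and columns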
  77-81. Stage 2, Proposed solution. Feature extraction: DWT. 1. Let J be the number of decomposition levels. 2. Let F be the wavelet filter used for the decomposition. 3. Apply the discrete wavelet transform to the detected face, using the low-pass and high-pass filters obtained from F, as many times as directed by J. 4. Take the approximation coefficients, discarding the detail coefficients.
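A minimal sketch of these four steps with PyWavelets (assumed), using the Daubechies 4 base and the two decomposition levels reported later in the experiments:

    import numpy as np
    import pywt

    face = np.random.rand(80, 80)               # stand-in for the detected 80 x 80 face
    J = 2                                       # number of decomposition levels
    F = 'db4'                                   # wavelet filter (Daubechies 4)
    coeffs = pywt.wavedec2(face, F, level=J)    # repeated low-pass / high-pass filtering with downsampling
    approximation = coeffs[0]                   # keep the coarsest approximation, discard the detail coefficients
    feature_vector = approximation.flatten()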
  82-86. Stage 2, Proposed solution. Feature extraction (optional): apply entropy. H(X) = -k ∑_{i=1}^{n} p(x_i) log p(x_i)
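A minimal sketch of this entropy measure over a block of wavelet coefficients (NumPy assumed; the histogram-based estimate of p(x_i) is an assumption, since the slides do not specify it):

    import numpy as np

    def entropy(block, k=1.0, bins=256):
        # H(X) = -k * sum_i p(x_i) * log p(x_i)
        hist, _ = np.histogram(block, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                            # drop empty bins to avoid log(0)
        return -k * np.sum(p * np.log(p))

    block = np.random.rand(25, 25)              # stand-in for a wavelet subband
    print(entropy(block))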
  87-92. Stage 2, Proposed solution. Feature extraction (optional): apply autocorrelation. Autocorrelation is the correlation of a signal with itself and provides information about the structure of an image. G(a, b) = ∑_{x=a}^{M} ∑_{y=b}^{N} i(x, y) · i(x - a, y - b)
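A minimal sketch of image autocorrelation (NumPy assumed); it uses an FFT-based circular variant rather than the windowed sum written on the slide:

    import numpy as np

    def autocorrelation(img):
        # Circular 2D autocorrelation via the FFT (Wiener-Khinchin relation)
        img = img - img.mean()
        spectrum = np.fft.fft2(img)
        return np.real(np.fft.ifft2(spectrum * np.conj(spectrum)))

    block = np.random.rand(25, 25)              # stand-in for a wavelet approximation subband
    G = autocorrelation(block)                  # G[a, b]: correlation of the block with itself shifted by (a, b)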
  93-96. Stage 2, Proposed solution. Feature extraction (optional): apply sampling. Reduce the dimensionality of the feature vector that will be sent to the neural network. Supposing that the size of the detected face is 80 x 80 pixels and a second decomposition level is used...
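The slides do not specify the sampling scheme; a simple stride-based subsampling sketch (the stride of 4 is purely illustrative):

    import numpy as np

    features = np.random.rand(25 * 25)          # stand-in for the flattened wavelet feature vector
    sampled = features[::4]                     # keep every 4th value to shrink the input sent to the network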
  97. Stage 2, Proposed solution. Classification: proposed neural network.
  98-102. Stage 3, Implementation and results: Face database. Public face database Faces94 (http://cswww.essex.ac.uk/mv/allfaces/faces94.html). Images of 153 persons, with 20 snapshots of each. Image resolution: 180 by 200 pixels (portrait format). Minor variation in image lighting, head pose and head scale.
  103-105. Stage 3, Implementation and results: Experimental design.
     Validation and results of the feature extraction phase: experiments that determine the best method combination (log-polar, autocorrelation, entropy), wavelet base and decomposition level for use in a face recognition system.
     Validation and results of the classification phase: experiments directed at finding the best configuration parameters for the proposed neural network.
     Validation and results of the preprocessing phase: a test that shows the benefit of implementing a preprocessing stage in the proposed system.
  106-110. Stage 3, Implementation and results. Validation and results of the feature extraction phase. Method combination: log-polar (optional), DWT, entropy or autocorrelation (optional). Wavelet bases: Bior 1.3, Daubechies 4 and Coif 5. For classification we use the proposed neural net with the following configuration parameters:
     Number of neurons: layers 1, 2, 3 and 4: 3 each; layer 5: 1
     Minimum error: 0.01
     Activation function: layers 1, 2, 3 and 4: sigmoid; layer 5: linear
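The slides give only the layer sizes and activations above; a minimal forward-pass sketch of a network matching that configuration (NumPy assumed; the input dimension is a placeholder for the length of the sampled feature vector):

    import numpy as np

    rng = np.random.default_rng(0)
    input_dim = 160                                  # placeholder: length of the sampled feature vector
    layer_sizes = [input_dim, 3, 3, 3, 3, 1]         # layers 1-4: 3 neurons each, layer 5: 1 neuron
    weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        for W, b in zip(weights[:-1], biases[:-1]):
            x = 1.0 / (1.0 + np.exp(-(x @ W + b)))   # sigmoid activation in layers 1-4
        return x @ weights[-1] + biases[-1]          # linear activation in layer 5

    output = forward(rng.random(input_dim))          # training would stop at the minimum error (0.01)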
  111-113. Stage 3, Implementation and results. Validation and results of the feature extraction phase: recognition rate using the available method combinations (W: wavelet, A: autocorrelation, LP: log-polar). Training patterns were recognized at 100 % in all cases; test-pattern rates:
     Decomposition level | Wavelet base | W | W_A | LP_W | LP_W_A
     2 | Daub 4 | 85 % | 86.6 % | 65 % | 55 %
     2 | Bior 1.3 | 77 % | 79 % | 66.7 % | 71.7 %
     2 | Coif 5 | 72 % | 72 % | 58.3 % | 50 %
     3 | Daub 4 | 80 % | 85 % | 68.3 % | 18.3 %
     3 | Bior 1.3 | 84 % | 83 % | 45 % | 56.6 %
     3 | Coif 5 | 78 % | 82 % | 36.6 % | 26.6 %
  114-118. Stage 3, Implementation and results. Validation and results of the classification phase: recognition rate obtained by varying the number of neurons and the network minimum error. Training patterns were recognized at 100 % in all cases.
     Neurons: 2
     Min. error | Training time | Test recognition rate
     0.3 | >1 s | 78.3 %
     0.2 | >1 s | 76.6 %
     0.1 | 1 s | 88.3 %
     0.01 | 2 s | 91.6 %
     0.001 | 3 s | 70 %
     0.0001 | 5 s | 66.6 %
     0.00001 | 5 s | 75 %
     0.000001 | 7 s | 65 %
     Neurons: 4
     Min. error | Training time | Test recognition rate
     0.3 | >1 s | 78.3 %
     0.2 | >1 s | 78.3 %
     0.1 | 1 s | 88 %
     0.01 | 2 s | 95.33 %
     0.001 | 2 s | 81.6 %
     0.0001 | 4 s | 85 %
     0.00001 | 6 s | 81 %
     0.000001 | 7 s | 81.6 %
     Neurons: 6
     Min. error | Training time | Test recognition rate
     0.3 | 1 s | 76.6 %
     0.2 | 2 s | 76.6 %
     0.1 | 2 s | 83.3 %
     0.01 | 3 s | 85 %
     0.001 | 6 s | 83.3 %
     0.0001 | 7 s | 76.6 %
     0.00001 | 7 s | 78.33 %
     0.000001 | 8 s | 80 %
     Neurons: 8
     Min. error | Training time | Test recognition rate
     0.3 | 2 s | 73.3 %
     0.2 | 2 s | 83.3 %
     0.1 | 1 s | 81.66 %
     0.01 | 6 s | 90 %
     0.001 | 7 s | 88.33 %
     0.0001 | 11 s | 85 %
     0.00001 | 14 s | 85 %
     0.000001 | 16 s | 76.6 %
  119-120. Stage 3, Implementation and results. Validation and results of the preprocessing phase: comparison of recognition rates obtained with and without the preprocessing stage.
  121-122. Stage 3, Implementation and results. Comparison with a backpropagation neural net.
  123-125. Conclusions. We presented a new framework for face recognition using the discrete wavelet transform and neural networks. The following relevant results were obtained. Preprocessing: we detected an increase of approximately 5 % in the recognition rates, which shows that applying techniques that improve the visual quality of the image has a positive influence on overall system performance.
  126-127. Conclusions. Feature extraction: the combination of the Daubechies 4 wavelet, the second decomposition level and the autocorrelation method gives a recognition rate of 95.33 %, which allows us to ascertain that the wavelet transform is an excellent image decomposition and texture description tool. Classification: the proposed neural network proved to be a feasible and efficient option for face recognition tasks, since it achieved higher recognition rates and shorter training time than a backpropagation network.
  128. Thank you. Questions?
  129-130. References
     [1] R. C. Gonzalez and R. E. Woods. Digital Image Processing. Springer US, 2008.
     [2] E. Gumus, N. Kilic, A. Sertbas, and O. N. Ucan. Evaluation of face recognition techniques using PCA, wavelets and SVM. Expert Systems with Applications, 37(9):6404-6408, 2010.
     [3] R. Jafri and H. R. Arabnia. A survey of face recognition techniques. Journal of Information Processing Systems, 5(2):41-68, June 2009.
     [4] S. N. Kakarwal and R. R. Deshmukh. Wavelet transform based feature extraction for face recognition. IJCSA, Issue I, June 2010, pages 0974-0767.
     [5] F. Khalid and L. N. A. 3D face recognition using multiple features for local depth information. IJCSNS International Journal of Computer Science and Network Security, 9(1):27-32, 2009.
     [6] M. Mazloom and S. Kasaei. Face recognition using wavelet, PCA, and neural networks. 2005.
     [7] W. Zhao, R. Chellappa, A. Rosenfeld, and P. J. Phillips. Face recognition: A literature survey. ACM Computing Surveys, pages 399-458, 2003.
