This paper proposes a deep learning method for facial verification of aging subjects. Facial aging comprises the texture and shape variations that affect the human face as time progresses. Accordingly, there is a demand for robust methods to verify facial images as subjects age. In this paper, a deep learning method based on a pre-trained GoogLeNet convolutional network, fused with Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) feature descriptors, is applied for feature extraction and classification.
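The fusion of HOG and LBP descriptors described above can be sketched with plain NumPy. This is an illustrative simplification, not the paper's exact pipeline: a basic 8-neighbour LBP histogram is concatenated with a single global orientation histogram, whereas full HOG uses per-cell histograms with block normalisation.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour Local Binary Pattern over the interior pixel grid."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    # Offsets of the 8 neighbours, each contributing one bit of the code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def orientation_hist(img, bins=9):
    """Coarse HOG-like histogram of gradient orientations, magnitude-weighted."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def fused_descriptor(img):
    """Concatenate a normalised LBP histogram with the orientation histogram."""
    lbp_hist, _ = np.histogram(lbp_codes(img), bins=256, range=(0, 256))
    lbp_hist = lbp_hist / (lbp_hist.sum() + 1e-9)
    return np.concatenate([lbp_hist, orientation_hist(img)])

face = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
feat = fused_descriptor(face)
```

In the paper's setting such a fused vector would complement the GoogLeNet features before classification; here it is simply a 265-dimensional texture descriptor.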
Multi Local Feature Selection Using Genetic Algorithm For Face Identification - CSCJournals
This document presents a face recognition algorithm that uses a multi-local feature selection approach based on genetic algorithms and pseudo Zernike moment invariants. The algorithm involves five stages: 1) face detection using an ellipse to approximate the face region, 2) extraction of facial features (eyes, nose, mouth) within regions using genetic algorithms to locate templates with maximum edge density, 3) generation of moment invariants from the facial features using pseudo Zernike polynomials, 4) classification of facial features using radial basis function neural networks, and 5) selection of multiple local features for face identification. The algorithm was tested on over 3000 images from three databases, achieving recognition rates over 89%, higher than global or single local feature approaches.
International Journal of Image Processing (IJIP) Volume (1) Issue (2) - CSCJournals
This document summarizes a research paper on a face recognition system that uses a multi-local feature selection approach. The proposed system consists of five stages: face detection, extraction of facial features like eyes, nose and mouth, generation of moments to represent the features, classification of facial features using RBF neural networks, and face identification. The system was tested on over 3000 images from three facial databases and achieved recognition rates over 89%, outperforming global feature-based and single local feature approaches. The technique was also found to be robust to variations in translation, orientation and scaling.
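Stage 2 above, using a genetic algorithm to place a template where edge density is maximal, can be sketched as follows. The synthetic edge map, window size, and GA parameters are illustrative assumptions; fitness is defined as the total edge strength inside the candidate window.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "edge map": a high-edge-density 12x12 patch embedded in weak
# noise, standing in for a facial feature region such as an eye.
edge_map = rng.random((48, 48)) * 0.1
edge_map[18:30, 28:40] += 1.0
TH, TW = 8, 8                                   # template (window) size

def fitness(pos):
    """Total edge strength inside the candidate window."""
    y, x = pos
    return float(edge_map[y:y + TH, x:x + TW].sum())

def genetic_search(generations=60, pop_size=30):
    ymax, xmax = edge_map.shape[0] - TH, edge_map.shape[1] - TW
    # Individuals are (y, x) template positions.
    pop = np.column_stack([rng.integers(0, ymax + 1, pop_size),
                           rng.integers(0, xmax + 1, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        elite = pop[np.argsort(scores)[::-1][:pop_size // 2]]  # better half
        # Offspring: jitter elite positions by a small random mutation.
        child = np.clip(elite + rng.integers(-3, 4, elite.shape),
                        0, [ymax, xmax])
        pop = np.vstack([elite, child])
    scores = np.array([fitness(p) for p in pop])
    return tuple(int(v) for v in pop[np.argmax(scores)])

best_y, best_x = genetic_search()
```

The paper's version searches per-feature regions and feeds the located templates to the pseudo Zernike stage; this sketch only shows the selection-and-mutation loop converging on the densest window.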
Age Invariant Face Recognition using Convolutional Neural Network - IJECEIAES
In recent years, face recognition across aging has become a popular and challenging task in the area of face recognition. Many researchers have contributed to this area, but there is still a significant gap to fill. The selection of feature extraction and classification algorithms plays an important role here. Deep learning with convolutional neural networks provides a combination of feature extraction and classification in a single structure. In this paper, we present a novel 7-layer CNN architecture for recognizing facial images across aging. We have carried out extensive experiments to test the performance of the proposed system using two standard datasets, FGNET and MORPH (Album II). The rank-1 recognition accuracy of the proposed system is 76.6% on FGNET and 92.5% on MORPH (Album II). Experimental results show a significant improvement over the available state of the art with the proposed CNN architecture and classifier.
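The CNN building blocks combined in such an architecture (convolution, ReLU, max pooling, and a softmax classifier) can be illustrated with a minimal NumPy forward pass. This is not the paper's 7-layer architecture; the filter sizes, channel counts, and number of classes are arbitrary assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    """2x2 max pooling, truncating odd borders."""
    H, W = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    return x[:H, :W].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Forward pass: conv -> ReLU -> pool -> conv -> ReLU -> pool -> dense+softmax
img = rng.random((32, 32))                        # a toy grayscale face
k1, k2 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
h = maxpool2(relu(conv2d(img, k1)))               # 32 -> 30 -> 15
h = maxpool2(relu(conv2d(h, k2)))                 # 15 -> 13 -> 6
W_fc = rng.standard_normal((10, h.size)) * 0.1    # 10 hypothetical identities
probs = softmax(W_fc @ h.ravel())
```

Real age-invariant systems train such weights end-to-end; the point here is only how the stacked layers turn an image into a class probability vector.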
FACIAL AGE ESTIMATION USING TRANSFER LEARNING AND BAYESIAN OPTIMIZATION BASED... - sipij
The document summarizes research on facial age estimation using transfer learning and Bayesian optimization based on gender information. Specifically:
1) A convolutional neural network is trained to classify gender from facial images. This gender classification CNN is then used as input for an age estimation model.
2) Bayesian optimization is applied to the pre-trained gender classification CNN to fine-tune it for the age estimation task. This reduces error on validation data.
3) Experiments on the FERET and FG-NET datasets show the proposed approach of using gender information and Bayesian optimization outperforms state-of-the-art methods, achieving a mean absolute error of 1.2 and 2.67 respectively.
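The Bayesian optimization loop used for fine-tuning can be sketched in one dimension: a Gaussian-process surrogate is fitted to the observations and the expected-improvement criterion picks the next hyperparameter to evaluate. The quadratic stand-in objective (a noisy validation error over a log learning rate) and the kernel length scale are assumptions for illustration, not the paper's setup.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def objective(lr):
    """Stand-in validation error as a function of log10(learning rate)."""
    return (lr + 3.0) ** 2 + 0.05 * rng.standard_normal()

def rbf(a, b, ls=0.7):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

X = list(np.linspace(-5, -1, 4))            # initial design points
y = [objective(x) for x in X]
grid = np.linspace(-5, -1, 200)             # candidate hyperparameter values

for _ in range(15):                         # Bayesian optimization iterations
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + 1e-4 * np.eye(len(Xa))
    Kinv = np.linalg.inv(K)
    ks = rbf(grid, Xa)
    mu = ks @ Kinv @ ya                     # GP posterior mean on the grid
    var = np.clip(1.0 - np.einsum('ij,jk,ik->i', ks, Kinv, ks), 1e-12, None)
    sd = np.sqrt(var)
    best = ya.min()
    z = (best - mu) / sd
    # Expected improvement for minimisation.
    ei = (best - mu) * np.vectorize(norm_cdf)(z) + sd * np.vectorize(norm_pdf)(z)
    x_next = float(grid[np.argmax(ei)])
    X.append(x_next)
    y.append(objective(x_next))

best_lr = X[int(np.argmin(y))]
```

In the paper the "objective" is the age-estimation validation error of the fine-tuned CNN; the loop structure is the same.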
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
Progression in Large Age-Gap Face Verification - IRJET Journal
1. The document discusses progression in large age-gap face verification. It summarizes various techniques used for face recognition from conventional methods to deep neural networks.
2. A typical face recognition system includes image acquisition, face detection, normalization, feature extraction, and then matching. The document reviews several feature extraction methods like Eigenfaces, Fisherfaces, and local binary patterns.
3. Recent deep learning approaches like convolutional neural networks have improved face recognition accuracy. One study used CNNs with metric learning and achieved high accuracy on the Labeled Faces in the Wild dataset. Encoding image microstructures and identity factor analysis has also been effective for matching similar faces.
Robust face recognition by applying partitioning around medoids over eigen fa... - ijcsa
An unsupervised learning methodology for robust face recognition is proposed to enhance invariance to various changes in the face. Face recognition, despite being the most unobtrusive biometric modality of all, has encountered challenges in achieving high performance in uncontrolled environments owing to frequently occurring, unavoidable variations in the face. These changes may be due to noise, outliers, changing expressions, emotions, pose, illumination, and facial distractions such as makeup, spectacles, and hair growth. Methods for dealing with these variations have been developed in the past with varying success. However, cost and time efficiency play a crucial role in implementing any methodology in the real world. This paper presents a method that integrates Partitioning Around Medoids with Eigenfaces and Fisherfaces to considerably improve the efficiency of face recognition. The resulting system has higher resistance to the impact of various changes in the face and performs well in terms of success rate, cost, and time complexity. The methodology can therefore be used to develop highly robust face recognition systems for real-time environments.
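A minimal sketch of Partitioning Around Medoids, the clustering step the paper combines with Eigenfaces and Fisherfaces, might look as follows. The synthetic blobs stand in for projected face descriptors and are an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pam(X, k, iters=20):
    """Naive Partitioning Around Medoids on the rows of X (Euclidean)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise dists
    medoids = rng.choice(len(X), size=k, replace=False)
    labels = np.argmin(D[:, medoids], axis=1)
    for _ in range(iters):
        labels = np.argmin(D[:, medoids], axis=1)      # assign to nearest medoid
        new = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            # The medoid is the member minimising total distance to the cluster.
            new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, labels

# Two well-separated blobs standing in for projected face descriptors
# (e.g. eigenface coefficients of two identities).
A = rng.standard_normal((30, 5)) + 6.0
B = rng.standard_normal((30, 5)) - 6.0
X = np.vstack([A, B])
medoids, labels = pam(X, k=2)
```

Unlike k-means, each cluster centre is an actual data point, which is part of what gives the combined method its robustness to outliers.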
Review of face detection systems based on artificial neural network algorithms - ijma
This document provides a review of face detection systems that are based on artificial neural network algorithms. It summarizes several studies that have used different types of neural networks for face detection, including:
1) Retinal connected neural networks and rotation invariant neural networks.
2) Principal component analysis combined with neural networks.
3) Convolutional neural networks, multilayer perceptrons, backpropagation neural networks, and polynomial neural networks.
4) Fast neural networks, evolutionary optimization of neural networks, and Gabor wavelet features with neural networks. Strengths and limitations of these different approaches are discussed.
Implementation of features dynamic tracking filter to tracing pupils - sipij
The objective of this paper is to present the implementation of an artificial vision filter capable of tracking the pupils of a person in a video sequence. Several algorithms can achieve this objective; in this case, feature-based dynamic tracking was selected, a method that traces patterns between the frames that make up a video scene. This type of processing offers the advantage of eliminating occlusion problems for the patterns of interest. The implementation was tested on a database of videos of people with different physical characteristics of the eyes. An additional goal is to obtain information on the captured eye movements and the pupil coordinates for each of these movements. These data could support studies related to eye health.
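Frame-to-frame pattern tracing of the kind described can be sketched with simple sum-of-squared-differences template matching. The synthetic frames and the 5x5 "pupil" patch are assumptions for illustration, not the paper's filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def match_template(frame, templ):
    """Return the top-left position minimising sum-of-squared-differences."""
    H, W = frame.shape
    th, tw = templ.shape
    best, pos = np.inf, (0, 0)
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            ssd = np.sum((frame[y:y + th, x:x + tw] - templ) ** 2)
            if ssd < best:
                best, pos = ssd, (y, x)
    return pos

# Synthetic sequence: a dark 5x5 "pupil" drifting one pixel right per frame
# over a bright noisy background.
frames = []
y0, x0 = 10, 6
for t in range(4):
    frame = rng.random((24, 24)) * 0.2 + 0.8
    frame[y0:y0 + 5, x0 + t:x0 + t + 5] = 0.0
    frames.append(frame)

templ = frames[0][y0:y0 + 5, x0:x0 + 5]       # initialise from frame 0
track = [match_template(f, templ) for f in frames]
```

The recovered track gives exactly the per-frame pupil coordinates the paper sets out to log; a practical system would also update the template over time.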
Abstract: This paper presents a new face-parts information analyzer as a promising model for detecting faces and locating facial features in images. The main objective is to build fully automated human facial measurement systems from images with complex backgrounds. Detection of facial features such as the eyes, nose, and mouth is an important step for many subsequent facial image analysis tasks. Face detection here consists of detecting each facial part and marking it with a circle or rectangle. In this paper, face detection depends on matching the face against learned face patterns (pattern recognition). The study presents a novel and simple model based on a mixture of techniques and algorithms in a shared pool: the Viola-Jones object detection framework combined with geometric and symmetric information about the face parts in the image. Keywords: Face detection, Video frames, Viola-Jones, Skin detection, Skin color classification, Face recognition, Pattern recognition, Skin color.
Title: Face Detection Using Modified Viola Jones Algorithm
Author: Alpika Gupta, Dr. Rajdev Tiwari
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
IRJET - Facial In-Painting using Deep Learning in Machine Learning - IRJET Journal
This document describes a deep learning approach for facial in-painting to generate plausible facial structures for missing pixels in face images. The approach uses the observable region of a face as well as inferred attributes like gender, ethnicity, and expression. It selects a guidance face matching the attributes to in-paint missing areas like the mouth or eyes. In-painting is done on intrinsic image layers instead of RGB to handle illumination differences. The proposed method uses face detection, alignment, representation, and matching with real-time datasets to fill missing or masked regions with realistic synthesized content.
The document describes a new hierarchical deep learning algorithm for facial expression recognition (FER). The algorithm extracts appearance features from preprocessed LBP images using a convolutional neural network. It also extracts geometric features by tracking the coordinates of action unit landmarks, which are facial muscles involved in expressions. These two types of features are fused in a hierarchical structure. The algorithm combines the softmax outputs of each network by considering the second-highest predicted emotion. It also uses an autoencoder to generate neutral expression images to help extract dynamic features between neutral and emotional expressions. The algorithm achieved 96.46% accuracy on the CK+ dataset and 91.27% on the JAFFE dataset, outperforming other recent FER methods.
Critical evaluation of frontal image based gender classification techniques - Salam Shah
The face describes the personality of humans and has considerable importance in the identification and verification process. The human face provides information such as age, gender, facial expression, and ethnicity. Research has been carried out in the areas of face detection, identification, verification, and gender classification to correctly identify humans. The focus of this paper is on gender classification, for which various methods have been formulated based on measurements of face features. An efficient gender classification technique helps in the accurate identification of a person as male or female and also enhances the performance of other applications like computer-user interfaces, investigation, monitoring, business profiling, and Human Computer Interaction (HCI). In this paper, the most prominent gender classification techniques are evaluated in terms of their strengths and limitations.
This document presents an audio-visual emotion recognition system that uses multiple modalities and machine learning techniques. It extracts audio features like MFCCs and visual features like facial landmarks from video clips. It uses classifiers like CNNs and stacks their confidence outputs to predict emotions. The system achieves state-of-the-art performance on several databases according to experiments. It represents an improvement over previous work by combining audio, visual and classifier fusion approaches for multimodal emotion recognition.
Early detection of glaucoma through retinal nerve fiber layer analysis using ... - eSAT Publishing House
This document summarizes a research paper that proposes a new method for classifying healthy retinal nerve fiber layer (RNFL), medium RNFL loss, and severe RNFL loss using fractal dimension and texture feature analysis of color fundus images. The method extracts texture and fractal dimension features from regions of RNFL in fundus images. It finds that the features are correlated and the correlation is highest for healthy RNFL and lowest for severe RNFL loss. The features can potentially help detect glaucoma at early stages through classification of RNFL health status.
Happiness Expression Recognition at Different Age Conditions - Editor IJMTER
This document proposes a new robust subspace method called Proposed Euclidean Distance Score Level Fusion (PEDSLF) for recognizing happiness facial expressions with age variations. PEDSLF performs score level fusion of three subspace methods - PCA, ICA, and SVD. It normalizes the scores from each method and takes their maximum value for classification. The method is tested on two databases from FGNET and achieves recognition rates of 81.8% for ages 1-5 training and 10-15 testing, and 72% for ages 20-25 training and 30-35 testing. The results show PEDSLF performs better than the individual subspace methods for facial expression recognition with age variations.
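The score-level fusion idea, normalising each subspace method's scores and taking their maximum, can be sketched as below. The score values are hypothetical, and the paper's exact normalisation may differ.

```python
import numpy as np

def minmax(s):
    """Min-max normalise a score vector into [0, 1]."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_max(score_sets):
    """Normalise each method's scores, then take the elementwise maximum."""
    return np.max([minmax(s) for s in score_sets], axis=0)

# Hypothetical matching scores of 4 gallery identities under three subspace
# methods (standing in for PCA, ICA and SVD); higher means a better match.
pca = [0.62, 0.91, 0.40, 0.55]
ica = [0.30, 0.85, 0.20, 0.25]
svd = [0.70, 0.88, 0.66, 0.61]

fused = fuse_max([pca, ica, svd])
best_identity = int(np.argmax(fused))
```

Normalising before fusing matters because raw scores from different subspace methods live on different scales; without it one method would dominate the maximum.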
Recognition of Surgically Altered Face Images - IRJET Journal
This document discusses a multiple granular algorithm to match face images before and after plastic surgery. It extracts 40 face granules from images at different levels of granularity. Features are extracted from granules using SIFT and EUCLBP descriptors. A genetic algorithm is used to select features and optimize weights for each granule. The algorithm combines information from multiple granules to address nonlinear variations caused by plastic surgery. Experiments show it achieves higher accuracy than existing algorithms in recognizing surgically altered faces.
IRJET - A Comprehensive Survey and Detailed Study on Various Face Recognition ... - IRJET Journal
This document provides a comprehensive survey and detailed study of various face recognition methods. It begins with an introduction to face recognition and its advantages over other biometric methods. It then categorizes face recognition methods into four types: knowledge-based, feature-based, template matching, and appearance-based. The majority of the document discusses these methods in further detail and provides examples of algorithms that fall under each type, such as Eigenfaces, LDA, ICA, and neural networks. It concludes by stating that reviewing existing face recognition algorithms may lead to improved methods for solving this fundamental problem.
This document summarizes a research paper on face recognition using principal component analysis (PCA). It discusses how PCA can be used to reduce the dimensionality of face images for recognition. The system detects faces in images, extracts features using PCA, and then compares new faces to those in a training database to recognize identities. The results showed an accuracy of 87.09% on a test set of 30 images using this PCA-based approach for face recognition. While effective, the system has limitations when faces vary significantly from the training data. Overall, PCA provides a way to analyze face patterns and identify faces with reasonable accuracy under controlled conditions.
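The PCA pipeline described, projecting centred face images onto principal components and matching a probe by nearest neighbour in that subspace, can be sketched as follows. The toy 8x8 "faces", gallery sizes, and number of retained components are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gallery: 3 identities, each a distinct base pattern plus small noise,
# flattened to 64-dimensional vectors (8x8 "faces").
bases = rng.random((3, 64))
gallery = np.vstack([b + 0.05 * rng.standard_normal(64)
                     for b in bases for _ in range(4)])  # 4 images each
gallery_ids = np.repeat(np.arange(3), 4)

mean = gallery.mean(axis=0)
Xc = gallery - mean
# Eigenfaces: principal components of the centred training images.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:5]                                      # keep 5 eigenfaces

def project(img):
    return components @ (img - mean)

train_proj = Xc @ components.T
probe = bases[1] + 0.05 * rng.standard_normal(64)        # unseen image, id 1
dists = np.linalg.norm(train_proj - project(probe), axis=1)
predicted = int(gallery_ids[np.argmin(dists)])
```

This captures why PCA degrades when probes vary significantly from the training data: the retained components only span variation seen in the gallery.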
Face recognition is the ability to categorize a set of images based on certain discriminatory features. Classification of recognition patterns can be a difficult problem, and it remains a very active field of research. This paper introduces a conceptual framework for a descriptive study of face recognition techniques. It aims to review previous research on face recognition systems, covering the algorithms, uses, benefits, challenges, and problems in this field, and treats face recognition as a sensitive learning task; experiments on large face databases demonstrate the new features. The researcher recommends evaluating previous studies and research, especially in the fields of face recognition and 3D, in the hope of more advanced techniques and methods in the near future.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering& Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
A NEW APPROACH OF BRAIN TUMOR SEGMENTATION USING FAST CONVERGENCE LEVEL SET - ijbesjournal
Segmentation of a region of interest is a critical task in image processing applications. The accuracy of segmentation depends on the processing methodology and the limiting value used. In this paper, an enhanced approach to region segmentation using the level set (LS) method is proposed, which uses the crossover point at the valley point as a new dynamic stopping criterion in level set segmentation. The proposed method has been tested on a developed database of MR images. The test results show that the proposed method improves convergence performance, in terms of number of iterations, delay, and resource overhead, compared with the conventional level-set-based segmentation approach.
IRJET - Age Analysis using Face Recognition with Hybrid Algorithm - IRJET Journal
1) The document presents a study on age analysis of human faces using a hybrid algorithm combining multiple methods including convolutional neural networks.
2) Facial images are collected then preprocessed and trained on a detection model to extract features. A convolutional neural network model is used to classify ages from the extracted features.
3) The results found the proposed hybrid algorithm approach achieved 96.7% accuracy in age prediction from faces, outperforming existing methods. This shows promise for accurate automatic age analysis.
Innovative Analytic and Holistic Combined Face Recognition and Verification M... - ijbuiiir1
Automatic recognition and verification of human faces is a significant problem in the development and application of Human Computer Interaction (HCI). In addition, the demand for reliable personal identification in computerized access control has resulted in an increased interest in biometrics to replace passwords and identification (ID) cards. Over the last couple of years, face recognition researchers have been developing new techniques fuelled by advances in computer vision, computer design, sensors, and fast-emerging face recognition systems. In this paper, a face recognition and verification system has been designed which is robust to variations of illumination, pose, and facial expression but very sensitive to variations in the features of the face. The design takes into account the holistic (global) as well as the analytic (geometric) features of the human face. The global structure of the face is analysed by Principal Component Analysis, while the local structure is computed from geometric features of the face such as the eyes, nose, and mouth. The extracted local features are trained and later tested using an Artificial Neural Network (ANN). This combined approach using the global and local structure of the face image has proved very effective in the system we have designed, as it achieves a correct recognition rate of over 90%.
This document summarizes a research paper that proposes a method for detecting and recognizing faces using the Viola Jones algorithm and Back Propagation Neural Network (BPNN).
The paper first discusses face detection and recognition challenges. It then provides background on the Viola-Jones algorithm and BPNN. The proposed methodology uses Viola-Jones for face detection, converts the image to grayscale and binary, and then trains segments or the whole image with the BPNN. Results are analyzed using training, testing, and validation curves in the MATLAB neural network tool to minimize error.
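The BPNN training stage can be sketched as a small batch-gradient backpropagation network in NumPy. The toy separable "face vs non-face" vectors, layer sizes, and learning rate are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "face vs non-face" data: two separable blobs in 10-D, standing in
# for flattened binarised windows passed to the BPNN.
X = np.vstack([rng.standard_normal((40, 10)) + 1.5,
               rng.standard_normal((40, 10)) - 1.5])
t = np.concatenate([np.ones(40), np.zeros(40)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.standard_normal((10, 6)) * 0.3         # input -> hidden weights
W2 = rng.standard_normal(6) * 0.3               # hidden -> output weights
lr = 0.5

for epoch in range(200):                        # plain batch backpropagation
    h = sigmoid(X @ W1)                         # hidden activations (80, 6)
    yhat = sigmoid(h @ W2)                      # output probabilities (80,)
    err = yhat - t                              # dLoss/dz for cross-entropy
    gW2 = h.T @ err / len(X)
    gh = np.outer(err, W2) * h * (1 - h)        # backprop through sigmoid
    gW1 = X.T @ gh / len(X)
    W2 -= lr * gW2
    W1 -= lr * gW1

acc = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) > 0.5) == (t == 1)))
```

The training/validation curves the paper inspects in the MATLAB tool correspond to tracking this loss and accuracy over epochs on held-out splits.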
This document summarizes several research papers on human face recognition using feature extraction and measurements. It discusses using face recognition for applications like surveillance, access control, and banking validation. Key steps in face recognition systems include extracting features from captured images, comparing them to known images in a training database, and identifying errors like false acceptance and false rejection rates. Methods discussed for feature extraction and dimensionality reduction include Linear Discriminant Analysis and Principal Component Analysis. The document also examines factors that affect face recognition performance like illumination changes, aging, and expressions. Quantifying uncertainty in face recognition algorithms is identified as important for evaluating system performance.
Age and Gender Classification using Convolutional Neural NetworkIRJET Journal
This document describes a study that aims to accurately identify the gender and age range of facial images using convolutional neural networks. It begins with an introduction to age and gender classification and some of the challenges. It then discusses the system analysis and design, including the use of convolutional neural networks and machine learning techniques. The implementation section notes that Python, TensorFlow, Keras and OpenCV will be used to build a convolutional neural network model to detect faces in images and predict age and gender through training on available datasets. The overall goal is to develop an accurate system for age and gender detection from facial images.
This document discusses feature selection and feature normalization techniques for face recognition. It analyzes using discrete cosine transform (DCT) to extract local features from blocks of images, resulting in three feature sets. The local feature vectors are then normalized in two ways: making them unit norm or dividing each coefficient by its standard deviation learned from the training set. The document aims to improve face recognition by performing local analysis of images and fusing outputs from local features.
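The blockwise DCT feature extraction with unit-norm normalisation described above can be sketched as follows. The block size, the number of coefficients kept, and the row-major way low frequencies are selected here are simplifying assumptions (practical systems typically use a zig-zag scan).

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    D = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def block_dct_features(img, block=8, keep=6):
    """Per-block 2-D DCT; keep a few coefficients and unit-normalise them."""
    D = dct_matrix(block)
    feats = []
    for y in range(0, img.shape[0] - block + 1, block):
        for x in range(0, img.shape[1] - block + 1, block):
            coeffs = D @ img[y:y + block, x:x + block] @ D.T
            v = coeffs.ravel()[:keep]           # crude low-frequency subset
            feats.append(v / (np.linalg.norm(v) + 1e-12))  # unit-norm variant
    return np.array(feats)

img = np.random.default_rng(0).random((32, 32))
F = block_dct_features(img)
```

The document's second normalisation variant would instead divide each coefficient by its standard deviation estimated over the training set, rather than unit-normalising per block.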
Robust face recognition by applying partitioning around medoids over eigen fa...ijcsa
An unsupervised learning methodology for robust face recognition is proposed for enhancing invariance to
various changes in the face. The area of face recognition in spite of being the most unobtrusive biometric
modality of all has encountered challenges with high performance in uncontrolled environment owing to
frequently occurring, unavoidable variations in the face. These changes may be due to noise, outliers,
changing expressions, emotions, pose, illumination, facial distractions like makeup, spectacles, hair growth
etc. Methods for dealing with these variations have been developed in the past with different success.
However the cost and time efficiency play a crucial role in implementing any methodology in real world.
This paper presents a method to integrate the technique of Partitioning Around Medoids with Eigen Faces
and Fisher Faces to improve the efficiency of face recognition considerably. The system so designed has
higher resistance towards the impact of various changes in the face and performs well in terms of success
rate, cost involved and time complexity. The methodology can therefore be used in developing highly robust
face recognition systems for real time environment.
Review of face detection systems based artificial neural networks algorithmsijma
This document provides a review of face detection systems that are based on artificial neural network algorithms. It summarizes several studies that have used different types of neural networks for face detection, including:
1) Retinal connected neural networks and rotation invariant neural networks.
2) Principal component analysis combined with neural networks.
3) Convolutional neural networks, multilayer perceptrons, backpropagation neural networks, and polynomial neural networks.
4) Fast neural networks, evolutionary optimization of neural networks, and Gabor wavelet features with neural networks. Strengths and limitations of these different approaches are discussed.
Implementation of features dynamic tracking filter to tracing pupilssipij
The objective of this paper is to present the implementation of an artificial vision filter capable of tracking a person's pupils in a video sequence. Several algorithms can achieve this objective; in this case, dynamic feature tracking was selected, a method that traces patterns between the frames that form a video scene and offers the advantage of eliminating occlusion problems for the patterns of interest. The implementation was tested on a database of videos of people with different physical characteristics of the eyes. An additional goal is to obtain information about the captured eye movements and the pupil coordinates for each of these movements. These data could support studies related to eye health.
Abstract: This paper presents a new face-parts information analyzer as a promising model for detecting faces and locating facial features in images. The main objective is to build fully automated human facial measurement systems from images with complex backgrounds. Detection of facial features such as the eyes, nose, and mouth is an important step for many subsequent facial image analysis tasks; face detection locates each such part and marks it with a circle or rectangle. In this paper, face detection relies on matching the face against known face patterns. The study presents a novel and simple model based on a mixture of techniques and algorithms in a shared pool: the Viola-Jones object detection framework combined with geometric and symmetric information about the face parts in the image. Keywords: Face detection, Video frames, Viola-Jones, Skin detection, Skin color classification, Face recognition, Pattern recognition, Skin color.
Title: Face Detection Using Modified Viola Jones Algorithm
Author: Alpika Gupta, Dr. Rajdev Tiwari
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
IRJET - Facial In-Painting using Deep Learning in Machine LearningIRJET Journal
This document describes a deep learning approach for facial in-painting to generate plausible facial structures for missing pixels in face images. The approach uses the observable region of a face as well as inferred attributes like gender, ethnicity, and expression. It selects a guidance face matching the attributes to in-paint missing areas like the mouth or eyes. In-painting is done on intrinsic image layers instead of RGB to handle illumination differences. The proposed method uses face detection, alignment, representation, and matching with real-time datasets to fill missing or masked regions with realistic synthesized content.
The document describes a new hierarchical deep learning algorithm for facial expression recognition (FER). The algorithm extracts appearance features from preprocessed LBP images using a convolutional neural network. It also extracts geometric features by tracking the coordinates of action unit landmarks, which are facial muscles involved in expressions. These two types of features are fused in a hierarchical structure. The algorithm combines the softmax outputs of each network by considering the second-highest predicted emotion. It also uses an autoencoder to generate neutral expression images to help extract dynamic features between neutral and emotional expressions. The algorithm achieved 96.46% accuracy on the CK+ dataset and 91.27% on the JAFFE dataset, outperforming other recent FER methods.
Critical evaluation of frontal image based gender classification techniquesSalam Shah
The face describes the personality of humans and has considerable importance in the identification and verification process. The human face provides information such as age, gender, facial expression, and ethnicity. Research has been carried out in face detection, identification, verification, and gender classification to correctly identify humans. The focus of this paper is gender classification, for which various methods have been formulated based on measurements of facial features. An efficient gender classification technique helps in accurately identifying a person as male or female and also enhances the performance of other applications such as computer-user interfaces, investigation, monitoring, business profiling, and human-computer interaction (HCI). In this paper, the most prominent gender classification techniques are evaluated in terms of their strengths and limitations.
This document presents an audio-visual emotion recognition system that uses multiple modalities and machine learning techniques. It extracts audio features like MFCCs and visual features like facial landmarks from video clips. It uses classifiers like CNNs and stacks their confidence outputs to predict emotions. The system achieves state-of-the-art performance on several databases according to experiments. It represents an improvement over previous work by combining audio, visual and classifier fusion approaches for multimodal emotion recognition.
Early detection of glaucoma through retinal nerve fiber layer analysis using ...eSAT Publishing House
This document summarizes a research paper that proposes a new method for classifying healthy retinal nerve fiber layer (RNFL), medium RNFL loss, and severe RNFL loss using fractal dimension and texture feature analysis of color fundus images. The method extracts texture and fractal dimension features from regions of RNFL in fundus images. It finds that the features are correlated and the correlation is highest for healthy RNFL and lowest for severe RNFL loss. The features can potentially help detect glaucoma at early stages through classification of RNFL health status.
Happiness Expression Recognition at Different Age ConditionsEditor IJMTER
This document proposes a new robust subspace method called Proposed Euclidean Distance Score Level Fusion (PEDSLF) for recognizing happiness facial expressions with age variations. PEDSLF performs score level fusion of three subspace methods - PCA, ICA, and SVD. It normalizes the scores from each method and takes their maximum value for classification. The method is tested on two databases from FGNET and achieves recognition rates of 81.8% for ages 1-5 training and 10-15 testing, and 72% for ages 20-25 training and 30-35 testing. The results show PEDSLF performs better than the individual subspace methods for facial expression recognition with age variations.
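The fusion step described above can be sketched as follows. The min-max normalisation scheme and the toy per-class scores are illustrative assumptions; the abstract only states that scores from PCA, ICA, and SVD are normalised and their maximum taken.

```python
import numpy as np

def min_max_normalize(scores):
    # Map one matcher's per-class scores to [0, 1] so matchers are comparable.
    s = np.asarray(scores, dtype=float)
    lo, hi = s.min(), s.max()
    return (s - lo) / (hi - lo) if hi > lo else np.zeros_like(s)

def fuse_max(score_lists):
    """Score-level fusion in the spirit of PEDSLF: normalise the per-class
    scores of each subspace method, then take the element-wise maximum."""
    normalized = [min_max_normalize(s) for s in score_lists]
    return np.max(normalized, axis=0)

# Hypothetical per-class similarity scores from three matchers (PCA, ICA, SVD).
pca_scores = [0.2, 0.9, 0.5]
ica_scores = [10.0, 30.0, 20.0]
svd_scores = [3.0, 5.0, 1.0]

fused = fuse_max([pca_scores, ica_scores, svd_scores])
winner = int(np.argmax(fused))   # class with the highest fused score
print(fused, winner)
```

Normalising first matters: the raw ICA scores here are on a different scale and would otherwise dominate the maximum.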
Recognition of Surgically Altered Face ImagesIRJET Journal
This document discusses a multiple granular algorithm to match face images before and after plastic surgery. It extracts 40 face granules from images at different levels of granularity. Features are extracted from granules using SIFT and EUCLBP descriptors. A genetic algorithm is used to select features and optimize weights for each granule. The algorithm combines information from multiple granules to address nonlinear variations caused by plastic surgery. Experiments show it achieves higher accuracy than existing algorithms in recognizing surgically altered faces.
IRJET- A Comprehensive Survey and Detailed Study on Various Face Recognition ...IRJET Journal
This document provides a comprehensive survey and detailed study of various face recognition methods. It begins with an introduction to face recognition and its advantages over other biometric methods. It then categorizes face recognition methods into four types: knowledge-based, feature-based, template matching, and appearance-based. The majority of the document discusses these methods in further detail and provides examples of algorithms that fall under each type, such as Eigenfaces, LDA, ICA, and neural networks. It concludes by stating that reviewing existing face recognition algorithms may lead to improved methods for solving this fundamental problem.
This document summarizes a research paper on face recognition using principal component analysis (PCA). It discusses how PCA can be used to reduce the dimensionality of face images for recognition. The system detects faces in images, extracts features using PCA, and then compares new faces to those in a training database to recognize identities. The results showed an accuracy of 87.09% on a test set of 30 images using this PCA-based approach for face recognition. While effective, the system has limitations when faces vary significantly from the training data. Overall, PCA provides a way to analyze face patterns and identify faces with reasonable accuracy under controlled conditions.
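The PCA recognition pipeline summarised above can be sketched as a minimal eigenface example. The toy data, component count, and nearest-neighbour matching in the reduced space are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def pca_fit(faces, n_components=5):
    """Learn an eigenface basis from row-vectorised training faces."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data; rows of vt are the principal axes (eigenfaces).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def pca_project(face, mean, basis):
    # Reduce a face to its coordinates in eigenface space.
    return basis @ (face - mean)

def recognize(probe, gallery, labels, mean, basis):
    """Nearest neighbour in the reduced eigenface space."""
    p = pca_project(probe, mean, basis)
    dists = [np.linalg.norm(p - pca_project(g, mean, basis)) for g in gallery]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(1)
train = rng.standard_normal((20, 100))           # 20 faces, 10x10 pixels flattened
labels = [f"person_{i % 5}" for i in range(20)]  # 5 hypothetical identities
mean, basis = pca_fit(train)

# A probe that is a lightly perturbed copy of training face 7 (identity 7 % 5 == 2).
probe = train[7] + 0.01 * rng.standard_normal(100)
result = recognize(probe, train, labels, mean, basis)
print(result)
```

As the abstract notes, this works under controlled conditions but degrades when probe faces differ significantly from the training data.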
Face recognition is the ability to categorize a set of images based on certain discriminatory features. Classifying recognition patterns is a difficult problem and remains a very active field of research. This paper introduces a conceptual framework for a descriptive study of face recognition techniques. It aims to describe previous research on face recognition systems, focusing on the algorithms, uses, benefits, challenges, and problems in this field, and treats face recognition as a sensitive learning task, with experiments on large face databases demonstrating the new feature. The researcher recommends evaluating previous studies and research, especially in the fields of face recognition and 3D, in the hope of more advanced techniques and methods in the near future.
The International Journal of Engineering & Science is aimed at providing a platform for researchers, engineers, scientists, or educators to publish their original research results, to exchange new ideas, to disseminate information in innovative designs, engineering experiences and technological skills. It is also the Journal's objective to promote engineering and technology education. All papers submitted to the Journal will be blind peer-reviewed. Only original articles will be published.
The papers for publication in The International Journal of Engineering & Science are selected through rigorous peer reviews to ensure originality, timeliness, relevance, and readability.
A NEW APPROACH OF BRAIN TUMOR SEGMENTATION USING FAST CONVERGENCE LEVEL SETijbesjournal
Segmentation of a region of interest is a critical task in image processing applications, and segmentation accuracy depends on the processing methodology and the limiting value used. In this paper, an enhanced approach to region segmentation using the level set (LS) method is proposed, which uses the crossover point at the valley point as a new dynamic stopping criterion in level set segmentation. The proposed method has been tested on a purpose-built database of MR images. The test results show that the proposed method improves convergence performance, in terms of number of iterations, delay, and resource overhead, compared with the conventional level-set-based segmentation approach.
IRJET- Age Analysis using Face Recognition with Hybrid AlgorithmIRJET Journal
1) The document presents a study on age analysis of human faces using a hybrid algorithm combining multiple methods including convolutional neural networks.
2) Facial images are collected, preprocessed, and passed through a detection model to extract features. A convolutional neural network model is then used to classify age from the extracted features.
3) The results found the proposed hybrid algorithm approach achieved 96.7% accuracy in age prediction from faces, outperforming existing methods. This shows promise for accurate automatic age analysis.
Innovative Analytic and Holistic Combined Face Recognition and Verification M...ijbuiiir1
Automatic recognition and verification of human faces is a significant problem in the development and application of human-computer interaction (HCI). In addition, the demand for reliable personal identification in computerized access control has resulted in increased interest in biometrics to replace passwords and identification (ID) cards. Over the last couple of years, face recognition researchers have developed new techniques fuelled by advances in computer vision, computer design, sensors, and fast-emerging face recognition systems. In this paper, a face recognition and verification system has been designed that is robust to variations in illumination, pose, and facial expression but very sensitive to variations in the features of the face. The design takes into account both the holistic (global) and the analytic (geometric) features of the human face. The global structure of the face is analysed by Principal Component Analysis, while the local structure is computed from geometric features of the face such as the eyes, nose, and mouth. The extracted local features are trained and later tested using an Artificial Neural Network (ANN). This combined approach using the global and local structure of the face image proved very effective in the designed system, which has a correct recognition rate of over 90%.
This document summarizes a research paper that proposes a method for detecting and recognizing faces using the Viola Jones algorithm and Back Propagation Neural Network (BPNN).
The paper first discusses face detection and recognition challenges, then provides background on the Viola-Jones algorithm and BPNN. The proposed methodology uses Viola-Jones for face detection, converts the image to grayscale and binary, and then trains segments or the whole image with a BPNN. Results are analysed using training, testing, and validation curves in the MATLAB neural network tool to minimize error.
This document summarizes several research papers on human face recognition using feature extraction and measurements. It discusses using face recognition for applications like surveillance, access control, and banking validation. Key steps in face recognition systems include extracting features from captured images, comparing them to known images in a training database, and identifying errors like false acceptance and false rejection rates. Methods discussed for feature extraction and dimensionality reduction include Linear Discriminant Analysis and Principal Component Analysis. The document also examines factors that affect face recognition performance like illumination changes, aging, and expressions. Quantifying uncertainty in face recognition algorithms is identified as important for evaluating system performance.
Age and Gender Classification using Convolutional Neural NetworkIRJET Journal
This document describes a study that aims to accurately identify the gender and age range of facial images using convolutional neural networks. It begins with an introduction to age and gender classification and some of the challenges. It then discusses the system analysis and design, including the use of convolutional neural networks and machine learning techniques. The implementation section notes that Python, TensorFlow, Keras and OpenCV will be used to build a convolutional neural network model to detect faces in images and predict age and gender through training on available datasets. The overall goal is to develop an accurate system for age and gender detection from facial images.
Technique for recognizing faces using a hybrid of moments and a local binary...IJECEIAES
The face recognition process is widely studied, and researchers have made great achievements, but many challenges still face applications of face detection and recognition systems. This research contributes to overcoming some of those challenges and reducing the gaps in previous systems for identifying and recognizing faces of individuals in images. The research increases recognition precision using a hybrid method of moments and local binary patterns (LBP). The moment technique computes several critical parameters, which are used as descriptors and classifiers to recognize faces in images. The LBP technique has three phases: representation of a face, feature extraction, and classification. The face in the image is subdivided into variable-size blocks to compute their histograms and discover their features. Fidelity criteria were used to estimate and evaluate the findings. The proposed technique used the standard Olivetti Research Laboratory dataset in the training and recognition phases. The experiments showed that the hybrid technique (moments and LBP) recognized the faces in images and provided a suitable representation for identifying those faces. The proposed technique increases accuracy, robustness, and efficiency, and the results show an enhancement in recognition precision of 3%, reaching 98.78%.
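The LBP representation step described above (per-block histograms of local binary codes) can be sketched as follows. The basic 3x3 operator and fixed-size blocks are simplifying assumptions here; the paper itself uses variable-size blocks.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: compare each pixel to its 8 neighbours and
    pack the comparison bits into a code in [0, 255]."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= center).astype(np.uint8) << bit
    return out

def block_histograms(codes, block=8):
    """Histogram of LBP codes per non-overlapping block, concatenated
    into one face descriptor."""
    h, w = codes.shape
    hists = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            hist, _ = np.histogram(codes[r:r + block, c:c + block],
                                   bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (34, 34), dtype=np.uint8)  # stand-in face crop
codes = lbp_image(img)             # 32x32 code map (border pixels dropped)
desc = block_histograms(codes)     # 16 blocks x 256 bins
print(codes.shape, desc.shape, desc.sum())
```

Concatenating per-block histograms keeps coarse spatial layout, which is why the block subdivision matters for face representation.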
Age and gender detection using deep learningIRJET Journal
This document summarizes a research paper on age and gender detection using deep learning. It discusses using convolutional neural networks (CNNs) to extract features from images of faces and classify them by age and gender. The methodology section describes using a CNN model trained on labeled image data to learn the features that distinguish ages and genders. It then discusses preprocessing images, detecting faces, extracting features using CNN layers, and classifying images into age and gender categories. The output section notes that testing the trained model on new images in real-time can classify faces by gender and assign them to an age group with results displayed on screen.
A Hybrid Approach to Face Detection And Feature Extractioniosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document presents a hybrid approach for face detection and feature extraction. It combines the Viola-Jones face detection framework with a neural network classifier to first classify images as containing a face or not. If a face is detected, Viola-Jones algorithms like integral images and cascading classifiers are used to detect the face features. Edge-based feature maps and feature vectors are also extracted and used as inputs to the neural network classifier and for future facial feature extraction. The proposed approach aims to leverage the strengths of Viola-Jones and neural networks to accurately detect faces and then extract facial features from images.
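The integral-image trick mentioned above is concrete enough to sketch: once the summed-area table is built, any rectangle sum (and hence any Haar-like feature) costs four lookups. The two-rectangle feature at the end is an illustrative example, not a specific feature from the paper.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, so any
    rectangle sum needs only four lookups (the Viola-Jones trick)."""
    ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, top, left, height, width):
    # Sum of img[top:top+height, left:left+width] in O(1).
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)

# A Haar-like two-rectangle feature: left half minus right half.
feature = box_sum(ii, 0, 0, 4, 2) - box_sum(ii, 0, 2, 4, 2)
print(box_sum(ii, 1, 1, 2, 2), feature)
```

Because every feature evaluation is constant-time, the cascade can test thousands of candidate windows quickly, which is what makes the detector practical.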
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ...IRJET Journal
This document proposes a system for real-time human face detection, tracking, and estimation of age, weight, and gender from face images using a Raspberry Pi processor. The system uses OpenCV for face detection and extracts facial features to classify age, estimate weight, and determine gender through a probabilistic framework. The system allows for real-time detection of multiple faces with high efficiency even from low-quality images. Evaluation shows the low-cost Raspberry Pi provides fast execution speeds suitable for real-time applications.
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMSijma
Face detection is one of the most relevant applications of image processing and biometric systems. Artificial neural networks (ANNs) have been used in the fields of image processing and pattern recognition. There is a lack of literature surveys that give an overview of the studies and research related to the use of ANNs in face detection. Therefore, this research includes a general review of face detection studies and systems based on different ANN approaches and algorithms. The strengths and limitations of these studies and systems are also included.
Face detection is one of the most suitable applications for image processing and biometric programs. Artificial neural networks have been used in many fields, such as image processing, pattern recognition, sales forecasting, customer research, and data validation. Face detection and recognition have become some of the most popular biometric techniques over the past few years. There is a lack of research literature that provides an overview of studies related to artificial-neural-network face detection. Therefore, this study includes a review of facial recognition studies and systems based on various artificial neural network methods and algorithms.
Human Face Detection and Tracking for Age Rank, Weight and Gender Estimation ...IJERA Editor
This paper describes a technique for real-time human face detection and tracking for age rank, weight, and gender estimation. Face detection involves finding whether there are any faces in a given image and, if so, tracking each face and returning the face region with its features. The paper describes a simple and convenient hardware implementation of the face detection method using the Raspberry Pi processor, which is itself a credit-card-sized minicomputer. It presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age evaluation based on face images; the two main components for building an efficient age estimator are facial feature extraction and estimator learning. The extracted features are compared with an input database of face images of different age groups with specified weights, from which a weight category is assigned: underweight, normal weight, or overweight. The article also presents a gender estimation technique that effectively integrates head and mouth motion information with facial appearance within a unified probabilistic framework. Facial appearance as well as head and mouth motion possess potentially relevant discriminatory power, and integrating different sources of biometric data from video sequences is the key to developing more precise and reliable systems.
Age Estimation And Gender Prediction Using Convolutional Neural Network.pptxBulbul Agrawal
Identifying the attributes of humans such as age, gender, ethnicity, and emotion using computer vision has received increased attention in recent years. Such attributes can play an important role in many applications, including human-computer interaction, surveillance, searching, biometrics, sale of products, entertainment, and cosmetology. Generally, it is possible to classify human life into one of four age groups: children, young, adult, and old. The image of a person's face exhibits many variations, which may affect the ability of a computer vision system to recognize the gender. In this dissertation, we evaluate a CNN architecture combined with PCA to achieve good performance.
Adversarial Multi Scale Features Learning for Person Re Identificationijtsrd
Person re-identification (Re-ID) is the task of matching a target person across different cameras; it has drawn extensive attention in computer vision and has become an essential component of video surveillance systems. Re-ID can be considered a problem of image retrieval. Existing person re-identification methods depend mostly on single-scale appearance information. In this work, to address these issues, a deep model with Multi-scale Feature Representation Learning (MFRL) using Convolutional Neural Networks (CNNs) and a Random Batch Feature Mask (RBFM) is proposed for Re-ID. The RBFM is inspired by dropout-based approaches such as DropBlock and Batch DropBlock (BDB). Great challenges remain in the Re-ID task. First, across scenarios the appearance of the same pedestrian changes dramatically because of frequent body misalignment, various background clutters, large variations in camera view, and occlusion. Second, in a public space, different pedestrians may wear the same or similar clothes, so the distinctions between different pedestrian images are subtle. These make Re-ID a huge challenge. The proposed methods are applied only in the training phase and discarded in the testing phase, thus enhancing the effectiveness of the model. The model achieves the state of the art on the popular benchmark datasets Market-1501, DukeMTMC-reID, and CUHK03, and a set of ablation experiments verifies the effectiveness of the proposed methods. Mrs. D. Radhika | D. Harini | N. Kirujha | Dr. M. Duraipandiyan | M. Kavya "Adversarial Multi-Scale Features Learning for Person Re-Identification" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-4, June 2021, URL: https://www.ijtsrd.com/papers/ijtsrd42562.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/42562/adversarial-multiscale-features-learning-for-person-reidentification/mrs-d-radhika
Person identification based on facial biometrics in different lighting condit...IJECEIAES
Technological development is an inherent feature of our time, with reliance on electronic applications in all daily transactions (business management, banking, financial transfers, health, and other important aspects of life). Identifying and confirming identity is one of the complex challenges, and relying on biological properties gives reliable results. People can be identified in pictures, films, or real time using facial recognition technology. An individual's face is a unique biological characteristic that can authenticate them and prevent another person from assuming that individual's identity without their knowledge or consent. This article proposes a model for identification by individual facial characteristics based on a deep neural network (DNN). The proposed method extracts the spatial information available in an image, analyses this information to extract the salient features, and makes the identification decision based on these features. The model presents successful and promising results: the accuracy achieved by the proposed system reaches 99.5% (+/- 0.16%), with a loss value of 0.0308, on the Pins Face Recognition dataset with 105 subjects.
Facial recognition based on enhanced neural networkIAESIJAI
Accurate automatic face recognition (FR) has only become a practical goal of biometrics research in recent years. Detection and recognition are the primary steps for identifying faces in this research, and the Viola-Jones algorithm is implemented to discover faces in images. This paper presents a neural network solution called modified bidirectional associative memory (MBAM). The basic idea is to recognize the image of a human face: extract the face image, enter it into the MBAM, and identify it. The output ID for the face image from the network should match the ID for the image entered previously in the training phase. Tests were conducted on the suggested model using 100 images. Results show that FR accuracy is 100% for all images used; after adding noise, the accuracy varies between the images according to the noise ratio. Recognition results for the mobile-camera images were more satisfactory than those for the Face94 dataset.
This document discusses face recognition technology. It begins with an abstract stating that face recognition is the identification of humans by unique facial characteristics. It then discusses how face recognition works by identifying distinguishing facial features from images and comparing them to stored data. The document then provides an introduction to biometrics and how face recognition can be used for applications like criminal identification. It describes different face recognition algorithms and provides summaries of several research papers on face recognition techniques.
Face Recognition for Human Identification using BRISK Feature and Normal Dist...ijtsrd
Face recognition, a kind of automatic human identification from face images, has been widely researched in image processing and machine learning. A face image presents the facial information of a person, which is unique for each individual even when two people have similar faces. We propose a methodology for automatic human classification based on the Binary Robust Invariant Scalable Keypoints (BRISK) features of face images and a normal distribution model. In the proposed methodology, the normal distribution model is used to represent the statistical information of the face image as a global feature, and the system outputs the person's name for the input face image. The proposed feature is applied with Artificial Neural Networks to recognize faces for human identification; it is extracted from the face images of the Extended Yale Face Database B to perform human identification and highlight the properties of the proposed feature. Khin Mar Thi "Face Recognition for Human Identification using BRISK Feature and Normal Distribution Model" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3 | Issue-5, August 2019, URL: https://www.ijtsrd.com/papers/ijtsrd26589.pdf Paper URL: https://www.ijtsrd.com/computer-science/multimedia/26589/face-recognition-for-human-identification-using-brisk-feature-and-normal-distribution-model/khin-mar-thi
Comparative analysis of augmented datasets performances of age invariant face...journalBEEI
The popularity of face recognition systems has increased due to their non-invasive method of image acquisition, leading to widespread applications. Face ageing is one major factor that influences the performance of face recognition algorithms. In this study, the authors present a comparative study of the two most accepted and experimented-with face ageing datasets (FG-Net and Morph II), which were used to build age-invariant face recognition (AIFR) models. Four types of noise were added to the two face ageing datasets at the preprocessing stage. This addition of noise served as a data augmentation technique that increased the number of sample images available for deep convolutional neural network (DCNN) experimentation and improved the proposed AIFR model and the trait-ageing feature extraction process. The proposed AIFR models are developed with the pre-trained Inception-ResNet-v2 deep convolutional neural network architecture. On testing and comparing the models, the results revealed that FG-Net is more efficient than Morph, with an accuracy of 0.15%, loss function of 71%, mean square error (MSE) of 39%, and mean absolute error (MAE) of -0.63%.
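Noise injection as data augmentation can be sketched as follows. The abstract does not name its four noise types, so the three models shown here (Gaussian, salt-and-pepper, speckle) are common examples chosen for illustration, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def add_gaussian(img, sigma=10.0):
    # Additive Gaussian noise, clipped back to the valid pixel range.
    return np.clip(img + rng.normal(0, sigma, img.shape), 0, 255)

def add_salt_pepper(img, amount=0.05):
    # Randomly force a fraction of pixels to black (pepper) or white (salt).
    out = img.astype(float).copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0
    out[mask > 1 - amount / 2] = 255
    return out

def add_speckle(img, sigma=0.1):
    # Multiplicative noise: each pixel scaled by (1 + Gaussian).
    return np.clip(img * (1 + rng.normal(0, sigma, img.shape)), 0, 255)

def augment(dataset):
    """Each clean face yields three noisy copies, multiplying the
    number of samples available for DCNN training."""
    out = list(dataset)
    for img in dataset:
        out += [add_gaussian(img), add_salt_pepper(img), add_speckle(img)]
    return out

faces = [np.full((8, 8), 128.0) for _ in range(10)]   # toy grayscale faces
augmented = augment(faces)
print(len(faces), "->", len(augmented))
```

The identity labels are unchanged by the noise, so the augmented set trains the network to extract ageing features that survive image degradation.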
Similar to Face Verification Across Age Progression using Enhanced Convolution Neural Network (20)
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
Face Verification Across Age Progression using Enhanced Convolution Neural Network
Signal & Image Processing: An International Journal (SIPIJ) Vol.11, No.5, October 2020
DOI: 10.5121/sipij.2020.11504
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NETWORK
Areeg Mohammed Osman¹ and Serestina Viriri²
¹Faculty of Science and Technology, Sudan University of Science and Technology, Khartoum, Sudan
²School of Maths, Statistics and Computer Science, University of KwaZulu-Natal, Durban, South Africa
ABSTRACT
This paper proposes a deep learning method for the facial verification of aging subjects. Facial aging comprises the texture and shape variations that affect the human face as time progresses. Accordingly, there is a demand for robust methods that can verify facial images as subjects age. In this paper, a deep learning method based on the pre-trained GoogLeNet convolutional network, fused with Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) feature descriptors, is applied for feature extraction and classification. The experiments are based on facial images collected from the MORPH and FG-NET benchmark datasets. Euclidean distance is used to measure the similarity between pairs of feature vectors across an age gap. Experimental results show an improvement in validation accuracy: on the FG-NET database it reached 100%, while on the MORPH database it is 99.8%. The proposed method performs better and achieves higher accuracy than current state-of-the-art methods.
KEYWORDS
Facial Aging, Face verification, GoogLeNet
1. INTRODUCTION
From a computer vision perspective, facial aging is defined as a function of changing facial shape and texture over time [3]. From childhood onward, aging affects the human face in several respects. Creating robust face recognition systems is therefore a challenge, especially when facial variations such as different levels of illumination, poses, and facial expressions are present in images.
Facial aging is a sophisticated process that affects the shape and texture of the human face [5], and such changes degrade the performance of automatic face verification systems. The difficulty for these systems stems from the fact that facial aging involves both essential and unessential factors [30]. Facial aging also influences individual facial components (such as the mouth, eyes, and nose). Most problems in image recognition relate to identifying invariant facial features. This study examines the issue of designing a model and an appropriate schema capable of identifying an individual's features as they change due to aging, and of improving performance by increasing the accuracy of face verification under aging. Besides, this study aims to determine whether the proposed method performs better than other methods and whether it can identify individuals through images as they age. Changes in facial appearance due to aging typically depend on several factors, including race, geographical location, eating habits, and
stress level, which makes the problem of recognizing faces across the aging process extremely difficult. In addition, there is no simple statistical model for analyzing appearance changes due to aging. Faces affected by gradual variations due to aging may necessitate periodically updating facial image databases with more recent subject images for facial verification systems to succeed. Since updating large databases periodically would be a difficult task, a better alternative is to develop facial verification systems that can verify the identity of subjects from two face images of different ages. Understanding how age progression affects the similarity between two facial images of one subject is significant for such a task.
Facial recognition is categorized into two classes [5]: face identification and face verification. The former aims to recognize an individual from a gallery of facial images or videos by finding the most similar one to the probe sample, while the latter determines whether or not a given pair of facial images belongs to the same subject.
This paper considers how a subject can be recognized despite age changes over the years and other significant variations caused by lighting, expressions, poses, resolutions, and backgrounds. Face verification in aging subjects is a challenging process, as human aging is non-uniform. Besides, extracting textural and shape features from the images is another challenge. Several methods have been used to extract facial features from images; hand-crafted descriptors have been used for this purpose for a while [3]. Texture descriptors are suitable for extracting general changes in facial appearance, while shape descriptors are more suitable for changes in face shape.
Studies of Local Binary Pattern (LBP) features [6,7] in face verification have achieved significant improvements. On the other hand, the Histogram of Oriented Gradients (HOG), a shape descriptor used to detect objects such as cars and humans, was chosen for its strong results in facial recognition [3].
Hand-crafted descriptors are not fully capable of representing the appearance of a face [5], so we also take deep learning methods into account. The primary advantage of deep neural networks is that they learn discriminative features autonomously, without supervision [4]. We therefore combine hand-crafted feature descriptors with a convolutional neural network to take advantage of both.
The Convolutional Neural Network (CNN) is the method used in this paper; it has shown significant results in age estimation, facial recognition, and object recognition in general. The architecture of a CNN consists of multiple layers, each responsible for performing a specific operation on the output of the previous layers. A CNN is considered deep if it has a large number of layers, in which case a large database is needed to optimize its parameters during training.
This paper consists of five sections: section 1 is the introduction; section 2 reviews previous studies on deep learning and the methods involved in the proposed approach; section 3 describes the experiments carried out over the course of the study; section 4 summarizes the results; and section 5 provides the conclusion.
2. LITERATURE REVIEW
Over the last few years, attention to the use of deep learning in computer vision and image processing has increased. The literature describes several deep learning methods that have been used for face verification systems [9,11,19].
Generally, deep learning methods aim to learn hierarchical feature representations by building high-level features from low-level ones. There are three categories of deep learning methods: unsupervised, supervised, and semi-supervised. These methods have been successfully applied in many visual analysis applications, such as object recognition [19] and human action recognition [19]. Recently, deep convolutional neural networks (DCNNs) have made significant improvements in object classification [9], facial analysis [19], and image super-resolution [23].
A study by Simone [1] on a large age-gap face verification task implemented a DCNN with a feature injection layer that increases verification accuracy by learning a similarity measure over the external features. This method was evaluated on the LAG (Large Age Gap) dataset and found to outperform current state-of-the-art methods.
El Khiyari and Wechsler [5] evaluated the use of CNNs for feature extraction in the automatic facial verification of subjects across age, ethnicity, and gender categories. Across the demographic groups studied, biometric verification performance was relatively lower for black female subjects aged 18-30. Later, the VGG-Face convolutional neural network [4] was used to extract features from its activation layers; identification and verification performance was evaluated using both singleton and set similarity distances between subjects and their respective image sets. On the other hand, Yanhai et al. [6] proposed an unsupervised learning model called PCN (PCA-Based Convolutional Network) consisting of feature extraction stages and an output stage. The feature extraction stage contains a convolutional layer that learns its filters using Principal Component Analysis (PCA), and the output feature maps reduce image resolution. The nonlinear output stage includes binary hashing and histogram statistics, and the output of all stages is fed into a Support Vector Machine (SVM) classifier.
Digit recognition experiments were carried out on the MNIST database, face recognition experiments on the Extended Yale Face Database B, and texture classification experiments on the CUReT database and the Outex dataset. The results show that PCN performs competitively with, and sometimes even better than, current state-of-the-art deep learning models.
Zhai et al. [25] proposed combining local binary pattern (LBP) histograms with 9-layer deep convolutional neural networks. Their study confirmed that this fusion approach performs better than current state-of-the-art methods. In addition, the approach captures hairstyle and facial expression features using models trained on the CACD and LFW datasets.
Hu et al. [7] proposed a discriminative deep metric learning (DDML) method for face verification in the wild, building a DDML neural network that performs nonlinear transformations of features such that the distance between a pair of images belonging to the same person is less than a learned small threshold value, and vice versa. Their experiments on the LFW and YouTube Faces (YTF) datasets outperformed methods from the literature.
Finally, Moschoglou et al. [15] used the VGG-Face deep network and other state-of-the-art
algorithms on a new manually compiled dataset called the AgeDB (Age-Database). This dataset
is suitable for use with experiments on age-invariant recognition, face verification, age
estimation, and facial age progression in the wild.
3. METHODOLOGY
This paper proposes a methodology for performing facial verification in the context of age
progression using images of human faces. Figure 1 shows a flow diagram of the proposed
methodology, which is divided into multiple steps. In the first step, facial images are pre-processed through scaling, data augmentation, cropping, and normalization before training and testing. The second step, feature extraction, is responsible for generating the CNN model from the augmented dataset for training and testing and for calculating the validation accuracy. The third step is to save the generated classifier to be used in the following step. Finally, in the prediction step, a pair of images is pre-processed and used as test input, and the trained model and classifier then determine whether or not they belong to the same subject.
Figure 1. Proposed Methodology Framework
3.1. Facial Dataset
In this study, the FG-NET [17] and MORPH [20] databases were used to train and test the proposed model. FG-NET is a standard benchmark and a freely available facial recognition dataset consisting of 1002 images of 82 subjects aged between 0 and 69. The MORPH dataset comprises 2798 images of 672 subjects of varying ages. The images were split into classes with a maximum age difference of five years. Furthermore, the image datasets were divided into two groups: 80% of each set was randomly selected to train the CNN, and the remaining 20% was used to test it.
3.2. Image Pre-processing
The MORPH and FG-NET databases were pre-processed to improve the performance of the CNN model. Facial images were reflected and scaled to the standard input size of 224 × 224 and fed to the convolutional neural network as RGB color values, to match the image input layer, which requires input images of size 224 × 224 × 3, where 3 is the number of color channels.
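As a small illustration of this scaling step (a sketch only; the paper does not state the interpolation method, so nearest-neighbour resampling is assumed here):

```python
import numpy as np

def resize_nearest(img, size=(224, 224)):
    """Nearest-neighbour resize of an H x W (x C) image to the CNN input size."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]   # source row for each target row
    cols = np.arange(size[1]) * w // size[1]   # source column for each target column
    return img[rows][:, cols]

face = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in RGB image
batch_ready = resize_nearest(face)
print(batch_ready.shape)  # (224, 224, 3)
```

The horizontal reflection mentioned above is simply `face[:, ::-1]` on the same array.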
3.3. Feature Extraction and Classification
Convolutional neural networks [14,12,18] are artificial neural networks that include both fully connected layers and locally connected layers known as convolutional layers. Other types of layers, such as pooling, activation (rectified linear units), and normalization layers, are often found in deep
convolutional networks. CNNs have recently been more successful in object classification [24] and automatic recognition than handcrafted feature extraction.
3.3.1. Deep Convolutional Neural Networks and Transfer Learning
Training DCNNs from scratch is difficult, as it can require extensive computational resources and
large amounts of training data. If these resources are not available, one can use a pre-trained
network as a feature extractor and a classifier. The general architecture consists of a
convolutional layer and a pooling layer followed by a fully connected layer [12].
In this work, the chosen architecture is the GoogLeNet model proposed in [22], a convolutional neural network trained on images from the ImageNet dataset [10]. The network is 22 layers deep and can classify images into 1000 object categories, including keyboards, mice, pencils, and many animals.

The GoogLeNet model has almost 12 times fewer parameters than the AlexNet model, reducing the parameter count to about 4 million [22], because it is based on small convolutions. As a result, it has learned rich feature representations for a wide range of images and has become much more accurate. The network is built around inception modules, and the conventional fully connected layers have been removed; within the inception modules, a pooling layer minimizes the number of parameters involved. A shallow auxiliary network with an auxiliary classifier is added to facilitate better outputs. GoogLeNet has more layers than AlexNet due to its 9 inception modules, which include convolutional, pooling, and softmax layers [22] in addition to concatenation operations.
The idea behind the inception layer is to cover a larger area of the image while maintaining precise resolution for small image details. The goal is to convolve different filter sizes in parallel, from the most detailed (1x1) to a larger one (5x5). The most straightforward way to improve deep learning performance is to use more layers in the network and more data for training.
3.3.2. Histograms of Oriented Gradient (HOG)
HOG is a shape descriptor used to detect objects such as cars and humans. It was first introduced by Dalal and Triggs for human detection [3]. The basic idea of HOG is that the shape and appearance of objects inside an image can be described by the distribution of intensity gradients or edge directions. The image is divided into cells, and for each cell a histogram is created to describe the distribution of gradient directions. The histograms are normalized and concatenated into a vector, which is as long as the number of features, and are calculated as follows:
1. The gradient is computed as follows [3]:

   g_x(x, y) = I(x+1, y) − I(x−1, y)   (1)
   g_y(x, y) = I(x, y+1) − I(x, y−1)   (2)

2. Then the magnitude m and orientation θ are calculated as:

   m(x, y) = √(g_x(x, y)² + g_y(x, y)²)   (3)
   θ(x, y) = arctan(g_y(x, y) / g_x(x, y))   (4)
3. The orientation and magnitude images are divided into cells; the numbers of cells per row and per column are parameters chosen when implementing HOG.

4. An orientation histogram is computed for each block and then normalized by the formula below:

   Hist_norm = Hist / (Hist + ε)   (5)

5. Finally, the normalized histograms are concatenated into a vector.
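The five steps above can be sketched as follows (a minimal illustration; the 8-pixel cell size and 9 orientation bins are assumed values, and the per-bin normalisation follows Eq. (5) as written rather than the block normalisation of the original HOG formulation):

```python
import numpy as np

def hog_features(img, cell=8, bins=9, eps=1e-6):
    """Minimal HOG following Eqs. (1)-(5): gradients, orientation histograms
    per cell, normalisation, and concatenation into one feature vector."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[1:-1, :] = img[2:, :] - img[:-2, :]      # Eq. (1): central difference, axis 0
    gy[:, 1:-1] = img[:, 2:] - img[:, :-2]      # Eq. (2): central difference, axis 1
    mag = np.hypot(gx, gy)                      # Eq. (3): gradient magnitude
    ang = np.arctan2(gy, gx) % np.pi            # Eq. (4): orientation, folded to [0, pi)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i+cell, j:j+cell].ravel()
            m = mag[i:i+cell, j:j+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist / (hist + eps))   # Eq. (5): per-bin normalisation
    return np.concatenate(feats)

vec = hog_features(np.random.rand(64, 64))
print(vec.shape)  # (576,): 8 x 8 cells x 9 bins
```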
3.3.3. Local Binary Pattern (LBP)
The Local Binary Pattern (LBP) is a texture descriptor [19]. It operates by dividing an image into multiple cells; the pixel in the center of each cell is compared to its eight neighbors, starting from the top-left and proceeding clockwise. Each neighbor is assigned 1 if its value is greater than or equal to the center pixel and 0 otherwise. The resulting binary number is then converted to its decimal value, yielding the LBP code that replaces the center pixel. To collect information over larger regions, larger cell sizes are selected. The LBP code for P neighbors situated on a circle of radius R is computed as follows [2]:

   LBP_{P,R}(x, y) = Σ_{p=0}^{P−1} s(g_p − g_c) · 2^p   (6)

where s(l) = 1 if l ≥ 0 and 0 otherwise.
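The cell-level operation described above can be sketched as a minimal 3x3, 8-neighbour illustration of Eq. (6):

```python
import numpy as np

def lbp_code(patch):
    """LBP code of a 3x3 patch, Eq. (6): each of the 8 neighbours contributes
    bit 1 when its value is >= the centre pixel, read clockwise from top-left."""
    gc = patch[1, 1]
    # neighbour coordinates, clockwise from the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, c] >= gc else 0 for r, c in coords]
    return sum(b << p for p, b in enumerate(bits))

def lbp_image(img):
    """Replace every interior pixel with its LBP code."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = lbp_code(img[i:i+3, j:j+3])
    return out

codes = lbp_image(np.random.randint(0, 256, (8, 8)))
print(codes.min() >= 0 and codes.max() <= 255)  # True: 8-bit codes
```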
3.3.4. Support Vector Machine (SVM)
Instead of using GoogLeNet's classification layer as the classifier, we also tried another classifier, the Support Vector Machine (SVM) [31], to make a performance comparison. Specifically, the study used a linear multi-class SVM over the subjects/classes. The multi-class SVM technique uses a one-versus-all classification approach, representing the output of the k-th SVM as in (7):

   a_k(x) = w_k^T x   (7)

The predicted class is:

   arg max_k a_k(x)   (8)
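Equations (7) and (8) amount to scoring each class with its own linear model and picking the maximum, as this sketch with hypothetical pre-trained weights shows:

```python
import numpy as np

# Hypothetical weights for k=3 one-versus-all linear SVMs (one row per class),
# standing in for trained parameters; Eq. (7) scores each class, Eq. (8) picks the max.
W = np.array([[ 1.0, -0.5],
              [-0.2,  0.8],
              [ 0.1,  0.1]])

def predict(x):
    scores = W @ x                 # a_k(x) = w_k^T x for every class k, Eq. (7)
    return int(np.argmax(scores))  # arg max_k a_k(x), Eq. (8)

print(predict(np.array([2.0, 0.0])))  # 0: scores are [2.0, -0.4, 0.2]
```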
3.3.5. k-nearest neighbour (KNN)
The k-nearest neighbour (KNN) algorithm [32] determines the k nearest neighbours to a given case using the Euclidean distance (or other measures). KNN returns the most common value among the k training examples nearest to the query. Given a query instance x_q to be classified, let x_1, ..., x_k denote the k instances from the training examples that are nearest to x_q; then return [32]:

   f(x_q) ← arg max_v Σ_{i=1}^{k} δ(v, f(x_i))   (9)
where δ(a, b) = 1 if a = b and δ(a, b) = 0 otherwise.
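Equation (9) can be sketched as follows (a minimal illustration with toy data):

```python
import numpy as np

def knn_classify(x_q, X_train, y_train, k=3):
    """Eq. (9): return the most common label among the k training
    examples nearest to the query under Euclidean distance."""
    dists = np.linalg.norm(X_train - x_q, axis=1)      # distance to every example
    nearest = y_train[np.argsort(dists)[:k]]           # labels of the k nearest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                   # majority label

X = np.array([[0.0, 0.0], [0.1, 0.1], [0.2, 0.0], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 0, 1, 1])
print(knn_classify(np.array([0.05, 0.05]), X, y))  # 0: all three neighbours are class 0
```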
3.3.6. Majority Voting
Majority voting [33] operates on the class labels retrieved from the arrays of labels predicted by each classifier. It selects the class that occurs most often across all feature extractors and returns that class label as the final prediction:

   C(x) = mode{h_1(x), h_2(x), h_3(x)}   (10)

where x is the input and h_i(x) is the i-th classifier's prediction.
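Equation (10) can be sketched as:

```python
from collections import Counter

def majority_vote(h1, h2, h3):
    """Eq. (10): the final label is the mode of the three classifiers' predictions."""
    votes = Counter([h1, h2, h3])
    return votes.most_common(1)[0][0]

# hypothetical subject labels from three classifiers
print(majority_vote("subj_12", "subj_12", "subj_40"))  # subj_12
```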
3.3.7. Euclidean distance and Threshold
Performance is evaluated using the Euclidean distance [4], which measures the similarity between pairs of feature vectors. Given two image feature vectors a and b, the similarity distance is the Euclidean distance:

   d(a, b) = ||a − b||   (11)

For two image feature sets A = {a_1, a_2, ..., a_n} and B = {b_1, b_2, ..., b_n}, we define the minimum similarity distance between the two sets as:

   h_min(A, B) = min_{a∈A, b∈B} d(a, b)   (12)

The Euclidean distance takes the feature vectors returned by the CNN, calculates the distance between them, and compares it with a threshold. If the result is less than the threshold, the faces are considered to belong to the same person; otherwise, the pair is considered extra-personal.
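The decision rule described above can be sketched as follows (the threshold value 0.5 is hypothetical; the paper does not report the threshold it used):

```python
import numpy as np

def same_person(feats_a, feats_b, threshold=0.5):
    """Eqs. (11)-(12): minimum pairwise Euclidean distance between two
    feature sets, compared with a threshold to decide intra- vs extra-personal."""
    h_min = min(np.linalg.norm(a - b) for a in feats_a for b in feats_b)
    return h_min < threshold

A = [np.array([0.10, 0.20]), np.array([0.12, 0.19])]  # features of first subject's images
B = [np.array([0.11, 0.21]), np.array([0.90, 0.90])]  # features of second subject's images
print(same_person(A, B))  # True: the closest pair lies well under the threshold
```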
3.4. Classifier Performance
All measures of performance are based on four numbers obtained by applying the classifier to the test set: false positives (FP), true positives (TP), true negatives (TN), and false negatives (FN). System validation accuracy is calculated as follows [13]:

   Accuracy = (TP + TN) / (TP + TN + FP + FN)   (13)

The system validates the network at intervals during the training process: the validation images are classified using the fine-tuned CNN, and their classification accuracy is calculated.
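Equation (13) in code, with illustrative counts (not the paper's actual confusion-matrix values):

```python
def accuracy(tp, tn, fp, fn):
    """Eq. (13): fraction of correctly classified validation samples."""
    return (tp + tn) / (tp + tn + fp + fn)

print(accuracy(tp=499, tn=499, fp=1, fn=1))  # 0.998, i.e. 99.8%
```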
4. EXPERIMENTS AND ANALYSIS
Images were collected from the MORPH dataset [20] and partitioned into multiple classes, each composed entirely of one subject's images at various ages. Image pre-processing, feature extraction, and classification are performed as described in the previous sections. A common practice in transfer learning is to remove the top layers of a DNN and replace them with new layers adapted to the new dataset. In GoogLeNet, the top network layer is responsible for processing the output of the many underlying convolutional layers; we add new layers to make predictions for the new task, then retrain the added layers from scratch, initialized with random weights.
The first ten layers are frozen during the training process, and learning in the unfrozen middle layers is slowed by setting the initial learning rate to a small value, while the new layers use a learning rate factor of 10. The result is rapid learning in the new layers, slowed learning in the middle layers, and no learning in the frozen layers.
The convolutional layers of the network extract image features up to the network's last learnable layer, and the final classification layer classifies the input images. These two layers in GoogLeNet contain the information on how to combine the features that the network extracts into class probabilities, a loss value, and predicted labels.

To retrain a pre-trained network to classify new images, we replace these two layers with new layers adapted to the new dataset. The last learnable layer in the network was replaced by a new layer with an output size of 672 and a learning rate weight factor of 10, and the classification layer was replaced by a new layer with input and output size equal to 672, which is the number of classes (subjects) in the dataset. To verify the facial images, we calculate the minimum distance between each pair of images using equation (12), where each image in the test dataset is compared with an image in the gallery set to see whether they belong to the same person.
The image pairs with the lowest distance value are considered identical (belonging to the same person), and the other pairs are not. At this point, the network must be enhanced to improve its performance and achieve high accuracy. Accordingly, we changed the training parameters and addressed the overfitting problem as described in the following subsections.
4.1. Training Parameters
Training parameters are kept consistent unless otherwise specified in the transfer learning setup. There is no need to train the model for many epochs when using transfer learning; the number of epochs was set to 40 and the mini-batch size to 100. The ReLU activation function was used in all weight layers, and the initial learning rate was set to 0.001. A fully connected layer was added with 672 outputs. The learning rate factor for this connected layer was increased to 20 so that the network can learn faster, the validation frequency was set to 3, and the learning rate drop factor was set to 0.3.
4.2. Number of Epochs
An epoch is one pass through all the data in the training set [1]; it is one of the training parameters to be considered when training the network. The number of epochs might be set high or low; the optimal value depends on the database used, the regularization techniques, and the network depth. If the number of epochs is too low, the network will be under-trained, but if it is too high, the model becomes overfitted.

Figure 2 shows the validation accuracy of the model over increasing numbers of epochs. In this experiment, the optimal number of epochs is 30, where the achieved validation accuracy was 99.8%.
Figure 2. Suitable Number of Epochs to Maximize Accuracy and Minimize Validation Loss
One of the most challenging problems in machine learning is overfitting, which occurs when a model learns the details and noise in the training data; it shows up when the validation accuracy is lower than the training accuracy, and it degrades model performance. To overcome the overfitting problem, we use dropout and data augmentation [16].
4.3. Data Augmentation
Data augmentation helps prevent the network from overfitting by memorizing the exact details of the training images [24]. First, we reflected each image horizontally; then horizontal and vertical translations in the range [-30, 30] pixels were applied to the input images. Finally, images were scaled in the horizontal and vertical directions, allowing the classifier to be trained on additional views of the objects.
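The three augmentations can be sketched as follows (a numpy-only illustration; the 0.9-1.1 scale range is an assumed value, as the paper does not report its scaling range):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    """The three augmentations described above: horizontal reflection,
    random translation in [-30, 30] pixels, and a random rescale."""
    out = img[:, ::-1]                               # horizontal reflection
    tx, ty = rng.integers(-30, 31, size=2)           # translation offsets in pixels
    out = np.roll(out, shift=(ty, tx), axis=(0, 1))  # shift rows and columns
    scale = rng.uniform(0.9, 1.1)                    # assumed scale range
    h, w = out.shape[:2]
    rows = np.clip((np.arange(h) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(w) / scale).astype(int), 0, w - 1)
    return out[rows][:, cols]                        # rescaled view, same canvas size

view = augment(np.random.rand(224, 224, 3))
print(view.shape)  # (224, 224, 3)
```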
4.4. Dropout
One way to address overfitting is to add dropout to the weight layers. At each iteration, neurons are randomly selected for removal from the network. The fraction of neurons omitted from a layer is called the dropout rate [21], which is set manually. When the dropout rate is high, we get stronger regularization against overfitting but a slower learning process. Therefore, the dropout rate must be balanced so that it suits both goals.
In Table 1, we tested the model using different dropout values to decide which value best prevents overfitting.

Table 1. The Optimal Dropout Rate Value.

Dropout   0.2     0.3    0.4     0.5     0.6
Loss      15.48   2.08   13.84   10.86   13.1
The lowest validation loss was obtained with a dropout rate of 0.3, which was therefore taken as the optimal value. We evaluated the performance of our method against previous works by comparing verification
accuracy with several other results. As observed in Table 2, our method outperforms state-of-the-art methods.
Table 2. Comparison of Proposed Model and Current State-of-the-Art Methods.

Approach                     Dataset          Method                   Accuracy
B. Simone [1]                LAG Dataset      Siamese DCNN Injection   85.75%
El Khiyari, H., et al. [4]   FG-NET Dataset   VGG-Face                 0.16 (EER)
Moschoglou, S., et al. [15]  AgeDB Dataset    VGG-Face                 93.4%
Zhai et al. [25]             LFW Dataset      DCNN + LBPH              91.40%
Proposed Method              MORPH            GoogLeNet, LBP, HOG      99.8%
Proposed Method              FG-NET           GoogLeNet, LBP, HOG      100%
The performance of the model is evaluated using receiver operating characteristic (ROC) curves. False accept errors were reported using the false accept rate (FAR), the percentage of negative pairs labeled as positive. False reject errors were calculated using the false reject rate (FRR), the percentage of positive pairs classified as negative. The ROC curves represent the tradeoff between FAR and FRR at different threshold values [7].
Figure 3. Training Progress for the Model
Figure 4. ROC Curve for Classification
An example of a successfully classified subject's images, labelled as a match with a 5-year age difference, is shown in Figure 5, and an example of a misclassified subject is shown in Figure 6.

Figure 5. Example of a Successfully Classified Subject's Images

Figure 6. Example of a Misclassified Subject's Images
As shown in Table 3, using GoogLeNet, HOG, and LBP for feature extraction with majority voting for classification gives better results than GoogLeNet alone; on FG-NET the best result is 100%, which is better than on MORPH. When comparing KNN with SVM as the classifier, we find that SVM performs better than KNN, as it produces higher accuracy.
Table 3. Results using MORPH Dataset

Method                                                Training Accuracy   Testing Accuracy
GoogLeNet for Extraction and Classification           100%                98.96%
GoogLeNet for Extraction and SVM for Classification   100%                93.30%
GoogLeNet and LBP                                     100%                95.56%
Majority Voting: GoogLeNet, SVM, LBP                  100%                96.42%
Majority Voting: GoogLeNet, KNN, LBP                  100%                90.28%
Majority Voting: GoogLeNet, HOG, LBP                  100%                99.8%
Table 4. Results using FG-NET Dataset

Method                                                Training Accuracy   Testing Accuracy
GoogLeNet for Extraction and Classification           100%                94%
GoogLeNet for Extraction and SVM for Classification   100%                95.4%
GoogLeNet and LBP                                     100%                51%
Majority Voting: GoogLeNet, SVM, LBP                  100%                99.2%
Majority Voting: GoogLeNet, KNN, LBP                  100%                94.67%
Majority Voting: GoogLeNet, HOG, LBP                  100%                100%
5. CONCLUSIONS
This paper addresses the challenge of facial verification on aging subjects using a transfer learning method based on deep learning. A CNN model (GoogLeNet) pre-trained and fine-tuned on a face dataset (MORPH) was used to extract features from facial images. Better results were observed when the optimal dropout rate and number of epochs were used. Euclidean distance was used to determine whether or not pairs of images belonged to the same person. The model's performance was evaluated using training and validation sets, and we were able to show that GoogLeNet yields better performance than other state-of-the-art methods.
CONFLICTS OF INTEREST
The authors declare no conflict of interest.
REFERENCES
[1] Bengio, Y. (2009). Learning deep architectures for AI. Now Publishers Inc.
[2] Bouadjenek, N., Nemmour, H., & Chibani, Y. (2016, November). Writer’s gender classification
using HOG and LBP features. In International Conference on Electrical Engineering and Control
Applications (pp. 317-325). Springer, Cham.
[3] Dalal, N., & Triggs, B. (2005, June). Histograms of oriented gradients for human detection. In 2005
IEEE computer society conference on computer vision and pattern recognition (CVPR'05) (Vol. 1,
pp. 886-893). IEEE.
[4] El Khiyari, H., & Wechsler, H. (2017). Age invariant face recognition using convolutional neural
networks and set distances. Journal of Information Security, 8(03), 174.
[5] El Khiyari, H., & Wechsler, H. (2016). Face verification subject to varying (age, ethnicity, and
gender) demographics using deep learning. Journal of Biometrics and Biostatistics, 7(323), 11.
[6] Gan, Y., Liu, J., Dong, J., & Zhong, G. (2015). A PCA-based convolutional network. arXiv preprint
arXiv:1505.03703.
[7] Hu, J., Lu, J., & Tan, Y. P. (2014). Discriminative deep metric learning for face verification in the
wild. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1875-
1882).
[8] Huang, D., Shan, C., Ardabilian, M., Wang, Y., & Chen, L. (2011). Local binary patterns and its
application to facial image analysis: a survey. IEEE Transactions on Systems, Man, and Cybernetics,
Part C (Applications and Reviews), 41(6), 765-781.
[9] Huang, G. B., Lee, H., & Learned-Miller, E. (2012, June). Learning hierarchical representations for
face verification with convolutional deep belief networks. In 2012 IEEE Conference on Computer
Vision and Pattern Recognition (pp. 2518-2525). IEEE.
[10] ImageNet. http://www.image-net.org. Accessed: 29 March 2019.
Signal & Image Processing: An International Journal (SIPIJ), Vol. 11, No. 5, October 2020
[11] Ji, S., Xu, W., Yang, M., & Yu, K. (2012). 3D convolutional neural networks for human action
recognition. IEEE transactions on pattern analysis and machine intelligence, 35(1), 221-231.
[12] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep
convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-
1105).
[13] Le, Q. V., Zou, W. Y., Yeung, S. Y., & Ng, A. Y. (2011, June). Learning hierarchical invariant
spatio-temporal features for action recognition with independent subspace analysis. In CVPR 2011
(pp. 3361-3368). IEEE.
[14] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
[15] Moschoglou, S., Papaioannou, A., Sagonas, C., Deng, J., Kotsia, I., & Zafeiriou, S. (2017). AgeDB:
the first manually collected, in-the-wild age database. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition Workshops (pp. 51-59).
[16] Nowlan, S. J., & Hinton, G. E. (1992). Simplifying neural networks by soft weight-sharing. Neural
computation, 4(4), 473-493.
[17] Panis, G., Lanitis, A., Tsapatsoulis, N., & Cootes, T. F. (2016). Overview of research on facial ageing
using the FG-NET ageing database. IET Biometrics, 5(2), 37-46.
[18] Parkhi, O. M., Vedaldi, A., & Zisserman, A. (2015). Deep face recognition.
[19] Ranzato, M. A., Huang, F. J., Boureau, Y. L., & LeCun, Y. (2007, June). Unsupervised learning of
invariant feature hierarchies with applications to object recognition. In 2007 IEEE conference on
computer vision and pattern recognition (pp. 1-8). IEEE.
[20] Ricanek, K., & Tesafaye, T. (2006, April). Morph: A longitudinal image database of normal adult
age-progression. In 7th International Conference on Automatic Face and Gesture Recognition
(FGR06) (pp. 341-345). IEEE.
[21] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: a
simple way to prevent neural networks from overfitting. The journal of machine learning research,
15(1), 1929-1958.
[22] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015).
Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and
pattern recognition (pp. 1-9).
[23] Taylor, G. W., Fergus, R., LeCun, Y., & Bregler, C. (2010, September). Convolutional learning of
spatio-temporal features. In European conference on computer vision (pp. 140-153). Springer, Berlin,
Heidelberg.
[24] Van Dyk, D. A., & Meng, X. L. (2001). The art of data augmentation. Journal of Computational and
Graphical Statistics, 10(1), 1-50.
[25] Zhai, H., Liu, C., Dong, H., Ji, Y., Guo, Y., & Gong, S. (2015, June). Face verification across aging
based on deep convolutional networks and local binary patterns. In International conference on
intelligent science and big data engineering (pp. 341-350). Springer, Cham.
[26] Nimbarte, M., & Bhoyar, K. K. (2020). Biased face patching approach for age invariant face
recognition using convolutional neural network. International Journal of Intelligent Systems
Technologies and Applications, 19(2), 103-124.
[27] Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning.
Journal of Big Data, 6(60), 1-48.
[28] Kamarajugadda, K. K., & Polipalli, T. R. (2019). Age-invariant face recognition using multiple
descriptors along with modified dimensionality reduction approach. Multimedia Tools and
Applications.
[29] Rafique, I., Hamid, A., Naseer, S., Asad, M., Awais, M., & Yasir, T. (2019, November). Age and
Gender Prediction using Deep Convolutional Neural Networks. In 2019 International Conference on
Innovative Computing (ICIC) (pp. 1-6). IEEE.
[30] Kasim, N. A. B. M., Rahman, N. H. B. A., Ibrahim, Z., & Mangshor, N. N. A. (2018). Celebrity Face
Recognition using Deep Learning. Indonesian Journal of Electrical Engineering and Computer
Science, 12(2), 476-481.
[31] Vapnik, V. (2013). The nature of statistical learning theory. Springer science & business media.
[32] Elkan, C. (2012). Evaluating classifiers. University of California, San Diego.
[33] Ross, A. A., Nandakumar, K., & Jain, A. (2006). Handbook of multibiometrics (Vol. 6). Springer
Science & Business Media, pp. 73-82.