Abstract Face recognition is a form of computer vision that identifies a person, or verifies a person's claimed identity, from an image of the face. In this paper, a neural-network-based algorithm is presented for detecting frontal views of faces. The dimensionality of the input face image is reduced by principal component analysis (PCA), and classification is performed by a backpropagation neural network. The method is robust on a dataset of 300 face images and achieves a recognition rate of 80-90%.
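As a rough sketch of the pipeline described above (PCA for dimensionality reduction, a backpropagation network for classification), here is a minimal NumPy version on toy data; the shapes, layer sizes, and learning rate are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face dataset: 30 flattened "images" of 64 pixels, 3 subjects.
X = rng.normal(size=(30, 64))
y = np.repeat(np.arange(3), 10)

# PCA: project the centred data onto the top-k principal components.
mean = X.mean(axis=0)
Xc = X - mean
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
Z = Xc @ Vt[:k].T                      # reduced (30, 10) feature vectors

# One-hidden-layer network trained with backpropagation on squared error.
W1 = rng.normal(scale=0.1, size=(k, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3)); b2 = np.zeros(3)
T = np.eye(3)[y]                       # one-hot targets

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(Z):
    H = sigmoid(Z @ W1 + b1)
    return H, sigmoid(H @ W2 + b2)

_, O = forward(Z)
loss_before = ((O - T) ** 2).mean()

lr = 0.05
for _ in range(1000):
    H, O = forward(Z)
    dO = (O - T) * O * (1 - O)         # output-layer delta
    dH = (dO @ W2.T) * H * (1 - H)     # hidden-layer delta
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(axis=0)
    W1 -= lr * Z.T @ dH; b1 -= lr * dH.sum(axis=0)

_, O = forward(Z)
loss_after = ((O - T) ** 2).mean()
```

A real system would project held-out probe images with the same PCA basis before classifying them; here everything is synthetic, so only the mechanics are shown.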
This document summarizes a seminar presentation on face recognition using neural networks. It discusses face recognition, neural networks, and the steps involved, which include pre-processing, principal component analysis, and backpropagation neural networks. Advantages of neural networks for face recognition are robustness to variations in faces and the ability to learn from data. Face recognition has applications in security and identification.
AlexNet achieved unprecedented results on the ImageNet dataset by using a deep convolutional neural network with over 60 million parameters. It achieved top-1 and top-5 error rates of 37.5% and 17.0%, significantly outperforming previous methods. The network architecture included 5 convolutional layers, some with max pooling, and 3 fully-connected layers. Key aspects were the use of ReLU activations for faster training, dropout to reduce overfitting, and parallelizing computations across two GPUs. This dramatic improvement demonstrated the potential of deep learning for computer vision tasks.
Detection and recognition of face using neural network (Smriti Tikoo)
This document describes research on face detection and recognition using neural networks. It discusses using the Viola-Jones algorithm for face detection and a backpropagation neural network for face recognition. The Viola-Jones algorithm uses Haar features, integral images, AdaBoost training, and cascaded classifiers for real-time face detection. A backpropagation network with sigmoid activation functions is trained on facial images for recognition. Results show the network can accurately recognize faces after training. The document concludes the approach allows face recognition from an input image and discusses limitations and potential improvements.
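The integral image mentioned above is what makes Haar features cheap: after one pass over the image, the sum of any rectangle costs four table lookups, and a two-rectangle Haar feature is just the difference of two such box sums. A small NumPy sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x] (exclusive), padded
    with a zero top row/left column so box sums need no edge special-casing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from four table lookups."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
total = box_sum(ii, 0, 0, 4, 4)    # whole image: 0 + 1 + ... + 15 = 120
patch = box_sum(ii, 1, 1, 3, 3)    # img[1:3, 1:3]: 5 + 6 + 9 + 10 = 30
```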
This document presents information on face detection techniques. It discusses image segmentation as a preprocessing step for face detection. Some common segmentation methods are thresholding, edge-based segmentation, and region-based segmentation. Face detection can be classified as implicit/pattern-based or explicit/knowledge-based. Implicit methods use techniques like templates, PCA, LDA, and neural networks, while explicit methods exploit cues like color, motion, and facial features. One method discussed is human skin color-based face detection, which filters for skin-colored regions and finds facial parts within those regions. Advantages include speed and independence from training data, while disadvantages include sensitivity to lighting and accessories.
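The skin-colour filtering step can be illustrated with one widely cited explicit RGB rule (thresholds in the style of Peer et al.; real systems tune them per lighting setup, which is exactly the sensitivity noted above):

```python
import numpy as np

def skin_mask(rgb):
    """Boolean mask of skin-coloured pixels under one classic RGB rule for
    uniform daylight illumination; thresholds are illustrative, not universal."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20)   # bright enough in each channel
            & (spread > 15)                  # not a grey/neutral pixel
            & (np.abs(r - g) > 15)
            & (r > g) & (r > b))             # red-dominant, as skin tends to be

pixels = np.array([[[220, 170, 140],         # typical skin tone
                    [40, 90, 200]]],         # blue background
                  dtype=np.uint8)
mask = skin_mask(pixels)
```

Facial parts would then be searched for only inside the connected skin-coloured regions of the mask.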
This document provides an agenda for a presentation on deep learning, neural networks, convolutional neural networks, and interesting applications. The presentation will include introductions to deep learning and how it differs from traditional machine learning by learning feature representations from data. It will cover the history of neural networks and breakthroughs that enabled training of deeper models. Convolutional neural network architectures will be overviewed, including convolutional, pooling, and dense layers. Applications like recommendation systems, natural language processing, and computer vision will also be discussed. There will be a question and answer section.
Face Recognition Methods based on Convolutional Neural Networks (Elaheh Rashedi)
Convolutional neural networks (CNNs) have improved the state of the art in many applications, especially in face recognition. In this work, we present a review of the latest face verification techniques based on convolutional neural networks. In addition, we compare these techniques with regard to their architecture, depth, number of parameters, and the accuracy obtained in identification and/or verification. Furthermore, as the availability of large-scale training datasets has significantly affected the performance of CNN-based recognition methods, we present a preface to the most common large-scale face datasets and then describe some successful automatic data-collection procedures.
This document presents research on using convolutional neural networks (CNNs) to detect skin lesions from dermoscopic images. The researchers:
1. Developed a CNN (U-Net) to segment skin lesions from images, achieving a Dice coefficient of 0.8689.
2. Used a fine-tuned VGG-16 network to classify images as benign or malignant. They found that using their automatic segmentations as input improved sensitivity over using unaltered images.
3. Concluded that their deep learning approach can help dermatologists diagnose skin cancer, and that automatic segmentation improves classification sensitivity compared to using whole images, even without perfect segmentation. This verifies their hypothesis that segmentation enhances classification.
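The Dice coefficient reported above measures overlap between a predicted and a reference mask; a minimal NumPy version:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2*|A & B| / (|A| + |B|).
    The small eps keeps the ratio defined when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
# 2 overlapping pixels, |A| = 3, |B| = 3  ->  Dice = 4/6
score = dice(a, b)
```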
Face recognition using artificial neural network (Sumeet Kakani)
This document provides an overview of a face recognition system that uses artificial neural networks. It describes the structure and processing of artificial neural networks, including convolutional networks. It discusses how the system works, including local image sampling, the self-organizing map, and the convolutional network. It then provides details about the implementation and applications of the system for face recognition, and concludes by discussing the benefits of the system.
Convolutional neural networks (CNNs) learn multi-level features and perform classification jointly, outperforming traditional approaches on image classification and segmentation problems. CNNs have four main components: convolution, nonlinearity, pooling, and fully connected layers. Convolution extracts features from the input image using filters. A nonlinear activation function (such as ReLU) lets the network model non-linear relationships. Pooling reduces dimensionality while retaining the most important information. The fully connected layer uses the resulting high-level features for classification. CNNs are trained end-to-end using backpropagation to minimize output errors by updating weights.
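The four components can be sketched in a few lines of NumPy (a single channel and a single hand-picked filter, purely for illustration):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2D cross-correlation, the convolution used in CNN layers."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    """Nonlinearity: zero out negative responses."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the max over non-overlapping size x size windows."""
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[-1.0, 1.0]])     # simple horizontal-gradient filter
feat = max_pool(relu(conv2d(img, edge)))
```

A fully connected layer would then flatten `feat` and multiply by a weight matrix to produce class scores.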
Notes from Coursera Deep Learning courses by Andrew Ng (dataHacker.rs)
Deep learning uses neural networks to process data and create patterns in a way that imitates the human brain. It has transformed industries like web search and advertising by enabling tasks like image recognition. This document discusses neural networks, deep learning, and their various applications. It also explains how recent advances in algorithms and increased data availability have driven the rise of deep learning by allowing neural networks to train on larger datasets and overcome performance plateaus.
This document summarizes a seminar on face recognition using neural networks. It discusses the history of face recognition, how neural networks can be used for face recognition, and the basic process which involves pre-processing images, using principal component analysis for feature extraction, and comparing images to a database using backpropagation neural networks. Some advantages are that it is non-intrusive and can use existing hardware, while disadvantages include issues with identical twins and environment. Applications discussed include security, criminal identification, and credit card verification.
1) Deep learning is a type of machine learning that uses neural networks with many layers to learn representations of data with multiple levels of abstraction.
2) Deep learning techniques include unsupervised pretrained networks, convolutional neural networks, recurrent neural networks, and recursive neural networks.
3) The advantages of deep learning include automatic feature extraction from raw data with minimal human effort, and surpassing conventional machine learning algorithms in accuracy across many data types.
This document provides an overview of convolutional neural networks (CNNs). It defines CNNs as multi-layer feedforward neural networks used to analyze visual images by processing grid-like data. CNNs recognize images through a series of layers, including convolutional layers that apply filters to detect patterns, ReLU layers that apply an activation function, pooling layers that downsample the feature maps, and fully connected layers that identify the image. CNNs are commonly used for applications like image classification, self-driving cars, activity prediction, video detection, and conversion applications.
A comprehensive tutorial on Convolutional Neural Networks (CNN) which talks about the motivation behind CNNs and Deep Learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory involved with the different variants used in practice and also, gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of CNNs is demonstrated by implementing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Levi and Hassner (2015).
The document discusses algorithms and their analysis. It begins by defining an algorithm and listing requirements like being unambiguous and finite. It describes writing algorithms using pseudocode or flowcharts and proving their correctness. The document then discusses analyzing algorithms by measuring their time and space efficiency using orders of growth. It explains analyzing best, worst, and average cases and counting basic operations. Finally, it provides examples of analyzing simple algorithms involving if statements and loops.
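Counting basic operations, as described, can be made concrete with a small instrumented example (choosing the comparison as the basic operation is the usual convention):

```python
def count_basic_ops(a):
    """Selection-sort style double loop; count comparisons, the basic operation."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1            # one comparison of a[i] and a[j]
            if a[j] < a[i]:
                a[i], a[j] = a[j], a[i]
    return comparisons

# Best, worst, and average case all perform exactly n(n-1)/2 comparisons,
# so the running time grows as Theta(n^2) regardless of the input order.
ops = count_basic_ops(list(range(10, 0, -1)))   # n = 10 -> 10*9/2 = 45
```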
The document provides an overview of convolutional neural networks (CNNs) and their layers. It begins with an introduction to CNNs, noting they are a type of neural network designed to process 2D inputs like images. It then discusses the typical CNN architecture of convolutional layers followed by pooling and fully connected layers. The document explains how CNNs work using a simple example of classifying handwritten X and O characters. It provides details on the different layer types, including convolutional layers which identify patterns using small filters, and pooling layers which downsample the inputs.
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING (Dharshika Shreeganesh)
Image processing is an active research area in which medical image processing is a highly challenging field. Medical imaging techniques are used to image the inner portions of the human body for medical diagnosis. A brain tumor is a serious, life-altering disease condition. Image segmentation plays a significant role in image processing, as it helps extract suspicious regions from medical images. In this paper we propose segmentation of brain MRI images using the K-means clustering algorithm, followed by morphological filtering, which avoids the misclustered regions that can inevitably form after segmentation, for detection of the tumor location.
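The K-means step can be sketched on grayscale intensities; the quantile-based initialization here is an assumption made to keep the sketch deterministic, not the paper's choice, and morphological filtering (e.g. a binary opening) would follow on the resulting label mask:

```python
import numpy as np

def kmeans_1d(pixels, k=2, iters=20):
    """Lloyd's K-means on grayscale intensities: alternately assign each pixel
    to its nearest centroid, then recompute centroids as cluster means."""
    centers = np.quantile(pixels, np.linspace(0.25, 0.75, k))
    for _ in range(iters):
        labels = np.abs(pixels[:, None] - centers[None, :]).argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean()
    return labels, np.sort(centers)

# Two well-separated intensity populations (background vs. a brighter region).
img = np.concatenate([np.full(50, 20.0), np.full(50, 200.0)])
labels, centers = kmeans_1d(img)
```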
Image classification is a common problem in artificial intelligence. We used the CIFAR-10 dataset and tried many methods, such as neural networks and transfer-learning techniques, to reach a high test accuracy.
You can view the source code and the papers we read on GitHub: https://github.com/Asma-Hawari/Machine-Learning-Project-
The document summarizes the Batch Normalization technique presented in the paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". Batch Normalization aims to address the issue of internal covariate shift in deep neural networks by normalizing layer inputs to have zero mean and unit variance. It works by computing normalization statistics for each mini-batch and applying them to the inputs. This helps in faster and more stable training of deep networks by reducing the distribution shift across layers. The paper presented ablation studies on MNIST and ImageNet datasets showing Batch Normalization improves training speed and accuracy compared to prior techniques.
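The per-mini-batch normalization described above is short enough to write out; here is a training-mode forward pass in NumPy (at inference time the batch statistics are replaced by running averages):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """BatchNorm forward pass (training mode): normalize each feature over the
    mini-batch to zero mean / unit variance, then apply a learned scale (gamma)
    and shift (beta)."""
    mu = x.mean(axis=0)                 # per-feature batch mean
    var = x.var(axis=0)                 # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Batch of 3 samples, 2 features on very different scales.
x = np.array([[1.0, 50.0],
              [3.0, 150.0],
              [5.0, 250.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
```

After normalization both features have zero mean and unit variance over the batch, so later layers see inputs on a common scale regardless of how earlier layers' outputs drift during training.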
Deep Learning - Overview of my work II (Mohamed Loey)
Keywords: Deep Learning, Machine Learning, MNIST, CIFAR-10, Residual Network, AlexNet, VGGNet, GoogLeNet, Nvidia. Deep learning (DL) uses hierarchically structured networks that simulate the structure of the human brain to extract features from input data.
Brain Tumor Segmentation using Enhanced U-Net Model with Empirical Analysis (MD Abdullah Al Nasim)
Cancer of the brain is deadly and requires careful surgical segmentation. The brain tumors were segmented with U-Net, a convolutional neural network (CNN). When looking for overlaps of necrotic, edematous, growing, and healthy tissue, it can be hard to get relevant information from the images. The 2D U-Net network was improved and trained with the BraTS datasets to find these four regions. U-Net can set up many encoder and decoder routes for extracting information from images in different ways. To reduce computational time, we use image segmentation to exclude insignificant background details. Experiments show that our proposed model for segmenting brain tumors from MRI images works well. In this study, we demonstrate that results on the BraTS datasets for 2017, 2018, and 2020 do not differ significantly from the Dice scores attained on the BraTS 2019 dataset: 0.8717 (necrotic), 0.9506 (edema), and 0.9427 (enhancing).
Deep Learning Chapter 14 discusses autoencoders. Autoencoders are neural networks trained to copy their input to their output. They have an encoder that maps the input to a hidden representation and a decoder that maps this back to the output. Autoencoders are commonly used for dimensionality reduction, feature learning, and extracting a low-dimensional representation of the input data. Regularized autoencoders add constraints like sparsity or contractive penalties to prevent the autoencoder from learning the identity function and force it to learn meaningful representations. Denoising autoencoders are trained to reconstruct clean inputs from corrupted versions, which encourages the hidden representation to be robust. Contractive autoencoders add a penalty term that makes the learned representation resist small changes to the input.
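A minimal encoder/decoder pair with tied weights shows the structure (forward pass only; training would backpropagate the reconstruction error, and a denoising variant would corrupt `x` before encoding; the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Tied-weight autoencoder: encoder uses W, decoder uses W.T; 8-D input -> 3-D code.
W = rng.normal(scale=0.1, size=(8, 3))
b_enc, b_dec = np.zeros(3), np.zeros(8)

def encode(x):
    return sigmoid(x @ W + b_enc)       # hidden representation (the "code")

def decode(h):
    return sigmoid(h @ W.T + b_dec)     # reconstruction of the input

x = rng.random((5, 8))                  # batch of 5 inputs
h = encode(x)
x_rec = decode(h)
mse = ((x - x_rec) ** 2).mean()         # training would minimize this
```

Because the 3-D code is narrower than the 8-D input, the network cannot learn the identity function and must keep only the directions of variation that matter, which is the dimensionality-reduction behaviour described above.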
Artificial neural networks mimic the human brain by using interconnected layers of neurons that fire electrical signals between each other. Activation functions are important for neural networks to learn complex patterns by introducing non-linearity. Without activation functions, neural networks would be limited to linear regression. Common activation functions include sigmoid, tanh, ReLU, and LeakyReLU, with ReLU and LeakyReLU helping to address issues like vanishing gradients that can occur with sigmoid and tanh functions.
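The four activation functions named above, in NumPy:

```python
import numpy as np

def sigmoid(x):
    """Squashes to (0, 1); saturates (gradient vanishes) for large |x|."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Squashes to (-1, 1); zero-centred, but also saturates."""
    return np.tanh(x)

def relu(x):
    """Passes positives unchanged; zero gradient for negatives."""
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.01):
    """Like ReLU but keeps a small slope for negatives, so gradients never die."""
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, 0.0, 2.0])
```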
Deep learning is a class of machine learning algorithms that uses multiple layers of nonlinear processing units for feature extraction and transformation. It can be used for supervised learning tasks like classification and regression or unsupervised learning tasks like clustering. Deep learning models include deep neural networks, deep belief networks, and convolutional neural networks. Deep learning has been applied successfully in domains like computer vision, speech recognition, and natural language processing by companies like Google, Facebook, Microsoft, and others.
This document summarizes a student project on implementing object detection using the Viola-Jones technique. The technique uses Haar feature extraction and an AdaBoost classifier cascade to quickly and accurately detect objects like faces in images. The student developed implementations in Matlab and C++ to train classifiers and detect faces. The Viola-Jones technique was groundbreaking for providing real-time object detection with high accuracy rates compared to previous methods.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details, or to submit your article, please visit www.ijera.com
Comparison on PCA ICA and LDA in Face Recognition (ijdmtaiir)
Face recognition is used in a wide range of applications. In recent years, face recognition has become one of the most successful applications of image analysis and understanding. Different statistical methods and research groups have reported contradictory results when comparing the principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA) algorithms proposed in recent years. The goal of this paper is to compare and analyze the three algorithms and conclude which is best. The FERET dataset is used for consistency.
Convolutional neural networks (CNNs) learn multi-level features and perform classification jointly and better than traditional approaches for image classification and segmentation problems. CNNs have four main components: convolution, nonlinearity, pooling, and fully connected layers. Convolution extracts features from the input image using filters. Nonlinearity introduces nonlinearity. Pooling reduces dimensionality while retaining important information. The fully connected layer uses high-level features for classification. CNNs are trained end-to-end using backpropagation to minimize output errors by updating weights.
Notes from Coursera Deep Learning courses by Andrew NgdataHacker. rs
Deep learning uses neural networks to process data and create patterns in a way that imitates the human brain. It has transformed industries like web search and advertising by enabling tasks like image recognition. This document discusses neural networks, deep learning, and their various applications. It also explains how recent advances in algorithms and increased data availability have driven the rise of deep learning by allowing neural networks to train on larger datasets and overcome performance plateaus.
This document summarizes a seminar on face recognition using neural networks. It discusses the history of face recognition, how neural networks can be used for face recognition, and the basic process which involves pre-processing images, using principal component analysis for feature extraction, and comparing images to a database using backpropagation neural networks. Some advantages are that it is non-intrusive and can use existing hardware, while disadvantages include issues with identical twins and environment. Applications discussed include security, criminal identification, and credit card verification.
1) Deep learning is a type of machine learning that uses neural networks with many layers to learn representations of data with multiple levels of abstraction.
2) Deep learning techniques include unsupervised pretrained networks, convolutional neural networks, recurrent neural networks, and recursive neural networks.
3) The advantages of deep learning include automatic feature extraction from raw data with minimal human effort, and surpassing conventional machine learning algorithms in accuracy across many data types.
This document provides an overview of convolutional neural networks (CNNs). It defines CNNs as multiple layer feedforward neural networks used to analyze visual images by processing grid-like data. CNNs recognize images through a series of layers, including convolutional layers that apply filters to detect patterns, ReLU layers that apply an activation function, pooling layers that detect edges and corners, and fully connected layers that identify the image. CNNs are commonly used for applications like image classification, self-driving cars, activity prediction, video detection, and conversion applications.
A comprehensive tutorial on Convolutional Neural Networks (CNN) which talks about the motivation behind CNNs and Deep Learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory involved with the different variants used in practice and also, gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, the implementation of the CNNs is demonstrated by implementing the paper 'Age ang Gender Classification Using Convolutional Neural Networks' by Hassner (2015).
The document discusses algorithms and their analysis. It begins by defining an algorithm and listing requirements like being unambiguous and finite. It describes writing algorithms using pseudocode or flowcharts and proving their correctness. The document then discusses analyzing algorithms by measuring their time and space efficiency using orders of growth. It explains analyzing best, worst, and average cases and counting basic operations. Finally, it provides examples of analyzing simple algorithms involving if statements and loops.
The document provides an overview of convolutional neural networks (CNNs) and their layers. It begins with an introduction to CNNs, noting they are a type of neural network designed to process 2D inputs like images. It then discusses the typical CNN architecture of convolutional layers followed by pooling and fully connected layers. The document explains how CNNs work using a simple example of classifying handwritten X and O characters. It provides details on the different layer types, including convolutional layers which identify patterns using small filters, and pooling layers which downsample the inputs.
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSINGDharshika Shreeganesh
Image processing is an active research area in which medical image processing is a highly challenging field. Medical imaging
techniques are used to image the inner portions of the human body for medical diagnosis. Brain tumor is a serious life altering
disease condition. Image segmentation plays a significant role in image processing as it helps in the extraction of suspicious regions
from the medical images. In this paper we have proposed segmentation of brain MRI image using K-means clustering algorithm
followed by morphological filtering which avoids the misclustered regions that can inevitably be formed after segmentation of the brain MRI image for detection of tumor location.
image classification is a common problem in Artificial Intelligence , we used CIFR10 data set and tried a lot of methods to reach a high test accuracy like neural networks and Transfer learning techniques .
you can view the source code and the papers we read on github : https://github.com/Asma-Hawari/Machine-Learning-Project-
The document summarizes the Batch Normalization technique presented in the paper "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift". Batch Normalization aims to address the issue of internal covariate shift in deep neural networks by normalizing layer inputs to have zero mean and unit variance. It works by computing normalization statistics for each mini-batch and applying them to the inputs. This helps in faster and more stable training of deep networks by reducing the distribution shift across layers. The paper presented ablation studies on MNIST and ImageNet datasets showing Batch Normalization improves training speed and accuracy compared to prior techniques.
Deep Learning - Overview of my work IIMohamed Loey
Deep Learning Machine Learning MNIST CIFAR 10 Residual Network AlexNet VGGNet GoogleNet Nvidia Deep learning (DL) is a hierarchical structure network which through simulates the human brain’s structure to extract the internal and external input data’s features
Brain Tumor Segmentation using Enhanced U-Net Model with Empirical AnalysisMD Abdullah Al Nasim
Cancer of the brain is deadly and requires careful surgical segmentation. The brain tumors were segmented using U-Net using a Convolutional Neural Network (CNN). When looking for overlaps of necrotic, edematous, growing, and healthy tissue, it might be hard to get relevant information from the images. The 2D U-Net network was improved and trained with the BraTS datasets to find these four areas. U-Net can set up many encoder and decoder routes that can be used to get information from images that can be used in different ways. To reduce computational time, we use image segmentation to exclude insignificant background details. Experiments on the BraTS datasets show that our proposed model for segmenting brain tumors from MRI (MRI) works well. In this study, we demonstrate that the BraTS datasets for 2017, 2018, 2019, and 2020 do not significantly differ from the BraTS 2019 dataset's attained dice scores of 0.8717 (necrotic), 0.9506 (edema), and 0.9427 (enhancing).
Deep Learning Chapter 14 discusses autoencoders. Autoencoders are neural networks trained to copy their input to their output. They have an encoder that maps the input to a hidden representation and a decoder that maps this back to the output. Autoencoders are commonly used for dimensionality reduction, feature learning, and extracting a low-dimensional representation of the input data. Regularized autoencoders add constraints like sparsity or contractive penalties to prevent the autoencoder from learning the identity function and force it to learn meaningful representations. Denoising autoencoders are trained to reconstruct clean inputs from corrupted versions, which encourages the hidden representation to be robust. Contractive autoencoders add a penalty term that resists small changes to the input
Artificial neural networks mimic the human brain by using interconnected layers of neurons that fire electrical signals between each other. Activation functions are important for neural networks to learn complex patterns by introducing non-linearity. Without activation functions, neural networks would be limited to linear regression. Common activation functions include sigmoid, tanh, ReLU, and LeakyReLU, with ReLU and LeakyReLU helping to address issues like vanishing gradients that can occur with sigmoid and tanh functions.
Deep learning is a class of machine learning algorithms that uses multiple layers of nonlinear processing units for feature extraction and transformation. It can be used for supervised learning tasks like classification and regression or unsupervised learning tasks like clustering. Deep learning models include deep neural networks, deep belief networks, and convolutional neural networks. Deep learning has been applied successfully in domains like computer vision, speech recognition, and natural language processing by companies like Google, Facebook, Microsoft, and others.
This document summarizes a student project on implementing object detection using the Viola-Jones technique. The technique uses Haar feature extraction and an AdaBoost classifier cascade to quickly and accurately detect objects like faces in images. The student developed implementations in Matlab and C++ to train classifiers and detect faces. The Viola-Jones technique was groundbreaking for providing real-time object detection with high accuracy rates compared to previous methods.
Comparison on PCA, ICA and LDA in Face Recognition (ijdmtaiir)
Face recognition is used in a wide range of applications. In recent years, face
recognition has become one of the most successful applications in image analysis
and understanding. Different statistical methods and research groups have reported
contradictory results when comparing the principal component analysis (PCA),
independent component analysis (ICA), and linear discriminant analysis (LDA)
algorithms proposed in recent years. The goal of this paper is to compare and
analyze the three algorithms and conclude which is best. The FERET dataset is
used for consistency.
Face Recognition Using Gabor Features and PCA (IOSR Journals)
This document summarizes a research paper on face recognition using Gabor features and principal component analysis (PCA). It begins by providing background on face recognition and discusses challenges like lighting, pose, and orientation. It then describes preprocessing faces using Gabor wavelets to extract discriminative features and reduce variations. PCA is used to further reduce the dimensionality of features into principal components. These components are used for classification, with nearest neighbor classification tested on the Yale face database. Results show the proposed approach improves recognition rates compared to Euclidean distance measures.
FACE RECOGNITION USING PRINCIPAL COMPONENT ANALYSIS WITH MEDIAN FOR NORMALIZATION (csandit)
Recognizing faces helps to name the various subjects present in an image. This work focuses
on labeling faces in an image that includes faces of human beings of various age groups
(a heterogeneous set). Principal component analysis finds the mean of the data set and
subtracts the mean value from the data set with the intention of normalizing the data.
Normalization, with respect to an image, is the removal of common features from the data set.
This work brings in the novel idea of deploying the median, another measure of central
tendency, for normalization rather than the mean. The work was implemented using Matlab.
Results show that the median is the best measure for normalization for a heterogeneous data
set, which gives rise to outliers.
The document discusses principal component analysis (PCA) and linear discriminant analysis (LDA) for dimensionality reduction in pattern recognition and their application to face recognition. PCA finds the directions along which the data varies the most to reduce dimensionality while retaining variation. LDA seeks directions that maximize between-class variation and minimize within-class variation. Studies show LDA performs better than PCA for classification when the training set is large and representative of each class.
COMPRESSION BASED FACE RECOGNITION USING DWT AND SVM (sipij)
Biometrics are used to identify a person effectively and are employed in almost all day-to-day
applications. In this paper, we propose compression-based face recognition using the Discrete
Wavelet Transform (DWT) and a Support Vector Machine (SVM). The novel concept of converting many
images of a single person into one image using an averaging technique is introduced to reduce
execution time and memory. The DWT is applied on the averaged face image to obtain approximation
(LL) and detailed bands. The LL band coefficients are given as input to the SVM to obtain support
vectors (SVs). The LL coefficients of the DWT and the SVs are fused by arithmetic addition to
extract the final features. The Euclidean Distance (ED) is used to compare test image features
with database image features to compute performance parameters. It is observed that the proposed
algorithm performs better than existing algorithms.
This document presents a novel fuzzy feature extraction method using Local Binary Patterns (LBP) for face recognition. It begins with an introduction to face recognition challenges and common approaches. It then describes using LBP to extract local features from divided image windows. Features are extracted by computing membership values based on pixel intensities and taking the product with central pixel value. Local and central pixel information values are combined as the total information for classification using Support Vector Machine (SVM) or K-Nearest Neighbor classifiers. Experimental results on two databases show the proposed approach achieves better recognition rates than other methods. The conclusion is that accounting for both central and neighborhood pixel information is effective for images with expression, illumination and pose variations.
This document summarizes a research paper on face recognition using Gabor features and PCA. It begins with an introduction to face recognition and discusses challenges like lighting, pose, and orientation. It then describes how the proposed system uses Gabor wavelets for preprocessing to reduce variations from pose, lighting, etc. Principal component analysis (PCA) is used to extract low dimensional and discriminating feature vectors from the preprocessed images. These feature vectors are then used for classification with k-nearest neighbors. The proposed system was tested on the Yale face database containing 100 images of 10 subjects with variable illumination and expressions.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document discusses accelerating face recognition using graphics processing units (GPUs). It presents research on parallelizing a principal component analysis (PCA) face recognition algorithm using CUDA on NVIDIA GPUs. The key steps are:
1) Implementing PCA-based face recognition on CPUs for comparison.
2) Parallelizing the computationally-intensive training phase of projecting images into the PCA eigenspace using GPU threads.
3) Measuring speedups of 2-10x for the GPU implementation compared to CPUs, with higher speedups for larger databases due to greater parallelism.
This paper presents a study of the efficiency and performance speedup achieved by applying Graphics Processing Units for Face Recognition Solutions. We explore one of the possibilities of parallelizing and optimizing a well-known Face Recognition algorithm, Principal Component Analysis (PCA) with Eigenfaces. In recent years, the Graphics Processing Units (GPU) has been the subject of extensive research and the computation speed of GPUs has been rapidly increasing.
Application of Gaussian Filter with Principal Component Analysis (IAEME Publication)
This document discusses face recognition using principal component analysis (PCA) and Gaussian-based PCA. It begins with an introduction to face recognition and PCA. The document then describes the PCA algorithm, including the training and recognition phases. It also discusses Gaussian filters and their use for image smoothing. The proposed method applies Gaussian filtering before PCA to enhance accuracy. Experimental results on a database of face images show Gaussian-based PCA produces closer matches and lower Euclidean distances, indicating more accurate recognition compared to normal PCA. In conclusion, preprocessing images with Gaussian filtering before PCA improves face recognition performance.
Multilinear Kernel Mapping for Feature Dimension Reduction in Content Based Multimedia Retrieval (ijma)
In content-based multimedia retrieval, multimedia information is processed to obtain
descriptive features. Descriptive representation of features results in a huge feature count,
which creates processing overhead. To reduce this overhead, various approaches have been used
for dimensionality reduction, among which PCA and LDA are the most common. However, these
methods do not reflect the significance of feature content in terms of the inter-relations
among all dataset features. To achieve dimension reduction based on histogram transformation,
features with low significance can be eliminated. In this paper, we propose a feature
dimensionality reduction approach based on multi-linear kernel (MLK) modeling. A benchmark
dataset is used for the experimental work, and the proposed method is observed to improve on
the conventional system.
Implementation of Face Recognition in Cloud Vision Using Eigen Faces (IJERA Editor)
Cloud computing comes in several different forms, and this article documents one such service. A face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. The paper discusses a methodology for face recognition based on an information theory approach of coding and decoding the face image. The proposed system connects two stages: feature extraction using principal component analysis and recognition using a back propagation network. The paper also discusses the design and implementation of face recognition applications using a mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The challenge lies in how to perform task partitioning from mobile devices to the cloud and distribute the compute load among cloud servers to minimize response time given diverse communication latencies and server compute powers.
Image Reconstruction through Compressive Sampling Matching Pursuit and Curvelet Transform (IJECEIAES)
An interesting area of research is image reconstruction, which uses algorithms and techniques to transform a degraded image into a good one. The quality of the reconstructed image plays a vital role in the field of image processing. Compressive Sampling is an innovative and rapidly growing method for reconstructing signals. It is extensively used in image reconstruction. The literature uses a variety of matching pursuits for image reconstruction. In this paper, we propose a modified method named compressive sampling matching pursuit (CoSaMP) for image reconstruction that promises to sample sparse signals from far fewer observations than the signal’s dimension. The main advantage of CoSaMP is that it has an excellent theoretical guarantee for convergence. The proposed technique combines CoSaMP with curvelet transform for better reconstruction of image. Experiments are carried out to evaluate the proposed technique on different test images. The results indicate that qualitative and quantitative performance is better compared to existing methods.
This document discusses principal component analysis (PCA) for face recognition. It begins with an introduction to face recognition and PCA. PCA works by calculating eigenvectors from a set of face images, which represent the principal components that account for the most variance in the image data. These eigenvectors are called "eigenfaces" and can be used to reconstruct the face images. The document then discusses how the system is implemented, including preparing a face database, normalizing the training images, calculating the eigenfaces/principal components, projecting the face images into this reduced space, and recognizing faces by calculating distances between projected test images and training images.
Disease Classification using ECG Signal Based on PCA Feature along with GA & ANN (IRJET Journal)
This document describes a method for classifying ECG signals to detect cardiovascular diseases using principal component analysis (PCA), genetic algorithms, and artificial neural networks. PCA is used to extract features from ECG signals. A genetic algorithm is then used to select optimal features and train an artificial neural network classifier. The method is tested on datasets from Physionet.org to classify ECG signals as normal or indicating conditions like bradycardia or tachycardia with high accuracy. The goal is to develop an automated system for ECG analysis and heart disease diagnosis.
Black-box Modeling of Nonlinear System Using Evolutionary Neural NARX Model (IJECEIAES)
Nonlinear systems with uncertainty and disturbance are very difficult to model with a mathematical approach. Therefore, a black-box modeling approach without any prior knowledge is necessary. Several modeling approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network, combining a neural network and a modified differential evolution algorithm, is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling are tested on a piezoelectric actuator SISO system and an experimental quadruple-tank MIMO system.
Face Recognition Using Neural Networks
P.Latha, Dr.L.Ganesan & Dr.S.Annadurai
Signal Processing: An International Journal (SPIJ) Volume (3) : Issue (5) 153
Face Recognition using Neural Networks
P.Latha plathamuthuraj@gmail.com
Selection Grade Lecturer,
Department of Electrical and Electronics Engineering,
Government College of Engineering,
Tirunelveli- 627007
Dr.L.Ganesan
Assistant Professor,
Head of Computer Science & Engineering department,
Alagappa Chettiar College of Engineering & Technology,
Karaikudi- 630004
Dr.S.Annadurai
Additional Director, Directorate of Technical Education
Chennai-600025
Abstract
Face recognition is one of the biometric methods used to identify a given face
image by the main features of the face. In this paper, a neural-network-based
algorithm is presented to detect frontal views of faces. The dimensionality of
the face image is reduced by Principal Component Analysis (PCA) and the
recognition is done by the Back Propagation Neural Network (BPNN). Here 200 face
images from the Yale database are taken and performance metrics such as
acceptance ratio and execution time are calculated. Neural-based face recognition
is robust and achieves an acceptance ratio of more than 90%.
Key words: Face recognition, Principal Component Analysis, Back Propagation Neural Network,
Acceptance ratio, Execution time
1. INTRODUCTION
A face recognition system [6] is a computer vision application that automatically identifies a
human face from database images. The face recognition problem is challenging, as it must account
for all possible appearance variation caused by changes in illumination, facial features,
occlusions, etc. This paper gives a neural- and PCA-based algorithm for efficient and robust face
recognition. Holistic, feature-based, and hybrid approaches are among the approaches to face
recognition. Here, a holistic approach is used, in which the whole face region is taken into
account as input data. It is based on the principal component analysis (PCA) technique, which is
used to simplify a dataset into a lower dimension while retaining its characteristics.
Pre-processing, principal component analysis, and the back propagation neural algorithm
are the major implementations of this paper. Pre-processing is done for two purposes:
(i) to reduce noise and possible convolutive effects of interfering systems,
(ii) to transform the image into a different space where classification may prove
easier by exploitation of certain features.
PCA is a common statistical technique for finding patterns in high-dimensional data [1].
Feature extraction, also called dimensionality reduction, is done by PCA for three main
purposes:
i) To reduce the dimension of the data to more tractable limits,
ii) To capture salient class-specific features of the data,
iii) To eliminate redundancy.
Here recognition is performed by both PCA and Back propagation Neural Networks [3].
BPNN mathematically models the behavior of the feature vectors by appropriate descriptions and
then exploits the statistical behavior of the feature vectors to define decision regions
corresponding to different classes. Any new pattern can be classified depending on which
decision region it falls in. All these processes are implemented for face recognition,
based on the basic block diagram as shown in fig 1.
Fig. 1 Basic Block Diagram
The Algorithm for Face recognition using neural classifier is as follows:
a) Pre-processing stage - Images are made zero-mean and unit-variance.
b) Dimensionality reduction stage (PCA) - Input data is reduced to a lower dimension to
facilitate classification.
c) Classification stage - The reduced vectors from PCA are applied to train the BPNN
classifier to obtain the recognized image.
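Step (a) above can be sketched as a short NumPy routine; this is a minimal illustration (the function name and array handling are assumptions for the sketch, not from the paper):

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Normalize a face image to zero mean and unit variance."""
    image = image.astype(np.float64)
    mean = image.mean()
    std = image.std()
    # Guard against a constant image, whose standard deviation is zero.
    if std == 0:
        return image - mean
    return (image - mean) / std
```

Applying this to every training and test image puts all faces on a common intensity scale before PCA.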
In this paper, Section 2 describes principal component analysis, Section 3 explains
back propagation neural networks, Section 4 demonstrates the experimentation and results,
and the subsequent sections give the conclusion and future development.
2. PRINCIPAL COMPONENT ANALYSIS
Principal component analysis (PCA) [2] involves a mathematical procedure that transforms a
number of possibly correlated variables into a smaller number of uncorrelated variables called
principal components. PCA is a popular technique for deriving a set of features for face
representation and recognition.
Any particular face can be
(i) economically represented in the eigenpicture coordinate space, and
(ii) approximately reconstructed using a small collection of eigenpictures.
To do this, a face image is projected onto several face templates called eigenfaces, which can
be considered a set of features that characterize the variation between face images. Once a set
of eigenfaces is computed, a face image can be approximately reconstructed using a weighted
combination of the eigenfaces. The projection weights form a feature vector for face
representation and recognition. When a new test image is given, the weights are computed by
projecting the image onto the eigenface vectors. The classification is then carried out by
comparing the distances between the weight vectors of the test image and the images from the
database. Conversely, using all of the eigenfaces extracted from the original images, one can
reconstruct the original image from the eigenfaces so that it matches the original image exactly.
2.1 PCA Algorithm
The algorithm used for principal component analysis is as follows.
(i) Acquire an initial set of M face images (the training set) and calculate the eigenfaces
from the training set, keeping only the M' eigenfaces that correspond to the highest
eigenvalues.
(ii) Calculate the corresponding distribution in the M'-dimensional weight space for each
known individual, and calculate a set of weights based on the input image.
(iii) Classify the weight pattern as either a known person or as unknown, according to its
distance to the closest weight vector of a known person.
[Fig. 1: Pre-processed Input Image → Principal Component Analysis (PCA) → Back Propagation Neural Network (BPNN) → Classified Output Image]
Let the training set of images be Γ_1, Γ_2, ..., Γ_M. The average face of the set is defined
by

Ψ = (1/M) ∑_{n=1}^{M} Γ_n    -----------(1)

Each face differs from the average by the vector

Φ_i = Γ_i − Ψ    -----------(2)

The covariance matrix is formed by

C = (1/M) ∑_{n=1}^{M} Φ_n Φ_n^T = A·A^T    -----------(3)

where the matrix A = [Φ_1, Φ_2, ..., Φ_M].

This set of large vectors is then subject to principal component analysis, which seeks a
set of M orthonormal vectors u_1, ..., u_M. To obtain a weight vector Ω of contributions of
individual eigenfaces to a facial image Γ, the face image is transformed into its eigenface
components projected onto the face space by the simple operation

ω_k = u_k^T (Γ − Ψ)    -----------(4)

for k = 1, ..., M', where M' ≤ M is the number of eigenfaces used for the recognition. The
weights form the vector Ω = [ω_1, ω_2, ..., ω_{M'}] that describes the contribution of each
eigenface in representing the face image Γ, treating the eigenfaces as a basis set for face
images. The simplest method for determining which face provides the best description of an
unknown input facial image is to find the face k that minimizes the Euclidean distance ε_k:

ε_k = ||Ω − Ω_k||²    -----------(5)

where Ω_k is a weight vector describing the k-th face from the training set. A face is
classified as belonging to person k when ε_k is below some chosen threshold Θ_ε; otherwise,
the face is classified as unknown.
The algorithm functions by projecting face images onto a feature space that spans the
significant variations among known face images. The projection operation characterizes an
individual face by a weighted sum of eigenfaces features, so to recognize a particular face, it is
necessary only to compare these weights to those of known individuals. The input image is
matched to the subject from the training set whose feature vector is the closest within acceptable
thresholds.
Eigen faces have advantages over the other techniques available, such as speed and
efficiency. For the system to work well in PCA, the faces must be seen from a frontal view under
similar lighting.
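The eigenface procedure of equations (1)-(5) can be sketched as follows; a minimal NumPy illustration, assuming the training images are already flattened into columns of a matrix (function names such as `train_eigenfaces` are hypothetical, not from the paper):

```python
import numpy as np

def train_eigenfaces(faces: np.ndarray, n_components: int):
    """faces: (d, M) array whose columns are flattened face images.
    Returns the mean face (eq 1) and M' orthonormal eigenfaces."""
    mean_face = faces.mean(axis=1)                     # eq (1): average face Psi
    A = faces - mean_face[:, None]                     # eq (2): columns Phi_i
    # Eigen-decompose the small M x M matrix A^T A instead of the
    # d x d covariance A A^T (the classic eigenfaces trick for eq 3).
    evals, evecs = np.linalg.eigh(A.T @ A)
    order = np.argsort(evals)[::-1][:n_components]     # keep highest eigenvalues
    eigenfaces = A @ evecs[:, order]                   # map back to image space
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # orthonormal columns u_k
    return mean_face, eigenfaces

def project(face, mean_face, eigenfaces):
    """Eq (4): omega_k = u_k^T (Gamma - Psi)."""
    return eigenfaces.T @ (face - mean_face)

def classify(face, mean_face, eigenfaces, known_weights, threshold):
    """Eq (5): index of the nearest weight vector, or None if unknown."""
    omega = project(face, mean_face, eigenfaces)
    dists = np.sum((known_weights - omega) ** 2, axis=1)   # eps_k
    k = int(np.argmin(dists))
    return k if dists[k] < threshold else None
```

The `known_weights` array holds one projected weight vector Ω_k per known individual; comparing squared distances against a threshold Θ_ε mirrors the known/unknown decision above.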
3. NEURAL NETWORKS AND BACK PROPAGATION ALGORITHM
A successful face recognition methodology depends heavily on the particular choice of the
features used by the pattern classifier. Back propagation is the best known and most widely
used learning algorithm for training multilayer perceptrons (MLPs) [5]. An MLP is a network
consisting of a set of sensory units (source nodes) that constitute the input layer, one or
more hidden layers of computation nodes, and an output layer of computation nodes. The input
signal propagates through the network in a forward direction, from left to right, on a
layer-by-layer basis.
Back propagation is a multi-layer, feed-forward, supervised learning network based on the
gradient descent learning rule. The BPNN provides a computationally efficient method for
changing the weights in a feed-forward network, with differentiable activation function units,
to learn a training set of input-output data. Being a gradient descent method, it minimizes the
total squared error of the output computed by the net. The aim is to train the network to
achieve a balance between the ability to respond correctly to the input patterns used for
training and the ability to give a good response to inputs that are similar.
3.1 Back Propagation Neural Networks Algorithm
A typical back propagation network [4] with multi-layer, feed-forward, supervised learning is
shown in figure 2. The learning process in back propagation requires pairs of input and
target vectors. The output vector 'o' is compared with the target vector 't'. If 'o'
and 't' differ, the weights are adjusted to minimize the difference. Initially, random weights and
thresholds are assigned to the network. These weights are updated every iteration in order to
minimize the mean square error between the output vector and the target vector.
Fig. 2 Basic Block of Back propagation neural network
Input for hidden layer is given by
net_m = \sum_{z=1}^{n} x_z w_{zm} ----------- (6)
The units of the output vector of the hidden layer, after passing through the activation function, are given by
h_m = \frac{1}{1 + \exp(-net_m)} ----------- (7)
In the same manner, input for the output layer is given by
net_k = \sum_{z=1}^{m} h_z w_{zk} ----------- (8)
and the units of the output vector of the output layer are given by
o_k = \frac{1}{1 + \exp(-net_k)} ----------- (9)
For updating the weights, we need to calculate the error. This can be done by
E = \frac{1}{2} \sum_{i=1}^{k} (o_i - t_i)^2 ----------- (10)
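Equations (6)-(10) amount to a single forward pass followed by the squared-error computation. A minimal NumPy sketch, where the array shapes are assumptions for illustration:

```python
import numpy as np

def sigmoid(net):
    # Logistic activation used in equations (7) and (9).
    return 1.0 / (1.0 + np.exp(-net))

def forward(x, w_ih, w_ho):
    # x: (n_in,) input vector; w_ih: (n_in, n_hidden); w_ho: (n_hidden, n_out)
    net_h = x @ w_ih      # eq (6): weighted sums entering the hidden layer
    h = sigmoid(net_h)    # eq (7): hidden-layer outputs
    net_o = h @ w_ho      # eq (8): weighted sums entering the output layer
    o = sigmoid(net_o)    # eq (9): output-layer outputs
    return h, o

def squared_error(o, t):
    # eq (10): half the total squared difference between output and target.
    return 0.5 * np.sum((o - t) ** 2)
```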
o_i and t_i represent the actual output and the target output at neuron i in the output layer, respectively. If
the error is smaller than a predefined limit, the training process stops; otherwise the weights need to
be updated. For the weights between the hidden layer and the output layer, the change in weights is given by
\Delta w_{ij} = \alpha \delta_i h_j ----------- (11)
where α is a training rate coefficient restricted to the range [0.01, 1.0], h_j is the output of
neuron j in the hidden layer, and δ_i can be obtained by
\delta_i = (t_i - o_i) o_i (1 - o_i) ----------- (12)
Similarly, the change in the weights between the input layer and the hidden layer is given by
\Delta w_{ij} = \beta \delta_{Hi} x_j ----------- (13)
where β is a training rate coefficient restricted to the range [0.01, 1.0], x_j is the output of
neuron j in the input layer, and δ_{Hi} can be obtained by
\delta_{Hi} = x_i (1 - x_i) \sum_{j=1}^{k} \delta_j w_{ij} ----------- (14)
x_i here is the output at hidden neuron i, and the summation term represents the weighted sum of
all δ_j values, corresponding to neurons in the output layer, obtained in equation (12). After calculating
the weight change in all layers, the weights can simply be updated by
w_{ij}(new) = w_{ij}(old) + \Delta w_{ij} ----------- (15)
This process is repeated until the error reaches a minimum value.
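Equations (11)-(15) translate directly into a single weight-update step. The sketch below is a non-authoritative rendering: matrix shapes match the forward pass described earlier, and the hidden-unit output is written h rather than the x of equation (14):

```python
import numpy as np

def backprop_step(x, h, o, t, w_ih, w_ho, alpha=0.4, beta=0.4):
    # eq (12): delta for output units, (t - o) * o * (1 - o)
    delta_o = (t - o) * o * (1.0 - o)
    # eq (11): change for hidden->output weights, alpha * delta_i * h_j
    dw_ho = alpha * np.outer(h, delta_o)
    # eq (14): delta for hidden units, h*(1-h) times the weighted sum
    # of output-layer deltas
    delta_h = h * (1.0 - h) * (w_ho @ delta_o)
    # eq (13): change for input->hidden weights, beta * delta_Hi * x_j
    dw_ih = beta * np.outer(x, delta_h)
    # eq (15): new weights = old weights + change
    return w_ih + dw_ih, w_ho + dw_ho
```

A single call to this function after each forward pass nudges both weight matrices down the error gradient for that pattern.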
3.2 Selection of Training Parameters
For the efficient operation of the back propagation network, appropriate selection of the
parameters used for training is necessary.
Initial Weights
The initial weights influence whether the net reaches a global or a local minimum of the error and,
if so, how rapidly it converges. To get the best results, the initial weights are set to random numbers
between -1 and 1.
Training a Net
The motivation for applying a back propagation net is to achieve a balance between memorization
and generalization; it is not necessarily advantageous to continue training until the error reaches
a minimum value. The weight adjustments are based on the training patterns. As long as the error
for the validation set decreases, training continues. Whenever the error begins to increase, the net is
starting to memorize the training patterns, and at this point training is terminated.
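This early-stopping rule can be sketched as follows, where the training-step and validation-error callbacks are hypothetical placeholders for the update and evaluation passes described above:

```python
def train_with_early_stopping(train_step, validation_error, max_epochs=400):
    # Keep training while validation error improves; stop the moment it
    # rises, since the net is then starting to memorize the training set.
    best = float("inf")
    for _ in range(max_epochs):
        train_step()               # one pass of weight updates on the training patterns
        err = validation_error()   # error measured on held-out validation patterns
        if err >= best:            # validation error stopped decreasing
            break
        best = err
    return best
```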
Number of Hidden Units
If the activation function can vary with the function being approximated, then it can be seen that an n-input, m-output
function requires at most 2n+1 hidden units. If more hidden layers are present,
the calculation of the δ's is repeated for each additional hidden layer, summing all
the δ's for units in the previous layer that feed into the current layer for which δ is being
calculated.
Learning rate
In a BPN, the weight change is in a direction that is a combination of the current gradient and the
previous gradient. A small learning rate is used to avoid major disruption of the direction of
learning when a very unusual pair of training patterns is presented.
The various parameters assumed for this algorithm are as follows:
No. of input units = 1 feature matrix
Accuracy = 0.001
Learning rate = 0.4
No. of epochs = 400
No. of hidden neurons = 70
No. of output units = 1
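These settings can be collected into a small configuration along with the two stopping criteria they imply; the dictionary layout is an illustrative assumption, only the values come from the text:

```python
# Hyperparameters listed in the paper.
CONFIG = {
    "accuracy": 0.001,      # stop once the error falls below this limit
    "learning_rate": 0.4,
    "max_epochs": 400,
    "hidden_neurons": 70,
    "output_units": 1,
}

def should_stop(epoch, error, cfg=CONFIG):
    # Training halts after the epoch budget is spent or once the error
    # target is met, whichever happens first.
    return epoch >= cfg["max_epochs"] or error < cfg["accuracy"]
```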
The main advantage of this back propagation algorithm is that it can first identify whether the given
image is a face image or a non-face image, and then recognize the given input image. Thus the back
propagation neural network classifies the input image as a recognized image.
4. Experimentation and Results
For experimentation in this paper, 200 images from the Yale database are taken, and a sample of 20
face images is shown in fig 3. One image, shown in fig 4a, is taken as the input
image. The mean image and the reconstructed output image by PCA are shown in fig 4b and 4c.
For the BPNN, a training set of 50 images is shown in fig 5a, and the eigenfaces and recognized
output image are shown in fig 5b and 5c.
Fig 3. Sample Yale Database Images
4(a) 4(b) 4 (c)
Fig 4.(a) Input Image , (b)Mean Image , (c) Recognized Image by PCA method
5(a) 5(b) 5(c)
Fig 5 (a) Training set, (b) Eigen faces , (c) Recognized Image by BPNN method
Table 1 shows the comparison of acceptance ratio and execution time values for 40, 60,
120, 160 and 200 images of the Yale database. A graphical analysis of the same is shown in fig 6.

No. of images   Acceptance ratio (%)         Execution time (seconds)
                PCA     PCA with BPNN        PCA     PCA with BPNN
 40             92.4    96.5                  38      36
 60             90.6    94.3                  46      43
120             87.9    92.8                  55      50
160             85.7    90.2                  67      58
200             83.5    87.1                  74      67

Table 1. Comparison of acceptance ratio and execution time for Yale database images
Fig. 6: Comparison of acceptance ratio and execution time
5. CONCLUSION
Face recognition has received substantial attention from researchers in the biometrics, pattern
recognition and computer vision communities. In this paper, face recognition using eigenfaces
has been shown to be accurate and fast. When the BPNN technique is combined with PCA,
non-linear face images can be recognized easily. Hence it is concluded that this method achieves
an acceptance ratio of more than 90% and an execution time of only a few seconds. Face recognition
can be applied in security measures at airports, passport verification, verification of criminal lists in
police departments, visa processing, verification of electoral identification, and card security
measures at ATMs.
6. REFERENCES
[1]. B.K.Gunturk, A.U.Batur and Y.Altunbasak, (2003) "Eigenface-domain super-resolution for
face recognition," IEEE Transactions on Image Processing, vol. 12, no. 5, pp. 597-606.
[2]. M.A.Turk and A.P.Pentland, (1991) "Eigenfaces for Recognition," Journal of Cognitive
Neuroscience, vol. 3, pp. 71-86.
[3]. T.Yahagi and H.Takano,(1994) “Face Recognition using neural networks with multiple
combinations of categories,” International Journal of Electronics Information and
Communication Engineering., vol.J77-D-II, no.11, pp.2151-2159.
[4]. S.Lawrence, C.L.Giles, A.C.Tsoi and A.D.Back, (1997) "Face Recognition: A Convolutional
Neural-Network Approach," IEEE Transactions on Neural Networks, vol. 8, no. 1, pp. 98-113.
[5]. C.M.Bishop, (1995) "Neural Networks for Pattern Recognition," Oxford, U.K.: Oxford
University Press.
[6]. Kailash J. Karande and Sanjay N. Talbar, "Independent Component Analysis of Edge
Information for Face Recognition," International Journal of Image Processing, vol. 3,
no. 3, pp. 120-131.