This document discusses feature selection and feature normalization techniques for face recognition. It analyzes the use of the discrete cosine transform (DCT) to extract local features from blocks of images, resulting in three feature sets. The local feature vectors are then normalized in one of two ways: scaling them to unit norm, or dividing each coefficient by its standard deviation learned from the training set. The document aims to improve face recognition by performing local analysis of images and fusing the outputs from local features.
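The block-DCT pipeline and the two normalizations described above can be sketched as follows. This is a minimal numpy illustration, not the original implementation: the block size, the number of kept coefficients, and the simplified zigzag order are assumptions made for the example.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors).
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] /= np.sqrt(2.0)
    return c

def block_dct_features(image, block=8, keep=6):
    # Split the image into non-overlapping blocks, apply a 2-D DCT to each
    # block, and keep a few low-frequency coefficients per block.
    C = dct_matrix(block)
    h, w = image.shape
    feats = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            d = C @ image[i:i+block, j:j+block] @ C.T
            # Crude zigzag substitute: first few AC coefficients near the
            # top-left corner, skipping the DC term for illumination robustness.
            zz = [d[0, 1], d[1, 0], d[2, 0], d[1, 1], d[0, 2], d[0, 3]][:keep]
            feats.append(np.array(zz))
    return feats

def unit_norm(vecs):
    # Normalization 1: scale each local feature vector to unit length.
    return [v / (np.linalg.norm(v) + 1e-12) for v in vecs]

def std_normalize(train_feats, vecs):
    # Normalization 2: divide each coefficient by its standard deviation
    # estimated over all local vectors in the training set.
    std = np.std(np.stack([v for f in train_feats for v in f]), axis=0) + 1e-12
    return [v / std for v in vecs]
```

For a 32x32 image with 8x8 blocks this yields 16 local feature vectors, each of which can then feed a per-block classifier before fusion.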
Multi Local Feature Selection Using Genetic Algorithm For Face Identification (CSCJournals)
Face recognition is a biometric authentication method that has become more significant and relevant in recent years. It is a maturing technology that has been employed in many large-scale systems such as the Visa Information System, surveillance, access control, and multimedia search engines. Generally, there are three categories of approaches for recognition: global facial features, local facial features, and hybrid features. Although the global feature approach is the most researched, it is still plagued with difficulties and drawbacks due to factors such as face orientation, illumination, and the presence of foreign objects. This paper presents an improved offline face recognition algorithm based on a multi-local feature selection approach for grayscale images. The approach consists of five stages: face detection, facial feature (eyes, nose and mouth) extraction, moment generation, facial feature classification, and face identification. These stages were applied to 3065 images from three distinct facial databases: ORL, Yale, and AR. The experimental results show that recognition rates of more than 89% were achieved, compared with other global-based and local-based feature approaches. The results also reveal that the technique is robust and invariant to translation, orientation, and scaling.
Improved Face Recognition across Poses using Fusion of Probabilistic Latent V... (TELKOMNIKA JOURNAL)
Uncontrolled environments have often required face recognition systems to identify faces
appearing in poses that are different from those of the enrolled samples. To address this problem,
probabilistic latent variable models have been used to perform face recognition across poses. Although
these models have demonstrated outstanding performance, it is not clear whether richer parameters
always lead to performance improvement. This work investigates this issue by comparing performance of
three probabilistic latent variable models, namely PLDA, TFA, and TPLDA, as well as the fusion of these
classifiers on collections of video data. Experiments on the VidTIMIT+UMIST and the FERET datasets
have shown that fusion of multiple classifiers improves face recognition across poses, given that the
individual classifiers have similar performance. This proves that different probabilistic latent variable
models learn statistical properties of the data that are complementary (not redundant). Furthermore, fusion
across multiple images has also been shown to produce better performance than recognition using a single
still image.
Age Invariant Face Recognition using Convolutional Neural Network (IJECEIAES)
In recent years, face recognition across aging has become a popular and challenging task in the area of face recognition. Many researchers have contributed to this area, but there is still a significant gap to fill. The choice of feature extraction and classification algorithms plays an important role here. Deep learning with Convolutional Neural Networks provides a combination of feature extraction and classification in a single structure. In this paper, we present a novel 7-layer CNN architecture for recognizing facial images across aging. We have performed extensive experiments to test the performance of the proposed system using two standard datasets, FG-NET and MORPH (Album II). The Rank-1 recognition accuracy of our proposed system is 76.6% on FG-NET and 92.5% on MORPH (Album II). Experimental results show a significant improvement over available state-of-the-art methods with the proposed CNN architecture and classifier.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
FACIAL EXTRACTION AND LIP TRACKING USING FACIAL POINTS (ijcseit)
Automatic facial feature extraction is one of the most important and frequently attempted problems in computer vision. It is a necessary step in face recognition and facial image compression. Many methods have been proposed in the literature for the facial feature extraction task, but they remain far from satisfactory in terms of extraction accuracy and processing speed, and none fully reflects face structure and texture. In this paper, we propose a method for fast and accurate extraction of feature points such as the eyes, nose, mouth, and eyebrows from dynamic images for the purpose of face recognition. The proposed method achieves high positional accuracy at a low computing cost by combining shape extraction with geometric features of facial images such as the eyes, nose, and mouth. The paper also presents a facial expression synthesis system based on tracking facial points in frontal image sequences. Selected facial points are automatically tracked using cross-correlation based optical flow. The synthesis system uses a simple facial feature model with a small set of control points that can be tracked in the original facial image sequences.
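The cross-correlation tracking step described above can be sketched in numpy: for each facial point, take a small template around it in the previous frame and search a window in the current frame for the best normalized cross-correlation match. The template and search half-widths are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation of two equal-sized patches, in [-1, 1].
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def track_point(prev, curr, pt, th=3, sh=6):
    # Track one facial point from `prev` to `curr`.
    # th: template half-width; sh: search half-width (both assumed values).
    y, x = pt
    T = prev[y-th:y+th+1, x-th:x+th+1]
    best, best_pt = -2.0, pt
    for dy in range(-sh, sh + 1):
        for dx in range(-sh, sh + 1):
            cy, cx = y + dy, x + dx
            P = curr[cy-th:cy+th+1, cx-th:cx+th+1]
            if P.shape != T.shape:
                continue  # candidate window fell off the image
            s = ncc(T, P)
            if s > best:
                best, best_pt = s, (cy, cx)
    return best_pt, best
```

Applied to each control point per frame pair, this gives the sparse optical flow that drives the synthesis model.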
Facial Expression Recognition Using Local Binary Pattern and Support Vector M... (AM Publications)
Facial expression analysis is a remarkable and demanding problem with significant applications in fields such as human-computer interaction and data-driven animation. Deriving an efficient facial representation from the original face images is a crucial step for facial expression recognition. A facial representation based on statistical local features, Local Binary Patterns (LBP), is empirically assessed, and several machine learning techniques are evaluated on various databases. LBP features, which are effective and efficient for facial expression recognition, are widely used by researchers. The Cohn-Kanade database is used in the present work, and the programming language used is MATLAB. First, the face area is divided into small regions, from which Local Binary Pattern histograms are extracted and concatenated into a single feature vector. This feature vector forms a compact representation of the face and is useful for determining the similarity between images.
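The divide-histogram-concatenate step described in this abstract can be sketched as follows. This is a minimal basic-LBP illustration in numpy (the original work uses MATLAB); the 4x4 grid and 256-bin histograms are assumed parameters.

```python
import numpy as np

def lbp_image(img):
    # Basic 8-neighbor LBP: compare each pixel's 3x3 neighbors against the
    # center value and pack the 8 comparison bits into a code in [0, 255].
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
        code |= (n >= c).astype(np.uint8) << bit
    return code

def lbp_feature(img, grid=(4, 4)):
    # Divide the face into grid regions, histogram the LBP codes per region,
    # and concatenate the normalized histograms into one feature vector.
    codes = lbp_image(img)
    gy, gx = grid
    h, w = codes.shape
    hists = []
    for i in range(gy):
        for j in range(gx):
            r = codes[i*h//gy:(i+1)*h//gy, j*w//gx:(j+1)*w//gx]
            hist, _ = np.histogram(r, bins=256, range=(0, 256))
            hists.append(hist / max(r.size, 1))
    return np.concatenate(hists)
```

The resulting vector (16 regions x 256 bins here) is what a classifier such as an SVM would consume.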
FACE VERIFICATION ACROSS AGE PROGRESSION USING ENHANCED CONVOLUTION NEURAL NE... (sipij)
This paper proposes a deep learning method for facial verification of aging subjects. Facial aging comprises texture and shape variations that affect the human face as time progresses; accordingly, there is a demand for robust methods to verify facial images as they age. In this paper, a deep learning method based on a pre-trained GoogLeNet convolutional network fused with Histogram of Oriented Gradients (HOG) and Local Binary Pattern (LBP) feature descriptors is applied for feature extraction and classification. The experiments are based on facial images collected from the MORPH and FG-NET benchmark datasets. Euclidean distance is used to measure the similarity between pairs of feature vectors across the age gap. Experimental results show an improvement in validation accuracy on the FG-NET database, where it reached 100%, while on the MORPH database the validation accuracy is 99.8%. The proposed method outperforms current state-of-the-art methods.
Abstract: This paper presents a new face-parts information analyzer as a promising model for detecting faces and locating facial features in images. The main objective is to build a fully automated human facial measurement system from images with complex backgrounds. Detection of facial features such as the eyes, nose, and mouth is an important step for many subsequent facial image analysis tasks. The study detects each facial part and marks it with a circle or rectangle; face detection here depends on matching the face against patterns obtained through pattern recognition. The study presents a novel and simple model based on a mixture of techniques and algorithms in a shared pool: the Viola-Jones object detection framework combined with geometric and symmetry information about the face parts in the image.
Keywords: Face detection, Video frames, Viola-Jones, Skin detection, Skin color classification, Face recognition, Pattern recognition, Skin color.
Title: Face Detection Using Modified Viola Jones Algorithm
Author: Alpika Gupta, Dr. Rajdev Tiwari
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
A novel approach for performance parameter estimation of face recognition bas... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Nowadays, face recognition is a widely used application of image analysis and pattern recognition. In biometrics research, automatic face and facial expression recognition attracts researchers' interest. To classify facial expressions into different categories, it is necessary to extract the important facial features that contribute to identifying particular expressions. Recognition and classification of human facial expressions by computer is an important issue in developing automatic facial expression recognition systems in the vision community. In this paper, a facial expression recognition system is proposed.
Review of face detection systems based artificial neural networks algorithms (ijma)
Face detection is one of the most relevant applications of image processing and biometric systems. Artificial neural networks (ANN) have been used in the fields of image processing and pattern recognition. There is a lack of literature surveys giving an overview of the studies and research on the use of ANN in face detection. Therefore, this work presents a general review of face detection studies and systems based on different ANN approaches and algorithms. The strengths and limitations of these studies and systems are also discussed.
Face Recognition based on STWT and DTCWT using two dimensional Q-shift Filters (IJERA Editor)
Biometrics recognizes a person more effectively than traditional identification methods. In this paper, we propose face recognition based on the Single Tree Wavelet Transform (STWT) and the Dual Tree Complex Wavelet Transform (DTCWT). The face images are preprocessed to enhance image quality and resized. DTCWT and STWT are applied to the face images to extract features. The Euclidean distance between the features of database images and test face images is used to compute the performance parameters. The performance of STWT is compared with DTCWT, and it is observed that DTCWT gives better results than the STWT technique.
Company Presentation of Crea en Artiva Nederland B.V. (tomravenstein)
Crea en Artiva specializes in designing and producing creative, attention-grabbing product presentations. Creativity, innovation, and communication are the hallmarks that distinguish Crea en Artiva, with the client's objective always coming first. Their work ranges from design, displays, stands, shop-in-shop concepts, window dressing, and interior concepts to events and in-store communication.
Workshop on Moderating Usability Tests, WUD São Paulo 2010 (Andressa Vieira)
Andressa Vieira and Elisa Volpato demonstrated in practice what it is like to moderate a usability test. Some volunteers took the role of moderator with a fictitious (and quite difficult) tester to exemplify day-to-day testing situations: the very insecure tester who keeps asking questions and seeking confirmation of everything, the one who cannot focus on the task and keeps telling stories from their own life, and task prompts in which the name of the link cannot be spoken so as not to influence task execution.
At the end, the presenters listed some examples of difficult testing situations with suggestions on how to handle them. The audience was quite varied, from people just starting in the field to design professors.
Comparative Studies for the Human Facial Expressions Recognition Techniques (ijtsrd)
This article reviews the different techniques for recognizing facial expressions. First, it describes emotions, their types, and the techniques for measuring them. It then covers face identification, techniques for extracting features from the face, and the various classifiers designed to classify the extracted features. Finally, a comparative study of some recent work is presented. Sheena Gaur | Mayank Dixit | Sayed Nasir Hasan | Anwaar Wani | Tanveer Kazi | Ahsan Z Rizvi, "Comparative Studies for the Human Facial Expressions Recognition Techniques", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd28027.pdf Paper URL: https://www.ijtsrd.com/computer-science/other/28027/comparative-studies-for-the-human-facial-expressions-recognition-techniques/sheena-gaur
PRE-PROCESSING TECHNIQUES FOR FACIAL EMOTION RECOGNITION SYSTEM (IAEME Publication)
Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, automatic recognition of facial expressions using image template matching suffers from the natural variability of facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature extraction and classification technique for emotion recognition is still an open problem. Image pre-processing and normalization are a significant part of face recognition systems, since changes in lighting conditions produce a dramatic decrease in recognition performance. In this paper, pre-processing techniques based on K-Nearest Neighbor, a Cultural Algorithm, and a Genetic Algorithm are used to remove noise from the facial image to enhance emotion recognition. The performance of the pre-processing techniques is evaluated with various performance metrics.
Two Level Decision for Recognition of Human Facial Expressions using Neural N... (IIRindia)
Facial expressions are the outcome of a person's inner feelings, reflecting internal emotional states and intentions. A person's face provides a great deal of information, such as age, gender, identity, mood, and expression, so faces play an important role in recognizing a person's expressions. In this research, an attempt is made to design a model that classifies human facial expressions according to features extracted from facial images by applying 3-sigma limits in a second-level decision using a Neural Network (NN). Nowadays, Artificial Neural Networks (ANN) are widely used as a tool for solving many decision modeling problems. In this paper, feed-forward neural networks are constructed as an expression classification system for grayscale facial images. Three expression groups, Happy, Sad, and Anger, are used in the classification system. A second-level decision is proposed in which the output obtained from the neural network (primary level) is refined at the second level in order to improve the recognition accuracy. The accuracy of the system is analyzed by varying the range of the expression groups, and the efficiency of the system is demonstrated through experimental results.
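The abstract does not spell out how its 3-sigma limits refine the network's primary decision, but one plausible reading can be sketched as follows: accept the primary class only if its score falls within mean plus or minus three standard deviations of that class's training-time scores, otherwise reject. The score layout, the per-class statistics, and the "unknown" rejection label are all assumptions made for this illustration.

```python
import numpy as np

def second_level_decision(score, cls, train_scores, k=3.0):
    # Second-level refinement (illustrative): keep the primary-level class
    # `cls` only if `score` lies within mean +/- k*sigma of the scores that
    # class produced on training data; otherwise reject the decision.
    s = np.asarray(train_scores[cls], dtype=float)
    mu, sd = s.mean(), s.std()
    if mu - k * sd <= score <= mu + k * sd:
        return cls
    return "unknown"
```

A rejected sample could then be re-examined (e.g. against the runner-up class) rather than being misclassified outright.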
Face recognition plays a major role in biometrics, and feature selection is a major issue in face recognition. This paper presents a survey on face recognition. There are many methods to extract face features; in some advanced methods, features can be extracted in a single scan through the raw image and lie in a lower-dimensional space while still retaining facial information efficiently. The methods used to extract features are robust to low-resolution images, and the approach is a trainable system for selecting face features. After the feature selection procedure, the next step is matching for face recognition. Recognition accuracy is increased by these advanced methods.
This report is based on research. This whole research content are taken by books and websites. you can learn about face recognition history, how's it is work traditional and in technical way, introduction of some face recognition software and devices. we also add face recognition algorithm in report.
AN IMPROVED TECHNIQUE FOR HUMAN FACE RECOGNITION USING IMAGE PROCESSINGijiert bestjournal
Face recognition is a computer application technique for automatically identifying or
verifying a person from a digital image or a video frame source. To do this is by comparing
selected facial features from the digital image and a face dataset. It is basically used in
security systems and can be compared to other biometrics such as fingerprint recognition or
eye, iris recognition systems. The main limitation of the current face recognition system is
that they only detect straight faces looking at the camera. Separate versions of the system
could be trained for each head orientation, and the results can be combined using arbitration
methods similar to those presented here. In earlier work, the face position must be centerlight
position; any lighting effect will affect the system. Similarly the eyes of person must be
open and without glass.
Face Verification Across Age Progression using Enhanced Convolution Neural Ne...sipij
This paper proposes a deep learning method for facial verification of aging subjects. Facial aging is a
texture and shape variations that affect the human face as time progresses. Accordingly, there is a demand
to develop robust methods to verify facial images when they age. In this paper, a deep learning method
based on GoogLeNet pre-trained convolution network fused with Histogram Orientation Gradient (HOG)
and Local Binary Pattern (LBP) feature descriptors have been applied for feature extraction and
classification
Implementation of Face Recognition in Cloud Vision Using Eigen FacesIJERA Editor
Cloud computing comes in several different forms and this article documents how service, Face is a complex multidimensional visual model and developing a computational model for face recognition is difficult. The papers discuss a methodology for face recognition based on information theory approach of coding and decoding the face image. Proposed System is connection of two stages – Feature extraction using principle component analysis and recognition using the back propagation Network. This paper also discusses our work with the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA and its initial performance results. The dispute lies with how to performance task partitioning from mobile devices to cloud and distribute compute load among cloud servers to minimize the response time given diverse communication latencies and server compute powers
REVIEW OF FACE DETECTION SYSTEMS BASED ARTIFICIAL NEURAL NETWORKS ALGORITHMSijma
Face detection is one of the most relevant applications of image processing and biometric systems.
Artificial neural networks (ANN) have been used in the field of image processing and pattern recognition.
There is lack of literature surveys which give overview about the studies and researches related to the using
of ANN in face detection. Therefore, this research includes a general review of face detection studies and
systems which based on different ANN approaches and algorithms. The strengths and limitations of these
literature studies and systems were included also.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Fl33971979
B.Vijay, A.Nagabhushana Rao / International Journal of Engineering Research and Applications (IJERA) ISSN: 2248-9622 www.ijera.com
Vol. 3, Issue 3, May-Jun 2013, pp.971-979

Analysis Of Face Recognition - A Case Study On Feature Selection And Feature Normalization

B.Vijay 1, A.Nagabhushana Rao 2
1,2 Assistant Professor, Dept of CSE, AITAM, Tekkali, Andhra Pradesh, INDIA.
Abstract
In this paper, the effects of feature selection and feature normalization on a face recognition scheme are presented. From the local features that are extracted using the block-based discrete cosine transform (DCT), three feature sets are derived. These local feature vectors are normalized in two different ways: by making them unit norm, and by dividing each coefficient by its standard deviation as learned from the training set. This work uses local information obtained through the block-based DCT. The main idea is to mitigate the effects of expression, illumination and occlusion variations by performing local analysis and by fusing the outputs of the extracted local features at the feature level and at the decision level.

Keywords- Image Processing, DCT, Face Recognition
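As a concrete illustration, the two normalization schemes described in the abstract can be sketched as follows (a minimal NumPy sketch; the function names and the one-feature-vector-per-row layout are our own assumptions, not prescribed by the paper):

```python
import numpy as np

def unit_norm(features):
    """Scale each local feature vector (one per row) to unit L2 norm."""
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    return features / np.where(norms == 0, 1, norms)  # guard zero rows

def std_normalize(train_features, test_features):
    """Divide each coefficient by its standard deviation
    estimated from the training set."""
    std = train_features.std(axis=0)
    std[std == 0] = 1  # guard against constant coefficients
    return train_features / std, test_features / std
```

After either normalization, both training and test vectors live on a comparable scale, so distance-based matching is not dominated by the largest DCT coefficients.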
I. INTRODUCTION
Face recognition has been an active research area over the last 30 years. Scientists from different areas of the psychophysical sciences, as well as from different areas of computer science, have studied it. Psychologists and neuroscientists mainly deal with the human perception part of the topic, whereas engineers working on machine recognition of human faces deal with the computational aspects of face recognition. Face recognition has applications mainly in the fields of biometrics, access control, law enforcement, and security and surveillance systems. Biometrics refers to methods for automatically verifying or identifying individuals using their physiological or behavioral characteristics [1-4].
A. Human Face Recognition
When building artificial face recognition systems, scientists try to understand the architecture of the human face recognition system. Focusing on the methodology of the human system may be useful for understanding the basic problem. However, the human face recognition system draws on more than the machine recognition system, which uses just 2-D data. The human system uses data obtained from some or all of the senses: visual, auditory, tactile, etc. All these data are used either individually or collectively for the storage and remembering of faces. In many cases, the surroundings also play an important role in human face recognition. It is hard for a machine recognition system to handle so much data and their combinations. However, it is also hard for a human to remember many faces, due to storage limitations. A key potential advantage of a machine system is its memory capacity [1-4], whereas for the human face recognition system the important feature is its parallel processing capacity. The issue of which features humans use for face recognition has been studied, and it has been argued that both global and local features are used. It is harder for humans to recognize faces which they consider as neither "attractive" nor "unattractive". The low spatial frequency components are used to determine the sex of the individual, whereas the high frequency components are used to identify the individual. The low frequency components give the global description of the individual, while the high frequency components are required for the finer details needed in the identification process. Both holistic and feature information are important for the human face recognition system. Studies suggest the possibility of global descriptions serving as a front end for better feature-based perception [1-4]. If dominant features are present, such as big ears or a small nose, holistic descriptions may not be used. Also, recent studies show that an inverted face (i.e., all the intensity values are subtracted from 255 to obtain the inverse image in gray scale) is much harder to recognize than a normal face. Hair, eyes, mouth and face outline have been determined to be more important than the nose for perceiving and remembering faces. It has also been found that the upper part of the face is more useful than the lower part for recognition. Aesthetic attributes (e.g., beauty, attractiveness, pleasantness) also play an important role in face recognition: the more attractive the faces, the more easily they are remembered.
For humans, photographic negatives of faces are difficult to recognize, but there is not much study on why this is so. Also, a study on the direction of illumination showed the importance of top lighting: it is easier for humans to recognize faces illuminated from top to bottom than faces illuminated from bottom to top. According to neurophysicists, the analysis of facial expressions is done in parallel to face recognition in the human face recognition system. Some prosopagnosic patients, who have difficulties in identifying familiar faces, can nevertheless recognize emotional facial expressions, while patients who suffer from organic brain syndrome do poorly at expression analysis but perform face recognition quite well.
B. Machine Recognition of Faces
Although studies of human face recognition were expected to serve as a reference for machine recognition of faces, research on machine recognition has developed independently of studies on human face recognition. During the 1970s, typical pattern classification techniques, which use measurements between features in faces or face profiles, were used. During the 1980s, work on face recognition remained nearly stable. Since the early 1990s, research interest in machine recognition of faces has grown tremendously. The reasons may be:
- an increase in emphasis on civilian/commercial research projects;
- the studies on neural network classifiers, with emphasis on real-time computation and adaptation;
- the availability of real-time hardware;
- the growing need for surveillance applications.
a. Statistical Approaches
Statistical methods include template matching based systems, where the training and test images are matched by measuring the correlation between them. Moreover, statistical methods include projection-based methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), etc. In fact, projection-based systems came out due to the shortcomings of the straightforward template matching based approaches; that is, trying to carry out the required classification task in a space of extremely high dimensionality.
Template Matching: Brunelli and Poggio [3] suggest that the optimal strategy for face recognition is holistic and corresponds to template matching. In their study, they compared a geometric feature based technique with a template matching based system. In the simplest form of template matching, the image (as 2-D intensity values) is compared with a single template representing the whole face, using a distance metric. Although recognition by matching raw images has been successful under limited circumstances, it suffers from the usual shortcomings of straightforward correlation-based approaches, such as sensitivity to face orientation, size, variable lighting conditions, and noise. The reason for this vulnerability of direct matching methods lies in their attempt to carry out the required classification in a space of extremely high dimensionality. In order to overcome the curse of dimensionality, the connectionist equivalent of data compression methods is employed first. However, it has been successfully argued that the resulting feature dimensions do not necessarily retain the structure needed for classification, and that more general and powerful methods for feature extraction, such as projection-based systems, are required. The basic idea behind projection-based systems is to construct low dimensional projections of a high dimensional point cloud, by maximizing an objective function such as the deviation from normality.
b. Face Detection and Recognition by PCA
The Eigenface Method of Turk and Pentland [5] is one of the main methods applied in the literature, and is based on the Karhunen-Loeve expansion. Their study was motivated by the earlier work of Sirovich and Kirby [6], [7]. It is based on the application of Principal Component Analysis to human faces. It treats the face images as 2-D data, and classifies them by projecting them onto the eigenface space, which is composed of eigenvectors obtained from the variance of the face images. Eigenface recognition derives its name from the German prefix "eigen", meaning "own" or "individual". The Eigenface method is considered the first working facial recognition technology. When the method was first proposed by Turk and Pentland [5], they worked on the image as a whole. Also, they used a Nearest Mean classifier to classify the face images. Using the observation that the projections of a face image and of a non-face image are quite different, a method of detecting the face in an image is obtained. They applied the method on a database of 2500 face images of 16 subjects, digitized at all combinations of 3 head orientations, 3 head sizes and 3 lighting conditions. They conducted several experiments to test the robustness of their approach to illumination changes, variations in size, head orientation, and the differences between training and test conditions. They reported that the system was fairly robust to illumination changes, but degrades quickly as the scale changes [5]. This can be explained by the fact that images obtained under different illumination conditions remain correlated, whereas the correlation between face images at different scales is rather low. The eigenface approach works well as long as the test image is similar to the training images used for obtaining the eigenfaces.
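The eigenface procedure described above can be sketched in a few lines of NumPy (a simplified sketch, assuming vectorized images stacked one per row; it uses the standard small-covariance trick for M images rather than any specific implementation detail from [5]):

```python
import numpy as np

def eigenfaces(images, k):
    """Compute the top-k eigenfaces from a stack of vectorized
    face images (one image per row, M rows of N pixels)."""
    mean_face = images.mean(axis=0)
    A = images - mean_face                 # centred data, M x N
    L = A @ A.T                            # small M x M surrogate covariance
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]     # keep the k largest
    U = A.T @ vecs[:, order]               # back-project to image space
    U /= np.linalg.norm(U, axis=0)         # unit-norm eigenfaces (columns)
    return mean_face, U

def project(image, mean_face, U):
    """Project a vectorized face onto the eigenface space."""
    return U.T @ (image - mean_face)
```

Classification then reduces to comparing projection coefficients, e.g. with a Nearest Mean rule as in the original method.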
C. Face Recognition by LDA
Etemad and Chellappa [8] proposed a method based on the application of Linear/Fisher Discriminant Analysis to the face recognition process. LDA is carried out via scatter matrix analysis. The aim is to find the optimal projection which maximizes the between-class scatter of the face data and minimizes the within-class scatter of the face data. As in the case of PCA, where the eigenfaces are calculated by eigenvalue analysis, the projections of LDA are calculated by solving the generalized eigenvalue equation.
Subspace LDA: An alternative method, which combines PCA and LDA, has been studied. This method consists of two steps: the face image is projected into the eigenface space, which is constructed by PCA, and then the eigenface space projected vectors are projected into the LDA classification space to construct a linear classifier. In this method, the choice of the number of eigenfaces used in the first step is critical, because this choice enables the system to generate class-separable features via LDA from the eigenface space representation. The generalization/overfitting problem can be addressed in this manner. In these studies, a weighted distance metric guided by the LDA eigenvalues was also employed to improve the performance of the subspace LDA method.
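The scatter-matrix formulation of LDA described above can be sketched as follows (a minimal NumPy sketch; the small regularization term and the function names are our own assumptions, added so the within-class scatter stays invertible):

```python
import numpy as np

def lda_projection(X, y, k):
    """Fisher LDA: find the k directions maximizing between-class
    scatter Sb relative to within-class scatter Sw, i.e. solve the
    generalized eigenvalue problem Sb w = lambda Sw w."""
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)          # within-class scatter
        diff = (mc - mean_all)[:, None]
        Sb += len(Xc) * (diff @ diff.T)        # between-class scatter
    Sw += 1e-6 * np.eye(d)                     # regularize (assumption)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1][:k]
    return vecs[:, order].real                 # projection matrix W
```

In the subspace LDA variant, X would hold eigenface-space coefficients rather than raw pixels.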
II. BACKGROUND INFORMATION
Most research on face recognition falls into two main categories: feature-based and holistic. Feature-based approaches to face recognition basically rely on the detection and characterization of individual facial features and their geometrical relationships. Such features generally include the eyes, nose, and mouth. The detection of faces and their features prior to performing verification or recognition makes these approaches robust to positional variations of the faces in the input image. Holistic or global approaches to face recognition, on the other hand, involve encoding the entire facial image; thus, they assume that all faces are constrained to particular positions, orientations, and scales [10-14].
Feature-based approaches were more predominant in the early attempts at automating the process of face recognition. Some of this early work involved the use of very simple image processing techniques (such as edge detection, signatures, and so on) for detecting faces and their features. In this approach an edge map was first extracted from an input image and then matched to a large oval template, with possible variations in position and size. The presence of a face was then confirmed by searching for edges at estimated locations of certain features like the eyes and mouth [10-14].
A successful holistic approach to face recognition uses the Karhunen-Loeve transform (KLT). This transform exhibits pattern recognition properties that were largely overlooked initially because of the complexity involved in its computation. However, the KLT does not achieve adequate robustness against variations in face orientation, position, and illumination, which is why it is usually accompanied by further processing to improve its performance. An alternative holistic method for face recognition can be compared with this popular approach. The basic idea is to use the discrete cosine transform (DCT) as a means of feature extraction for later face classification. The DCT is computed for a cropped version of an input image containing a face, and only a small subset of the coefficients is maintained as a feature vector. To improve performance, various normalization techniques are invoked prior to recognition to account for small perturbations in facial geometry and illumination. The main merit of the DCT is its relationship to the KLT: of the deterministic discrete transforms, the DCT best approaches the KLT. Thus, it is expected that it too will exhibit desirable pattern recognition capabilities [15-21].
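As an illustration of the block-based DCT feature extraction idea, the following sketch computes the 2-D DCT of non-overlapping blocks and keeps a few low-frequency coefficients per block in zigzag order (the block size, the number of retained coefficients, and the function names are illustrative assumptions, not values prescribed by the paper):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def block_dct_features(img, block=8, keep=10):
    """Split a grayscale image into non-overlapping blocks, take the
    2-D DCT of each block, and keep the first `keep` coefficients in
    JPEG-style zigzag order as a low-dimensional local feature vector."""
    C = dct_matrix(block)
    h, w = img.shape
    zz = sorted(((r, c) for r in range(block) for c in range(block)),
                key=lambda rc: (rc[0] + rc[1],
                                rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            D = C @ img[r:r + block, c:c + block] @ C.T  # 2-D DCT of block
            feats.append([D[p] for p in zz[:keep]])
    return np.array(feats)   # one local feature vector per block
```

Each row of the result is one local feature vector; these are the vectors that the normalization schemes from the abstract would then operate on.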
A. Transformation Based Systems
Podilchuk and Zhang proposed a method which finds the feature vectors using the DCT. Their approach is to detect the critical areas of the face. The system is based on matching the image to a map of invariant facial attributes associated with specific areas of the face. This technique is quite robust, since it relies on global operations over a whole region of the face. A codebook of feature vectors or codewords is determined for each subject from the training set. They examine recognition performance based on feature selection, the number of features or codebook size, and feature dimensionality. For this feature selection, several block-based transformations are available; among these, block-based DCT coefficients produce good low dimensional feature vectors with high recognition performance. The main merit of the DCT is its relationship to the KLT. The KLT is an optimal transform based on various performance criteria, but in face recognition the DCT becomes of more value than the KLT because of its computational speed. The KLT is not only more computationally intensive, but it must also be redefined every time the statistics of its input signals change. Therefore, in the context of face recognition, the eigenvectors of the KLT (eigenfaces) should ideally be recomputed every time a new face is added to the training set of known faces [15-21].
B. Karhunen-Loeve Transform
The Karhunen-Loeve Transform (KLT) is a rotation transformation that aligns the data with its eigenvectors and decorrelates the input image data. The transformed image may make evident features not discernible in the original data, or alternatively may preserve the essential information content of the image, for a given application, with a reduced number of transformed dimensions. The KLT develops a new coordinate system in the multispectral vector space, in which the data can be represented without correlation, as defined by:

Y = Gx ... (1)

where Y is the new coordinate system, G is a linear transformation of the original coordinates, namely the transposed matrix of eigenvectors of the pixel data's covariance in x space, and x is the original coordinate system. From the equation for Y, we can get the principal components and choose the first principal component from this transformation, which seems to be the best representation of the input image.
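Equation (1) can be illustrated directly: G is built from the eigenvectors of the data covariance, and Y = Gx yields decorrelated coordinates (a minimal NumPy sketch under our own naming, applied to centred data samples stacked one per row):

```python
import numpy as np

def klt(X):
    """Karhunen-Loeve transform: G is the transposed eigenvector matrix
    of the data covariance, and Y = G x decorrelates each sample x."""
    Xc = X - X.mean(axis=0)                 # centre the data
    cov = np.cov(Xc, rowvar=False)          # covariance in x space
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues ascending
    G = vecs[:, ::-1].T                     # rows = eigenvectors, largest first
    Y = Xc @ G.T                            # Y = G x for every sample
    return Y, G
```

The first column of Y is the first principal component, the single direction that best represents the input, as noted above.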
The KLT, a linear transform, derives its basis functions from the statistics of the signal. The KLT has been researched extensively for use in recognition systems because it is an optimal transform in the sense of energy compaction, i.e., it places as much energy as possible in as few coefficients as possible. The KLT is also called Principal Component Analysis (PCA), and is sometimes referred to as the Singular Value Decomposition (SVD) in the literature. The transform is generally not separable, and thus the full matrix multiplication must be performed.
C. Quantisation
DCT-based image compression relies on two techniques to reduce the data required to represent the image. The first is quantization of the image's DCT coefficients; the second is entropy coding of the quantized coefficients. Quantization is the process of reducing the number of possible values of a quantity, thereby reducing the number of bits needed to represent it. In lossy image compression, the transformation decomposes the image into uncorrelated parts projected onto the orthogonal basis of the transformation. These bases are represented by eigenvectors, which are independent and orthogonal in nature. Taking the inverse of the transformed values results in the retrieval of the actual image data. For compression of the image, the independence of the transformed coefficients is exploited: truncating some of these coefficients will not affect the others. This truncation of the transformed coefficients is the lossy step involved in compression, and is known as quantization. In other words, the DCT is perfectly reconstructing when all the coefficients are calculated and stored to their full resolution. For high compression, the DCT coefficients are normalized by different scales, according to the quantization matrix [22]. Vector quantization (VQ) is mainly used for reducing or compressing the image data. The application of VQ to images for compression started in early 1975 with Hilbert, mainly for the coding of multispectral Landsat imagery [15-21].
D. Coding
After the DCT coefficients have been quantized, the DC coefficients are DPCM coded and then entropy coded along with the AC coefficients. The quantized AC and DC coefficient values are entropy coded in the same way, but because of the long runs of zeros among the AC coefficients, an additional run-length process is applied to them to reduce their redundancy. The quantized coefficients are all rearranged in a zigzag order. A run in this zigzag order is described by a RUN-SIZE symbol: RUN is a count of how many zeros occurred before the quantized coefficient, and the SIZE symbol is used in the same way as for the DC coefficients, but on their AC counterparts. The two symbols are combined to form a RUN-SIZE symbol, which is then entropy coded. Additional bits are also transmitted to specify the exact value of the quantized coefficient. A SIZE of zero in the AC coefficients is used to indicate that the rest of the 8 x 8 block is zeros (End of Block, or EOB) [1].
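The zigzag reordering and RUN counting of AC coefficients can be sketched as follows (a simplified sketch that emits (RUN, value) pairs rather than full RUN-SIZE symbols with amplitude bits, with (0, 0) standing in for the EOB marker):

```python
def zigzag_runlength(block):
    """Rearrange a quantized square block in JPEG-style zigzag order and
    encode the AC coefficients as (RUN, value) pairs, where RUN counts
    the zeros before each nonzero value; (0, 0) marks End of Block."""
    n = len(block)
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    ac = [block[r][c] for r, c in order[1:]]   # skip the DC coefficient
    pairs, run = [], 0
    for v in ac:
        if v == 0:
            run += 1                            # count the zero run
        else:
            pairs.append((run, v))
            run = 0
    pairs.append((0, 0))                        # EOB marker
    return pairs
```

The resulting pairs are what the entropy coder (e.g. Huffman coding, below) would then compress.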
Huffman Coding: Huffman coding is an efficient source-coding algorithm for source symbols that are not equally probable. A variable-length encoding algorithm was suggested by Huffman in 1952, based on the source symbol probabilities P(xi), i = 1, 2, ..., L. The algorithm is optimal in the sense that the average number of bits required to represent the source symbols is a minimum, provided the prefix condition is met [15-21]. The steps of the Huffman coding algorithm are given below:
- Arrange the source symbols in increasing order of their probabilities.
- Take the bottom two symbols and tie them together. Add the probabilities of the two symbols and write the sum on the combined node. Label the two branches with a '1' and a '0'.
- Treat this sum of probabilities as a new probability associated with a new symbol. Again pick the two smallest probabilities and tie them together to form a new probability. Each time we combine two symbols we reduce the total number of symbols by one; whenever we tie together two probabilities (nodes), we label the two branches with a '0' and a '1'.
- Continue the procedure until only one node is left (its probability should be one if your addition is correct). This completes the construction of the Huffman tree.
- To find the prefix codeword for any symbol, follow the branches from the final node back to the symbol, reading out the labels on the branches while tracing back the route. This is the codeword for the symbol.
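The steps above can be sketched with a small heap-based implementation that repeatedly ties together the two least probable nodes and prepends a '0' or '1' branch label (an illustrative sketch; a production encoder would also handle single-symbol sources and canonical code assignment):

```python
import heapq

def huffman_codes(probs):
    """Build Huffman codewords from a {symbol: probability} dict by
    repeatedly merging the two least probable nodes, labelling the
    two branches '0' and '1' (the steps listed above)."""
    heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)                       # tie-breaker for equal sums
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)       # two smallest probabilities
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]                         # {symbol: codeword}
```

More probable symbols end up with shorter codewords, and no codeword is a prefix of another.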
Huffman Decoding: The Huffman Code is an
instantaneous uniquely decodable block code. It is a
block code because each source symbol is mapped
into a fixed sequence of code symbols. It is
instantaneous because each codeword in a string of
code symbols can be decoded without referencing
succeeding symbols. That is, in any given Huffman
code no codeword is a prefix of any other codeword.
And it is uniquely decodable because a string of
code symbols can be decoded only in one way. Thus
any string of Huffman encoded symbols can be
decoded by examining the individual symbols of the
string in a left-to-right manner. Because we are using
an instantaneous uniquely decodable block code,
there is no need to insert delimiters between the
encoded pixels. For example, consider the 19-bit string
B. Vijay, A. Nagabhushana Rao / International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, www.ijera.com, Vol. 3, Issue 3, May-Jun 2013, pp. 971-979
1010000111011011111, which can be decoded
uniquely as x1 x3 x2 x4 x1 x1 x7 [7]. A left-to-right
scan of the string reveals that the first valid
codeword is 1, which is the code symbol for x1; the next
valid codeword is 010, which corresponds to x3.
Continuing in this manner, we obtain the completely
decoded sequence x1 x3 x2 x4 x1 x1 x7.
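A left-to-right decoding scan can be sketched as below. The code table here is a made-up example (the paper does not list its table); the point is that, because no codeword is a prefix of another, a symbol can be emitted as soon as the accumulated bits match any codeword, with no delimiters needed.

```python
def huffman_decode(bitstring, codes):
    """Decode a string of Huffman-coded bits left to right.

    codes maps symbol -> codeword; since the code is prefix-free,
    the scan emits a symbol the moment the buffered bits match a
    codeword, without looking at succeeding bits.
    """
    inverse = {cw: sym for sym, cw in codes.items()}
    symbols, buffer = [], ""
    for bit in bitstring:
        buffer += bit
        if buffer in inverse:        # first valid codeword found
            symbols.append(inverse[buffer])
            buffer = ""
    if buffer:
        raise ValueError("trailing bits do not form a codeword")
    return symbols

# Example with a hypothetical table:
table = {"x1": "0", "x2": "10", "x3": "110", "x4": "111"}
# huffman_decode("0101100111", table) scans 0|10|110|0|111
```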
E. Comparison with the KLT
The KLT completely decorrelates a signal
in the transform domain, minimizes MSE in data
compression, packs the most energy (variance) into
the fewest transform coefficients, and
minimizes the total representation entropy of the
input sequence. All of these properties, particularly
the first two, are extremely useful in pattern
recognition applications. The computation of the
KLT essentially involves determining the
eigenvectors of a covariance matrix of a set of
training sequences (images, in the case of face
recognition). As for the computational complexity of
the DCT and KLT, it is evident from the above
overview that the KLT requires significant
processing during training, since its basis set is data-
dependent. This overhead in computation, albeit
occurring in a non-time-critical off-line training
process, is avoided with the DCT. As for online
feature extraction, the KLT of an N x N image can
be computed in O(M'N^2) time, where M' is the
number of KLT basis vectors. In comparison, the
DCT of the same image can be computed in
O(N^2 log2 N) time because of its relation to the discrete
Fourier transform, which can be implemented
efficiently using the fast Fourier transform. This
means that the DCT can be computationally more
efficient than the KLT, depending on the size of the
KLT basis set.
It is thus concluded that the discrete cosine
transform is very well suited to application in face
recognition. Because of the similarity of its basis
functions to those of the KLT, the DCT exhibits
striking feature extraction and data compression
capabilities. In fact, coupled with these, the ease and
speed of computing the DCT may even
favor it over the KLT in face recognition [15-21].
III. SYSTEM STRUCTURE
In the local appearance-based
face representation approach, a detected and
normalized face image is divided into blocks of 8x8
pixels. The DCT is performed on each 8x8 block,
and the obtained DCT coefficients are
ordered by zigzag scanning. From the ordered
coefficients, M of them are selected according to the
feature selection strategy, resulting in an M-
dimensional local feature vector [15-21]. Finally, the
DCT coefficients extracted from each block are
concatenated to construct the overall feature vector.
This is shown in Figs. 1 and 2.
Fig 1 Embedding Local Appearance-Based Face Representation
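The block-based extraction just described can be sketched in pure Python. The orthonormal DCT-II and the JPEG-style zigzag order are standard; the block contents and the choice of M in the usage example are illustrative assumptions, not values from the paper.

```python
import math

def dct_1d(v):
    """Orthonormal 1-D DCT-II of a sequence v."""
    N = len(v)
    eps = lambda k: math.sqrt(0.5) if k == 0 else 1.0
    return [math.sqrt(2.0 / N) * eps(k)
            * sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
            for k in range(N)]

def dct_2d(block):
    """2-D DCT by row-column decomposition: columns first, then rows."""
    cols = [dct_1d(list(c)) for c in zip(*block)]   # 1-D DCT down each column
    return [dct_1d(list(r)) for r in zip(*cols)]    # then along each row

def zigzag_features(block, M):
    """Zigzag-scan the DCT coefficients and keep the first M as a local feature vector."""
    N = len(block)
    coeffs = dct_2d(block)
    # JPEG zigzag: walk anti-diagonals, alternating direction each diagonal
    order = sorted(((i, j) for i in range(N) for j in range(N)),
                   key=lambda ij: (ij[0] + ij[1],
                                   ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))
    return [coeffs[i][j] for i, j in order[:M]]

# Illustrative use: a flat 8x8 block puts all its energy in the DC coefficient.
feat = zigzag_features([[1.0] * 8 for _ in range(8)], M=10)
```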
A. Algorithm Explanation
The Discrete Cosine Transform (DCT): A DCT
expresses a sequence of finitely many data points in
terms of a sum of cosine functions. DCTs are
important to numerous applications in science and
engineering. The use of cosine rather than sine
functions is critical in these applications: for
compression, cosine functions turn out to be
much more efficient. In particular, a DCT is a Fourier-
related transform similar to the discrete Fourier
transform (DFT), but using only real numbers. This
type of frequency transform is orthogonal, real, and
separable, and algorithms for its computation have
proved to be computationally efficient. The DCT is a
widely used frequency transform because it closely
approximates the optimal KLT, while not
suffering from the drawbacks of applying the KLT.
Because the KLT is constructed from the eigenvalues
and the corresponding eigenvectors of the data to
be transformed, it is signal-dependent and there is no
general algorithm for its fast computation. The DCT
does not suffer from these drawbacks, thanks to data-
independent basis functions and several algorithms
for fast implementation. A discrete cosine transform
(DCT) is a sinusoidal unitary transform. The DCT
has been used in digital signal and image processing,
and particularly in transform coding systems for data
compression/decompression [15-21].
Fig.2 System Architecture
This type of frequency transform is real,
orthogonal, and separable, and algorithms for its
computation have proved to be computationally
efficient. In fact, the DCT has been employed as the
main processing tool for data
compression/decompression in international image
and video coding standards. All present digital signal
and image processing applications involve only even
types of the DCT; as this is the case, the discussion
here is restricted to the four even types of
DCT. In subsequent sections, N is assumed to be an
integer power of 2, i.e., N = 2^m. A subscript of a
matrix denotes its order, while a superscript denotes
the version number. The four even types of DCT in
matrix form are defined as:
DCT-I:   $[C^{I}_{N+1}]_{nk} = \sqrt{2/N}\,\varepsilon_n\,\varepsilon_k \cos\left(\frac{\pi n k}{N}\right)$, where n, k = 0, 1, ..., N

DCT-II:  $[C^{II}_{N}]_{nk} = \sqrt{2/N}\,\varepsilon_k \cos\left(\frac{\pi (2n+1) k}{2N}\right)$, where n, k = 0, 1, ..., N-1

DCT-III: $[C^{III}_{N}]_{nk} = \sqrt{2/N}\,\varepsilon_n \cos\left(\frac{\pi (2k+1) n}{2N}\right)$, where n, k = 0, 1, ..., N-1

DCT-IV:  $[C^{IV}_{N}]_{nk} = \sqrt{2/N} \cos\left(\frac{\pi (2n+1)(2k+1)}{4N}\right)$, where n, k = 0, 1, ..., N-1

where $\varepsilon_p = 1/\sqrt{2}$ for p = 0 or p = N, and $\varepsilon_p = 1$ otherwise.
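The realness and orthogonality of these matrices can be checked numerically. A small sketch for the DCT-II matrix, built directly from the definition above (the choice N = 8 in the usage line is just the block size used elsewhere in this paper):

```python
import math

def dct_ii_matrix(N):
    """[C_N^II]_nk = sqrt(2/N) * eps_k * cos(pi*(2n+1)*k / 2N), per the definition above."""
    eps = lambda k: math.sqrt(0.5) if k == 0 else 1.0
    return [[math.sqrt(2.0 / N) * eps(k) * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
             for n in range(N)]
            for k in range(N)]

def gram(C):
    """C * C^T, which should be the identity matrix when C is orthogonal."""
    return [[sum(a * b for a, b in zip(r1, r2)) for r2 in C] for r1 in C]

# For the 8x8 case, gram(dct_ii_matrix(8)) is the 8x8 identity (up to rounding),
# confirming that the inverse of the DCT-II is simply its transpose (the DCT-III).
C8 = dct_ii_matrix(8)
```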
[Fig. 2 pipeline: both the database images and the probe pass through geometry normalization, illumination normalization, and DCT-based feature extraction; at runtime, the probe features are matched against the stored database features by a quick search.]
DCT matrices are real and orthogonal. All
DCTs are separable transforms; the multi-dimensional
transform can be decomposed into a
successive application of one-dimensional transforms
(1-D) in the appropriate directions. The DCT is a
widely used frequency transform because it closely
approximates the optimal KLT transform, while not
suffering from the drawbacks of applying the KLT.
This close approximation is based upon the
asymptotic equivalence of the family of DCTs with
respect to KLT for a first-order stationary Markov
process, in terms of the transform size and the
interelement correlation coefficient. Recall that the
KLT is an optimal transform for data compression in
a statistical sense because it decorrelates a signal in
the transform domain, packs the most information in
a few coefficients, and minimizes mean-square error
between the reconstructed and original signal
compared to any other transform. However, KLT is
constructed from the eigen values and the
corresponding eigenvectors of a covariance matrix of
the data to be transformed it is signal-dependent, and
there is no general algorithm for its fast
computation[15-21]. The DCT does not suffer from
these drawbacks due to data-independent basis
functions and several algorithms for fast
implementation.
The DCT provides a good trade-off between
energy packing ability and computational complexity.
The energy packing property of DCT is superior to
that of any other unitary transform. This is important
because these image transforms pack the most
information into the fewest coefficients and yield the
smallest reconstruction errors. DCT basis images are
image independent as opposed to the optimal KLT,
which is data dependent. Another benefit of the DCT,
when compared to the other image independent
transforms, is that it has been implemented in a single
integrated circuit.
The performance of the DCT-II is closest to
that of the statistically optimal KLT based on a
number of performance criteria. Such criteria include
energy packing efficiency, variance distribution, rate
distortion, residual correlation, and possessing
maximum reducible bits. Furthermore, the DCT-II is
characterized by superior bandwidth compression
(redundancy reduction) for a wide range of signals
and by the existence of fast algorithms for its
implementation. As this is the case, the DCT-II and
its inverse, the DCT-III, have been employed in the
international image/video coding standards: JPEG for
compression of still images, MPEG for compression
of video including HDTV (High Definition
Television), H.261 for compression in telephony and
teleconferencing, and H.263 for visual
communication over telephone lines.
One-dimensional DCT: The one-dimensional DCT-
II (1-D DCT) is a technique that converts a spatial-
domain waveform into its constituent frequency
components, as represented by a set of coefficients.
The one-dimensional DCT-III, the process of
reconstructing the set of spatial-domain samples, is
called the inverse discrete cosine transform (1-D
IDCT). The 1-D DCT has most often been used in
applying the two-dimensional DCT (2-D DCT) by
employing the row-column decomposition, which is
also more suitable for hardware implementation.
Two-dimensional DCT: The Discrete Cosine
Transform is one of many transforms that takes its
input and transforms it into a linear combination of
weighted basis functions. These basis functions are
commonly in the form of frequency components. The
2-D DCT is computed as a 1-D DCT applied twice,
once in the x direction, and again in the y direction.
The discrete cosine transform of an N x M image
f(x, y) is defined by

$C(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} f(x,y)\, \cos\frac{\pi(2x+1)u}{2N} \cos\frac{\pi(2y+1)v}{2M}$

where $\alpha(\cdot)$ is the usual DCT normalization factor
($\alpha(0) = \sqrt{1/N}$ and $\alpha(u) = \sqrt{2/N}$ for u > 0, and
analogously for v). In computing the 2-D DCT,
factoring reduces the problem to applying a series of
1-D DCT computations. The two interchangeable
steps in calculating the 2-D DCT are:
Apply 1-D DCT (vertically) to the
columns.
Apply 1-D DCT (horizontally) to result of
Step 1.
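The interchangeability of the two steps can be verified directly: because the vertical and horizontal passes act on different indices, they commute. A small sketch using the orthonormal 1-D DCT-II (the test block contents are illustrative):

```python
import math

def dct_1d(v):
    """Orthonormal 1-D DCT-II of a sequence v."""
    N = len(v)
    eps = lambda k: math.sqrt(0.5) if k == 0 else 1.0
    return [math.sqrt(2.0 / N) * eps(k)
            * sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
            for k in range(N)]

def dct_rows(b):
    """Apply the 1-D DCT horizontally, to each row."""
    return [dct_1d(list(r)) for r in b]

def dct_cols(b):
    """Apply the 1-D DCT vertically, to each column."""
    return [list(r) for r in zip(*[dct_1d(list(c)) for c in zip(*b)])]

# Either order yields the same 2-D DCT coefficients:
#   dct_rows(dct_cols(block)) == dct_cols(dct_rows(block))  (up to rounding)
```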
In most compression schemes such as JPEG
and MPEG, typically an 8x8 or 16x16 sub image, or
block, of pixels (8x8 is optimum for trade-off
between compression efficiency and computational
complexity) is used in applying the 2-D DCT. The
DCT helps separate the image into parts (or spectral
sub-bands) of differing importance (with respect to
the image's visual quality). DCT transforms the input
into a linear combination of weighted basis functions.
These basis functions are the frequency components
of the input data. For most images, much of the signal
energy lies at low frequencies; these are relocated to
the upper-left corner of the DCT array. Conversely, the
lower-right values of the DCT array represent higher
frequencies, and turn out to be smaller in magnitude,
especially as u and v approach the sub-image width
and height, respectively [15-21]. The basic algorithm
is as follows:
1. Maintain a database with images in JPEG
format.
2. Select a Probe for testing.
3. Convert the probe image and database
image by applying normalization
techniques.
4. Process of Normalization
i. Probe and database images are converted
into grayscale images.
ii. Converted images are resized into 8 X 8 or
16 X 16.
iii. Feature vector values are derived for the
resized images.
iv. DCT technique is applied to the Feature
vector values.
v. Finally covariance is displayed for both
database and probe.
5. Subtract the covariance and feature vector
values of probe and database image.
6. If the output value is zero or negative, then
the image is recognized; otherwise the
image is not recognized.
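Steps 4-6 hinge on comparing DCT features of the probe and database images. The sign test on the subtracted values is ambiguous as written above, so the sketch below swaps in a conventional Euclidean-distance threshold on the DCT feature vectors instead; the 8x8 block inputs and the threshold value are illustrative assumptions, not the paper's exact procedure.

```python
import math

def dct_1d(v):
    """Orthonormal 1-D DCT-II of a sequence v."""
    N = len(v)
    eps = lambda k: math.sqrt(0.5) if k == 0 else 1.0
    return [math.sqrt(2.0 / N) * eps(k)
            * sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for n in range(N))
            for k in range(N)]

def features(block):
    """Step 4: flattened 2-D DCT feature vector of a resized grayscale block."""
    cols = [dct_1d(list(c)) for c in zip(*block)]   # DCT down the columns
    rows = [dct_1d(list(r)) for r in zip(*cols)]    # then along the rows
    return [x for row in rows for x in row]

def is_match(probe, gallery, threshold=1.0):
    """Steps 5-6, with a Euclidean-distance threshold standing in for the sign test."""
    d = math.sqrt(sum((a - b) ** 2
                      for a, b in zip(features(probe), features(gallery))))
    return d <= threshold
```

Because the orthonormal DCT preserves Euclidean distances, this comparison in the feature domain equals the pixel-domain distance of the two blocks; identical images always match, and the threshold controls tolerance to small differences.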
IV. CASE STUDY
Here we study some of the results with different
images.
Result Objectives: Both images are the same, so the
image is recognized. Here we enter the angle value 0,
giving S = -1.8084e+003.
The output is zero or negative, so the object is
recognized.
Conclusion: Both images are the same size and at the
same angle; when the angles are equal the result is a
pass and the image is recognized. Shown in Fig 3.
Fig 3 Analysis 1
Result Objectives: Here we take the same image
with equal sizes but at different angles. We enter the
angle value 8, giving S = 6.8758e+003. The output is
positive, so the object is not recognized.
Conclusion: Both images are the same size but at
different angles; the result is a pass only when the
angles are equal, so the image is not recognized
because of the positive value. Shown in Fig 4.
Fig 4 Analysis 2
Result Objectives: Different images are not
matched. We enter the input image 'pic5.jpg' and the
database image 'pic1.jpg', giving
S = 4.8084e+003. The two images are not the same.
Conclusion: The input and database images are not
the same, so they are not matched. Shown in Fig 5.
Fig 5 Analysis 3
Result Objectives: Both images are matched. We enter the input image 'pic5.jpg' and the database
image 'pic5.jpg', giving S = 0.
The output is zero or negative, so the object is recognized; both images are the same.
Conclusion: The input and database images are the same. Shown in Fig 6.
Fig 6 Analysis 4
V. CONCLUSION AND FUTURE WORK
This paper examined the effects of feature
selection and feature normalization on the
performance of a local appearance-based face
recognition scheme. Three different feature sets and
two normalization techniques were considered, and
the effects of distance metrics on performance were
analyzed. The approach was based on the discrete
cosine transform, and experimental evidence
confirming its usefulness and robustness was
presented. The mathematical relationship between
the discrete cosine transform (DCT) and the
Karhunen-Loeve transform (KLT) explains the
near-optimal performance of the former.
Experimentally, the DCT was shown to perform very
well in face recognition, just like the KLT. Face
normalization techniques were also incorporated into
the face recognition system discussed here, which
requires a perfect alignment between a gallery and a
probe image. The method can also be extended to a
pose-invariant face recognition method, centered on
modeling the joint appearance of gallery and probe
images across pose, that does not require facial
landmarks to be detected.
REFERENCES
[1]. R. Gottumukkal, V.K. Asari, "An Improved Face Recognition Technique Based on Modular PCA Approach", Pattern Recognition Letters, 25(4), 2004.
[2]. B. Heisele et al., "Face Recognition: Component-Based Versus Global Approaches", Computer Vision and Image Understanding, 91:6-21, 2003.
[3]. T. Kim et al., "Component-Based LDA Face Description for Image Retrieval and MPEG-7 Standardisation", Image and Vision Computing, 23(7):631-642, 2005.
[4]. A. Pentland, B. Moghaddam, T. Starner and M. Turk, "View-Based and Modular Eigenspaces for Face Recognition", Proceedings of IEEE CVPR, pp. 84-91, 1994.
[5]. Z. Pan and H. Bolouri, "High Speed Face Recognition Based on Discrete Cosine Transforms and Neural Networks", Technical Report, University of Hertfordshire, UK, 1999.
[6]. Z.M. Hafed and M.D. Levine, "Face Recognition Using the Discrete Cosine Transform", International Journal of Computer Vision, 43(3), 2001.
[7]. C. Sanderson and K.K. Paliwal, "Features for Robust Face-Based Identity Verification", Signal Processing, 83(5), 2003.
[8]. T. Sim, S. Baker, and M. Bsat, "The CMU Pose, Illumination, and Expression (PIE) Database", Proc. of Intl. Conf. on Automatic Face and Gesture Recognition, 2002.
[9]. W.L. Scott, "Block-Level Discrete Cosine Transform Coefficients for Autonomic Face Recognition", PhD Thesis, Louisiana State University, USA, May 2003.
[10]. A. Nefian, "A Hidden Markov Model-Based Approach for Face Detection and Recognition", PhD Thesis, Georgia Institute of Technology, 1999.
[11]. H.K. Ekenel, R. Stiefelhagen, "A Generic Face (FRGC)", Arlington, VA, USA, March 2006.
[12]. H.K. Ekenel, R. Stiefelhagen, "A Generic Face Representation Approach for Local Appearance Based Face Verification", CVPR IEEE Workshop on FRGC Experiments, 2005.
[13]. H.K. Ekenel, A. Pnevmatikakis, "Video-Based Face Recognition Evaluation in the CHIL Project – Run 1", 7th Intl. Conf. on Automatic Face and Gesture Recognition (FG 2006), Southampton, UK, April 2006.
[14]. A.M. Martinez, R. Benavente, "The AR Face Database", CVC Technical Report #24, 1998.
[15]. M. Bartlett et al., "Face Recognition by Independent Component Analysis", IEEE Trans. on Neural Networks, 13(6):1450-1454, 2002.
[16]. K.I. Kim, K. Jung, H.J. Kim, "Face Recognition Using Kernel Principal Component Analysis", IEEE Signal Processing Letters, 9(2):40-42, 2002.
[17]. C. Yeo, Y.H. Tan and Z. Li, "Low-Complexity Mode-Dependent KLT for Block-Based Intra Coding", IEEE ICIP 2011, 2011.
[18]. H. Seyedarabi, A. Aghagolzadeh and S. Khanmohammadi, "Recognition of Six Basic Facial Expressions by Feature-Points Tracking Using RBF Neural Network and Fuzzy Inference System", IEEE Int. Conf. on Multimedia and Expo (ICME), pp. 1219-1222, 2004.
[19]. H. Seyedarabi, A. Aghagolzadeh and S. Khanmohammadi, "Facial Expression Animation and Lip Tracking Using Facial Characteristic Points and Deformable Model", Transactions on Engineering, Computer and Technology, Vol. 1, pp. 416-419, Dec 2004.
[20]. D. Vukadinovic and M. Pantic, "Fully Automatic Facial Feature Point Detection Using Gabor Feature Based Boosted Classifiers", Proceedings of IEEE International Conference on Systems, Man and Cybernetics, pp. 1692-1698, 2005.
[21]. F. Mokhayeri, M.-R. Akbarzadeh-T, "A Novel Facial Feature Extraction Method Based on ICM Network for Affective Recognition", The 2011 International Joint Conference on Neural Networks (IJCNN), pp. 1988-199, 2011.
[22]. H.K. Ekenel, R. Stiefelhagen, "Local Appearance Based Face Recognition Using Discrete Cosine Transform", EUSIPCO 2005, Antalya, Turkey, 2005.