The face is an extraordinary communicator and plays an important role in interpersonal relations and human-machine interaction. Facial expressions, along with other gestures, convey non-verbal cues wherever humans interact with computers or with each other to communicate emotions and intentions. In this paper we develop an algorithm capable of identifying a person's facial expression and categorizing it as happiness, sadness, surprise or neutral. Our approach represents face images with local binary patterns. We use training sets of face and non-face images to train the machine to identify face images accurately, and classify expressions with Principal Component Analysis (PCA). We have developed methods for face tracking and expression identification from an input face image: the software processes a face, analyses it, and determines the expression by comparing the image with the training sets in the database. PCA and neural networks are used for analysing and identifying the facial expressions.
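The local binary pattern representation mentioned above can be sketched in a few lines. This is a minimal illustrative version (basic 3x3 LBP with a global histogram; production systems typically add uniform-pattern mapping and block-wise histograms):

```python
import numpy as np

def lbp_code(patch):
    """Local binary pattern code for the centre pixel of a 3x3 patch.

    The 8 neighbours are thresholded against the centre; the resulting
    bits (clockwise from the top-left) form an 8-bit code in [0, 255].
    """
    centre = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= centre else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

def lbp_histogram(image):
    """Normalised 256-bin LBP histogram over all interior pixels."""
    h, w = image.shape
    hist = np.zeros(256, dtype=float)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            hist[lbp_code(image[r - 1:r + 2, c - 1:c + 2])] += 1
    return hist / hist.sum()   # normalise so histograms are comparable
```

The resulting histogram serves as the face descriptor that downstream PCA or a classifier would operate on.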
We seek to classify images into different emotions, starting with a first 'intuitive' machine learning approach, then training convolutional neural networks, and finally using a pretrained model for better accuracy.
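As a sketch of the 'intuitive' first approach in the three-stage progression above, a linear softmax classifier can be trained with plain gradient descent. The data here is a hypothetical stand-in (random vectors with a class-dependent shift), not real face images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for flattened face features: 4 emotion classes, 16-dim vectors.
X = rng.normal(size=(200, 16))
y = rng.integers(0, 4, size=200)
X[np.arange(200), y] += 3.0           # make the classes roughly separable

W = np.zeros((16, 4))
for _ in range(300):                  # plain batch gradient descent
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True) # softmax probabilities
    onehot = np.eye(4)[y]
    W -= 0.1 * X.T @ (p - onehot) / len(X)  # cross-entropy gradient step

acc = (np.argmax(X @ W, axis=1) == y).mean()
```

CNNs and pretrained backbones replace the hand-rolled linear model but keep the same softmax output and cross-entropy training loop.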
Facial emotion detection on babies' emotional faces using Deep Learning - Takrim Ul Islam Laskar
Phase 1:
Face detection.
Facial landmark detection.
Phase 2:
Neural network training and testing.
Validation and implementation.
Phase 1 has been completed successfully.
Deep Neural Networks (DNNs) have been shown to outperform traditional methods in various visual recognition tasks, including Facial Expression Recognition (FER). In spite of efforts made to improve the accuracy of FER systems using DNNs, existing methods are still not generalizable enough in practical applications. This paper proposes a 3D Convolutional Neural Network method for FER in videos. The network architecture consists of 3D Inception-ResNet layers followed by an LSTM unit, which together extract the spatial relations within facial images as well as the temporal relations between different frames in the video. Facial landmark points are also used as inputs to the network, emphasizing facial components over facial regions that may not contribute significantly to generating expressions. The proposed method is evaluated on four publicly available databases in subject-independent and cross-database tasks and outperforms state-of-the-art methods.
Facial Emotion Recognition: A Deep Learning approach - Ashwin Rachha
Neural networks lie at the apogee of machine learning algorithms. Given a large dataset and an automatic feature selection and extraction process, convolutional neural networks are second to none, and neural networks in general can be very effective in classification problems.
Facial emotion recognition is a technology that helps companies and individuals evaluate customers and optimize their products and services with the most relevant and pertinent feedback.
Facial Emotion Recognition using Convolution Neural Network - Yogesh, IJTSRD
Facial expression plays a major role in human communication, and research on facial emotion has given rise to real-life human-computer interaction systems. Humans socially interact with each other via emotions. In this research paper, we propose an approach to building a system that recognizes facial emotion using a Convolutional Neural Network (CNN), one of the most popular neural networks and a well-known pattern-recognition architecture. A CNN reduces the dimensionality of large-resolution images without losing quality, produces the expected prediction output, and captures facial expressions even at odd angles, which sets it apart from other models: it works well for non-frontal images. Unfortunately, a CNN-based detector is computationally heavy, which is a challenge when using a CNN on video input. We implement a facial emotion recognition system using a Convolutional Neural Network trained on a dataset; the system predicts the output for a given input. Such a system can be useful for sentiment analysis, clinical practice, gathering a person's review of a product, and many other applications. Raheena Bagwan | Sakshi Chintawar | Komal Dhapudkar | Alisha Balamwar | Prof. Sandeep Gore, "Facial Emotion Recognition using Convolution Neural Network", published in International Journal of Trend in Scientific Research and Development (IJTSRD), ISSN 2456-6470, Volume 5, Issue 3, April 2021. URL: https://www.ijtsrd.com/papers/ijtsrd39972.pdf Paper URL: https://www.ijtsrd.com/computer-science/artificial-intelligence/39972/facial-emotion-recognition-using-convolution-neural-network/raheena-bagwan
Happiness Expression Recognition at Different Age Conditions - Editor IJMTER
Recognition of the different internal emotions of a human face under various critical conditions is a difficult task. This study considers facial expression recognition under age variation, emphasizing recognition of the happiness expression of nine persons using subspace methods. The paper mainly focuses on a new robust subspace method, Proposed Euclidean Distance Score Level Fusion (PEDSLF), which builds on PCA, ICA and SVD. All of these methods, together with the new robust method, are tested on the FGNET database, and facial expressions are recognized automatically. Comparative analysis shows that the PEDSLF method is more accurate for recognizing the happiness expression.
Emotion Recognition from Facial Expression Based on Fiducial Points Detection... - IJECEIAES
The importance of emotion recognition lies in the role that emotions play in our everyday lives; emotions have a strong relationship with our behavior. Automatic emotion recognition therefore aims to equip the machine with this human ability to analyze and understand the human emotional state, in order to anticipate a person's intentions from facial expression. In this paper, a new approach is proposed to enhance the accuracy of emotion recognition from facial expression, based on input features derived only from fiducial points. The approach first extracts 1176 dynamic features from image sequences, representing the ratios of Euclidean distances between facial fiducial points in the first frame and in the last frame. Second, a feature selection method retains only the most relevant of these features. Finally, the selected features are presented to a Neural Network (NN) classifier that maps the facial expression input to an emotion. The approach achieves an emotion recognition accuracy of 99% on the CK+ database, 84.7% on the Oulu-CASIA VIS database, and 93.8% on the JAFFE database.
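The dynamic features described above can be sketched directly: ratios of pairwise landmark distances between the first and last frame. With 49 landmarks there are C(49, 2) = 1176 pairs, matching the feature count in the abstract (the choice of a 49-point landmark set is an assumption here):

```python
import numpy as np
from itertools import combinations

def distance_ratio_features(first_frame, last_frame):
    """Ratios of pairwise Euclidean distances between fiducial points in
    the first and last frame of a sequence.

    first_frame, last_frame: (n, 2) arrays of landmark coordinates.
    Returns one ratio per landmark pair, i.e. n*(n-1)/2 features.
    """
    feats = []
    for i, j in combinations(range(len(first_frame)), 2):
        d0 = np.linalg.norm(first_frame[i] - first_frame[j])
        d1 = np.linalg.norm(last_frame[i] - last_frame[j])
        feats.append(d1 / d0 if d0 > 0 else 0.0)
    return np.array(feats)
```

Using ratios rather than raw distances makes the features insensitive to overall face scale between recordings.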
Face expression recognition using Scaled-conjugate gradient Back-Propagation ... - IJMER
Three-dimensional multimodal models of object classes are a useful tool in modeling and recognition. This work presents multimodal involuntary emotion recognition during communication with mentally challenged people, with the aim of identifying mental disorders without a doctor. Features are built from emotion, motion and frequency cues to estimate the proportion of people with a mental disorder, and different categories of input (image, video, audio and emotion) can be discriminated. For images, Three-Dimensional Morphable Models (3DMM) are used to fit a model to the image within a framework for face emotion recognition. With Guided Particle Swarm Optimization (GPSO), emotion finding is treated as an exploration problem in which, at every point, we try to recognize which of the possible emotions the current facial expression denotes. A Genetic Algorithm (GA), which can flexibly encode and decode complex information, is used to calculate the percentage of mental disorder. We propose using these different algorithms to identify mentally challenged persons.
An Approach to Face Recognition Using Feed Forward Neural Network - Editor IJCATR
Abstract: Many approaches have been proposed for face recognition, but major constraints such as illumination, lighting and pose, when taken into consideration, result in a poor recognition rate. We propose a method to improve the recognition rate of a face recognition system that uses measures such as homogeneity, energy, covariance, contrast, asymmetry, correlation, mean, standard deviation, entropy and kurtosis to extract facial features. The extracted features are then used to train a feed-forward back-propagation neural network for classification, rendering better results.
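A few of the statistical features named in the abstract (mean, standard deviation, entropy, kurtosis) can be computed directly on a grayscale patch. The GLCM-based features (contrast, energy, homogeneity, correlation) would require a co-occurrence matrix and are omitted from this sketch:

```python
import numpy as np

def texture_features(patch):
    """Mean, standard deviation, histogram entropy and excess kurtosis
    of a grayscale patch with values in [0, 255]."""
    x = patch.astype(float).ravel()
    mean, std = x.mean(), x.std()
    # entropy of the normalised 256-bin intensity histogram
    hist, _ = np.histogram(x, bins=256, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    # excess kurtosis; 0 for a Gaussian, negative for flat distributions
    kurtosis = ((x - mean) ** 4).mean() / (std ** 4) - 3 if std > 0 else 0.0
    return np.array([mean, std, entropy, kurtosis])
```

The resulting feature vector (extended with the GLCM measures) is what the feed-forward network would be trained on.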
Face Recognition System Using Local Ternary Pattern and Signed Number Multipl... - inventionjournals
This paper presents a novel approach to face recognition. The task of face recognition is to verify a claimed identity by comparing a claimed image of an individual with other images, belonging to the same or other individuals, in a database. The proposed method utilizes Local Ternary Patterns and signed-bit multiplication to extract local features of a face. The image is divided into small non-overlapping windows, which are processed to extract features. The test image's features are compared with those of all training images using Euclidean distance, and the image with the lowest distance is recognized as the true face image. If the distance between the test image and all training images exceeds a threshold, the test image is considered unrecognised (no match found). The face recognition rate of the proposed system is calculated by varying the number of images per person in the training database.
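The two building blocks of this pipeline, the local ternary pattern and thresholded nearest-neighbour matching, can be sketched as follows (a basic Tan-and-Triggs-style LTP on a single 3x3 patch; the signed-bit multiplication step of the paper is not reproduced here):

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Local ternary pattern of a 3x3 patch, split into the usual
    upper/lower binary codes: a neighbour counts as +1 if it exceeds
    the centre by at least t, -1 if it is below by at least t."""
    c = patch[1, 1]
    nbrs = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
            patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    upper = sum((1 << i) for i, n in enumerate(nbrs) if n >= c + t)
    lower = sum((1 << i) for i, n in enumerate(nbrs) if n <= c - t)
    return upper, lower

def match(test_feat, train_feats, threshold):
    """Nearest-neighbour matching with a rejection threshold, as in the
    abstract: a below-threshold minimum distance yields the recognised
    identity, otherwise 'match not found' (returned as -1)."""
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else -1
```

The tolerance t makes LTP less sensitive to sensor noise than plain LBP, at the cost of two codes per pixel instead of one.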
Efficient Facial Expression and Face Recognition using Ranking Method - IJERA Editor
Expression detection is useful as a non-invasive method of lie detection and behaviour prediction; however, facial expressions may be difficult to detect for the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity, and with the face used as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database covering expressions such as surprise, sadness and happiness. The three universally accepted principal emotions to be recognized are surprise, sadness and happiness, along with neutral.
Implementation of Face Recognition in Cloud Vision Using Eigen Faces - IJERA Editor
Cloud computing comes in several different forms, and this article documents how such services can host face recognition. The face is a complex multidimensional visual model, and developing a computational model for face recognition is difficult. The paper discusses a methodology for face recognition based on an information-theoretic approach of coding and decoding the face image. The proposed system connects two stages: feature extraction using principal component analysis and recognition using a back-propagation network. The paper also discusses the design and implementation of face recognition applications using our mobile-cloudlet-cloud architecture named MOCHA, together with initial performance results. The challenge lies in how to partition tasks from mobile devices to the cloud and distribute the compute load among cloud servers to minimize response time, given diverse communication latencies and server compute powers.
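The eigenfaces feature-extraction stage mentioned above is classical PCA over flattened face images, and can be sketched with a thin SVD (the back-propagation recognition stage and the MOCHA partitioning are beyond this sketch):

```python
import numpy as np

def eigenfaces(train, k):
    """Eigenface basis from flattened training images (one image per row),
    computed with an SVD of the mean-centred data matrix."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]            # top-k principal directions

def project(image, mean, basis):
    """Coordinates of a flattened image in eigenface space."""
    return basis @ (image - mean)
```

Recognition then reduces to nearest-neighbour search among the projected training images, which is far cheaper than comparing raw pixels.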
PRE-PROCESSING TECHNIQUES FOR FACIAL EMOTION RECOGNITION SYSTEM - IAEME Publication
Humans share a universal and fundamental set of emotions which are exhibited through consistent facial expressions. Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent-systems applications. However, automatic recognition of facial expressions using image-template matching suffers from the natural variability of facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature extraction and classification technique for emotion recognition remains an open problem. Image pre-processing and normalization are a significant part of face recognition systems, as changes in lighting conditions produce a dramatic decrease in recognition performance. In this paper, pre-processing techniques based on K-Nearest Neighbor, a Cultural Algorithm and a Genetic Algorithm are used to remove noise from the facial image and enhance emotion recognition. The performance of the pre-processing techniques is evaluated with various performance metrics.
A Study on Sparse Representation and Optimal Algorithms in Intelligent Comput... - MangaiK4
Abstract - Computer vision is a dynamic research field involving the analysis, modification and high-level understanding of images. Its goal is to determine what is happening in front of a camera and to use that understanding to control a computer or robot, or to provide users with new images that are more informative or aesthetically pleasing than the original camera images. It uses many advanced image representation techniques to obtain computational efficiency. Sparse signal representation techniques have a significant impact in computer vision, where the goal is to obtain a compact, high-fidelity representation of the input signal and to extract meaningful information. Segmentation and optimal parallel processing algorithms are expected to further improve efficiency and speed up processing.
Facial Expression Recognition Based on Facial Motion Patterns - ijeei-iaes
Facial expression is one of the most powerful and direct mediums through which human beings communicate their feelings to other individuals. In recent years, many surveys have been carried out on facial expression analysis. With developments in machine vision and artificial intelligence, facial expression recognition is considered a key technique in human-computer interaction and is applied in natural interaction between human and computer, machine vision, and psycho-medical therapy. In this paper, we develop a new method to recognize facial expressions based on discovering the differences between expressions and consequently assigning a unique pattern to each expression. By analyzing the image by means of a neighboring window, the recognition system estimates features locally. The features are extracted as binary local features, and according to changes in the window points, the facial points acquire a directional motion for each expression. Using the point motion of all facial expressions and establishing a ranking system, we delete additional motion points that respectively decrease and increase the ranking size and strength. Classification uses the nearest neighbor. Results of experiments on the full Cohn-Kanade dataset demonstrate that, compared to previous methods (hierarchical algorithms combined with several features, morphological methods, and geometrical algorithms), the proposed algorithm has better performance and higher reliability.
Recognition of Facial Emotions Based on Sparse Coding - IJERA Editor
This paper deals with the recognition of natural emotions from human faces, an interesting topic with a wide range of potential applications such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on lab-controlled data, which is not representative of the environments faced in real applications. To robustly recognize facial emotions in natural, real-world situations, this paper proposes an approach called Extreme Sparse Learning (ESL), which can jointly learn a dictionary (set of basis vectors) and a non-linear classification model. The approach combines the discriminative power of the Extreme Learning Machine (ELM) with the reconstruction property of sparse representation to enable accurate classification given noisy signals and imperfect data recorded in natural settings. Moreover, this work presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework achieves state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
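The sparse-representation component underlying such methods can be sketched with a generic sparse-coding routine; iterative shrinkage-thresholding (ISTA) is used here as a simple stand-in, since the jointly learned dictionary and ELM classifier of ESL are beyond a short example:

```python
import numpy as np

def ista(D, x, lam=0.1, steps=200):
    """Sparse code for signal x over dictionary D (columns = atoms),
    minimising 0.5 * ||D a - x||^2 + lam * ||a||_1 by iterative
    shrinkage-thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        g = D.T @ (D @ a - x)              # gradient of the quadratic term
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a
```

The L1 penalty drives most coefficients to exactly zero, giving the compact codes on which the classifier then operates.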
Review of Face Detection Systems Based on Artificial Neural Network Algorithms (ijma)
Face detection is one of the most relevant applications of image processing and biometric systems. Artificial neural networks (ANN) have been used in the fields of image processing and pattern recognition. There is a lack of literature surveys giving an overview of the studies and research on the use of ANNs in face detection. Therefore, this review covers face detection studies and systems based on different ANN approaches and algorithms, and also discusses their strengths and limitations.
Two Level Decision for Recognition of Human Facial Expressions using Neural Networks (IIRindia)
The facial expressions of a human being are the outcome of the inner feelings of the mind; they reflect a person's internal emotional states and intentions. A person's face provides a lot of information, such as age, gender, identity, mood, and expression, so faces play an important role in recognizing the expressions of persons. In this research, an attempt is made to design a model that classifies human facial expressions according to features extracted from facial images, by applying 3-sigma limits in a second-level decision using a Neural Network (NN). Nowadays, Artificial Neural Networks (ANN) are widely used as a tool for solving many decision-modeling problems. In this paper, feed-forward neural networks are constructed as an expression classification system for gray-scale facial images. Three groups of expressions, Happy, Sad, and Anger, are used in the classification system. A second-level decision is proposed in which the output obtained from the neural network (primary level) is refined at the second level to improve the accuracy of the recognition rate. The accuracy of the system is analyzed through variation in the range of the expression groups, and its efficiency is demonstrated through experimental results.
Face Emotion Analysis Using Gabor Features In Image Database for Crime Investigation
Dr. S. Santhosh Baboo & V.S. Manjula
International Journal of Data Engineering (IJDE), Volume (2) : Issue (2) : 2011
Lt. Dr. S. Santhosh Baboo, Reader santhos2001@sify.com
P.G & Research, Dept. of Computer Application,
D.G. Vaishnav College,
Chennai-106. INDIA
V.S. Manjula manjusunil.vs@gmail.com
Research Scholar, Dept. of Computer Science & Engineering,
Bharathiar University,
Coimbatore-641 046. INDIA
Abstract
The face is an extraordinary communicator that plays an important role in interpersonal relations and human-machine interaction. Facial expressions play an important role wherever humans interact with computers or other human beings, communicating their emotions and intentions. Facial expressions, and other gestures, convey non-verbal communication cues in face-to-face interactions. In this paper we develop an algorithm capable of identifying a person's facial expression and categorizing it as happiness, sadness, surprise, or neutral. Our approach is based on local binary patterns for representing face images. We use training sets of faces and non-faces to train the machine to identify face images accurately. Facial expression classification is based on Principal Component Analysis. We have developed methods for face tracking and expression identification from the face image input. Applying the facial expression recognition algorithm, the developed software is capable of processing faces and recognizing the person's facial expression: the system analyses the face and determines the expression by comparing the image with the training sets in the database. We use PCA and neural networks in analyzing and identifying the facial expressions.
Keywords: Facial Expressions, Human Machine Interaction, Training Sets, Faces and Non-Faces, Principal Component Analysis, Expression Recognition, Neural Networks.
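The abstract mentions representing face images with local binary patterns before classification. As an illustrative sketch (not the authors' exact implementation), the basic 8-neighbour LBP operator compares each pixel with its neighbours, packs the comparison results into an 8-bit code, and summarizes the image as a normalized histogram of codes:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern code for every interior pixel."""
    g = np.asarray(gray, dtype=np.int32)
    centre = g[1:-1, 1:-1]
    # clockwise neighbour offsets, each contributing one bit of the code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbour >= centre).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Normalised histogram of LBP codes: the texture feature vector."""
    hist, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return hist / hist.sum()
```

The resulting 256-bin histogram is what a downstream classifier would consume; production systems usually restrict the codes to the 59 "uniform" patterns, which is omitted here for brevity.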
1. INTRODUCTION
Facial expressions play a communicative role in interpersonal relationships. Computer
recognition of human face identity is one of the most fundamental problems in the field of pattern
analysis. Emotion analysis in a man-machine interaction system is designed to detect a human face
in an image and analyze the facial emotion or expression of that face. This helps improve the
interaction between the human and the machine: the machine can understand the
person's reaction and act accordingly, which reduces human work hours. For example, robots can
be used as class tutors, pet robots, CU animators, and so on. We use facial expressions not
only to express our emotions, but also to provide important communicative cues during social
interaction, such as our level of interest, our desire to take a speaking turn, and continuous
feedback signaling our understanding of the information conveyed. The Support Vector algorithm is
well suited to this task, as high dimensionality does not affect Gabor representations. The
main disadvantage of such a system is that it is very expensive to implement and maintain: any
upgrade requires a change in the algorithm, which is very sensitive and difficult. Our developed
system is designed to overcome these disadvantages.
2. Dr. S. Santhosh Baboo & V.S. Manjula
International Journal of Data Engineering (IJDE), Volume (2) : Issue (2) : 2011 43
In this paper, we propose a complete facial expression recognition system by combining a face tracking algorithm with a facial expression identification algorithm. The system automatically detects and extracts the human face from the background based on a retrainable neural network structure. The computer is trained with the various emotions of the face, and when given an input it detects the emotion by comparing the coordinates of the expression with those of the training examples and produces the output. The Principal Component Analysis algorithm is used in this system to detect the various emotions based on the coordinates of the training samples given to the system.
2. RELATED WORK
Pantic & Rothkrantz [4] identify three basic problems a facial expression analysis approach needs
to deal with: face detection in a facial image or image sequence, facial expression data extraction
and facial expression classification. Most previous systems assume presence of a full frontal face
view in the image or the image sequence being analyzed, yielding some knowledge of the global
face location. To give the exact location of the face, Viola & Jones [5] use the Adaboost algorithm
to exhaustively pass a search sub-window over the image at multiple scales for rapid face
detection.
Essa & Pentland [6] perform spatial and temporal filtering together with thresholding to extract
motion blobs from image sequences. To detect presence of a face, these blobs are then
evaluated using the eigenfaces method [7] via principal component analysis (PCA) to calculate
the distance of the observed region from a face space of 128 sample images. To perform data
extraction, Littlewort et al. [8] use a bank of 40 Gabor wavelet filters at different scales and
orientations to perform convolution. They thus extract a “jet” of magnitudes of complex valued
responses at different locations in a lattice imposed on an image, as proposed in [9]. Essa &
Pentland [6] extend their face detection approach to extract the positions of prominent facial
features using eigenfeatures and PCA by calculating the distance of an image from a feature
space given a set of sample images via FFT and a local energy computation.
Cohn et al. [10] first manually localize feature points in the first frame of an image sequence and
then use hierarchical optical flow to track the motion of small windows surrounding these points
across frames. The displacement vectors for each landmark between the initial and the peak
frame represent the extracted expression information. In the final step of expression analysis,
expressions are classified according to some scheme. The most prevalent approaches are
based on the existence of six basic emotions (anger, disgust, fear, joy, sorrow and surprise) as
argued by Ekman [11] and the Facial Action Coding System (FACS), developed by Ekman and
Friesen [12], which codes expressions as a combination of 44 facial movements called Action
Units. While much progress has been made in automatically classifying according to FACS [13], a
fully automated FACS based approach for video has yet to be developed.
Dailey et al. [14] use a six-unit, single-layer neural network to classify into the six basic emotion categories given Gabor jets extracted from static images. Essa & Pentland [6] calculate ideal motion energy templates for each expression category and take the Euclidean norm of the difference between the observed motion energy in a sequence of images and each motion energy template as a similarity metric. Littlewort et al. [8] preprocess image sequences image-by-image to train two stages of support vector machines from Gabor filter jets. Cohn et al. [10] apply separate discriminant functions and variance-covariance matrices to different facial regions and use feature displacements as predictors for classification.
2.1. Principal Component Analysis
Facial expression classification is based on Principal Component Analysis (PCA), one of the most successful techniques used in image recognition and compression. PCA is a statistical method under the broad title of factor analysis. The purpose of PCA is to reduce the large dimensionality of the data space (observed variables) to the smaller intrinsic dimensionality of the feature space (independent variables), which
are needed to describe the data economically. [3] The main trend in feature extraction has been
representing the data in a lower dimensional space computed through a linear or non-linear
transformation satisfying certain properties.
PCA can be used for prediction, redundancy removal, feature extraction, data compression, and so on. Because PCA is a classical technique for the linear domain, applications having linear models are suitable, such as signal processing, image processing, system and control theory, and communications. [3]
Given an s-dimensional vector representation of each face in a training set of images, Principal Component Analysis (PCA) finds a t-dimensional subspace whose basis vectors correspond to the maximum-variance directions in the original image space. This new subspace is normally of lower dimension. If the image elements are considered as random variables, the PCA basis vectors are defined as the eigenvectors of the scatter matrix. PCA selects the features important for class representation.
The main idea of using PCA for face recognition is to project the large 1-D vector of pixels constructed from a 2-D facial image onto the compact principal components of the feature space. This can be called eigenspace projection. The eigenspace is calculated by identifying the eigenvectors of the covariance matrix derived from a set of facial images (vectors) [3]. We implement a neural network to classify face images based on their computed PCA features.
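As a concrete illustration of this eigenspace projection, the following Python sketch (the paper's implementation is in C#/.NET) projects a toy training set into the PCA feature space and classifies a probe image by its nearest training sample. The random arrays, the labels, and the nearest-neighbour rule are illustrative stand-ins, not the paper's data or classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 8 "face images" of 16 pixels each,
# flattened to 1-D vectors, with an emotion label per image.
train = rng.normal(size=(8, 16))
labels = ["happy", "sad", "surprise", "neutral"] * 2

# Eigenspace: eigenvectors of the covariance matrix of the training set.
mean_face = train.mean(axis=0)
centered = train - mean_face
cov = np.cov(centered, rowvar=False)               # 16 x 16 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
basis = eigvecs[:, np.argsort(eigvals)[::-1][:4]]  # keep the top 4 components

train_feats = centered @ basis                     # PCA features per training image

def classify(image):
    """Project a probe image into the eigenspace and return the label
    of the nearest training sample (a simple stand-in classifier)."""
    feat = (image - mean_face) @ basis
    dists = np.linalg.norm(train_feats - feat, axis=1)
    return labels[int(np.argmin(dists))]
```

A probe identical to a training image projects to the same feature vector, so it is classified with that image's label.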
Methodology
Let {x1, x2, …, xN}, xi ∈ ℜ^n, be N samples from L classes {ω1, ω2, …, ωL}, and p(x) their mixture distribution. In the sequel, it is assumed that the a priori probabilities P(ωi), i = 1, 2, …, L, are known. Let m and Σ denote the mean vector and covariance matrix of the samples, respectively. The PCA algorithm finds a subspace whose basis vectors correspond to the maximum-variance directions in the original n-dimensional space; this subspace represents the data with minimum error in reconstruction of the original data. Let ΦPCA denote a linear n × p transformation matrix that maps the original n-dimensional space onto a p-dimensional feature subspace, where p < n. The new feature vectors are

Yi = (ΦPCA)^t xi,  i = 1, 2, …, N ------ (1)
It is easily proved that if the columns of ΦPCA are the eigenvectors of the covariance matrix corresponding to its p largest eigenvalues in decreasing order, the optimum feature space for the representation of the data is achieved. The covariance matrix can be estimated by:

Σ = ( ∑_{i=1}^{N} (xi − m)(xi − m)^t ) / (N − 1) ------ (2)
where m in (2) can be estimated incrementally by:

mk = mk−1 + η (xk − mk−1) ------ (3)
where mk is the estimate of the mean value at the k-th iteration and xk is the k-th input image. PCA is a technique to extract features that represent the data such that the average reconstruction error is minimized. In other words, the PCA algorithm finds a subspace whose basis vectors correspond to the maximum-variance directions in the original n-dimensional space. The PCA transfer function is composed of the significant eigenvectors of the covariance matrix. The following equation can be used for incremental estimation of the covariance matrix:

Σk = Σk−1 + ηk (xk xk^t − Σk−1) ------ (4)
where Σk is the estimate of the covariance matrix at the k-th iteration, xk is the incoming input vector and ηk is the learning rate.
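Equations (3) and (4) can be sketched directly in code. The following Python fragment is illustrative only: the synthetic zero-mean inputs and the decaying learning rate 1/k are assumptions, not the paper's settings.

```python
import numpy as np

def incremental_mean(m_prev, x_k, eta):
    """Eq. (3): m_k = m_{k-1} + eta * (x_k - m_{k-1})."""
    return m_prev + eta * (x_k - m_prev)

def incremental_cov(cov_prev, x_k, eta_k):
    """Eq. (4): Sigma_k = Sigma_{k-1} + eta_k * (x_k x_k^t - Sigma_{k-1})."""
    return cov_prev + eta_k * (np.outer(x_k, x_k) - cov_prev)

rng = np.random.default_rng(0)
m = np.zeros(3)
cov = np.zeros((3, 3))
for k in range(1, 2001):
    x = rng.normal(size=3)   # incoming input vector x_k
    eta = 1.0 / k            # decaying learning rate (assumed schedule)
    m = incremental_mean(m, x, eta)
    cov = incremental_cov(cov, x, eta)
# For zero-mean, unit-variance inputs the estimates approach
# m = 0 and Sigma = I (the identity matrix).
```

With η = 1/k the updates reduce to exact running averages, which is one common choice for this kind of incremental estimator.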
The emotion categories are: happiness, sadness, surprise, disgust, fear, anger and neutral. In this project the emotions happiness, sadness, surprise and neutral are taken into account for human emotion recognition.
2.2 Training Sets
The training set is the ensemble of pairs of inputs and desired responses used to train the system. The system is provided with various examples and is trained to give a response to each of them. An input given to the system is compared with the provided examples; if a similar example is found, the output is produced based on that example's response. This method is a kind of learning by examples.

Our system performs according to the training sets we provide: its accuracy depends on the number of training samples, and the more training samples there are, the more accurate the system is. Sample training sets are shown in Fig [1] below:
FIGURE 1: Samples of training set images
2.3. Neural Networks
Recently, there has been a high level of interest in applying artificial neural networks to many problems. Neural networks give easier solutions to complex problems such as determining the facial expression. Each emotion has its own range of optimized values for the lip and eye features. In some cases an emotion's range can overlap with another's; this happens because the optimized feature values are close together. For example, in the table, X1 of Sad and Dislike are close to each other. These values are the mean values computed from a range of values. It has been found that the ranges of feature values of X1 for Sad and Dislike overlap with each other; such overlap is also found in X for Angry and Happy. A level of intelligence has to be used to identify and classify emotions even when such overlaps occur.
FIGURE 2: Optimized Value of three features
A feed-forward neural network is proposed to classify the emotions based on the optimized ranges of 3-D data for the top lip, bottom lip and eye. The optimized values of the 3-D data are given as inputs to the network as shown in the figure. Two different network models are considered: the first has a structure of 3 input neurons, 1 hidden layer of 20 neurons and 3 output neurons (denoted (3x20x3)); the other has a structure of (3x20x7). The output of (3x20x3) is a 3-bit binary word indicating the seven emotional states. The output (Oi, i = 1, 2, …, 7) of (3x20x7) is a mutually exclusive binary-bit representation of an emotion. The networks with each of the above input sizes are trained using a back-propagation training algorithm. A set of suitably chosen learning parameters is indicated in the table. A typical "cumulative error versus epoch" characteristic of the training of the NN models, as in Figure 10, ensures the convergence of the network performance. The training is carried out for 10 trials in each case by reshuffling the input data within the same network model. The time and epoch details are given in Table 3, which also indicates the maximum and minimum epochs required for converging to the test tolerance.
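The back-propagation training described above can be sketched as follows. This Python toy (the paper's system is implemented in C#) trains a (3x20x3)-style network with sigmoid units on synthetic feature vectors and arbitrary target bit patterns, so the data, learning rate and epoch count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy stand-in for the optimized lip/eye features: 3 inputs and a
# 3-bit output word, as in the (3x20x3) model; targets are arbitrary.
X = rng.normal(size=(30, 3))
T = (X > 0).astype(float)

W1 = rng.normal(scale=0.5, size=(3, 20))   # input  -> 20 hidden neurons
W2 = rng.normal(scale=0.5, size=(20, 3))   # hidden -> 3 output neurons

def forward(X):
    H = sigmoid(X @ W1)
    return H, sigmoid(H @ W2)

_, Y = forward(X)
loss_before = np.mean((Y - T) ** 2)

for epoch in range(2000):                  # back-propagation training
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)             # output-layer delta (MSE x sigmoid')
    dH = (dY @ W2.T) * H * (1 - H)         # hidden-layer delta
    W2 -= 0.5 * H.T @ dY / len(X)
    W1 -= 0.5 * X.T @ dH / len(X)

_, Y = forward(X)
loss_after = np.mean((Y - T) ** 2)
# The cumulative error decreases with epochs, as in the convergence
# characteristic described above.
```

Biases are omitted to keep the sketch short; a practical implementation would include them.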
Neural Network Structures – (a & b)
FIGURE 3: Details of Classification using Neural Networks
2.4. Feature Extraction
A feature extraction method can now be applied to the edge-detected images. Three feature extraction methods are considered and their capabilities compared in order to adopt one suitable for the proposed face emotion recognition problem: projection profile, contour profile and moments (Nagarajan et al., 2006).
Implementation Overview
For PCA to work properly, we have to subtract the mean from each of the data dimensions. The mean subtracted is the average across each dimension: the mean of the x values is subtracted from all the x values, and the mean of the y values from all the y values. This produces a data set whose mean is zero, as shown in Fig [2].
FIGURE 2:
The next step is to calculate the covariance matrix. Since the data is 2-dimensional, the covariance matrix will be 2 x 2. We will just give you the result. Since the non-diagonal elements in this covariance matrix are positive, we should expect that the x and y variables increase together.
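A minimal sketch of this mean subtraction and covariance computation, in Python; the data values are illustrative (chosen so that x and y increase together), not the paper's.

```python
import numpy as np

# A small 2-D data set (illustrative values) in which x and y
# tend to increase together.
data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
                 [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1],
                 [1.5, 1.6], [1.1, 0.9]])

mean = data.mean(axis=0)              # average across each dimension
adjusted = data - mean                # mean-subtracted data
cov = np.cov(adjusted, rowvar=False)  # 2 x 2 covariance matrix
# adjusted now has zero mean, and the positive off-diagonal entries
# of cov confirm that x and y increase together.
```

Note that `np.cov` normalizes by N − 1, matching equation (2) above.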
2.5 Eigenvectors and Eigenfaces
Since the covariance matrix is square, we can calculate its eigenvectors and eigenvalues. These are rather important, as they tell us useful information about our data. Here are the eigenvectors and eigenvalues.
It is important to notice that these eigenvectors are both unit eigenvectors, that is, their lengths are both 1. This is very important for PCA; most maths packages, when asked for eigenvectors, will give you unit eigenvectors. It turns out that the eigenvector with the highest eigenvalue is the principal component of the data set. In our example, the eigenvector with the largest eigenvalue is the one that points down the middle of the data. It is the most significant relationship between the data dimensions.
In general, once the eigenvectors are found from the covariance matrix, the next step is to order them by eigenvalue, highest to lowest. This gives the components in order of significance. If we leave out some components, the final data set will have fewer dimensions than the original. To be precise, if we originally have n dimensions in our data, and so calculate n eigenvectors and eigenvalues, and then choose only the first p eigenvectors, the final data set has only p dimensions. The final step in PCA is choosing the components (eigenvectors) that we wish to keep in our data to form a feature vector; we then take the transpose of this vector and multiply it on the left of the original data set, transposed:
FinalData = RowFeatureVector x RowDataAdjust
where RowFeatureVector is the matrix with the eigenvectors in the columns transposed so that the eigenvectors are now in the rows, with the most significant eigenvector at the top, and RowDataAdjust is the mean-adjusted data transposed, i.e. the data items are in each column, with each row holding a separate dimension. FinalData is the final data set, with data items in columns and dimensions along rows.
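The FinalData computation above can be sketched as follows; the data values are illustrative, and Python/NumPy stands in for the paper's C# implementation.

```python
import numpy as np

# Mean-adjusted 2-D data (illustrative values), one data item per row.
data = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
                 [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1]])
row_data_adjust = (data - data.mean(axis=0)).T   # data items in columns

cov = np.cov(row_data_adjust)                    # rows are dimensions here
eigvals, eigvecs = np.linalg.eigh(cov)           # unit eigenvectors (columns)
order = np.argsort(eigvals)[::-1]                # highest eigenvalue first

# RowFeatureVector: eigenvectors as rows, most significant at the top.
row_feature_vector = eigvecs[:, order].T

final_data = row_feature_vector @ row_data_adjust
# Keeping only the first p rows of row_feature_vector would give a
# p-dimensional final data set.
```

The first row of `final_data` carries the largest variance, reflecting the ordering of the components by significance.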
The implementation chart is shown in Fig [4].
FIGURE 4: Implementation Chart
2.6 Face Detection
Face detection follows the eigenface approach. Once the eigenfaces have been computed, several types of decision can be made depending on the application. The steps involved in this approach are: (1) acquire an initial set of face images (the training set); (2) calculate the eigenvectors from the training set, keeping only the M most significant (the face space); (3) calculate the corresponding distribution in the face space. The process is explained in the following flow diagram, Fig [5].
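Steps (1)-(3) can be sketched as follows. The random "face images", the choice M = 5, and the SVD route to the eigenfaces are assumptions for illustration; the residual norm is the distance from face space that can be thresholded to decide whether an image region contains a face.

```python
import numpy as np

rng = np.random.default_rng(2)

# Step (1): a hypothetical training set of 10 flattened face images.
faces = rng.normal(size=(10, 25))

# Step (2): compute the eigenfaces and keep only the M most significant.
mean_face = faces.mean(axis=0)
A = faces - mean_face
_, _, Vt = np.linalg.svd(A, full_matrices=True)  # rows of Vt: eigenfaces
M = 5
eigenfaces = Vt[:M]

# Step (3): a new image's distribution in face space is its projection
# onto the eigenfaces; the residual measures distance from face space.
def distance_from_face_space(img):
    phi = img - mean_face
    proj = eigenfaces.T @ (eigenfaces @ phi)     # reconstruction in face space
    return np.linalg.norm(phi - proj)

# An image built from the kept eigenfaces lies in face space (distance 0);
# one built from a discarded direction lies outside it.
in_space = mean_face + 3.0 * Vt[0]
out_of_space = mean_face + Vt[M]
```

Thresholding this distance gives a face/non-face decision, one of the decision types mentioned above.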
FIGURE 5: Flow Diagram
2.7 Emotion Filtering
Emotion filtering is the stage in which the detected image is projected over the various average faces, such as happy, neutral, sad and surprised. The filters provide the output values for the given input by comparing the input with the training samples present in the system; the input is compared with the training sets of all emotions. The values obtained from this comparison are provided to the analyzer.
2.8 Emotion Analyzer
This is the decision-making module. The output of the emotion filter is passed to the emotion analyzer. There are threshold values set for each emotion. For example:
• If the output value is equal to the threshold value, the expression is considered neutral.
• If the output value is greater than the threshold value, the expression is considered happy.
• If the output value is less than the threshold value, the expression is considered sad.
Thus the emotion analyzer makes its decision based on the threshold values provided. By adding new threshold values, one can add a new emotion to the system.
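A minimal sketch of this decision rule, assuming a single illustrative threshold of 0.5; the tolerance band is our addition, since exact floating-point equality would rarely trigger the neutral case.

```python
def analyze_emotion(output_value, threshold=0.5, tolerance=0.05):
    """Decision rule of the emotion analyzer. The threshold and
    tolerance values are illustrative, not taken from the paper;
    the small tolerance band stands in for exact equality, which
    is fragile with floating-point filter outputs."""
    if abs(output_value - threshold) <= tolerance:
        return "neutral"   # output equal to the threshold
    if output_value > threshold:
        return "happy"     # output above the threshold
    return "sad"           # output below the threshold

print(analyze_emotion(0.50))  # neutral
print(analyze_emotion(0.90))  # happy
print(analyze_emotion(0.10))  # sad
```

Adding a new emotion would mean adding a new threshold (or threshold band) and a corresponding branch.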
FIGURE 6: Sample Emotions
In our project, we have nearly 300 images with which our software is completely trained. Some samples of the training set images are shown in the following figure (7). We have developed this system on the .NET platform. In our software we have developed a form with various input and functionality objects.
FIGURE 7
In the first phase our system runs the training sets, by which the system's training is completed. In the second phase we can stop the training, and in the third phase we continue with the testing module. In the testing phase our system starts showing the emotions of the images continuously, as in the following figure (8).
FIGURE 8
The expression in the above image appears to be sad, and the result of the system is also sad, indicating that our system is able to identify the emotions in the images accurately.
3. EXPERIMENTAL RESULTS
Face segmentation and extraction results were obtained for more than 300 face images with non-uniform backgrounds, with excellent performance. The experimental results for the recognition stage presented here were obtained using the face database. Comparisons were carried out for all methods using face images. In our system, all input images are converted to a fixed pixel size before the training session. Moreover, this recognition stage has the great advantages of working in the compressed domain and being capable of supporting multimedia and content-based retrieval applications.
Face detection and recognition can also be done using Markov random fields, the Guided Particle Swarm Optimization algorithm and the Regularized Discriminant Analysis based Boosting algorithm. In the Markov random field method, the eye and mouth expressions are considered for finding the emotion, using edge detection techniques. This kind of emotion analysis is not perfect because a person's actual mood cannot be identified just from the eyes and mouth. In the Guided Particle Swarm Optimization method, video clips of the user showing various emotions are used; the clip is segmented and then converted into digital form to identify the emotion, which is a long and tedious process. Our proposed system was implemented using the C# programming
language under the .NET development framework. The implemented project has two modes, the training mode and the detection mode. The training mode is complete once all the images have been trained into the system. In the detection mode the system identifies the emotions of all the images with about 97% accuracy. The following figure shows the comparison of recognition rates.
Method                                              Recognition Rate (%)
Markov Random Fields                                95.91
Guided Particle Swarm Optimization                  93.21
Regularized Discriminant Analysis based Boosting    96.51
Our proposed method                                 97.76
FIGURE 9
Comparison of facial expression recognition using image database
The above results are shown as a comparison graph in the following Fig [10].
FIGURE 10
Here, the number of training samples is plotted on the x-axis and the recognition rate on the y-axis. The accuracy of emotion recognition is higher in our proposed method than in the other algorithms.
4. CONCLUSION
In this paper we have presented an approach to expression recognition in images. This emotion analysis system, implemented using the PCA algorithm, detects human emotions in images at low cost with good performance. The system is designed to recognize expressions in human faces using the average values calculated from the training samples. We evaluated our system in terms of accuracy on a variety of images and found that it was able to identify the face images and evaluate the expressions in them accurately.
REFERENCES
[1] M. Pantic, L.J.M. Rothkrantz, "Automatic Analysis of Facial Expressions: The State of the
Art", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp. 1424-1445,
2000.
[2] G. Donato, M.S. Bartlett, J.C. Hager, P. Ekman, T.J. Sejnowski, “Classifying Facial
Actions", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, pp. 974-
989, 1999
[3] K. Kim, "Face Recognition using Principle Component Analysis", Department of Computer
Science, University of Maryland, College Park, MD 20742, USA.
[4] M. Pantic, L.J.M. Rothkrantz, "Automatic Analysis of Facial Expressions: The State of the
Art", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 22, No. 12, pp. 1424-1445,
2000.
[5] P. Viola and M. Jones. Robust real-time object detection.
Technical Report 2001/01, Compaq Cambridge Research Lab, 2001.
[6] I. Essa and A. Pentland. Coding, analysis,
Interpretation and recognition of facial expressions. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 19(7):757–763, 1997.
[7] M. Turk and A. Pentland. “Eigenfaces for recognition”.
Journal of Cognitive Neuroscience, 3(1):71–86, 1991.
[8] G. Littlewort, I. Fasel, M. Stewart Bartlett, and J. R. Movellan. Fully automatic coding of
basic expressions from video. Technical Report 2002.03, UCSD INC MPLab, 2002.
[9] M. Lades, J. C. Vorbrüggen, J. Buhmann, J. Lange, C. von der Malsburg, R. P. Würtz,
and W. Konen. Distortion invariant object recognition in the dynamic link architecture.
IEEE Transactions on Computers, 42:300–311, 1993.
[10] J. F. Cohn, A. J. Zlochower, J. J. Lien, and T. Kanade.
Feature-point tracking by optical flow discriminates subtle differences in facial expression.
In Proceedings International Conference on Automatic
Face and Gesture Recognition, pages 396–401, 1998.
[11] P. Ekman. Basic emotions. In T. Dalgleish and T. Power, editors, The Handbook of
Cognition and Emotion. John Wiley & Sons, Ltd., 1999.
[12] P. Ekman and W. Friesen. “Facial Action Coding System” (FACS):
Manual. Consulting Psychologists Press, Palo Alto, CA, USA, 1978.
[13] Y.-L. Tian, T. Kanade, and J. F. Cohn. Recognizing action units for
Facial expression analysis. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 23(2):97–115, 2001.
[14] M. N. Dailey, G. W. Cottrell, and R. Adolphs. A six-unit network is all you need to discover
happiness. In Proceedings of the Twenty-Second Annual Conference of the Cognitive
Science Society, Mahwah, NJ, USA, 2000. Erlbaum.
[15] Nagarajan R., S. Yaacob, P. Pandiyan, M. Karthigayan, M. Khalid & S. H. Amin. (2006).
Real Time Marking Inspection Scheme for Semiconductor Industries. International Journal
of Advance Manufacturing Technology (IJAMT), ISSN No (online): 1433-3015.
[16] Karthigayan. M, Mohammed Rizon, Sazali Yaacob & R. Nagarajan,. (2006). An Edge
Detection and Feature Extraction Method Suitable for Face Emotion Detection under
Uneven Lighting, The 2nd Regional Conference on Artificial Life and Robotics (AROB’06),
July 14 - July 15, Hatyai, Thailand.
[17] Karthigayan M, M.Rizon, S.Yaacob and R. Nagarajan. (2007). On the Application of Lip
features in Classifying the Human Emotions, International Symposium on Artificial life and
Robotics, Japan.