Detecting emotion from facial expression has become an urgent need because of its immense applications in artificial intelligence, such as human-computer collaboration, data-driven animation and human-robot communication. Since it is a demanding and interesting problem in computer vision, several works have been conducted on this topic. The objective of this project is to develop a facial expression recognition system based on a convolutional neural network with data augmentation. This approach enables the classification of seven basic emotions, consisting of angry, disgust, fear, happy, neutral, sad and surprise, from image data. A convolutional neural network with data augmentation leads to higher validation accuracy (96.24%) than the other existing models, and helps to overcome their limitations.
4. Introduction
We humans express our feelings in many ways; this project deals with facial expressions, which can convey a person's state of mind. There are 7 universal facial expressions: anger, contempt, disgust, fear, joy, sadness and surprise.
This project is built to detect the facial expressions of a person in any video or through the system camera. It is developed using a deep learning algorithm, the convolutional neural network (CNN), which is well suited to recognizing images and categorizing them into classes.
The model is trained to take input from a video or camera, predict the facial expression in the image, and display it on a web page. The data used to train the model is a dataset from a machine learning competition held in 2013 and contains images of distinct varieties of expressions.
The dataset is split into 80 percent training data and 20 percent test data, so that we can evaluate how accurately the model predicts on the inputs we provide and display on the screen.
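The 80/20 split described above can be sketched in plain Python. This is a minimal illustration, not the project's actual loading code; the file names and the 0.2 test fraction are assumptions:

```python
import random

def train_test_split(samples, test_fraction=0.2, seed=42):
    """Shuffle the samples and split them into (train, test) lists."""
    rng = random.Random(seed)
    shuffled = samples[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Example: 100 image paths -> 80 for training, 20 for testing.
paths = [f"img_{i}.png" for i in range(100)]
train, test = train_test_split(paths, test_fraction=0.2)
print(len(train), len(test))  # 80 20
```

Shuffling before splitting matters: images stored class-by-class on disk would otherwise put whole classes into only one of the two sets.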
5. Definition of problem
In recent years there has been a growing interest in improving all aspects of the interaction between humans and computers.
The rapid advance of technology has made computers cheaper and more powerful, and has made microphones and PC cameras affordable and easily available. The microphones and cameras enable the computer to "see" and "hear," and to use this information to act.
It is argued that to truly achieve effective human-computer intelligent interaction, the computer needs to be able to interact naturally with the user, the way human-human interaction takes place.
Human beings possess and express emotions in everyday interactions with others. Emotions are often reflected on the face, in hand and body gestures, and in the voice, to express our feelings or likings.
6. Psychologists and engineers alike have tried to analyze facial expressions in order to understand and categorize them. This knowledge can be used, for example, to teach computers to recognize human emotions from video images acquired from built-in cameras.
There are several related problems: detection of an image segment as a face, extraction of the facial expression information, and classification of the expression.
A system that performs these operations accurately and in real time would be a major step forward in achieving human-like interaction between man and machine.
This demand inspired us to build a facial expression recognition system and apply it to real-time problems such as computer instructors and emotion monitors.
7. The 7 universal facial expressions
It is widely supported within the scientific community that there are seven basic emotions, each with its own unique and distinctive facial expression:
1. Happiness
2. Sadness
3. Fear
4. Disgust
5. Anger
6. Contempt
7. Surprise
8. Existing systems: SVM classifier
A support vector machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems. After giving an SVM model sets of labeled training data for each category, it is able to categorize new examples.
In this algorithm, the eyes and mouth are detected first; features of the eyes and mouth are extracted using a Gabor filter and LBP, and PCA is used to reduce the dimensionality of the features. Finally, an SVM is used to classify the expression and the facial action units.
Example: imagine we have two tags, red and blue, and our data has two features, x and y. We want a classifier that, given a pair of (x, y) coordinates, outputs whether it is red or blue. We plot our already labeled training data on a plane.
9. A support vector machine takes these data points and outputs the hyperplane (which in two dimensions is simply a line) that best separates the tags. This line is the decision boundary: anything that falls on one side of it we classify as blue, and anything that falls on the other side as red.
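The red/blue example above can be made concrete with a minimal sketch of the decision rule sign(w·x + b). For simplicity the weights here are chosen by hand rather than learned; a real SVM solver would instead pick w and b so that the margin between the two classes is maximised:

```python
import numpy as np

# Hand-picked separating line x + y = 5: points below it are "red",
# points above it are "blue". (Illustrative, not a trained SVM.)
w = np.array([1.0, 1.0])
b = -5.0

def classify(point):
    """Tag a 2-D point using the linear decision rule sign(w.x + b)."""
    return "blue" if np.dot(w, point) + b > 0 else "red"

print(classify([1, 1]))  # red  (1 + 1 - 5 < 0)
print(classify([4, 4]))  # blue (4 + 4 - 5 > 0)
```

Everything an SVM predicts at test time reduces to this single dot product; training is what makes finding w and b hard.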
11. Fully connected neural network
A fully connected neural network can also be used to predict the facial expression, by fitting the model with the train and test data and outputting the result to a device connected to the machine.
12. Proposed system
In the SVM and fully connected systems, the model that is built after fitting on the dataset consumes a large amount of space and is time-consuming. To overcome this problem we built a convolutional neural network model, which takes less space and is faster.
[Figure: fully connected layer vs. CNN layer]
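The space saving claimed above can be checked with simple parameter arithmetic. The 48x48 input size, the 512-unit dense layer and the 64 filters below are assumed figures for illustration, not the project's exact dimensions:

```python
# Fully connected: a 48x48 grayscale image flattened to 2304 inputs,
# feeding a dense layer of 512 units -> one weight per (input, unit) pair.
dense_params = 2304 * 512 + 512          # weights + biases = 1,180,160

# Convolutional: 64 filters of size 3x3 over one channel share their
# weights across every position of the image.
conv_params = (3 * 3 * 1) * 64 + 64      # weights + biases = 640

print(dense_params, conv_params)
print(dense_params // conv_params)       # the dense layer is over 1000x larger
```

Weight sharing is the whole trick: the convolutional layer's cost depends only on the kernel size and filter count, not on the image resolution.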
13. Tools used
Python
In recent years, Python has become the language of choice for data science and artificial intelligence, two technology trends essential for global businesses to stay competitive today. Python is among the fastest-growing programming languages and is used across a wide variety of applications, from web development to task automation to data analysis.
Keras
Keras is a minimalist Python library for deep learning that can run on top of Theano or TensorFlow. It was developed to make implementing deep learning models as fast and easy as possible for research and development.
TensorFlow
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state of the art in ML and developers easily build and deploy ML-powered applications.
14. Flask
Flask is a web application framework written in Python. It was developed by Armin Ronacher, who leads an international group of Python enthusiasts named Pocco. Flask is based on the Werkzeug WSGI toolkit and the Jinja2 template engine.
Jupyter Notebook
JupyterLab is a web-based interactive development environment for Jupyter notebooks, code, and data. JupyterLab is flexible: configure and arrange the user interface to support a wide range of workflows in data science, scientific computing, and machine learning.
Camera module
Python provides various libraries for image and video processing; one of them is OpenCV. OpenCV is a vast library that provides functions for image and video operations. With OpenCV we can capture video from the camera: it lets you create a video capture object, which is helpful for capturing video through the webcam, and you may then perform the desired operations on that video.
OpenCV
OpenCV is an open source computer vision and machine learning software library.
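Assuming frames arrive as H x W x 3 uint8 arrays, as OpenCV's VideoCapture yields them, the preprocessing a captured frame needs before classification can be sketched with NumPy alone (the 48x48 target size is an assumption; in practice cv2.cvtColor and cv2.resize would be used):

```python
import numpy as np

def preprocess(frame, size=48):
    """Convert a colour frame to a normalised grayscale square.

    frame: H x W x 3 uint8 array (BGR channel order, as cv2 uses).
    Returns a size x size float array scaled to [0, 1].
    """
    # Luminance from BGR channels (same weights cv2 uses for grayscale).
    gray = (0.114 * frame[:, :, 0]
            + 0.587 * frame[:, :, 1]
            + 0.299 * frame[:, :, 2])
    # Crude nearest-neighbour resize via index sampling.
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = gray[rows][:, cols]
    return small / 255.0

frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
x = preprocess(frame)
print(x.shape)  # (48, 48)
```

The same function applies whether the frame came from a webcam stream or from an image file already in memory.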
15. Importing important modules
Import NumPy
NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object and various derived objects.
Import seaborn
Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
Import matplotlib.pyplot
Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension NumPy.
Import utils
Python utils is a collection of small Python functions and classes which make common patterns shorter and easier.
Import os
The os module in Python provides functions for interacting with the operating system.
16. Import ImageDataGenerator
The ImageDataGenerator class is very useful in image classification. There are several ways to use this generator depending on the method we choose; here we focus on flow_from_directory, which takes a path to a directory containing images sorted into subdirectories, along with image augmentation parameters.
Import Dense layer
The Dense layer is the regular deeply connected neural network layer. It is the most common and frequently used layer.
Import Input layer
Input is used to instantiate a Keras tensor.
Import Dropout layer
The Dropout layer randomly sets input units to 0 with a given rate at each step during training, which helps prevent overfitting.
Import Flatten layer
Flatten collapses the input into a single feature axis; if inputs are shaped (batch,) without a feature axis, flattening adds an extra channel dimension and the output shape is (batch, 1).
Import Conv2D layer
Conv2D is a 2D convolution layer: it creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs.
17. Import batch
normalization
Normalize the activations of the previous layer at each batch, i.e. Applies a
transformation that maintains the mean
activation close to 0 and the activation standard deviation close to 1.
Import activation
Relu activation: max(x, 0), the element-wise maximum of 0 and the input
tensor.
Import MaxPooling2D
MaxPooling2D downsamples the input representation by taking the maximum value over the window defined by pool_size for each dimension along the features axis.
Import Model
Model groups layers into an object with training and inference features.
Import Sequential
A Sequential model is appropriate for a plain stack of layers where each layer has exactly one input tensor and one output tensor.
Import Adam
Adam optimization is a stochastic gradient descent method that is based on adaptive estimation of first-order and second-order moments.
18. Import ReduceLROnPlateau
ReduceLROnPlateau reduces the learning rate when a metric has stopped improving.
Import IPython.display
When this object is returned by an expression or passed to the display function, the data is displayed in the frontend.
Import livelossplot
livelossplot is a Python package for live training loss plots in Jupyter notebooks for Keras.
Import TensorFlow
TensorFlow is an end-to-end open source platform for machine learning. It
has a comprehensive, flexible ecosystem of tools, libraries and community
resources that lets researchers push the state-of-the-art in ML and
developers easily build and deploy ML powered applications.
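Taken together, the imports described above might look like this in code. The module paths assume TensorFlow 2.x's bundled Keras; the livelossplot and IPython imports are omitted to keep the sketch minimal.

```python
# Consolidated imports for the modules described above.
# Paths assume TensorFlow 2.x-style Keras; adjust for standalone Keras.
import os
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.layers import (Dense, Input, Dropout, Flatten, Conv2D,
                                     BatchNormalization, Activation,
                                     MaxPooling2D)
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
```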
19. Loading the image dataset
Data augmentation
Data augmentation encompasses a wide range of techniques used to generate "new" training samples from the original ones by applying random jitters and perturbations, while at the same time ensuring that the class labels of the data are not changed.
We have a smaller number of disgust images, so we use data augmentation to generate new images. This is done with ImageDataGenerator.
Augmented data is more likely to generalize to example data points not included in the training set.
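As a pure-NumPy illustration of the idea (a label-preserving flip plus small pixel jitter; the project itself uses ImageDataGenerator for this):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def augment(image, rng):
    """Return a randomly perturbed copy of a grayscale image.

    A horizontal flip and small pixel jitter change the pixels
    but not the emotion class the image belongs to.
    """
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                       # horizontal flip
    out = out + rng.normal(0.0, 2.0, out.shape)  # small random jitter
    return np.clip(out, 0, 255)                  # keep valid pixel range

original = rng.integers(0, 256, size=(48, 48)).astype("float64")
augmented = augment(original, rng)
```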
Loading the dataset
We have two sets of data: train data and test data.
We split our data into an 80 percent train dataset and a 20 percent test dataset for testing our model.
We use the flow_from_directory method to load our dataset.
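The 80/20 split can be sketched with NumPy. The integer array here is a stand-in for a list of image samples; the project itself loads pre-split train/test folders via flow_from_directory.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for a list of image samples (e.g. file paths).
samples = np.arange(100)

# Shuffle, then take 80% for training and 20% for testing.
indices = rng.permutation(len(samples))
split = int(0.8 * len(samples))
train_idx, test_idx = indices[:split], indices[split:]

train_set = samples[train_idx]
test_set = samples[test_idx]
```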
20. Role of layers in CNN image classification
A convolutional neural network (CNN) architecture has three main parts:
A convolutional layer that extracts features from a source image. Convolution helps with blurring, sharpening, edge detection, noise reduction, or other operations that can help the machine learn specific characteristics of an image.
A pooling layer that reduces the image dimensionality without losing important features or patterns.
A fully connected layer, also known as the dense layer, in which the results of the convolutional layers are fed through one or more neural layers to generate a prediction.
In between the convolutional layers and the fully connected layer there is a 'flatten' layer. Flattening transforms a two-dimensional matrix of features into a vector that can be fed into a fully connected classifier.
21. Create CNN model
In Keras, this is a typical process for building a CNN architecture:
Reshape the input data into a format suitable for the convolutional layers, using x_train.reshape() and x_test.reshape().
For class-based classification, one-hot encode the categories using to_categorical().
Build the model using model.add() on a Sequential model.
Add a convolutional layer.
Add a pooling layer.
Add a "flatten" layer which prepares a vector for the fully connected layers.
Add one or more fully connected layers.
Compile the model using model.compile().
Train the model using model.fit(), supplying x_train and y_train along with the validation data x_test and y_test.
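A minimal Keras sketch of the build steps above, assuming 48x48 grayscale inputs and the seven emotion classes. The layer sizes here are illustrative, not the project's exact architecture.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization,
                                     Activation, MaxPooling2D, Dropout,
                                     Flatten, Dense)

model = Sequential([
    Input(shape=(48, 48, 1)),            # 48x48 grayscale images
    Conv2D(64, (3, 3), padding="same"),  # convolutional layer
    BatchNormalization(),
    Activation("relu"),
    MaxPooling2D(pool_size=(2, 2)),      # pooling layer
    Dropout(0.25),
    Flatten(),                           # vector for the dense layers
    Dense(256, activation="relu"),       # fully connected layer
    Dense(7, activation="softmax"),      # one output per emotion class
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```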
22. CNN layers
filters: integer, the dimensionality of the output space.
strides: an integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width.
When using the Conv2D layer we first give the input shape, e.g. (48, 48, 1); we use grayscale images.
padding: one of "valid" or "same" (case-insensitive). "valid" means no padding. "same" results in padding evenly to the left/right or up/down of the input such that the output has the same height/width dimension as the input.
23. Batch normalization normalizes the activations of the previous layer at each batch, i.e. it applies a transformation that keeps the mean activation close to 0 and the activation standard deviation close to 1.
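The normalization itself is simple to state in NumPy. This is only the core of the idea; the Keras layer additionally learns a scale and shift per feature and tracks running statistics for inference.

```python
import numpy as np

def batch_normalize(x, eps=1e-5):
    """Normalize a batch of activations to mean ~0 and std ~1 per feature."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

# A batch of 32 samples with 8 features, deliberately off-center.
batch = np.random.default_rng(1).normal(5.0, 3.0, size=(32, 8))
normed = batch_normalize(batch)
```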
Max pooling downsamples the input representation by taking the maximum value over the window defined by pool_size for each dimension along the features axis. The window is shifted by strides in each dimension.
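A 2x2 max pool with stride 2 can be written directly in NumPy to show what "maximum value over the window" means:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 over a (H, W) feature map."""
    h, w = x.shape
    # Group pixels into non-overlapping 2x2 windows, take the max of each.
    trimmed = x[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

feature_map = np.array([[1, 3, 2, 0],
                        [4, 2, 1, 1],
                        [0, 1, 5, 6],
                        [2, 2, 7, 8]])
pooled = max_pool_2x2(feature_map)  # -> [[4, 2], [2, 8]]
```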
Dropout is a technique where randomly selected input units are set to 0 at each step during training, which helps prevent overfitting.
26. Activations
ReLU
The rectified linear unit is the most used activation function in deep learning models. The function returns 0 if it receives any negative input, but for any positive value x it returns that value back, so it can be written as f(x) = max(0, x).
SoftMax
The SoftMax function turns a vector of K real values into a vector of K real values that sum to 1. The input values can be positive, negative, zero, or greater than one, but the SoftMax transforms them into values between 0 and 1 that can be interpreted as probabilities.
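Both activations reduce to a few lines of NumPy:

```python
import numpy as np

def relu(x):
    """Element-wise max(x, 0)."""
    return np.maximum(x, 0)

def softmax(x):
    """Turn K real values into K probabilities that sum to 1."""
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

scores = np.array([-1.0, 0.0, 2.0])
activated = relu(scores)   # [0., 0., 2.]
probs = softmax(scores)    # positive values summing to 1
```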
27. Train and evaluate the model
Sample
One element of a dataset. For instance, one image is a sample in a convolutional network; one audio snippet is a sample for a speech recognition model.
Batch
A set of N samples. The samples in a batch are processed independently, in parallel. During training, a batch results in only one update to the model. A batch generally approximates the distribution of the input data better than a single input.
Epochs
An arbitrary cutoff, generally defined as "one pass over the entire dataset", used to separate training into distinct phases, which is useful for logging and periodic evaluation. When using validation_data or validation_split with the fit method of Keras models, evaluation is run at the end of every epoch.
28. Callbacks
A callback is an object that can perform actions at various stages of training (e.g. at the start or end of an epoch, before or after a single batch, etc.).
ModelCheckpoint
ModelCheckpoint is a callback to save the Keras model or model weights at some frequency. It is used in conjunction with training via model.fit() to save a model or weights (in a checkpoint file) at some interval, so the model or weights can be loaded later to continue training from the saved state.
ReduceLROnPlateau
Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This callback monitors a quantity, and if no improvement is seen for a 'patience' number of epochs, the learning rate is reduced.
29. Model fit
Model fitting is a measure of how well a machine learning model generalizes to data similar to that on which it was trained. A well-fitted model produces more accurate outcomes; an overfitted model matches the training data too closely; an underfitted model doesn't match it closely enough. Models are trained on NumPy arrays using fit(), which trains the model on the provided training data.
To train a model with fit(), you need to specify a loss function, an
optimizer, and optionally, some metrics to monitor. If your model has
multiple outputs, you can specify different losses and metrics for each
output, and you can modulate the contribution of each output to the total
loss of the model.
Adam optimization is a stochastic gradient descent method that is based
on adaptive estimation of first-order
and second-order moments.
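A compact sketch of fit() with the Adam optimizer and the two callbacks from the previous slide, using a tiny random stand-in dataset so it runs on its own. The checkpoint filename and all hyperparameters here are placeholders, not the project's exact values.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

# Tiny random stand-in dataset: 64 samples, 10 features, 7 classes.
rng = np.random.default_rng(0)
x = rng.random((64, 10)).astype("float32")
y = np.eye(7, dtype="float32")[rng.integers(0, 7, size=64)]  # one-hot labels

model = Sequential([Input(shape=(10,)),
                    Dense(16, activation="relu"),
                    Dense(7, activation="softmax")])
model.compile(optimizer=Adam(learning_rate=1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    ModelCheckpoint("best.weights.h5", monitor="val_loss",
                    save_weights_only=True, save_best_only=True),
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
]

history = model.fit(x, y, validation_split=0.2, epochs=2,
                    batch_size=16, callbacks=callbacks, verbose=0)
```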
30. Compilation is the final step in creating a model. Once the compilation is done, we can move on to the training phase.
History
One of the default callbacks registered when training all deep learning models is the History callback. It records training metrics for each epoch. This includes the loss and the accuracy (for classification problems), as well as the loss and accuracy for the validation dataset, if one is set. The History object is returned from calls to the fit() function used to train the model. Metrics are stored in a dictionary in the history member of the returned object.
31. Visualization of history
We can create plots from the collected history data:
A plot of accuracy on the training and validation datasets over
training epochs.
A plot of loss on the training and validation datasets over training
epochs.
32. From the plot of accuracy we can see that the model could probably be
trained a little more as the trend for accuracy on both datasets is still rising
for the last few epochs. We can also see that the model has not yet over-learned the training dataset, showing comparable skill on both datasets.
From the plot of loss, we can see that the model has comparable
performance on both train and validation datasets (labeled test). If these
parallel plots start to depart consistently, it might be a sign to stop training at
an earlier epoch.
Represent model as JSON
JSON is a simple file format for describing data hierarchically. Keras provides the ability to describe any model in JSON format with the to_json() function. This can be saved to a file and later loaded via the model_from_json() function, which creates a new model from the JSON specification.
The weights are saved directly from the model using the save_weights() function and later loaded using the symmetrical load_weights() function.
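The round trip looks like this. A toy model stands in for the project's CNN, and the weights filename is arbitrary.

```python
from tensorflow.keras.models import Sequential, model_from_json
from tensorflow.keras.layers import Input, Dense

model = Sequential([Input(shape=(8,)),
                    Dense(4, activation="relu"),
                    Dense(7, activation="softmax")])

# Architecture to JSON text, weights to a separate file.
json_config = model.to_json()
model.save_weights("model.weights.h5")

# Rebuild from the JSON spec, then restore the weights.
restored = model_from_json(json_config)
restored.load_weights("model.weights.h5")
```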
33. Saving the Keras model
We can include the model in our code, pass the inputs to cv2, have the model predict the outputs, and display the result after processing the input we provide to the model we built.
34. Metrics and accuracy
Keras allows you to list the metrics to monitor during the training of your model. You can do this by specifying the "metrics" argument and providing a list of function names to the compile() function on your model.
All metrics are reported in verbose output and in the History object returned from calling the fit() function.
Regardless of whether your problem is a binary or multi-class classification problem, you can specify the 'accuracy' metric to report on accuracy.
35. Cross-entropy
As part of the optimization algorithm, the error for the current state of the model must be estimated repeatedly. This requires the choice of an error function, conventionally called a loss function, that can be used to estimate the loss of the model so that the weights can be updated to reduce the loss on the next evaluation.
Cross-entropy is the default loss function to use for binary classification problems. It is intended for use with binary classification where the target values are in the set {0, 1}.
Mathematically, it is the preferred loss function under the inference framework of maximum likelihood. It is the loss function to be evaluated first and only changed if you have a good reason.
Cross-entropy is a measure from the field of information theory, building upon entropy and generally calculating the difference between two probability distributions.
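Since this project classifies seven emotions, the multi-class (categorical) form of the loss is the relevant one; in NumPy it reduces to a few lines:

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy between one-hot labels and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)  # avoid log(0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=1)))

y_true = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])
y_pred = np.array([[0.10, 0.80, 0.10],
                   [0.70, 0.20, 0.10]])
loss = categorical_cross_entropy(y_true, y_pred)
```

Confident correct predictions (high probability on the true class) give a loss near 0; confident wrong ones give a large loss.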
36. Web application
The output of the model is given to a web application. We built the web application with Flask.
Flask is a lightweight WSGI web application framework. It is designed to make getting started quick and easy, with the ability to scale up to complex applications. It began as a simple wrapper around Werkzeug and Jinja and has become one of the most popular Python web application frameworks.
We use render_template.
We give the path for the input in the video feed: if the input is a video, we give the video path; if the input is from the webcam, we give 0 in the definition.
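A hypothetical minimal skeleton of such a Flask app. The route, the template filename, and the video-source handling are assumptions for illustration, not the project's exact code.

```python
from flask import Flask, render_template

app = Flask(__name__)

# 0 selects the default webcam for cv2.VideoCapture;
# a file path would select a pre-recorded video instead.
VIDEO_SOURCE = 0

@app.route("/")
def index():
    # Renders a template (assumed to live at templates/index.html)
    # that embeds the processed video feed.
    return render_template("index.html")

if __name__ == "__main__":
    app.run(debug=True)
```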
39. Conclusion
We made a CNN model to predict facial expressions and solve real-time problems.
We built a web application using Flask to display the output.
We calculated the accuracy of the model; its accuracy is 66.7%.
Applications of facial expression recognition:
Facial expressions and other gestures convey nonverbal communication cues that play an important role in interpersonal relations.
A computer can monitor and counsel a person by using detected emotions.
For businesses, since facial expression recognition software delivers raw emotional responses, it can provide valuable information about the sentiment of a target audience towards a marketing message, product or brand.