3. Introduction to organization
Freecodecamp is India's No. 1 internship and training platform, with 40,000+ free internships
in Engineering, MBA, media, law, arts, and other streams.
They are a technology company on a mission to equip students with relevant skills and
practical exposure through internships and online training. Imagine a world full of freedom
and possibilities. A world where you can discover your passion and turn it into your career.
A world where your practical skills matter more than your university degree. A world where
you do not have to wait until 21 to taste your first work experience (and get a rude shock that
it is nothing like you had imagined it to be). A world where you graduate fully assured, fully
confident, and fully prepared to stake your claim on your place in the world.
4. Introduction
• The system uses a CNN (convolutional neural network) to extract physiological signals and make a prediction. Results are obtained by scanning a person's image through a camera and then correlating it with a training dataset to predict their emotional state.
• DeepFace is a lightweight face recognition and facial attribute analysis library for Python.
• OpenCV is a vast library that provides various functions for image and video operations.
5. Purpose
• Recognizing facial expressions helps systems detect whether people are happy or sad, much as a human being can. This allows software and AI systems to provide an even better experience to humans in various applications.
• It can be used in multiple AI tools to collect user feedback.
• It can be used to improve access control and security.
6. Scope
✖ Facial Detection: the ability to detect the location of a face in any input image or frame. The output is the bounding-box coordinates of the detected faces.
✖ Emotion Detection: classifying the emotion on the face as happy, angry, sad, neutral, surprise, disgust, or fear.
10. Overall Architecture
1. Receive an input frame from the webcam.
2. Identify faces in the frame and prepare these images for the deep learning models.
3. Send the processed faces to the models.
4. Render the prediction outcomes with bounding boxes to the screen.
12. Capturing Live Video
[Diagram: OpenCV capturing webcam output]
Python provides various libraries for image and video processing; one of them is OpenCV.
OpenCV is a vast library that provides functions for image and video operations. With OpenCV we can capture video from the camera: it lets you create a video capture object, which is helpful for capturing video through the webcam, after which you can perform the desired operations on that video.
13. Capturing Live Video
We can build very interesting applications using the live video stream from the webcam. OpenCV provides a video capture object that handles everything related to opening and closing the webcam. All we need to do is create that object and keep reading frames from it.
The following code opens the webcam, captures the frames, scales them down by a factor of 2, and displays them in a window. Press the Esc key to exit.
15. Steps to capture video
• Use cv2.VideoCapture() to get a video capture object for the camera.
• Set up an infinite while loop and use the read() method to read frames with the object created above.
• Use the cv2.imshow() method to show the frames of the video.
• Break the loop when the user presses a specific key.
16. Facial Detection
For face emotion recognition we do not need to train on data samples ourselves; instead we can use a pre-trained deep learning library, DeepFace. It contains many pre-trained deep learning architectures for face emotion recognition. This library scans the input image and returns the bounding-box coordinates of all detected faces.
[Figure: input image and output with detected faces]
17. Advantages Of DeepFace
You may ask yourself why you should use the DeepFace library over alternatives. These are the most important reasons people use DeepFace to build facial recognition applications:
1. It is lightweight
2. It is easy to install
3. Multiple models and detectors
4. Open-source face recognition
5. Growing DeepFace community
6. Language-independent package
18. Integration Of Opencv-python Explained
Basically, OpenCV captures video from your webcam. It converts every frame to RGB format and sends the RGB frame to the detect_face function, which first detects all the faces in the frame using haarcascade_frontalface_default.xml and, for every face, runs the three trained models to generate outcomes. These outcomes are returned together with the face bounding-box locations (top, right, bottom, left).
OpenCV then uses the bounding-box locations to draw a rectangle on the frame and display the prediction outcomes as text.
The implementation of the detect_face function can be found in the source code. Note that since the emotion model was trained on grey-scale images, the RGB frame needs to be converted to grey-scale before being passed to the emotion model.
20. Cropping Detected Faces
An ideal cropped face photo has the face located at the center, no distortion, and the required size. If the required size is a square, the following method does the trick:
1. Obtain the facial bounding box from the haar cascade.
2. Find the center point of the bounding box.
3. Find the maximum of the height and width of the bounding box.
4. Draw a new bounding box based on the center and the maximum side length.
5. Resize the face cropped from the new bounding box to the required size.
21. Integration Of cv2, deepface, and haarcascade
Humans are used to taking in non-verbal cues from facial emotions, and now computers are also getting better at reading emotions. So how do we detect emotions in an image? I have used an open-source data set, Face Emotion Recognition (FER), through DeepFace, and built a CNN to detect emotions.
The emotions can be classified into 7 classes: happy, sad, fear, disgust, angry, neutral, and surprise.
22. Results in live emotion detection
[Diagram: OpenCV (live camera) feeding DeepFace (face detection)]
23. Working
[Screenshot: a surprised Cena as input; the face is detected by the pre-built libraries and marked with a green rectangle, then matched with the library models for emotion detection. Total success!]
DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
26. Problem While Execution
I was able to open a window and see myself on the screen; however, the moment I tried to change my facial expressions, the window crashed and stopped responding. I consulted a colleague who ran into a similar problem on his MacBook, but not on his iMac at home. I researched the issue online and soon found that many users were having this issue, and none of the solutions helped. The Stack Overflow posts by other users here and here describe my exact issue. I tried numerous solutions, but nothing worked to my satisfaction. For instance, I was able to capture only a few expressions, which was unsuitable for the live demonstration I was building.
27. Conclusion
• Live emotion recognition helps systems detect whether people are happy or sad, much as a human being can.
• It also allows software and AI systems to provide an even better experience to humans in various applications.
• It can be used in multiple AI tools to collect user feedback.
28. References
• The OpenCV code for recording the live camera is taken from geeksforgeeks.org.
• DeepFace for face detection is taken from viso.ai.
• The haarcascade_frontalface_default.xml file of pre-stored expression models has been taken from blog masters.