[Year]
ML Report On
“Emojify”
Create your own emoji with Deep
Learning
Project By
Umair Manzoor FA20-BCS-134
Muhammad Bilal FA20-BCS-146
Submitted to
Sir Rehaan Ashraf
COMSATS UNIVERSITY ISLAMABAD, VEHARI CAMPUS
Abstract
Emojis or avatars are ways to indicate nonverbal cues. These cues have become
an essential part of online chatting, product reviews, brand emotion, and much
more. This has also led to growing data science research dedicated to emoji-driven
storytelling. With advancements in computer vision and deep learning, it is now
possible to detect human emotions from images. In this deep learning project, we
classify human facial expressions to filter and map corresponding emojis or
avatars. Emojis are widely used in marketing, virtual communication, sentiment
analysis, and opinion mining to enhance the semantic quality of messages and
express emotions more authentically. They are also popular in feedback forms,
where their use increases the response rate. Emojis simulate facial expressions
and are used to express emotions in informal text communication such as
sarcasm, irony, or humor. This study explores real-time emotion recognition from
facial features via emojis and develops software covering five human
expressions (happy, neutral, sad, surprised, and fearful) to better communicate
emotional responses and promote interpersonal connections. The output of the
study displays the corresponding emoji for the respective facial expression.
Introduction
Emojis have become an integral part of our daily communication. They have evolved
into a new visual language that allows us to express ideas and emotions more
effectively than traditional text-based communication. In today's generation, people
have embraced emojis as a means of connecting and conveying their feelings in
various online platforms, such as Twitter, Facebook, and Instagram. Understanding
the significance of emojis in modern communication, we have developed a software
called Emoji-Fy, which enables the creation of customized emojis and avatars.
The emergence of neural networks has revolutionized numerous fields by enabling
end-to-end learning and sophisticated data analysis. In this paper, we present a
system that implements a Convolutional Neural Network (CNN) in combination
with the Fer2013 Dataset to detect emotions from facial expressions and convert
them into personalized emojis.
Our primary objective is to build a convolutional neural network capable of
accurately recognizing facial emotions. Emotion recognition from facial expressions
has been a longstanding challenge in the field of computer vision. Traditional
approaches relied on handcrafted features and machine learning algorithms, which
often struggled to capture the intricacies and nuances of human emotions. However,
with the advancements in deep learning and the availability of large-scale datasets,
the potential for accurate emotion detection has significantly improved.
Dataset
To accomplish our goal, we will train our model using the Fer2013 dataset. The
Fer2013 dataset consists of approximately 30,000 grayscale facial images depicting
various expressions, each with a fixed size of 48×48
pixels. The Fer2013 dataset labels emotions into seven distinct types: Angry (0),
Disgust (1), Fear (2), Happy (3), Sad (4), Surprise (5), and Neutral (6). Each image
is associated with one of these emotion labels, representing the predominant emotion
displayed in the facial expression.
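The loading step described above can be sketched in code. This is a hedged sketch assuming the standard `fer2013.csv` layout from the Kaggle release (an `emotion` column with labels 0-6 and a `pixels` column of 2304 space-separated grayscale values); the function name `load_fer2013` is ours, not from the original report:

```python
import io
import numpy as np
import pandas as pd

def load_fer2013(csv_source):
    """Parse a fer2013-style CSV: `emotion` holds labels 0-6, `pixels`
    holds 48*48 = 2304 space-separated grayscale values per row."""
    df = pd.read_csv(csv_source)
    faces = np.stack([
        np.array(p.split(), dtype="uint8").reshape(48, 48)
        for p in df["pixels"]
    ])
    labels = df["emotion"].to_numpy()
    return faces, labels

# Tiny in-memory demo row (label 3 = Happy) so the sketch runs
# without downloading the real file:
demo_csv = "emotion,pixels\n3," + " ".join(["128"] * (48 * 48)) + "\n"
faces, labels = load_fer2013(io.StringIO(demo_csv))
```

In the real project the same loader would be pointed at the downloaded `fer2013.csv` instead of the demo string.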
The Fer2013 dataset provides a rich and diverse collection of facial expressions,
allowing our model to learn and generalize patterns associated with different
emotions. It is worth noting that the Disgust expression category contains the fewest
number of images, with only 600 samples, while the other labels have nearly 5,000
samples each. This class imbalance poses a challenge during the training process, as
the model may have a tendency to favor the majority classes and struggle with
accurately detecting the minority class.
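One common way to mitigate the imbalance described above is to weight each class inversely to its frequency during training. A minimal sketch (the function name is ours; the weight formula follows the standard "balanced" heuristic, weight_c = n_samples / (n_classes * n_c)):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: rare classes such as Disgust
    (about 600 samples) get a larger weight than majority classes
    (about 5,000 samples each)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Two-class illustration mirroring the Disgust-vs-rest imbalance:
weights = balanced_class_weights([1] * 600 + [0] * 5000)
```

A dictionary like this can be passed to Keras via the `class_weight` argument of `model.fit`, so the loss penalizes mistakes on the minority class more heavily.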
Techniques
By leveraging the power of the Convolutional Neural Network and the rich dataset
of facial expressions, our system aims to accurately identify and classify emotions
from images. The CNN architecture consists of multiple layers, including
convolutional layers, pooling layers, and fully connected layers. These layers work
in tandem to extract relevant features from the input images and learn discriminative
representations of emotions. The Fer2013 dataset will be divided into training,
validation, and testing sets to evaluate the performance and generalization ability of
the trained model.
Once the emotions are detected, our system will map them to corresponding emojis
or avatars. The mapping process involves associating each emotion label with a
specific visual representation, thereby providing users with a highly personalized
and expressive means of communication. The customized emojis and avatars can
convey the detected emotion in a more vivid and relatable manner, enhancing the
user experience and facilitating better emotional connections in online
communication.
Methodology
Now we will discuss the methodology and techniques used in developing our
emotion detection system, including the architecture of the Convolutional Neural
Network, the data preprocessing steps, and the training process. We will delve into
the challenges posed by the class imbalance in the Fer2013 dataset and present
strategies employed to mitigate its impact on the model's performance. Furthermore,
we will evaluate the performance and accuracy of our system by comparing it with
existing emotion detection approaches and conducting extensive experiments on
various evaluation metrics.
The subsequent sections of this paper are organized as follows: Section 2 provides
an overview of related work in the field of emotion detection and the utilization of
neural networks. It explores the advancements made in emotion recognition
techniques and highlights the strengths and limitations of existing approaches.
Section 3 details the methodology and architecture of our emotion detection system,
covering the data preprocessing steps, the CNN architecture, and the training
process. Section 4 presents the results and performance evaluation of our system,
including comparisons with state-of-the-art methods and analyses of various
evaluation metrics. Section 5 discusses the implications and potential applications
of our system, exploring how personalized emojis and avatars can enhance online
communication and facilitate emotional connections. Finally, Section 6 concludes
the paper with a summary of our findings, contributions, and suggestions for future
research directions.
In summary, this paper introduces Emoji-Fy, a software that utilizes Convolutional
Neural Networks and the Fer2013 Dataset to detect emotions from facial expressions
and convert them into personalized emojis or avatars. Our objective is to enhance
online communication by providing users with a more expressive and customized
means of conveying emotions. Through the detailed methodology, experimental
results, and discussions presented in this paper, we aim to contribute to the
advancement of emotion detection systems and their practical application.
Proposed System: Facial Emotion Recognition Using CNN
1. Build a CNN Architecture.
Here we load all of the libraries required for our model and then initialize the
training and validation generators, i.e., we first rescale all the images needed to
train our model and then convert them to grayscale.
Imports:
Build the CNN architecture:
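The report's original code appears only as screenshots. The following is a hedged reconstruction of the imports, generators, and a CNN of the kind described; the layer sizes are our assumptions rather than the authors' exact architecture, and the `data/train` directory path is a placeholder:

```python
import os
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Generators that rescale pixel values to [0, 1]; images are loaded
# as 48x48 grayscale, matching the Fer2013 format.
train_datagen = ImageDataGenerator(rescale=1.0 / 255)
val_datagen = ImageDataGenerator(rescale=1.0 / 255)

# Assumed layout: one sub-folder per emotion class under data/train.
if os.path.isdir("data/train"):
    train_generator = train_datagen.flow_from_directory(
        "data/train", target_size=(48, 48), color_mode="grayscale",
        batch_size=64, class_mode="categorical")

# A plausible CNN: stacked conv/pool blocks, dropout for regularization,
# and a 7-way softmax head for the seven Fer2013 emotion classes.
model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(2, 2),
    Dropout(0.25),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(2, 2),
    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(2, 2),
    Dropout(0.25),
    Flatten(),
    Dense(1024, activation="relu"),
    Dropout(0.5),
    Dense(7, activation="softmax"),
])
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
```

Categorical cross-entropy pairs with the softmax output layer because each image carries exactly one of the seven emotion labels.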
2. Train the model on the Fer2013 dataset.
Here we train our network on all of the images we have, i.e., the Fer2013 dataset,
and then save the weights in the model for future predictions. Then, using
OpenCV, we detect the bounding boxes of faces in the webcam feed and predict
the emotion.
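The train-then-save cycle can be shown in miniature. This sketch substitutes random arrays for the real Fer2013 generators and uses a deliberately tiny stand-in network, so it runs quickly; the file name `model.weights.h5` is our choice, not the report's:

```python
import os
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Tiny stand-in CNN with the same input/output contract as the full
# model: 48x48 grayscale in, 7-way softmax out.
model = Sequential([
    Conv2D(8, (3, 3), activation="relu", input_shape=(48, 48, 1)),
    MaxPooling2D(2, 2),
    Flatten(),
    Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Random stand-ins for Fer2013 batches: 16 grayscale images with
# one-hot labels over the seven classes.
x = np.random.rand(16, 48, 48, 1).astype("float32")
y = np.eye(7)[np.random.randint(0, 7, size=16)]

model.fit(x, y, epochs=1, batch_size=8, verbose=0)

# Save the learned weights for later predictions; an identical
# architecture can restore them with model.load_weights(...).
weights_path = "model.weights.h5"
model.save_weights(weights_path)
```

In the real project the `fit` call would consume the Fer2013 generators over many epochs instead of one pass over random data.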
Train the model and save the weights.
Predicting Emotions.
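The final step maps the network's softmax output to an emoji. In the full app, OpenCV's Haar cascade (`cv2.CascadeClassifier` with `haarcascade_frontalface_default.xml`) first crops the face from the webcam frame; the sketch below shows only the mapping step, and the `emojis/` folder and file names are assumptions for illustration:

```python
import numpy as np

# Label order follows the Fer2013 convention used in this report.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def emoji_for(probabilities):
    """Map a 7-way softmax vector to (emotion label, emoji file path).

    In the full pipeline `probabilities` would be
    model.predict(face_batch)[0] for a cropped 48x48 face.
    """
    idx = int(np.argmax(probabilities))
    label = EMOTIONS[idx]
    return label, f"emojis/{label.lower()}.png"

label, path = emoji_for([0.05, 0.01, 0.04, 0.80, 0.03, 0.04, 0.03])
# label == "Happy", path == "emojis/happy.png"
```

Keeping the mapping in one place makes it easy to swap in a different emoji set without touching the model code.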
Final Output.
Here are a few screenshots of how this project looks. The output covers facial
emotions in the following categories: 0: Sad, 1: Surprise, 2: Fear, 3: Happy, 4: Neutral.
BLOCK DIAGRAM
Developing GUI and Mapping with emojis
Create a folder named emojis and save the emojis corresponding to each of
the seven emotions in the dataset. The trained model is tested on a set of images.
Random images are introduced to the network and the output label is compared to
the original known label of the image. Parameters used for evaluation are F1 score,
precision and recall. Precision is the proportion of predicted positives that are truly
positives.
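The precision definition above extends naturally to recall and F1, computed per emotion class. A minimal sketch (the function name is ours) treating one class as "positive" and all others as "negative":

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class metrics for one emotion label.

    precision = TP / (TP + FP): fraction of predicted positives that
    are truly positive. recall = TP / (TP + FN): fraction of actual
    positives that were found. F1 is their harmonic mean.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: evaluating the "Happy" class (label 3) on four test images.
metrics = precision_recall_f1([3, 3, 0, 3], [3, 0, 0, 3], positive=3)
```

Averaging these per-class scores across all seven labels gives a fairer summary than raw accuracy when the classes are imbalanced, as they are in Fer2013.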
Testing
Loss and Accuracy Plots
TOOLS USED
We have used various data science libraries such as Keras, TensorFlow, OpenCV,
and NumPy.
To build the Keras model, we used the Sequential modelling approach.
VS Code and the Anaconda Prompt were used as the common development
platform.
CONCLUSION
Today, people love communicating with non-verbal cues such as emoticons, so
we thought: why not bring out our own emojis? With advancements in computer
vision and deep learning, we are now able to detect human emotions from images.
In this deep learning project, we classify human facial expressions to filter and
map corresponding emojis or avatars. The result we anticipate is the use of
Emojify in the chatting world. We want people to chat with their own
customisable emoticon. The project will recognise one's current emotion and
convert it into that emotion's emoji, so that users receive an emoji of their own
face and can use it while chatting.