SAPTHAGIRI COLLEGE OF ENGINEERING
(AFFILIATED TO VISVESVARAYA TECHNOLOGICAL UNIVERSITY, BELAGAVI. APPROVED BY AICTE, NEW DELHI)
(Accredited by NAAC with “A” Grade) (Accredited by NBA for ECE, CSE, ISE, ME, EEE)
An ISO 9001:2015 & ISO 14001:2015 Certified Institution,
14/5, Chikkasandra, Hesaragatta Main Road, Bengaluru-560057
DEPARTMENT OF INFORMATION SCIENCE AND ENGINEERING
VATHSALYA B USN -1SG22IS120
SAHANA USN- 1SG22IS091
YASHASWINI M USN- 1SG22IS124
Under the guidance of
Prof. MANASA P M
Dept. of ISE
Project Phase-1 Review-1 Presentation
On
Exploring Human Emotions through EEG and Deep Learning Methods
Abstract
Emotion recognition plays a vital role in healthcare, human-computer interaction,
and mental health monitoring. Traditional methods relying on facial expressions
and text often fail to reflect true emotional states. This study proposes a deep
learning-based approach using electroencephalography (EEG) signals to classify
emotions accurately and objectively. EEG data is preprocessed using a Butterworth
bandpass filter, followed by feature extraction using Power Spectral Density (PSD),
Spectral Entropy (SE), and Hjorth parameters. Dimensionality reduction is
achieved via Principal Component Analysis (PCA). Two deep learning models—
Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU)—are
developed and trained on labeled EEG data. The proposed LSTM and GRU models
achieve high classification accuracies of 97% and 96%, respectively, outperforming
traditional machine learning models. This work demonstrates the potential of EEG-
based systems for real-time emotion detection and paves the way for future
applications in affective computing and mental health assessment.
Problem Statement
Traditional emotion recognition methods—such as facial expression
analysis, speech tone evaluation, and textual sentiment analysis—often
fail to reliably detect genuine emotional states due to subjectivity, social
masking, and external variability. These approaches lack the ability to
capture internal emotional experiences, especially in real-time or clinical
settings. Furthermore, conventional machine learning techniques applied
to EEG data require manual feature extraction and often miss complex
patterns, limiting their effectiveness.
There is a pressing need for an accurate, real-time, and objective
emotion recognition system that can overcome these limitations. This
study addresses the problem by leveraging EEG signals and deep
learning models (LSTM and GRU) to build a robust emotion
classification framework capable of recognizing true emotional states
from brainwave patterns.
Objective
The primary objective of this study is to develop an accurate and efficient emotion
recognition system using EEG brain signals, leveraging deep learning models—
specifically Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU)
networks. The research aims to:
• Explore the potential of EEG as a non-invasive and reliable method for detecting
real-time human emotions.
• Extract meaningful features from raw EEG data using techniques such as Power
Spectral Density (PSD), Spectral Entropy (SE), and Hjorth parameters.
• Reduce dimensionality using Principal Component Analysis (PCA) for optimal
model performance.
• Train and evaluate both traditional machine learning models and deep learning
architectures to classify emotional states (positive, negative, neutral) with high
accuracy.
• Demonstrate that deep learning models can outperform traditional approaches in
handling the temporal dynamics of EEG data for emotion classification.
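The PCA step mentioned in the objectives can be sketched in a few lines. This is an illustrative NumPy implementation via SVD (function name `pca_reduce` and the 12-feature example are assumptions, not taken from the project's actual code):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project X (n_samples, n_features) onto its top principal components."""
    Xc = X - X.mean(axis=0)                # center each feature column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # scores in the reduced space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 12))             # e.g. 12 EEG features per epoch
Z = pca_reduce(X, n_components=4)          # 12 features reduced to 4
```

By construction the retained components are ordered by explained variance, so the first column of `Z` carries the most information.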
Introduction
Emotions are fundamental to human experience, influencing
decision-making, communication, and behavior. Recognizing
emotions accurately is essential in domains like healthcare,
human-computer interaction, education, and mental health.
Traditional emotion recognition systems primarily rely on non-
physiological cues such as facial expressions, speech, or textual
input. However, these methods can be misleading due to
voluntary masking, cultural variations, or social pressures,
especially in modern digital settings where individuals may
exhibit fake emotions.
To overcome these limitations, Electroencephalography (EEG)
has emerged as a promising alternative. EEG provides a non-
invasive and real-time means of measuring brain activity,
making it an objective source for emotion detection. Brainwaves
captured via EEG can be directly associated with emotional
states, offering deeper insights into internal experiences beyond
external behavior.
LITERATURE REVIEW
1. Deep Learning with Convolutional Neural Networks for EEG Emotion Recognition
   Year/Authors: 2017 / Yang Li, Dongrui Wu
   Description and methodology: Used a CNN to extract spatial features from EEG data in the DEAP dataset. Preprocessing involved frequency-band decomposition and normalization.
   Observations/limitations: Achieved higher accuracy than traditional ML methods; CNNs can capture spatial dependencies in EEG signals.

2. EEG-Based Emotion Recognition Using Deep Autoencoders
   Year/Authors: 2019 / Zheng et al.
   Description and methodology: Employed stacked autoencoders (SAEs) for deep feature extraction from EEG data; evaluated performance on the SEED dataset.
   Observations/limitations: SAEs achieved better classification performance than handcrafted features; the model may not perform well on real-time data, and the deep architecture carries overfitting potential.

3. Emotion Recognition Using Hybrid Deep Neural Network with EEG Signals
   Year/Authors: 2020 / Tuncer et al.
   Description and methodology: Proposed a hybrid CNN + LSTM architecture to capture both spatial and temporal features from EEG signals.
   Observations/limitations: Outperformed single-network models; performance depends heavily on tuning.

4. Transformer-Based Framework for EEG Emotion Recognition
   Year/Authors: 2023 / Lin et al.
   Description and methodology: Applied a transformer architecture to model long-range dependencies in EEG sequences.
   Observations/limitations: Showed superior performance over RNN-based models in capturing global temporal context; transformers require large datasets.
SYSTEM DESIGN AND BLOCK DIAGRAM
1. Data Collection:
The Emotion EEG dataset from Kaggle includes EEG recordings of brain activity
while participants watched emotional videos. The dataset contains 15 EEG channels and
labels for six emotions: happiness, sadness, anger, fear, surprise, and neutral.
2. Data Preprocessing:
The data is preprocessed by:
• Handling missing values (imputation/removal).
• Normalizing feature values.
• Splitting the data into training and testing sets.
Raw EEG signals are time-series data, which will be transformed into meaningful
features for model training.
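The abstract specifies a Butterworth bandpass filter for this stage. The sketch below applies a zero-phase Butterworth bandpass with SciPy followed by z-score normalization; the 128 Hz sampling rate and the 1–45 Hz passband are assumptions for illustration, not values stated in the slides:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 128.0  # assumed sampling rate (Hz); dataset-dependent

def bandpass(sig, low=1.0, high=45.0, order=4):
    """Zero-phase Butterworth bandpass keeping an assumed 1-45 Hz EEG band."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)          # filtfilt avoids phase distortion

t = np.arange(0, 4, 1 / fs)
# 10 Hz in-band component plus 60 Hz line noise
sig = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = bandpass(sig)
# z-score normalization, matching the normalization step above
clean = (clean - clean.mean()) / clean.std()
```

The 60 Hz line-noise component lies outside the passband and is strongly attenuated, while the 10 Hz component passes through unchanged.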
3. Feature Extraction:
EEG signals are converted into features such as:
• Time-domain features: mean, variance, etc.
• Frequency-domain features: power spectral density.
• Time-frequency analysis: wavelets or the Short-Time Fourier Transform (STFT).
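The PSD, spectral-entropy, and Hjorth features named in the abstract can each be computed in a few lines. This is an illustrative sketch (function names, band edges, and the 128 Hz sampling rate are assumptions):

```python
import numpy as np
from scipy.signal import welch

fs = 128.0  # assumed sampling rate (Hz)

def psd_features(sig):
    """Band powers from the Welch PSD for three classic EEG bands."""
    f, pxx = welch(sig, fs=fs, nperseg=256)
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    return {name: pxx[(f >= lo) & (f < hi)].sum() for name, (lo, hi) in bands.items()}

def spectral_entropy(sig):
    """Shannon entropy of the normalized PSD, scaled to [0, 1]."""
    _, pxx = welch(sig, fs=fs, nperseg=256)
    p = pxx / pxx.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum() / np.log2(len(p))

def hjorth(sig):
    """Hjorth activity, mobility, and complexity of a 1-D signal."""
    d1 = np.diff(sig)
    d2 = np.diff(d1)
    activity = np.var(sig)                       # signal power
    mobility = np.sqrt(np.var(d1) / activity)    # mean frequency proxy
    complexity = np.sqrt(np.var(d2) / np.var(d1)) / mobility
    return activity, mobility, complexity

rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / fs)
alpha_wave = np.sin(2 * np.pi * 10 * t)          # pure 10 Hz (alpha-band) signal
noise = rng.normal(size=t.size)                  # broadband white noise
```

A pure 10 Hz sine concentrates its power in the alpha band and has low spectral entropy, whereas white noise spreads power across the spectrum and scores near 1.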
4. Model Training:
Deep learning models, such as CNN or RNN, will be trained using the preprocessed
and feature-extracted data to classify emotions.
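In practice the LSTM/GRU models named in the abstract would be built with a deep-learning framework; purely as an illustration of the recurrence such a model learns, here is a single LSTM step in NumPy run over a sequence of feature vectors (weights are random and all names and sizes are hypothetical, not the project's trained model):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,)."""
    H = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell state
    c_new = f * c + i * g        # cell state carries long-term memory
    h_new = o * np.tanh(c_new)   # hidden state is the step's output
    return h_new, c_new

rng = np.random.default_rng(0)
D, H, T = 8, 16, 20              # feature dim, hidden size, sequence length
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(T, D)):  # walk the EEG feature sequence in order
    h, c = lstm_step(x, h, c, W, U, b)
```

The gating structure is what lets the network retain or discard information across time steps, which is why LSTMs suit the temporal dynamics of EEG.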
5. Model Evaluation:
The model's performance will be evaluated on the test dataset using metrics such as
accuracy, precision, recall, F1-score, and the confusion matrix.
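Standard classification metrics for this evaluation step can be computed directly from a confusion matrix; a minimal sketch (the function name and toy labels are illustrative):

```python
import numpy as np

def evaluation_metrics(y_true, y_pred, n_classes):
    """Accuracy and per-class precision/recall from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                      # rows: true class, cols: predicted
    accuracy = np.trace(cm) / cm.sum()
    precision = np.diag(cm) / np.maximum(cm.sum(axis=0), 1)
    recall = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)
    return accuracy, precision, recall

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
acc, prec, rec = evaluation_metrics(y_true, y_pred, n_classes=3)  # acc = 4/6
```

The per-class breakdown matters here because a model can score high overall accuracy while confusing two specific emotions.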
Conclusion
The EEG-LSTM framework effectively classifies human
emotions by leveraging brainwave data. Through careful
preprocessing, meaningful feature extraction (such as PSD, spectral
entropy, and Hjorth parameters), and the application of LSTM
networks, the system captures temporal patterns in EEG signals.
This approach provides a robust and accurate method for emotion
recognition, demonstrating the potential of combining deep learning
with neurophysiological data for human-centered applications.