International Journal of Electrical and Computer Engineering (IJECE)
Vol. 9, No. 2, April 2019, pp. 1012~1020
ISSN: 2088-8708, DOI: 10.11591/ijece.v9i2.pp1012-1020
Journal homepage: http://iaescore.com/journals/index.php/IJECE
Recognition of emotional states using EEG signals
based on time-frequency analysis and SVM classifier
Fabian Parsia George, Istiaque Mannafee Shaikat, Prommy Sultana Ferdawoos,
Mohammad Zavid Parvez, Jia Uddin
Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh
Article Info

Article history:
Received Apr 17, 2018
Revised Nov 5, 2018
Accepted Dec 3, 2018

ABSTRACT
Emotion recognition is a highly significant and rapidly developing field of research,
and its applications have left an exceptional mark in various fields, including education
and research. Traditional approaches use facial expressions or voice intonation to detect
emotions; however, facial gestures and spoken language can lead to biased and ambiguous
results. For this reason, researchers have turned to the electroencephalogram (EEG), a
well-defined technique for emotion recognition. Some existing approaches rely on standard,
pre-defined signal-processing methods, while others record EEG signals from only a few
channels or a few subjects. This paper proposes an emotion detection method based on
time-frequency domain statistical features. A box-and-whisker plot is used to select the
optimal features, which are then fed to an SVM classifier for training and testing on the
DEAP dataset, in which 32 participants of different genders and age groups are considered.
The experimental results show that the proposed method achieves 92.36% accuracy on the
tested dataset and outperforms state-of-the-art methods.
Keywords:
Accuracy
EEG signal
Emotions
Features
FFT
Frequency bands
SVM
Copyright © 2019 Institute of Advanced Engineering and Science.
All rights reserved.
Corresponding Author:
Jia Uddin,
Department of Computer Science and Engineering,
BRAC University, 66 Mohakhali, Dhaka-1212, Bangladesh.
Email: jia.uddin@bracu.ac.bd
1. INTRODUCTION
Emotion, a vital non-verbal channel of social interaction, is a psychological and mental state
that prompts us to react effectively to a situation based on past experience [1]. Emotion
recognition can be applied to various fields such as education, cognitive science,
entertainment, machine learning, self-control and security, biomedical engineering, marketing,
and production. According to Russell's Circumplex Model of Emotion, any discrete emotion can be
deduced from its level of arousal and valence [2].
Many existing methods used facial expressions, speech signals, and self-ratings to classify
emotions [3], [4]. However, the systems used in these methods usually fail to capture all the
detailed emotional cues, such as hand gestures or tone of voice, and thus lead to vague and
biased outcomes [3]. Some approaches relied on subjective measurements, which can distort the
end result because anomalous trials may be frequent [4]. With the growing influence of the
electroencephalogram (EEG) in this field, it was observed that human emotion can be represented
more accurately with EEG signals than with facial gestures, speech signals, or self-reported
information [5]. Emotional activity causes the brain to generate signals in the form of waves,
and the technique used to record these signals is known as EEG [5]. Predicting emotions using
EEG signals was first introduced by Musha et al. [6]. Since then, interest in this area has
grown steadily, driven by machine learning applications and the availability of less expensive
EEG measuring devices. In numerous investigations, experiments were restricted to very few
subjects, and methods developed for a single subject generalized poorly and could not be applied
broadly to multiple subjects [7]. Many studies were limited to a few channels in order to avoid
high complexity and computational cost; however, this can lead to weak results due to a lack of
adequate information [8]. For feature extraction, previous studies used methods including Sample
Entropy [9], the Autoregressive (AR) model [10], the Discrete Wavelet Transform (DWT) [11], and
the Fast Fourier Transform (FFT) [12]. The extracted features were classified using various
classifiers, such as Support Vector Machines (SVM) [13], [14], Neural Networks [15], and
k-nearest neighbors (KNN) [16].
To address the issues outlined above [3], [4], [7], [8], a novel emotion recognition
methodology based on time-frequency analysis is proposed and evaluated on EEG signals from the
DEAP dataset [17]. In the proposed model, only the EEG signals from the prefrontal cortex are
initially retrieved, since emotional activity occurs mainly in the frontal and temporal lobes
of the brain [18]. This also reduces the number of channels used in feature extraction, because
irrelevant channels are discarded beforehand, which lowers the computational cost and increases
the efficiency of the algorithms used in our technique [19]. Existing methods working with
frequency bands extracted all five band types, namely delta (0.5-4 Hz), theta (4-7 Hz),
alpha (7-13 Hz), beta (13-30 Hz), and gamma (30-60 Hz) [20]. According to state-of-the-art
methods, the emotional and cognitive activities of the brain are well represented by the alpha,
beta, and theta bands [8], [21]; therefore, only these bands are considered in this paper. The
dataset is divided into four emotional quadrants: high arousal - high valence (HAHV), low
arousal - high valence (LAHV), low arousal - low valence (LALV), and high arousal - low valence
(HALV). The data in each quadrant were then averaged over all participants for the specific
emotion. The reason for averaging the samples by quadrant is the inconsistency of the emotions
felt by the participants: not every participant feels the same emotion for a particular video,
which means that some samples in the EEG signals are anomalous and can greatly affect the end
result. The premise of averaging is to reduce data deviation and to approach the true value
statistically, which should improve the accuracy of the classification process. Finally, the
statistical features extracted in the frequency domain are fed into the SVM classifier to
classify the emotions.
The subsequent sections of the paper have been organized as follows. Section 2 introduces the
dataset used in this paper, as well as the proposed approach for recognizing emotion. Section 3 provides the
experimental results along with their analysis. Finally, Section 4 concludes the paper.
2. PROPOSED METHOD
The proposed model, illustrated in Figure 1, represents the overall flow of our work. The EEG
signals were first collected and then preprocessed. Next, bands of specific frequencies were
extracted from the preprocessed data. Subsequently, suitable features were extracted and
selected to be fed into the classifier. Finally, an SVM classifier was used to classify the
selected features.
Figure 1. Workflow of the proposed method
2.1. Data description
In our research, we used the DEAP dataset [17] as the source of brain signals. It is a
multimodal dataset that can be used to analyze human affective states. The data were collected
in a controlled lighting environment, and a Biosemi ActiveTwo system was used to record the EEG
signals of each participant. Two computers, periodically synchronized with markers, were used:
one for recording the signals and another for presenting the stimuli. The experiment used 40
one-minute music videos displayed on a 17-inch screen at an 800x600 resolution, chosen to minimize
the eye movements. The EEG signals were recorded from 32 AgCl electrodes at a sampling rate of
512 Hz. 16 male and 16 female participants, aged 19 to 37, were chosen for the experiment.
After 20 video trials, the subjects could take a break for snacks. Each trial lasted 63
seconds: a 60-second video preceded by a 3-second pre-trial baseline. At the end of each video
trial, the subjects manually rated the trial on a scale of 1-9 along four dimensions (arousal,
dominance, liking, and valence).
2.2. Signal preprocessing
The dataset was downsampled into 128 Hz and then Electrooculography (EOG) artifacts were
removed due to eye movements. The signals were then filtered with a minimum of 4 Hz and a maximum of
45 Hz using a band-pass filter. To create a common reference, the data were averaged. Later on, the data was
segmented into 60 seconds by removing the 3 seconds pre-trial baseline and was arranged in Experiment_id
order.
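These steps are already applied in the distributed DEAP files, but a minimal sketch of an equivalent pipeline is given below for orientation. The function name, the filter order, and the omission of the EOG-removal step are our own assumptions, not the DEAP toolchain.

```python
# Illustrative sketch of the preprocessing described above (downsample 512 -> 128 Hz,
# 4-45 Hz band-pass, common average reference). EOG removal is omitted here.
import numpy as np
from scipy.signal import butter, filtfilt, decimate

def preprocess(raw, fs_in=512, fs_out=128, band=(4.0, 45.0)):
    """raw: (n_channels, n_samples) EEG recorded at fs_in Hz."""
    # 1) Downsample with an anti-aliasing filter (factor 512 / 128 = 4).
    x = decimate(raw, q=fs_in // fs_out, axis=1, zero_phase=True)
    # 2) Band-pass filter between 4 and 45 Hz (order 4 is an assumption).
    b, a = butter(4, [band[0] / (fs_out / 2), band[1] / (fs_out / 2)], btype="band")
    x = filtfilt(b, a, x, axis=1)
    # 3) Re-reference every channel to the common average.
    return x - x.mean(axis=0, keepdims=True)
```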
For our research, we chose the pre-processed data files containing the EEG signals of each
participant. Each participant's file holds two arrays, data and label, as described in Table 1.
Table 1. Contents of Each Participant File
Name Size Description
Data 40×40×8064 Video/trial×Channel×Data
Label 40×4 Video/trial×Label (Valence, Arousal, Dominance, Liking)
The data array contains the EEG signals of all videos for a participant, whereas the label
array contains the corresponding rating of each video along the four label dimensions
(valence, arousal, dominance, and liking).
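For reference, a minimal sketch of loading one participant file is shown below. It assumes the publicly released "data_preprocessed_python" pickle files; the path and the key names ('data', 'labels') reflect that release and may differ for other formats of the dataset.

```python
# Minimal sketch: load one participant from the DEAP preprocessed Python release.
import pickle

with open("data_preprocessed_python/s01.dat", "rb") as f:
    subject = pickle.load(f, encoding="latin1")  # dict with 'data' and 'labels'

data = subject["data"]      # shape (40, 40, 8064): trial x channel x sample
labels = subject["labels"]  # shape (40, 4): valence, arousal, dominance, liking
print(data.shape, labels.shape)
```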
2.3. Signal refining
As the first step towards band extraction, the data were rearranged into a form suitable for
the extraction process. The preprocessed DEAP data used in our work contain 32 files, one per
participant. Each file contains two arrays: a 3D array named Data of size 40x40x8064, and a 2D
array named Label of size 40x4. For our study, the 3D data array was used throughout. A total
of 40 channels were used to record the signals, of which 32 were EEG channels and 8 were
peripheral channels. Previous studies showed that information related to emotion is
concentrated mostly in the frontal and temporal areas of the brain [18]. To decrease the
computational cost of our proposed method, we therefore worked only with the channels related
to the frontal lobe: Fp1, F3, F7, FC5, FC1, Fp2, Fz, F4, F8, FC6, and FC2. Classification and
feature extraction directly from the 3D array were laborious, as the data were hard to
manipulate as required. For this reason, the preprocessed data were reorganized into 40 files,
one for each music video used in the DEAP dataset. Each video file contains an array of size
8064x352, where the rows represent the data length and the columns represent the selected
channels of all 32 participants, as described in Table 2.
Table 2. Array Representation of the Video File
Array Name Array Size (Row × Column) Array Contents (Row × Column)
Video_no 8064 × 352 Data × subjectNo_channelNo
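A sketch of this rearrangement is given below: the 11 frontal channels are selected and the data of all 32 participants are stacked per video into an 8064x352 array. The channel-index map follows the commonly documented 32-channel DEAP montage order, and the helper name is our own; both are assumptions.

```python
# Sketch: keep only the 11 frontal channels and regroup the data per video,
# producing one (8064 x 352) array per video (32 subjects x 11 channels).
import pickle
import numpy as np

FRONTAL = ["Fp1", "F3", "F7", "FC5", "FC1", "Fp2", "Fz", "F4", "F8", "FC6", "FC2"]
# Positions of these channels in the 32-channel DEAP montage (assumed order).
CHANNEL_INDEX = {"Fp1": 0, "F3": 2, "F7": 3, "FC5": 4, "FC1": 5,
                 "Fp2": 16, "Fz": 18, "F4": 19, "F8": 20, "FC6": 21, "FC2": 22}

def per_video_matrix(video_idx, n_subjects=32):
    """Return an (8064, 11 * n_subjects) array for one video/trial."""
    columns = []
    for s in range(1, n_subjects + 1):
        with open(f"data_preprocessed_python/s{s:02d}.dat", "rb") as f:
            subj = pickle.load(f, encoding="latin1")
        trial = subj["data"][video_idx]                          # (40, 8064)
        picked = trial[[CHANNEL_INDEX[c] for c in FRONTAL], :]   # (11, 8064)
        columns.append(picked.T)                                 # (8064, 11)
    return np.hstack(columns)                                    # (8064, 352)
```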
2.4. Band extraction
The EEG signals used in our research are in the time domain. Previous research has demonstrated
that, to recognize emotional activity with better accuracy, features should be extracted in the
frequency domain [22]. This is done by applying the FFT, an algorithm that converts a signal
from the time domain to the frequency domain.
For an input sequence $X$ of length $n$, the FFT computes the discrete Fourier transform $Y$ defined as

$$Y_k = \sum_{j=0}^{n-1} X_j \, \omega_n^{jk}, \qquad k = 0, 1, \ldots, n-1 \quad (1)$$

where $\omega_n = e^{-2\pi i/n}$ is one of the $n$ complex roots of unity and $i$ is the imaginary unit.
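As a sanity check, the sum in (1) can be evaluated directly and compared against a library FFT; the snippet below is illustrative only.

```python
# Direct evaluation of (1) versus numpy's FFT: Y_k = sum_j x_j * w^(j*k),
# with w = exp(-2*pi*i/n).
import numpy as np

x = np.random.randn(8)
n = len(x)
w = np.exp(-2j * np.pi / n)
j, k = np.meshgrid(np.arange(n), np.arange(n))
Y_direct = (x * w ** (j * k)).sum(axis=1)    # direct sum over j for each k

assert np.allclose(Y_direct, np.fft.fft(x))  # matches the library FFT
```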
As discussed earlier, emotional activity causes the brain to generate signals in the form of
waves. These signals, which can be subdivided into five frequency bands (delta, theta, alpha,
beta, and gamma), are correlated with emotional activity [21]. As stated in [21], the alpha,
beta, and theta bands represent the emotional and cognitive processes of the brain better than
the other two bands. For this reason, we extracted these three bands using a Butterworth
band-pass filter after applying the FFT to the EEG signals.
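A minimal sketch of this band extraction at the 128 Hz sampling rate is shown below. The band edges follow the ranges quoted above; the filter order is an assumption.

```python
# Sketch: isolate the theta, alpha, and beta bands from a 128 Hz EEG channel
# with Butterworth band-pass filters.
from scipy.signal import butter, filtfilt

FS = 128.0
BANDS = {"theta": (4.0, 7.0), "alpha": (7.0, 13.0), "beta": (13.0, 30.0)}

def extract_bands(signal, fs=FS, order=4):
    """signal: 1-D array of samples. Returns {band_name: band-limited signal}."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out[name] = filtfilt(b, a, signal)
    return out
```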
2.5. Feature extraction
The experimenters in [17] also provided information on the emotion each video is expected to
elicit, so that each video can be placed in one of the four emotional quadrants (HAHV, LAHV,
LALV, and HALV) shown in Figure 2. Individuals can respond differently to the same video, which
can introduce irregular samples into the EEG signals; these erroneous samples need to be
suppressed within each quadrant to minimize the inconsistency in the data. In our research, to
reduce the data deviation, the extracted band values of the videos were therefore averaged
according to their corresponding quadrant and emotion, as illustrated in Table 3.
Figure 2. Four quadrants of emotion
Table 3. List of 40 Videos (Denoted by the Experiment_Id) and their Corresponding Quadrant
HAHV LAHV LALV HALV
1 8 16 10
2 9 22 21
3 12 23 31
4 13 24 32
5 14 25 33
6 15 26 34
7 17 27 35
11 18 28 36
- 19 29 37
- 20 30 38
After sorting the videos according to their quadrant and averaging the bands of all videos in
each quadrant, four video files were created containing only the averaged values of the
extracted bands. These band values were then scaled so that the SVM classifier would not be
influenced by large band values. Once the band values were scaled, the features of the input
signals were extracted. For our work, we extracted the statistical features minimum, maximum,
mean, variance, standard deviation, wave entropy, power bandwidth, skewness, and kurtosis,
grouped by location or central tendency (Statistical Features I), dispersion or spread
(Statistical Features II), and shape of the distribution (Statistical Features III), as
illustrated in Table 4.
Table 4. Types of Features Extracted For Classification
Statistical Features I Statistical Features II Statistical Features III
Minimum Variance Skewness
Maximum Standard deviation Kurtosis
Mean Wave entropy -
- Power bandwidth -
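A sketch of computing the Table 4 feature set for one band-limited signal is given below. "Wave entropy" and "power bandwidth" admit several common definitions; here we use the Shannon entropy of the normalized power spectrum and the frequency span containing the central 95% of spectral power, both of which are assumptions rather than the exact functions used by the authors.

```python
# Sketch of the statistical feature set (Table 4) for one band-limited signal.
import numpy as np
from scipy.stats import skew, kurtosis

def statistical_features(x, fs=128.0):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    p = spec / spec.sum()                          # normalized power spectrum
    wave_entropy = -np.sum(p * np.log2(p + 1e-12)) # spectral (Shannon) entropy
    cum = np.cumsum(p)
    power_bw = freqs[np.searchsorted(cum, 0.975)] - freqs[np.searchsorted(cum, 0.025)]
    return {
        "min": x.min(), "max": x.max(), "mean": x.mean(),         # Features I
        "variance": x.var(), "std": x.std(),                      # Features II
        "wave_entropy": wave_entropy, "power_bandwidth": power_bw,
        "skewness": skew(x), "kurtosis": kurtosis(x),             # Features III
    }
```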
2.6. Emotion classification
Numerous machine learning algorithms have been used in existing studies, and among them the SVM
is regarded as one of the most efficient classifiers for emotion classification [8], [9]. The
basic idea of the SVM is to determine a decision hyperplane that separates the data samples
into two classes. The optimum hyperplane for differentiating the two groups is the one that
maximizes the distance between the hyperplane and the nearest data points of both classes [23].
The classification procedure builds a confusion matrix model by partitioning the sample data
into training and test sets, for training and validation respectively, using k-fold
cross-validation [24]. This technique randomly divides the data into k equal subsets and the
procedure is repeated k times (here, ten times); each time, one of the k subsets is used as the
test set and the remaining k-1 subsets together form the training set [24].
In our paper, different combinations of features were used to train and test the SVM classifier
and generate the confusion matrix model, which was then used to compute the accuracy under
k-fold cross-validation. Here, we combined the SVM with 10-fold cross-validation, with the
kernel and regularization parameters selected by grid search. The SVM was implemented with
LIBSVM, a widely used library for support vector machines [25].
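The paper uses LIBSVM directly; as an illustration, the sketch below uses scikit-learn's SVC (which wraps LIBSVM) with a grid search over the kernel and the regularization parameter C, scored by 10-fold cross-validation. The candidate grid values and the feature-scaling step are assumptions.

```python
# Sketch of the classification stage: SVM with kernel and C chosen by grid
# search, evaluated with 10-fold cross-validation.
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV, cross_val_score

def fit_and_score(X, y):
    """X: (n_samples, n_features) feature matrix, y: emotion labels."""
    pipe = make_pipeline(StandardScaler(), SVC())
    grid = GridSearchCV(
        pipe,
        param_grid={"svc__kernel": ["linear", "rbf"],
                    "svc__C": [0.1, 1, 10, 100]},   # assumed candidate values
        cv=10,
    )
    grid.fit(X, y)
    scores = cross_val_score(grid.best_estimator_, X, y, cv=10)
    return grid.best_params_, scores.mean()
```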
3. RESULTS AND ANALYSIS
Classifying the statistical features to obtain a satisfactory outcome was not straightforward,
and several aspects had to be considered before reaching a conclusion, as the initial trials
did not produce satisfying results. In this paper, 10-fold cross-validation was combined with
the SVM classifier using regularization and kernel parameters selected via grid search. The
classification accuracy under k-fold cross-validation was computed with (2).
The accuracy is defined as

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (2)$$

where TP is the number of true positives, TN the number of true negatives, FP the number of
false positives, and FN the number of false negatives.
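Equation (2) corresponds to the small helper below, shown only to make the convention explicit; the [[TN, FP], [FN, TP]] layout follows scikit-learn's confusion_matrix and is an assumption about how the matrix is stored.

```python
# Equation (2) computed from a 2x2 confusion matrix stored as [[TN, FP], [FN, TP]].
import numpy as np

def accuracy_from_confusion(cm):
    cm = np.asarray(cm, dtype=float)
    return np.trace(cm) / cm.sum()  # (TP + TN) / (TP + TN + FP + FN)
```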
Initially, before averaging the extracted band values according to their corresponding
quadrants, all the statistical features from Table 4 were fed into the SVM classifier at once.
They were used to train and test the SVM in order to construct the confusion matrix model,
which was then used to determine the accuracy under 10-fold cross-validation. However, this
first trial did not provide the expected output, since the accuracy was only 2.03%. This
occurred because, before the data are averaged, the features do not contain distinguishable
characteristics, as can be seen from Figure 3. A box-and-whisker plot was used to graphically
represent each feature through its quartiles, as shown in Figure 3(a).
Figure 3. Boxplot of the statistical features: (a) before averaging, (b) after averaging the video data
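A sketch of how such a screening plot can be produced is given below; the dictionary layout of the feature table is an assumed structure, not the authors' code.

```python
# Sketch of the box-and-whisker screening: one box per feature, drawn from the
# scaled feature values across samples.
import matplotlib.pyplot as plt

def screen_features(feature_table):
    """feature_table: {feature_name: 1-D array of values across samples}."""
    names = list(feature_table)
    plt.boxplot([feature_table[n] for n in names], labels=names)
    plt.xticks(rotation=45, ha="right")
    plt.ylabel("Scaled feature value")
    plt.tight_layout()
    plt.show()
```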
Afterward, we averaged the band values according to their quadrant to reduce the data deviation
and improve the accuracy of the classification process. Once the data were averaged, they were
scaled and the statistical features were extracted again, as discussed in Section 2.5. The
features were once more fed into the classifier all at once. This time there was a significant
improvement, with the classification accuracy rising from 2.03% to 32.14%.
The accuracy before and after averaging the data, with all features used, is summarized in
Table 5. It can be observed from the table that the features do not offer a satisfactory
outcome prior to averaging because not all participants feel the same emotion for a specific
category of video; hence, the SVM classifier could not build the expected model due to the
irregularity of the data in each quadrant.
Table 5. Accuracy for Classification before and after Averaging the Video Data
State of the Data Accuracy (%)
Before averaging the data 2.03
After averaging the data 32.14
Even though the classification accuracy improved after averaging the band values, it still fell
short of the expected output. We therefore reconstructed the boxplot, shown in Figure 3(b), to
analyze each feature after the data were averaged according to their corresponding quadrants.
It can be observed from the figure that the features skewness, kurtosis, and wave entropy can
be easily distinguished from each other, as their data do not overlap and deviate significantly.
The situation is similar for skewness and power bandwidth, which also show clearly
distinguishable characteristics. On the other hand, the features mean, variance, and standard
deviation deviate somewhat from each other, but most of their data still overlap. Furthermore,
the difference among the features minimum, maximum, and variance is not prominent, as the
deviation in their data is very small.
Based on the above observations, we segregated the features into the following combinations (a sketch for scoring each combination follows the list):
1) Feature combination A: mean, standard deviation, variance.
2) Feature combination B: skewness, kurtosis, wave entropy.
3) Feature combination C: minimum, maximum, variance.
4) Feature combination D: skewness, power bandwidth.
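The sketch below shows one way these combinations could be scored, reusing the fit_and_score helper from the Section 2.6 sketch; build_feature_matrix is a hypothetical helper for assembling the named feature columns and is not part of the paper.

```python
# Sketch: score each feature combination with the SVM pipeline of Section 2.6.
FEATURE_COMBINATIONS = {
    "A": ["mean", "std", "variance"],
    "B": ["skewness", "kurtosis", "wave_entropy"],
    "C": ["min", "max", "variance"],
    "D": ["skewness", "power_bandwidth"],
}

def score_combinations(samples, labels):
    results = {}
    for name, features in FEATURE_COMBINATIONS.items():
        X = build_feature_matrix(samples, features)  # hypothetical helper
        _, accuracy = fit_and_score(X, labels)       # sketch from Section 2.6
        results[name] = accuracy
    return results
```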
With these combinations, the classification accuracy improved significantly, as illustrated in
Table 6. Combination B provided the best result, with an accuracy of 92.36% for HAHV_LALV and
89.11% for HALV_LAHV, whereas combination C provided the worst result, with an accuracy of
11.23% for HAHV_LALV and 15.69% for HALV_LAHV.
Table 6. Accuracy of the Feature Combinations after Averaging the Video Data
Accuracy of Feature Combinations (%) HAHV_LALV HALV_LAHV
A 25.34±5.20 21.21±3.20
B 92.36±6.30 89.11±8.30
C 11.23±5.20 15.69±3.50
D 44.31±7.50 49.23±8.30
Feature combination B performs better because the samples of the features skewness, kurtosis,
and wave entropy can be easily distinguished from each other, as their data deviate
significantly (see Figure 3(b)). For the same reason, the accuracy of feature combination D was
also satisfactory. On the other hand, feature combinations A and C could not provide
satisfactory results due to the overlap and similarity of their data, which leads the SVM
classifier to a confusion matrix model with poor accuracy. From these results, we conclude that
the shape of the distribution represents the emotional activities of the brain well and
therefore yields better classification accuracy.
Since DEAP is a public dataset, we compared our proposed approach with existing methods that
have used the DEAP dataset for emotion recognition. Table 7 lists some of these methods,
together with the emotions identified, the features extracted, and the classification accuracy
of each method. It can be observed that most of the existing methods extracted more than one
type of feature, and a few identified more than two emotions. In our work, only statistical
features were considered for feature extraction and only two emotional dimensions, valence and
arousal, were identified. Furthermore, the table shows that the accuracy of the existing
methods stays below 80%, whereas our proposed approach achieved an accuracy of approximately
92.36%. It can therefore be said that our approach classifies emotions more effectively than
many of the existing approaches.
Table 7. Accuracy of Existing Approaches based on DEAP Dataset
Reference Emotions Features Accuracy (%)
[26] Valence and arousal DWT, WE, and Statistical 71.40
[27] Male/female valence and male/female arousal Statistical, Linear, and Non-statistical 78.60
[28] Stress and calm Statistical, PSD, and HOC 71.40
[29] Valence, arousal, and dominance Statistical and HFD 71.40
[30] Excitation, happiness, sadness, and hatred WT (db5), SE, CC, and AR 78.60
[31] Anger, surprise, and other HHS, HOC, and STFT 78.60
Proposed Approach Valence and arousal Statistical 92.36
4. CONCLUSION
In this paper, the preprocessed EEG signals from the DEAP dataset were used to classify two
emotional dimensions, valence and arousal. The samples were first transformed from the time
domain to the frequency domain by applying the FFT, followed by the extraction of the alpha,
beta, and theta frequency bands, which are particularly significant for emotion recognition.
The extracted bands were then averaged by quadrant for each emotion, and the averaged band
values were used to extract statistical features. The extracted features were scaled, and
various feature combinations were fed into the SVM classifier for emotion recognition. Our
approach predicts emotions with an accuracy of 92.36% using the skewness, kurtosis, and wave
entropy features, and shows better results than existing methods on the DEAP dataset.
REFERENCES
[1] I. B. Mauss and M. D. Robinson, “Measures of emotion: A review,” Cognition and Emotion, vol. 23, no. 2,
pp. 209–237, 2009.
[2] J. Posner, J. A. Russell, and B. S. Peterson, “The circumplex model of affect: An integrative approach to affective
neuroscience, cognitive development, and psychopathology,” Development and Psychopathology, vol. 17, no. 3,
2005.
[3] T. Musha, Y. Terasaki, H. A. Haque, and G. A. Ivamitsky, “Feature extraction from EEGs associated with
emotions,” Artificial Life and Robotics, vol. 1, no. 1, pp. 15-19, 1997.
[4] N. Thammasan, K. Moriyama, K.-I. Fukui, and M. Numao, “Familiarity effects in EEG-based emotion recognition,”
Brain Informatics, vol. 4, no. 1, pp. 39-50, 2016.
[5] P. C. Petrantonakis and L. J. Hadjileontiadis, “A Novel Emotion Elicitation Index Using Frontal Brain Asymmetry
for Enhanced EEG-Based Emotion Recognition,” IEEE Transactions on Information Technology in Biomedicine,
vol. 15, no. 5, pp. 737-746, 2011.
[6] T. Musha, Y. Terasaki, H. A. Haque, and G. A. Ivamitsky, “Feature extraction from EEGs associated with
emotions,” Artificial Life and Robotics, vol. 1, no. 1, pp. 15-19, 1997.
[7] Xiang Li, Dawei Song, Peng Zhang, Guangliang Yu, Yuexian Hou and Bin Hu, “Emotion recognition from
multichannel EEG data through Convolutional Recurrent Neural Network,” 2016 IEEE International Conference on
Bioinformatics and Biomedicine (BIBM), Shenzhen, 2016, pp. 352-359.
[8] D. Huang, S. Zhang, and Y. Zhang, “EEG-based emotion recognition using empirical wavelet transform,” 2017 4th
International Conference on Systems and Informatics (ICSAI), 2017.
[9] X. Jie, R. Cao, and L. Li, “Emotion recognition based on the sample entropy of EEG,” Bio-medical materials and
engineering, vol. 24, no. 1, pp. 1185-92, 2014.
[10] T. D. Pham, D. Tran, W. Ma, and N. T. Tran, “Enhancing Performance of EEG-based Emotion Recognition Systems
Using Feature Smoothing,” Neural Information Processing Lecture Notes in Computer Science, pp. 95-102, 2015.
[11] Z. Mohammadi, J. Frounchi, and M. Amiri, “Wavelet-based emotion recognition system using EEG signal,” Neural
Computing and Applications, vol. 28, no. 8, pp. 1985-1990, 2016.
[12] M. Murugappan and S. Murugappan, “Human emotion recognition through short time Electroencephalogram (EEG)
signals using Fast Fourier Transform (FFT),” 2013 IEEE 9th International Colloquium on Signal Processing and its
Applications, 2013.
[13] R. Biswas, J. Uddin, and J. Hasan, “A New Approach of Iris Detection and Recognition,” International Journal of
Electrical and Computer Engineering (IJECE), vol. 7, no. 5, pp. 2530-2536, 2017.
[14] N.A. Rakib, S.Z. Farhan, M.B. Sobhan, J. Uddin, A. Habib, “A Novel 2D Feature Extraction Method for
Fingerprints Using Minutiae Points and Their Intersections,” International Journal of Electrical and Computer
Engineering (IJECE), vol. 7, no. 5, pp. 2547-2554, 2017.
[15] Xiang Li, Dawei Song, Peng Zhang, Guangliang Yu, Yuexian Hou and Bin Hu, “Emotion recognition from
multichannel EEG data through Convolutional Recurrent Neural Network,” 2016 IEEE International Conference on
Bioinformatics and Biomedicine (BIBM), Shenzhen, 2016, pp. 352-359.
[16] M. Murugappan, “Human emotion classification using wavelet transform and KNN,” 2011 International Conference
on Pattern Analysis and Intelligence Robotics, 2011.
[17] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, “DEAP: A
Database for Emotion Analysis; Using Physiological Signals,” IEEE Transactions on Affective Computing, vol. 3,
no. 1, pp. 18-31, 2012.
[18] R. Jenke, A. Peer, and M. Buss, “Feature Extraction and Selection for Emotion Recognition from EEG,” IEEE
Transactions on Affective Computing, vol. 5, no. 3, pp. 327-339, Jan. 2014.
[19] J. Edmonds and R. M. Karp, “Theoretical improvements in algorithmic efficiency for network flow problems,”
Journal of the ACM (JACM), vol. 19, no. 2, pp. 248-264, 1972.
[20] Z. Lan, O. Sourina, L. Wang, and Y. Liu, “Stability of Features in Real-Time EEG-based Emotion Recognition
Algorithm,” 2014 International Conference on Cyberworlds, 2014.
[21] K. Kalimeri and C. Saitis, “Exploring multimodal biosignal features for stress detection during indoor mobility,”
Proceedings of the 18th ACM International Conference on Multimodal Interaction - ICMI 2016, 2016.
[22] V. N. Vapnik, “Direct Methods in Statistical Learning Theory,” The Nature of Statistical Learning Theory,
pp. 225-265, 2000.
[23] A. Chatchinarat, K. W. Wong, and C. C. Fung, “A comparison study on the relationship between the selection of
EEG electrode channels and frequency bands used in classification for emotion recognition,” 2016 International
Conference on Machine Learning and Cybernetics (ICMLC), 2016.
[24] G. H. Golub, M. Heath, and G. Wahba, “Generalized cross-validation as a method for choosing a good ridge
parameter,” Technometrics, vol. 21, no. 2, pp. 215-223, 1979.
[25] C.-C. Chang and C.-J. Lin, “Libsvm,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3,
pp. 1-27, Jan. 2011.
[26] M. Ali, A. H. Mosa, F. Al Machot, and K. Kyamakya, “EEG-based emotion recognition approach for e-healthcare
applications,” in Ubiquitous and Future Networks (ICUFN), 2016 Eighth International Conference on, 2016,
pp. 946-950: IEEE.
[27] J. Chen, B. Hu, P. Moore, X. Zhang, and X. Ma, “Electroencephalogram-based emotion assessment system using
ontology and data mining techniques,” Applied Soft Computing, vol. 30, pp. 663-674, May 2015.
[28] T. F. Bastos-Filho, A. Ferreira, A. C. Atencio, S. Arjunan, and D. Kumar, “Evaluation of feature extraction
techniques in emotional state recognition,” in Intelligent human computer interaction (IHCI), 2012, pp. 1-6.
[29] Y. Liu and O. Sourina, “Real-time subject-dependent EEG-based emotion recognition algorithm,” Transactions on
Computational Science XXIII: Springer, pp. 199-223, 2014.
[30] A. Vijayan, D. Sen, and A. Sudheer, “EEG-Based Emotion Recognition Using Statistical Measures and Auto-
Regressive Modeling,” in Computational Intelligence and Communication Technology (CICT), pp. 587-591, 2015.
[31] P. Ackermann, C. Kohlschein, J. A. Bitsch, K. Wehrle, and S. Jeschke, “EEG-based automatic emotion recognition:
Feature extraction, selection and classification methods,” 2016 IEEE 18th International Conference on e-Health
Networking, Applications and Services (Healthcom), pp. 1-6, 2016.
BIOGRAPHIES OF AUTHORS
Fabian Parsia George completed her Bachelor of Science (B.Sc.) in Computer Science and
Engineering in the Department of Computer Science and Engineering at BRAC University in
September 2018. She has served as a Student Tutor (ST) in the Department of Computer Science
and Engineering at BRAC University. Currently, she is working as a teacher at Green Dale
International School. She aspires to pursue a career in the fields of Machine Learning,
Brain-Computer Interfaces, Image Processing, and Data Mining.
Istiaque Mannafee Shaikat completed his Bachelor of Science (B.Sc.) in Computer Science in the
Department of Computer Science and Engineering at BRAC University in May 2018. Currently, he is
working in the Digital Intelligence department at Grameenphone Ltd., Bangladesh. He aspires to
pursue a career in the fields of Machine Learning, Brain-Computer Interfaces, Image Processing,
and Data Mining.
Prommy Sultana Ferdawoos is an undergraduate student in the Department of Computer Science and
Engineering at BRAC University. She will complete her Bachelor of Science (B.Sc.) in Computer
Science by the end of December 2018. She aspires to pursue a career in the fields of
Brain-Computer Interfaces, Bioinformatics, Artificial Intelligence, Networking, and Image
Processing.
Dr. Mohammad Zavid Parvez is currently working as an Assistant Professor in the Department
of Computer Science and Engineering at BRAC University, Bangladesh. He received his PhD in
Computer Science from Charles Sturt University-Australia, MSc. Degree in Signal Processing
from Blekinge Institute of Technology-Sweden, and BSc. Eng. (Hons.) Degree in Computer
Science and Engineering from Asian University of Bangladesh. Moreover, he worked as a
postdoctoral researcher under the European Union’s Horizon 2020 research and innovation
programme. Parvez’s research interests include biomedical signal processing and machine
learning. Dr. Parvez has published several journal and conference papers in the area of signal
processing and cognitive radio.
Dr. Jia Uddin received a B.Sc. degree in Computer and Communication Engineering from the
International Islamic University, Chittagong (IIUC), Bangladesh, in 2005, an MS degree in
Electrical Engineering with an emphasis on Telecommunications from the Blekinge Institute of
Technology (BTH), Sweden, in 2010. He completed his Ph.D. (Computer Engineering) at the
University of Ulsan (UoU), South Korea in 2015. He is an Associate Professor in the Department
of Computer Science and Engineering at BRAC University, Bangladesh. His research interests
include Fault Diagnosis, Parallel Computing, and Wireless Networks. He is a member of the IEB
and the IACSIT.
 
On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...IJECEIAES
 
Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...IJECEIAES
 
A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...IJECEIAES
 
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...IJECEIAES
 

More from IJECEIAES (20)

Cloud service ranking with an integration of k-means algorithm and decision-m...
Cloud service ranking with an integration of k-means algorithm and decision-m...Cloud service ranking with an integration of k-means algorithm and decision-m...
Cloud service ranking with an integration of k-means algorithm and decision-m...
 
Prediction of the risk of developing heart disease using logistic regression
Prediction of the risk of developing heart disease using logistic regressionPrediction of the risk of developing heart disease using logistic regression
Prediction of the risk of developing heart disease using logistic regression
 
Predictive analysis of terrorist activities in Thailand's Southern provinces:...
Predictive analysis of terrorist activities in Thailand's Southern provinces:...Predictive analysis of terrorist activities in Thailand's Southern provinces:...
Predictive analysis of terrorist activities in Thailand's Southern provinces:...
 
Optimal model of vehicular ad-hoc network assisted by unmanned aerial vehicl...
Optimal model of vehicular ad-hoc network assisted by  unmanned aerial vehicl...Optimal model of vehicular ad-hoc network assisted by  unmanned aerial vehicl...
Optimal model of vehicular ad-hoc network assisted by unmanned aerial vehicl...
 
Improving cyberbullying detection through multi-level machine learning
Improving cyberbullying detection through multi-level machine learningImproving cyberbullying detection through multi-level machine learning
Improving cyberbullying detection through multi-level machine learning
 
Comparison of time series temperature prediction with autoregressive integrat...
Comparison of time series temperature prediction with autoregressive integrat...Comparison of time series temperature prediction with autoregressive integrat...
Comparison of time series temperature prediction with autoregressive integrat...
 
Strengthening data integrity in academic document recording with blockchain a...
Strengthening data integrity in academic document recording with blockchain a...Strengthening data integrity in academic document recording with blockchain a...
Strengthening data integrity in academic document recording with blockchain a...
 
Design of storage benchmark kit framework for supporting the file storage ret...
Design of storage benchmark kit framework for supporting the file storage ret...Design of storage benchmark kit framework for supporting the file storage ret...
Design of storage benchmark kit framework for supporting the file storage ret...
 
Detection of diseases in rice leaf using convolutional neural network with tr...
Detection of diseases in rice leaf using convolutional neural network with tr...Detection of diseases in rice leaf using convolutional neural network with tr...
Detection of diseases in rice leaf using convolutional neural network with tr...
 
A systematic review of in-memory database over multi-tenancy
A systematic review of in-memory database over multi-tenancyA systematic review of in-memory database over multi-tenancy
A systematic review of in-memory database over multi-tenancy
 
Agriculture crop yield prediction using inertia based cat swarm optimization
Agriculture crop yield prediction using inertia based cat swarm optimizationAgriculture crop yield prediction using inertia based cat swarm optimization
Agriculture crop yield prediction using inertia based cat swarm optimization
 
Three layer hybrid learning to improve intrusion detection system performance
Three layer hybrid learning to improve intrusion detection system performanceThree layer hybrid learning to improve intrusion detection system performance
Three layer hybrid learning to improve intrusion detection system performance
 
Non-binary codes approach on the performance of short-packet full-duplex tran...
Non-binary codes approach on the performance of short-packet full-duplex tran...Non-binary codes approach on the performance of short-packet full-duplex tran...
Non-binary codes approach on the performance of short-packet full-duplex tran...
 
Improved design and performance of the global rectenna system for wireless po...
Improved design and performance of the global rectenna system for wireless po...Improved design and performance of the global rectenna system for wireless po...
Improved design and performance of the global rectenna system for wireless po...
 
Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Advanced hybrid algorithms for precise multipath channel estimation in next-g...Advanced hybrid algorithms for precise multipath channel estimation in next-g...
Advanced hybrid algorithms for precise multipath channel estimation in next-g...
 
Performance analysis of 2D optical code division multiple access through unde...
Performance analysis of 2D optical code division multiple access through unde...Performance analysis of 2D optical code division multiple access through unde...
Performance analysis of 2D optical code division multiple access through unde...
 
On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...On performance analysis of non-orthogonal multiple access downlink for cellul...
On performance analysis of non-orthogonal multiple access downlink for cellul...
 
Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...Phase delay through slot-line beam switching microstrip patch array antenna d...
Phase delay through slot-line beam switching microstrip patch array antenna d...
 
A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...A simple feed orthogonal excitation X-band dual circular polarized microstrip...
A simple feed orthogonal excitation X-band dual circular polarized microstrip...
 
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
A taxonomy on power optimization techniques for fifthgeneration heterogenous ...
 

Recently uploaded

HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidNikhilNagaraju
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxAsutosh Ranjan
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 

Recently uploaded (20)

HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 

Emotional activities cause the brain to generate signals in the form of waves, and the technique used to record these signals is known as EEG [5]. Predicting emotions using EEG signals was first introduced by Musha et al. [6]. Since then, interest in this area has grown steadily in recent years, driven by machine learning applications and the availability of less expensive EEG measuring devices.
In numerous investigations, experiments were restricted to very few subjects, and the methods implemented for a single subject were poorly generalized and could not be applied broadly to multiple subjects [7]. Many studies were limited to a few channels in order to avoid high complexity and computational cost; however, this can lead to weak results because of the lack of adequate information [8]. For feature extraction, different methodologies were used in previous studies, including Sample Entropy [9], the Autoregressive (AR) model [10], the Discrete Wavelet Transform (DWT) [11], and the Fast Fourier Transform (FFT) [12]. The extracted features were classified using various classifiers such as the Support Vector Machine (SVM) [13], [14], neural networks [15], and k-nearest neighbors (KNN) [16].

Referring to the issues specified above [3], [4], [7], [8], a novel methodology for emotion recognition based on time-frequency analysis is proposed and evaluated with EEG signals from the DEAP dataset [17]. In the proposed model, only the EEG signals from the prefrontal cortex are retrieved for further processing, since emotional activities occur mainly in the frontal and temporal lobes of the brain [18]. This also reduces the total number of channels used in the feature extraction stage, as irrelevant channels are discarded beforehand, which lowers the computational cost and increases the efficiency of the algorithms used in our technique [19]. Existing methods working with frequency bands extracted all five bands, namely delta (0.5-4 Hz), theta (4-7 Hz), alpha (7-13 Hz), beta (13-30 Hz), and gamma (30-60 Hz) [20]. According to the state-of-the-art methods, the emotional and cognitive activities of the brain are well represented by the alpha, beta, and theta bands [8], [21]; therefore, only these bands are considered in this paper. The dataset is distributed into four emotional quadrants: high arousal - high valence (HAHV), low arousal - high valence (LAHV), low arousal - low valence (LALV), and high arousal - low valence (HALV). The data in each quadrant are then averaged over all participants for the corresponding emotion. The samples are averaged per quadrant because of the inconsistency in the emotions felt by the participants: not all participants feel the same emotion for a particular video, which means that some samples in the EEG signals are anomalous and can greatly affect the end result. The premise of averaging is to reduce data deviation and to approach the actual value statistically, which is expected to improve the accuracy of the classification process. Finally, the statistical features extracted in the frequency domain are fed into the SVM classifier in order to classify the emotions.

The subsequent sections of the paper are organized as follows. Section 2 introduces the dataset used in this paper as well as the proposed approach for recognizing emotion. Section 3 provides the experimental results along with their analysis. Finally, Section 4 concludes the paper.
2. PROPOSED METHOD
The proposed model illustrated in Figure 1 represents the overall flow of our work. For this research, the EEG signals were first accumulated, followed by data preprocessing. Next, bands of specific frequencies were extracted from the preprocessed data. Subsequently, suitable features were extracted and selected to be fed into the classifier. Finally, the SVM classifier was used to classify these selected features.

Figure 1. Workflow of the proposed method: EEG signals -> signal preprocessing -> signal refining -> band extraction -> feature extraction -> emotion classification

2.1. Data description
In our research, we used the DEAP dataset [17] as the source of brain signals. It is a multimodal dataset that can be used to analyze human affective states. The data collection was carried out in a controlled-light environment. A Biosemi ActiveTwo system was used to record the EEG signals of each participant. Two computers, periodically synchronized with the help of markers, were used: one for recording the signals and another for presenting the stimuli. The experiment used 40 one-minute music videos displayed on a 17-inch screen at 800x600 resolution to minimize eye movements. For the experiment, 32 AgCl electrodes were used to record the EEG signals at a sampling rate of 512 Hz. 16 male and 16 female participants with ages ranging from 19 to 37 were chosen. After 20 video trials, the subjects could take a break for snacks. Each trial lasted 63 seconds in total, consisting of a 60-second video and a 3-second pre-trial baseline. At the end of each video trial, the subjects manually rated four different dimensions (arousal, dominance, liking, and valence) on a scale of 1-9.
2.2. Signal preprocessing
The dataset was downsampled to 128 Hz, and electrooculography (EOG) artifacts caused by eye movements were removed. The signals were then band-pass filtered between 4 Hz and 45 Hz. To create a common reference, the data were averaged. The data were then segmented into 60-second trials by removing the 3-second pre-trial baseline and arranged in Experiment_id order. For our research, we chose the preprocessed data files, which include the EEG signals of each participant. Each participant file contains two arrays, data and labels, as clarified in Table 1.

Table 1. Contents of each participant file
Name    Size              Description
Data    40 x 40 x 8064    Video/trial x Channel x Data
Label   40 x 4            Video/trial x Label (Valence, Arousal, Dominance, Liking)

The data array contains the EEG signals of one participant for all videos, whereas the labels array contains, for each video, the ratings for valence, arousal, dominance, and liking.

2.3. Signal refining
As the first step towards band extraction, the data were rearranged to make them suitable for the extraction process. The preprocessed DEAP data used for our work consist of 32 files, one per participant. Each file contains two arrays: a 3D array named data of size 40x40x8064 and a 2D array named labels of size 40x4. For our study, the 3D data array was used throughout the course of our work. A total of 40 channels were used to record the signals, of which 32 are EEG channels and 8 are peripheral channels. Previous studies have shown that the information related to emotions is concentrated mostly in the frontal and temporal areas of the brain [18]. Therefore, in order to decrease the computational cost of our proposed method, we only worked with the channels related to the frontal lobe of the brain, namely Fp1, F3, F7, FC5, FC1, Fp2, Fz, F4, F8, FC6, and FC2. Classification and feature extraction directly from the 3D array were laborious, as it was hard to manipulate the data as per our requirements. For this reason, the preprocessed data were sorted into 40 files, each representing one music video used in the DEAP dataset. Each video file contains an array of size 8064x352, where the rows represent the length of the data and the columns represent the channels of all 32 participants, as described in Table 2.

Table 2. Array representation of the video file
Array Name   Array Size (Row x Column)   Array Contents (Row x Column)
Video_no     8064 x 352                  Data x subjectNo_channelNo
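To make the data-handling steps above concrete, the following is a minimal sketch, assuming the publicly distributed Python-pickled DEAP preprocessed files (one per participant, each holding a 40x40x8064 data array and a 40x4 labels array as in Table 1) and the standard 32-channel DEAP ordering for the frontal-channel indices; the file names and index values are illustrative assumptions rather than details taken from the paper.

```python
import pickle
import numpy as np

# Frontal channels used in the paper; the indices assume the standard 32-channel
# DEAP ordering and should be verified against the dataset documentation.
FRONTAL = {"Fp1": 0, "F3": 2, "F7": 3, "FC5": 4, "FC1": 5,
           "Fp2": 16, "Fz": 18, "F4": 19, "F8": 20, "FC6": 21, "FC2": 22}

def load_participant(path):
    """Load one preprocessed DEAP file: data (40x40x8064) and labels (40x4)."""
    with open(path, "rb") as f:
        d = pickle.load(f, encoding="latin1")
    return d["data"], d["labels"]

def per_video_matrix(files, video_idx, channels=FRONTAL):
    """Stack the frontal-channel traces of one video across all participants.

    Returns an (8064 x n_participants*n_channels) array, i.e. rows are samples
    and columns are subjectNo_channelNo, mirroring Table 2.
    """
    columns = []
    for path in files:
        data, _ = load_participant(path)
        for ch in channels.values():
            columns.append(data[video_idx, ch, :])   # one 8064-sample trace
    return np.stack(columns, axis=1)

# Usage sketch (hypothetical file names):
# files = [f"data_preprocessed_python/s{i:02d}.dat" for i in range(1, 33)]
# video_1 = per_video_matrix(files, video_idx=0)     # shape (8064, 352)
```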
2.4. Band extraction
The EEG signals used for our research are in the time domain. Existing research has demonstrated that emotional activities are recognized with better accuracy when the features are extracted in the frequency domain [22]. This is done by applying the FFT to the time-domain signal. The FFT is an algorithm used to convert a signal from the time domain to the frequency domain. For X and Y of length n, the transform is defined as:

Y(k) = Σ_{j=1}^{n} X(j) W_n^{(j-1)(k-1)}, k = 1, ..., n    (1)

where W_n = e^{-2πi/n} is one of the n complex roots of unity and i is the imaginary unit.
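As a quick sanity check of (1), the following sketch (not from the paper) evaluates the sum directly with 0-based indices and confirms that it matches numpy's FFT, which computes the same transform efficiently.

```python
import numpy as np

def dft_naive(x):
    """Direct evaluation of (1), written with 0-based indices j and k."""
    n = len(x)
    w = np.exp(-2j * np.pi / n)            # W_n, a primitive n-th root of unity
    j = np.arange(n)
    return np.array([np.sum(x * w ** (j * k)) for k in range(n)])

x = np.random.randn(128)                   # e.g. one second of EEG at 128 Hz
assert np.allclose(dft_naive(x), np.fft.fft(x))   # the FFT evaluates (1) fast
```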
As discussed earlier, emotional activities cause the brain to generate signals in the form of waves. These signals, which can be subdivided into five frequency bands, are correlated with the emotional activities. The five frequency bands are delta, theta, alpha, beta, and gamma [21]. As stated in [21], the alpha, beta, and theta bands represent the emotional and cognitive processes of the brain better than the other two bands. This is why we extracted these three bands using a Butterworth band-pass filter after applying the FFT to the EEG signals.

2.5. Feature extraction
The experimenters in [17] also provided information regarding the emotion that each video is expected to elicit. It was estimated that each video can be placed in one of the four emotional quadrants, HAHV, LAHV, LALV, and HALV, as represented in Figure 2. Individuals can respond differently to a specific video, which can result in the presence of irregular samples in the EEG signals. These erroneous samples need to be ruled out of each quadrant in order to minimize the inconsistency in the samples. For our research, in order to reduce the data deviation, the extracted band values of the videos were averaged according to their corresponding quadrant, and thus according to the corresponding emotion, as illustrated in Table 3.

Figure 2. Four quadrants of emotion

Table 3. List of 40 videos (denoted by the Experiment_id) and their corresponding quadrant
HAHV   LAHV   LALV   HALV
1      8      16     10
2      9      22     21
3      12     23     31
4      13     24     32
5      14     25     33
6      15     26     34
7      17     27     35
11     18     28     36
-      19     29     37
-      20     30     38

After sorting the videos in accordance with their quadrant and averaging the bands of all the videos in each quadrant, four video files were created containing only the averaged values of the extracted bands. These band values were further scaled so that the SVM classifier is not influenced by large band values.
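The band-extraction, quadrant-averaging, and scaling steps described above could be sketched as follows, assuming scipy's Butterworth filter as the band-pass implementation and the video-to-quadrant assignment of Table 3; the cut-off frequencies follow the band definitions quoted earlier, and the helper names are illustrative rather than the authors' code.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128                                   # sampling rate of the preprocessed data (Hz)
BANDS = {"theta": (4.0, 7.0), "alpha": (7.0, 13.0), "beta": (13.0, 30.0)}

def extract_band(x, low, high, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter applied along the time axis."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=0)

def average_quadrant(video_matrices):
    """Average the band-filtered (samples x columns) matrices of every video
    assigned to one emotional quadrant (one column of Table 3)."""
    return np.mean(np.stack(video_matrices, axis=0), axis=0)

def scale(x):
    """Rescale to [0, 1] so that large band values do not dominate the SVM."""
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

# Usage sketch: `videos` maps a quadrant name to its list of (8064 x 352) arrays.
# hahv_alpha = scale(average_quadrant(
#     [extract_band(v, *BANDS["alpha"]) for v in videos["HAHV"]]))
```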
Once the band values were scaled, the statistical features of the input signals were extracted. For our work, we extracted the statistical features minimum, maximum, mean, variance, standard deviation, wave entropy, power bandwidth, skewness, and kurtosis, grouped by the location or central tendency (Statistical Features I), the dispersion or spread (Statistical Features II), and the shape of the distribution (Statistical Features III), as illustrated in Table 4.

Table 4. Types of features extracted for classification
Statistical Features I   Statistical Features II   Statistical Features III
Minimum                  Variance                  Skewness
Maximum                  Standard deviation        Kurtosis
Mean                     Wave entropy              -
-                        Power bandwidth           -

2.6. Emotion classification
Numerous machine learning algorithms have been used in existing studies, among which SVM is regarded as one of the most efficient classifiers for classifying emotions [8], [9]. The basic idea of the SVM is to determine a decision hyperplane that separates the data samples into two classes. The optimal hyperplane for differentiating the two groups is found by maximizing the distance between the hyperplane and the nearest data points of both classes [23]. The classification procedure builds a confusion matrix model by partitioning the sample data into a training set and a test set, used for training and validation respectively, with a technique called k-fold cross-validation [24]. This technique randomly divides the data into k equal subsets; in each of the k repetitions, one subset is used as the test set and the remaining k-1 subsets are combined to form the training set [24]. In our paper, different combinations of features were used for training and testing the SVM classifier in order to generate the confusion matrix model, which was then used to find the accuracy based on k-fold cross-validation. Here, we integrated SVM with 10-fold cross-validation, and the kernel and regularization parameters were selected by a grid search. To implement the SVM, the LIBSVM library, a widely used library for support vector machines, was used [25].
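A hedged sketch of the feature-extraction and classification stages is given below. It computes the Table 4 features for each band-filtered column and trains a grid-searched SVM with 10-fold cross-validation; scikit-learn's SVC (which wraps LIBSVM) stands in for the LIBSVM tools used in the paper, and the exact definitions of "wave entropy" (spectral entropy of the normalized power spectrum) and "power bandwidth" (frequency range containing 99% of the power) are assumptions, since the paper does not specify them.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum (assumed reading
    of the paper's 'wave entropy')."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12))

def power_bandwidth(x, fs=128, fraction=0.99):
    """Width of the band holding `fraction` of the power (assumed reading
    of the paper's 'power bandwidth')."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    cum = np.cumsum(psd) / psd.sum()
    return freqs[np.searchsorted(cum, fraction)] - freqs[np.searchsorted(cum, 1.0 - fraction)]

def feature_vector(x):
    """Table 4 features for one band-filtered column (one subject/channel)."""
    return np.array([x.min(), x.max(), x.mean(), x.var(), x.std(),
                     spectral_entropy(x), power_bandwidth(x),
                     skew(x), kurtosis(x)])

def classify(X, y):
    """Grid-searched RBF SVM evaluated with 10-fold cross-validation,
    mirroring the procedure of Section 2.6."""
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [0.1, 1, 10, 100],
                         "gamma": [1e-3, 1e-2, 1e-1, 1.0]},
                        cv=10)
    grid.fit(X, y)
    scores = cross_val_score(grid.best_estimator_, X, y, cv=10)
    return grid.best_params_, scores.mean()

# X: one feature_vector() row per column of the averaged quadrant matrices;
# y: the corresponding quadrant labels (e.g. HAHV vs. LALV).
```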
3. RESULTS AND ANALYSIS
Classifying the statistical features to obtain a decent outcome was not an easy process. Various aspects had to be considered before reaching a conclusion, as the initial trials did not generate a satisfying output. In this paper, 10-fold cross-validation was incorporated with the SVM classifier using the regularization and kernel parameters selected via a grid-search approach. The accuracy of the k-fold cross-validated classification was determined using (2):

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (2)

where TP is the number of true positives, TN the number of true negatives, FP the number of false positives, and FN the number of false negatives.

At the very beginning, prior to averaging the extracted band values according to their corresponding quadrants, all the statistical features from Table 4 were fed into the SVM classifier at once. They were used to train and test the SVM in order to construct the confusion matrix model, which was then used to determine the accuracy based on 10-fold cross-validation. However, this first trial did not provide the expected output, since the accuracy was found to be only 2.03%. This occurred because, before averaging the data, the features do not contain distinguishable characteristics, as can be seen from Figure 3. A box-and-whisker plot was used to graphically represent each feature through its quartiles, as shown in Figure 3(a).

Figure 3. Boxplot of the statistical features: (a) before averaging, (b) after averaging the video data

Afterward, we averaged the band values in accordance with their quadrant to reduce the data deviation and improve the accuracy of the classification process. Once the data were averaged, they were scaled and the statistical features were extracted again, as discussed in Section 2.5. The features were once more fed into the classifier all at once. This time, there was a significant improvement in the classification accuracy, which increased from 2.03% to 32.14%. The accuracy before and after averaging the data for all the features is summarized in Table 5. It can be observed from the table that the features do not offer a satisfactory outcome prior to averaging the data, because not all the participants feel the same emotion for a specific category of video. Hence, the SVM classifier could not create the expected model due to the irregularity in the data for each quadrant.

Table 5. Accuracy for classification before and after averaging the video data
State of the Data            Accuracy (%)
Before averaging the data    2.03
After averaging the data     32.14

Even though the classification accuracy improved by a certain amount after averaging the band values, it still did not provide the expected output. Subsequently, we reconstructed the boxplot, this time after averaging the data according to their corresponding quadrants, to analyze each feature, as shown in Figure 3(b). It can be observed from the figure that the data of the features skewness, kurtosis, and wave entropy can easily be distinguished from each other, as they do not overlap and deviate significantly. The situation is similar for the features skewness and power bandwidth, which also show clearly distinguishable characteristics. On the other hand, the features mean, variance, and standard deviation are relatively deviated from each other, but most of their data still overlap. Furthermore, the differences between the features minimum, maximum, and variance are not prominent, as the deviation in the data is very small. Based on these observations, we segregated the features into the following combinations:
1) Feature combination A: mean, standard deviation, variance.
2) Feature combination B: skewness, kurtosis, wave entropy.
3) Feature combination C: minimum, maximum, variance.
4) Feature combination D: skewness, power bandwidth.
It was observed that the classification accuracy improved by a significant amount, as illustrated in Table 6. Combination B provided the best result, with an accuracy of 92.36% for the quadrant pair HAHV_LALV and 89.11% for the quadrant pair HALV_LAHV, whereas combination C provided the worst result, with an accuracy of 11.23% for HAHV_LALV and 15.69% for HALV_LAHV.

Table 6. Accuracy of the feature combinations after averaging the video data
Feature Combination   HAHV_LALV      HALV_LAHV
A                     25.34 ± 5.20   21.21 ± 3.20
B                     92.36 ± 6.30   89.11 ± 8.30
C                     11.23 ± 5.20   15.69 ± 3.50
D                     44.31 ± 7.50   49.23 ± 8.30
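For reference, a box-and-whisker inspection like the one behind Figure 3 and the combination grouping above could be produced as sketched below; the feature-matrix layout and plotting details are illustrative assumptions, not the authors' code.

```python
import matplotlib.pyplot as plt

FEATURE_NAMES = ["min", "max", "mean", "variance", "std",
                 "wave entropy", "power bandwidth", "skewness", "kurtosis"]

def feature_boxplot(feature_matrix, names=FEATURE_NAMES):
    """Box-and-whisker plot of each feature over all samples; features whose
    boxes barely overlap between classes (skewness, kurtosis, wave entropy)
    are the candidates kept in combination B."""
    fig, ax = plt.subplots(figsize=(10, 4))
    ax.boxplot([feature_matrix[:, i] for i in range(feature_matrix.shape[1])],
               labels=names, showfliers=False)
    ax.set_ylabel("scaled feature value")
    plt.setp(ax.get_xticklabels(), rotation=45, ha="right")
    plt.tight_layout()
    plt.show()

# feature_boxplot(X)   # X: (n_samples x 9) matrix built as in the previous sketch
```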
The reason feature combination B provides a better result is that the samples of the features skewness, kurtosis, and wave entropy can easily be distinguished from each other, as the data deviate significantly (see Figure 3(b)). Similarly, the accuracy of feature combination D is satisfactory for the same reason. On the other hand, feature combinations A and C could not provide satisfactory results due to the overlapping and similarity of their data, which leads the SVM classifier to generate a confusion matrix model that fails to achieve a better accuracy. From these results, we conclude that the shape of the distribution represents the emotional activities of the brain well and thus yields a better classification accuracy.

As DEAP is a public dataset, we took a further step to compare our proposed approach with existing methods that have already used the DEAP dataset for recognizing emotions. Table 7 illustrates some of the existing methods for emotion recognition based on the DEAP dataset, along with the emotions identified, the features extracted, and the classification accuracy of each method.
It can be observed that most of the existing methods extracted more than one type of feature, and a few identified more than two emotions. For our work, only statistical features were considered for the feature extraction process, and only two emotional dimensions, valence and arousal, were identified. Furthermore, it can be seen from the table that the accuracy of the existing methodologies is limited to about 80%, whereas our proposed approach achieved an accuracy of approximately 92.36%. Thus, our approach is more efficient in terms of classifying emotions than many of the existing approaches.

Table 7. Accuracy of existing approaches based on the DEAP dataset
Reference           Emotions                                       Features                                   Accuracy (%)
[26]                Valence and arousal                            DWT, WE, and statistical                   71.40
[27]                Male/female valence and male/female arousal    Statistical, linear, and non-statistical   78.60
[28]                Stress and calm                                Statistical, PSD, and HOC                  71.40
[29]                Valence, arousal, and dominance                Statistical and HFD                        71.40
[30]                Excitation, happiness, sadness, and hatred     WT (db5), SE, CC, and AR                   78.60
[31]                Anger, surprise, and other                     HHS, HOC, and STFT                         78.60
Proposed approach   Valence and arousal                            Statistical                                92.36

4. CONCLUSION
In this paper, the preprocessed EEG signals from the DEAP dataset were used to classify two types of emotions, namely valence and arousal. The samples in the dataset were first transformed from the time domain to the frequency domain by applying the FFT, followed by the extraction of the alpha, beta, and theta frequency bands, which are particularly significant for emotion recognition. Subsequently, the extracted bands were averaged in correspondence with their quadrant for each emotion, and the averaged band values were used to extract statistical features. The extracted features were then scaled, and various feature combinations were fed into the SVM classifier for emotion recognition. It was observed that our approach predicts emotions with an accuracy of 92.36% using the skewness, kurtosis, and wave entropy features. Our proposed model shows better results than the existing methods for the DEAP dataset.

REFERENCES
[1] I. B. Mauss and M. D. Robinson, "Measures of emotion: A review," Cognition and Emotion, vol. 23, no. 2, pp. 209-237, 2009.
[2] J. Posner, J. A. Russell, and B. S. Peterson, "The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology," Development and Psychopathology, vol. 17, no. 3, 2005.
[3] T. Musha, Y. Terasaki, H. A. Haque, and G. A. Ivamitsky, "Feature extraction from EEGs associated with emotions," Artificial Life and Robotics, vol. 1, no. 1, pp. 15-19, 1997.
[4] N. Thammasan, K. Moriyama, K.-I. Fukui, and M. Numao, "Familiarity effects in EEG-based emotion recognition," Brain Informatics, vol. 4, no. 1, pp. 39-50, 2016.
[5] P. C. Petrantonakis and L. J. Hadjileontiadis, "A novel emotion elicitation index using frontal brain asymmetry for enhanced EEG-based emotion recognition," IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 5, pp. 737-746, 2011.
[6] T. Musha, Y. Terasaki, H. A. Haque, and G. A. Ivamitsky, "Feature extraction from EEGs associated with emotions," Artificial Life and Robotics, vol. 1, no. 1, pp. 15-19, 1997.
[7] X. Li, D. Song, P. Zhang, G. Yu, Y. Hou, and B. Hu, "Emotion recognition from multichannel EEG data through convolutional recurrent neural network," 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, 2016, pp. 352-359.
[8] D. Huang, S. Zhang, and Y. Zhang, "EEG-based emotion recognition using empirical wavelet transform," 2017 4th International Conference on Systems and Informatics (ICSAI), 2017.
[9] X. Jie, R. Cao, and L. Li, "Emotion recognition based on the sample entropy of EEG," Bio-Medical Materials and Engineering, vol. 24, no. 1, pp. 1185-1192, 2014.
[10] T. D. Pham, D. Tran, W. Ma, and N. T. Tran, "Enhancing performance of EEG-based emotion recognition systems using feature smoothing," Neural Information Processing, Lecture Notes in Computer Science, pp. 95-102, 2015.
[11] Z. Mohammadi, J. Frounchi, and M. Amiri, "Wavelet-based emotion recognition system using EEG signal," Neural Computing and Applications, vol. 28, no. 8, pp. 1985-1990, 2016.
[12] M. Murugappan and S. Murugappan, "Human emotion recognition through short time electroencephalogram (EEG) signals using fast Fourier transform (FFT)," 2013 IEEE 9th International Colloquium on Signal Processing and its Applications, 2013.
[13] R. Biswas, J. Uddin, and J. Hasan, "A new approach of iris detection and recognition," International Journal of Electrical and Computer Engineering (IJECE), vol. 7, no. 5, pp. 2530-2536, 2017.
[14] N. A. Rakib, S. Z. Farhan, M. B. Sobhan, J. Uddin, and A. Habib, "A novel 2D feature extraction method for fingerprints using minutiae points and their intersections," International Journal of Electrical and Computer Engineering (IJECE), vol. 7, no. 5, pp. 2547-2554, 2017.
[15] X. Li, D. Song, P. Zhang, G. Yu, Y. Hou, and B. Hu, "Emotion recognition from multichannel EEG data through convolutional recurrent neural network," 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Shenzhen, 2016, pp. 352-359.
[16] M. Murugappan, "Human emotion classification using wavelet transform and KNN," 2011 International Conference on Pattern Analysis and Intelligence Robotics, 2011.
[17] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, "DEAP: A database for emotion analysis using physiological signals," IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18-31, 2012.
[18] R. Jenke, A. Peer, and M. Buss, "Feature extraction and selection for emotion recognition from EEG," IEEE Transactions on Affective Computing, vol. 5, no. 3, pp. 327-339, 2014.
[19] J. Edmonds and R. M. Karp, "Theoretical improvements in algorithmic efficiency for network flow problems," Journal of the ACM, vol. 19, no. 2, pp. 248-264, 1972.
[20] Z. Lan, O. Sourina, L. Wang, and Y. Liu, "Stability of features in real-time EEG-based emotion recognition algorithm," 2014 International Conference on Cyberworlds, 2014.
[21] K. Kalimeri and C. Saitis, "Exploring multimodal biosignal features for stress detection during indoor mobility," Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI 2016), 2016.
[22] V. N. Vapnik, "Direct methods in statistical learning theory," The Nature of Statistical Learning Theory, pp. 225-265, 2000.
[23] A. Chatchinarat, K. W. Wong, and C. C. Fung, "A comparison study on the relationship between the selection of EEG electrode channels and frequency bands used in classification for emotion recognition," 2016 International Conference on Machine Learning and Cybernetics (ICMLC), 2016.
[24] G. H. Golub, M. Heath, and G. Wahba, "Generalized cross-validation as a method for choosing a good ridge parameter," Technometrics, vol. 21, no. 2, pp. 215-223, 1979.
[25] C.-C. Chang and C.-J. Lin, "LIBSVM: A library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1-27, 2011.
[26] M. Ali, A. H. Mosa, F. Al Machot, and K. Kyamakya, "EEG-based emotion recognition approach for e-healthcare applications," 2016 Eighth International Conference on Ubiquitous and Future Networks (ICUFN), 2016, pp. 946-950.
[27] J. Chen, B. Hu, P. Moore, X. Zhang, and X. Ma, "Electroencephalogram-based emotion assessment system using ontology and data mining techniques," Applied Soft Computing, vol. 30, pp. 663-674, 2015.
[28] T. F. Bastos-Filho, A. Ferreira, A. C. Atencio, S. Arjunan, and D. Kumar, "Evaluation of feature extraction techniques in emotional state recognition," 2012 International Conference on Intelligent Human Computer Interaction (IHCI), 2012, pp. 1-6.
[29] Y. Liu and O. Sourina, "Real-time subject-dependent EEG-based emotion recognition algorithm," Transactions on Computational Science XXIII, Springer, pp. 199-223, 2014.
[30] A. Vijayan, D. Sen, and A. Sudheer, "EEG-based emotion recognition using statistical measures and auto-regressive modeling," Computational Intelligence and Communication Technology (CICT), 2015, pp. 587-591.
[31] P. Ackermann, C. Kohlschein, J. A. Bitsch, K. Wehrle, and S. Jeschke, "EEG-based automatic emotion recognition: Feature extraction, selection and classification methods," 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), 2016, pp. 1-6.

BIOGRAPHIES OF AUTHORS
Fabian Parsia George completed her Bachelor of Science (B.Sc.) in Computer Science and Engineering at BRAC University in September 2018. She served as a Student Tutor (ST) in the Department of Computer Science and Engineering at BRAC University and currently works as a teacher at Green Dale International School. She aspires to pursue a career in the fields of machine learning, brain-computer interfaces, image processing, and data mining.

Istiaque Mannafee Shaikat completed his Bachelor of Science (B.Sc.) in Computer Science at BRAC University in the Department of Computer Science and Engineering in May 2018. He currently works in the Digital Intelligence department at Grameenphone Ltd., Bangladesh. He aspires to pursue a career in the fields of machine learning, brain-computer interfaces, image processing, and data mining.
Prommy Sultana Ferdawoos is an undergraduate student in the Department of Computer Science and Engineering at BRAC University and is expected to complete her Bachelor of Science (B.Sc.) in Computer Science by the end of December 2018. She aspires to pursue a career in the fields of brain-computer interfaces, bioinformatics, artificial intelligence, networking, and image processing.

Dr. Mohammad Zavid Parvez is currently an Assistant Professor in the Department of Computer Science and Engineering at BRAC University, Bangladesh. He received his PhD in Computer Science from Charles Sturt University, Australia, his M.Sc. degree in Signal Processing from the Blekinge Institute of Technology, Sweden, and his B.Sc. Eng. (Hons.) degree in Computer Science and Engineering from the Asian University of Bangladesh. He also worked as a postdoctoral researcher under the European Union's Horizon 2020 research and innovation programme. His research interests include biomedical signal processing and machine learning, and he has published several journal and conference papers in the areas of signal processing and cognitive radio.

Dr. Jia Uddin received a B.Sc. degree in Computer and Communication Engineering from the International Islamic University, Chittagong (IIUC), Bangladesh, in 2005, and an M.S. degree in Electrical Engineering with an emphasis on Telecommunications from the Blekinge Institute of Technology (BTH), Sweden, in 2010. He completed his Ph.D. in Computer Engineering at the University of Ulsan (UoU), South Korea, in 2015. He is an Associate Professor in the Department of Computer Science and Engineering at BRAC University, Bangladesh. His research interests include fault diagnosis, parallel computing, and wireless networks. He is a member of the IEB and the IACSIT.