This document presents research on an Arabic speech emotion recognition system based on a convolutional neural network (CNN). The researchers propose a model called ASERS-CNN and evaluate it on an Arabic speech dataset containing recordings of four emotions. Their best-performing model achieves 98.52% accuracy using five acoustic features extracted from speech preprocessed with normalization and silence removal. They compare this result to their previous ASERS-LSTM model and other state-of-the-art methods, finding that ASERS-CNN outperforms both.
Signal & Image Processing: An International Journal (sipij)
Signal & Image Processing: An International Journal is an open-access, peer-reviewed journal intended for researchers from academia and industry who are active in the multidisciplinary field of signal and image processing. The journal's scope covers all theoretical and practical aspects of digital signal processing and image processing, from basic research to the development of applications.
Authors are invited to contribute articles that present research results, projects, survey works, and industrial experiences describing significant advances in the areas of signal and image processing.
ASERS-LSTM: Arabic Speech Emotion Recognition System Based on LSTM Model (sipij)
Swift progress in the field of human-computer interaction (HCI) has increased interest in speech emotion recognition (SER) systems, which identify the emotional states of human beings from their voice. There is substantial SER work for other languages, but little has been implemented for Arabic, largely because of the shortage of available Arabic speech emotion databases; the most commonly studied languages for SER are English and other European and Asian languages. Researchers have used several machine learning classifiers to distinguish emotional classes: SVMs, random forests (RFs), the k-nearest-neighbor (KNN) algorithm, hidden Markov models (HMMs), MLPs, and deep learning. In this paper we propose ASERS-LSTM, an Arabic speech emotion recognition model based on an LSTM network. We extract five features from the speech: Mel-frequency cepstral coefficients (MFCC), chromagram, Mel-scaled spectrogram, spectral contrast, and tonal centroid (tonnetz) features. We evaluate our model on an Arabic speech dataset, the Basic Arabic Expressive Speech corpus (BAES-DB). In addition, we construct a DNN to classify the emotions and compare its accuracy against the LSTM model: the DNN reaches 93.34% and the LSTM 96.81%.
Emotion recognition based on the energy distribution of plosive syllables (IJECEIAES)
We usually encounter two problems during speech emotion recognition (SER): expression and perception problems, which vary considerably between speakers, languages, and sentence pronunciation. Finding an optimal system that characterizes emotions while overcoming all these differences is therefore a promising prospect. In this perspective, we considered two emotional databases: the Moroccan Arabic dialect emotional database (MADED) and the Ryerson audio-visual database of emotional speech and song (RAVDESS), which differ notably in type (natural/acted) and language (Arabic/English). We proposed a detection process based on 27 acoustic features extracted from the consonant-vowel (CV) syllabic units /ba/, /du/, /ki/, /ta/, common to both databases. We tested two classification strategies: multiclass (all emotions combined: joy, sadness, neutral, anger) and binary (neutral vs. others; positive emotions (joy) vs. negative emotions (sadness, anger); sadness vs. anger). These strategies were tested three times: i) on MADED, ii) on RAVDESS, iii) on MADED and RAVDESS together. The proposed method gave better recognition accuracy for binary classification: the rates average 78% for multiclass classification, 100% for neutral vs. others, 100% for the negative emotions (anger vs. sadness), and 96% for positive vs. negative emotions.
This document discusses an analysis of an emotion recognition system through speech signals using K-nearest neighbors (KNN) and Gaussian mixture model (GMM) classifiers. It provides background on the challenges of automatic emotion recognition from speech and describes common features extracted from speech, such as mel frequency cepstrum coefficients and prosodic features. The document outlines the process of an emotion recognition system, including feature extraction, training classifiers on a speech database, and classifying emotions. It then gives more detail on the KNN and GMM classifiers and how they were used to classify six emotional states from the Berlin emotional speech database.
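The two classifier types mentioned here work quite differently: KNN votes among nearest training samples, while the GMM approach fits one mixture per emotion and picks the class with the highest likelihood. An illustrative sketch (not the paper's code; synthetic 2-D features and six stand-in classes):

```python
# Illustrative comparison of KNN and per-class GMM classification on
# synthetic 2-D "features"; labels 0..5 stand in for six emotion classes.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.mixture import GaussianMixture

X, y = make_blobs(n_samples=600, centers=6, cluster_std=1.0, random_state=0)

# KNN: label a sample by majority vote among its k nearest training samples.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

# GMM: fit one mixture per emotion, then pick the class whose model gives
# the highest log-likelihood for the test sample.
gmms = {c: GaussianMixture(n_components=2, random_state=0).fit(X[y == c])
        for c in np.unique(y)}

def gmm_predict(samples):
    scores = np.column_stack([gmms[c].score_samples(samples) for c in sorted(gmms)])
    return np.argmax(scores, axis=1)

print(knn.score(X, y), (gmm_predict(X) == y).mean())
```

Real systems would replace the blobs with MFCC and prosodic feature vectors extracted per utterance.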
Emotion Detection from Voice Based Classified Frame-Energy Signal Using K-Mea... (ijseajournal)
Emotion detection is a new research area in health informatics and forensic technology. Despite some challenges, voice-based emotion recognition is gaining popularity: when a facial image is not available, the voice is the only way to detect the emotional or psychiatric condition of a person. The voice signal, however, is so dynamic that even within a short time frame the voice of the same person can differ. This research therefore rests on two key criteria: first, the training data must be partitioned according to the emotional stage of each individual speaker; second, rather than using the entire voice signal, short significant frames are enough to identify the speaker's emotional condition. Cepstral coefficients (CC) are used as the voice feature, and a fixed-value k-means clustering method is used for feature classification. The value of k depends on the number of emotional situations in human physiology being evaluated, so it does not necessarily scale with the volume of the experimental dataset. In this experiment, three emotional conditions (happy, angry, and sad) were detected from eight female and seven male voice signals. The methodology increased the emotion-detection accuracy rate significantly compared to some recent works and also reduced the CPU time of cluster formation and matching.
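The frame-energy idea above can be sketched in a few lines: split the signal into short frames, compute per-frame energy, keep only the significant (high-energy) frames, and cluster them with a fixed k. This is a hedged toy version with a synthetic signal, not the paper's implementation:

```python
# Minimal sketch: frame a signal, keep high-energy frames, cluster with
# fixed k (k = 3 for happy/angry/sad). A synthetic signal stands in for
# a real voice recording.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three segments with different loudness, 24000 samples total.
signal = rng.normal(scale=np.repeat([0.1, 1.0, 0.4], 8000), size=24000)

frame_len = 400                      # e.g. 25 ms at 16 kHz
frames = signal[: len(signal) // frame_len * frame_len].reshape(-1, frame_len)
energy = (frames ** 2).sum(axis=1)   # short-time frame energy

# Keep only the significant (above-average energy) frames.
significant = energy[energy > energy.mean()]

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(significant.reshape(-1, 1))
print(sorted(km.cluster_centers_.ravel()))
```

A real system would cluster cepstral-coefficient vectors of the significant frames rather than raw energies, per speaker.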
SPEECH EMOTION RECOGNITION SYSTEM USING RNN (IRJET Journal)
This document discusses a speech emotion recognition system using recurrent neural networks (RNNs). It begins with an abstract describing speech emotion recognition and its importance, then provides background on speech emotion databases, feature extraction using MFCC, and classification approaches such as RNNs. It reviews related work on speech emotion recognition using various methods and concludes that MFCC feature extraction and RNN classification were used in the proposed system to take advantage of their performance in machine learning applications. The system aims to help machines understand human interaction and respond based on the user's emotion.
Literature Review On: "Speech Emotion Recognition Using Deep Neural Network" (IRJET Journal)
The document discusses speech emotion recognition using deep neural networks. It first provides an overview of SER and the challenges in the field. It then reviews 20 research papers on the topic, finding that most use deep neural network techniques like CNNs and DNNs for model building. The papers evaluated various datasets and algorithms, with accuracy ranging from 84% to 90%. Overall limitations identified included the need for more data, handling of multiple simultaneous emotions, and improving cross-corpus performance. The literature review contributes to knowledge in using machine learning for SER.
Signal Processing Tool for Emotion Recognition (idescitation)
In realizing modern-day robots that not only perform tasks but also behave like human beings when interacting with the natural environment, it is essential to give robots knowledge of the emotions underlying spoken utterances, enabling them to understand and identify human emotions. For this reason, emphasis is now laid on studying the emotional content of speech, and speech emotion recognition engines have accordingly been proposed. This paper surveys the main aspects of speech emotion recognition: extraction and types of commonly used features, selection of the most informative features from the original feature set, and classification of those features by different techniques, with information on the databases commonly used for speech emotion recognition.
Speech Emotion Recognition Using Neural Networks (ijtsrd)
Speech is the most natural and easy way for people to communicate, and interpreting speech is one of the most sophisticated tasks the human brain performs. The goal of speech emotion recognition (SER) is to identify human emotion from speech, since the tone and pitch of the voice frequently reflect underlying emotions. Librosa was used to analyse audio and music, soundfile to read and write sampled sound file formats, and sklearn to create the model. The study examined the effectiveness of convolutional neural networks (CNN) in recognising spoken emotions. The network's input features are spectrograms of voice samples, and Mel-frequency cepstral coefficients (MFCC) are used to extract characteristics from the audio. A self-recorded voice dataset is used to train and test the models, and the emotions of the speech (happy, sad, angry, neutral, shocked, disgusted) are determined from the evaluation. Anirban Chakraborty "Speech Emotion Recognition Using Neural Networks" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-1, December 2021, URL: https://www.ijtsrd.com/papers/ijtsrd47958.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/47958/speech-emotion-recognition-using-neural-networks/anirban-chakraborty
Speech Emotion Recognition by Using Combinations of Support Vector Machine (S... (mathsjournal)
Speech emotion recognition enables a computer system to record sounds and recognize the emotion of the speaker. We are still far from natural interaction between human and machine because machines cannot distinguish the emotion of the speaker; for this reason a new field of investigation has been established, namely speech emotion recognition systems. The accuracy of these systems depends on various factors, such as the type and number of emotion states and the classifier type. In this paper, the classification methods C5.0, Support Vector Machine (SVM), and the combination of C5.0 and SVM (SVM-C5.0) are verified, and their efficiency in speech emotion recognition is compared. The features used in this research include energy, zero crossing rate (ZCR), pitch, and Mel-scale frequency cepstral coefficients (MFCC). The results demonstrate that the proposed SVM-C5.0 classification method recognizes emotions between 5.5% and 8.9% more accurately than SVM or C5.0 alone, depending on the number of emotion states.
Speech Emotion Recognition by Using Combinations of Support Vector Machine (S... (mathsjournal)
This document summarizes a research paper that evaluates different classification methods for speech emotion recognition: Support Vector Machine (SVM), C5.0, and a combination of the two (SVM-C5.0). The paper extracts features such as energy, zero crossing rate, pitch, and MFCCs from speech samples in the Berlin Emotional Speech Database, which contains utterances expressing seven emotions. These features are classified using SVM, C5.0, and SVM-C5.0, and the results show that SVM-C5.0 performs best, achieving recognition rates 5.5% to 8.9% higher than SVM or C5.0 alone, depending on the number of emotions.
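The classifier-combination idea can be sketched with scikit-learn. C5.0 itself is not available there, so a CART decision tree stands in for it, and the SVM-C5.0 combination is approximated with soft voting; toy features replace the Berlin-database ones. This is an illustration of the general technique, not the paper's method:

```python
# Combining an SVM and a C5.0-like decision tree by soft voting on toy
# seven-class data (stand-in for the seven Berlin emotions).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, n_classes=7,
                           n_informative=12, random_state=0)
svm = SVC(probability=True, random_state=0)
tree = DecisionTreeClassifier(max_depth=8, random_state=0)  # C5.0 stand-in
combo = VotingClassifier([("svm", svm), ("tree", tree)], voting="soft")

for name, clf in [("SVM", svm), ("tree", tree), ("combined", combo)]:
    print(name, cross_val_score(clf, X, y, cv=3).mean().round(3))
```

Whether the combination beats its members depends on the data and the number of emotion states, as the paper's 5.5% to 8.9% range suggests.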
This paper introduces new features based on histograms of MFCC extracted from audio files to improve emotion recognition from speech. Experimental results on the Berlin and PAU databases using SVM and Random Forest classifiers show the proposed features achieve better classification results than current methods. Detailed analysis is provided on speech type (acted vs natural) and gender.
Human emotion recognition is an emerging research field of human-computer interaction based on facial gestures and is being used for real-time analysis in classifying cognitive-affective states from facial video data. Since computers have become an integral part of life, many researchers are using emotion recognition and classification of data based on audio and text, but these approaches offer limited accuracy and relevance in emotion classification. We therefore introduce and analyze a hybrid approach that could outperform existing strategies, supported by an innovative selection of audio and video data characteristics for classification. The research uses SVM to classify data from the audio-visual SAVEE database, and the results show that the maximum classification accuracy with respect to audio data, about 91.6%, could be improved to 99.2% after applying the hybrid strategy.
Speech emotion recognition using 2D-convolutional neural network (IJECEIAES)
This research proposes a speech emotion recognition model that predicts human emotions using a convolutional neural network (CNN) by learning segmented audio of specific emotions. Speech emotion recognition uses features extracted from audio waves to learn speech emotion characteristics; one of them is the mel frequency cepstral coefficient (MFCC). Because the dataset plays a vital role in obtaining valuable results in model learning, this research leverages a combination of datasets. The model learns the combined dataset with audio segmentation and zero padding using a 2D-CNN; segmentation and zero padding equalize the extracted audio features so their characteristics can be learned. The model achieves 83.69% accuracy in predicting seven emotions (neutral, happy, sad, angry, fear, disgust, and surprise) from the combined dataset with segmentation of the audio files.
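The "segmentation and zero padding" step mentioned above is what lets clips of different lengths share one 2D-CNN input shape: each MFCC matrix is right-padded with zeros (or truncated) to a common frame count. A numpy-only sketch with fake MFCC matrices, standing in for real extracted features:

```python
# Zero-pad variable-length MFCC matrices to a fixed frame count so they
# can be stacked into one batch tensor for a 2D-CNN.
import numpy as np

def pad_to(mfcc, n_frames):
    """Zero-pad (or truncate) an (n_mfcc, t) matrix to (n_mfcc, n_frames)."""
    t = mfcc.shape[1]
    if t >= n_frames:
        return mfcc[:, :n_frames]
    return np.pad(mfcc, ((0, 0), (0, n_frames - t)))

rng = np.random.default_rng(0)
# Three "clips" with 13 MFCCs and different frame counts.
clips = [rng.normal(size=(13, t)) for t in (80, 120, 97)]
batch = np.stack([pad_to(c, 120) for c in clips])
print(batch.shape)  # (3, 13, 120)
```

The resulting batch (with a trailing channel axis added) is what a 2D-CNN would consume.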
This document summarizes a research paper on classifying speech using Mel frequency cepstrum coefficients (MFCC) and power spectrum analysis. The paper reviews different classifiers used for speech emotion recognition, including neural networks, Gaussian mixture models, and support vector machines. It proposes using MFCC and power spectrum features as inputs to an artificial neural network classifier to identify emotions in speech, such as anger, happiness, sadness, and neutral states. Testing is performed on emotional speech samples to evaluate the performance and limitations of the proposed speech emotion recognition system.
On the use of voice activity detection in speech emotion recognition (journalBEEI)
Emotion recognition through speech has many potential applications; the challenge is achieving high recognition accuracy with limited resources or in the presence of interference such as noise. In this paper we explore the possibility of improving speech emotion recognition by utilizing the voice activity detection (VAD) concept. The emotional voice data from the Berlin Emotion Database (EMO-DB) and a custom-made database, the LQ Audio Dataset, are first preprocessed by VAD before feature extraction. The features are then passed to a deep neural network for classification; MFCC is chosen as the sole determinant feature. Comparing results with and without VAD, we found that VAD improved the recognition rate of five emotions (happy, angry, sad, fear, and neutral) by 3.7% on clean signals, while using VAD when training a network on both clean and noisy signals improved our previous results by 50%.
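A VAD preprocessing step of the kind described can be as simple as an energy threshold: frames whose energy falls below a fraction of the peak are treated as silence and dropped before feature extraction. A numpy-only sketch (an illustrative threshold VAD, not the paper's implementation):

```python
# Simple energy-threshold VAD: drop low-energy (silent) frames before
# feature extraction.
import numpy as np

def vad_trim(signal, frame_len=400, threshold=0.01):
    """Keep only frames whose mean energy exceeds threshold * peak energy."""
    n = len(signal) // frame_len * frame_len
    frames = signal[:n].reshape(-1, frame_len)
    energy = (frames ** 2).mean(axis=1)
    voiced = frames[energy > threshold * energy.max()]
    return voiced.ravel()

rng = np.random.default_rng(1)
speech = np.concatenate([np.zeros(4000),                    # leading silence
                         rng.normal(scale=0.5, size=8000),  # "voiced" part
                         np.zeros(4000)])                   # trailing silence
trimmed = vad_trim(speech)
print(len(speech), len(trimmed))  # 16000 -> 8000
```

MFCCs would then be computed on the trimmed signal only, which is what removes the noise-prone silent regions from training.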
Efficient Speech Emotion Recognition using SVM and Decision Trees (IRJET Journal)
This document discusses efficient speech emotion recognition using support vector machines and decision trees. It summarizes a research paper that extracted speech features like variance, standard deviation, energy and pitch from an emotional speech corpus containing 535 speech segments expressing seven emotions. The extracted features were used to train and test an SVM classifier for emotion recognition. The classifier achieved an average accuracy of 85% across training and test sets at recognizing the seven emotions. Feature selection techniques were used to address the curse of dimensionality caused by the large number of extracted features.
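The feature-selection step mentioned here (reducing a large utterance-level feature vector to tame the curse of dimensionality before training the SVM) can be sketched with a univariate filter. Toy data stands in for the 535-segment corpus; the specific selector is an assumption, not the paper's method:

```python
# Feature selection before SVM classification: keep the k features most
# correlated with the emotion label, then train the SVM on those.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# 535 "segments" with 120 candidate features and 7 emotion classes.
X, y = make_classification(n_samples=535, n_features=120, n_informative=15,
                           n_classes=7, random_state=0)
clf = make_pipeline(SelectKBest(f_classif, k=20), SVC())
print(cross_val_score(clf, X, y, cv=5).mean().round(3))
```

Putting the selector inside the pipeline keeps the selection inside each cross-validation fold, avoiding leakage.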
Emotion Recognition through Speech Analysis using various Deep Learning Algor... (IRJET Journal)
This document summarizes a research paper on emotion recognition through speech analysis using deep learning algorithms. The researchers used datasets containing speech samples labeled with seven emotions to train and test convolutional neural network (CNN), support vector machine (SVM), recurrent neural network (RNN), and random forest models. They found that the RNN model achieved the highest testing accuracy at 75.31% for emotion recognition. The researchers concluded that speech emotion recognition systems could be useful for applications like dialogue systems, call centers, student voice reviews, and building healthy relationships.
The document describes AffectNet, a new database of over 1 million facial images collected from the internet and annotated for facial expressions, valence, and arousal. About 450,000 images were manually annotated for the presence of 7 discrete facial expressions in the categorical model and intensity of valence and arousal in the dimensional model. This makes AffectNet the largest database of facial expressions in the wild annotated for both categorical and dimensional models of affect. Two neural network baselines are used to classify images by expression in the categorical model and predict valence and arousal in the dimensional model, showing improved performance over conventional methods.
IRJET- Facial Expression Recognition System using Neural Network based on...IRJET Journal
This document describes a facial expression recognition system using a neural network approach. It uses the Japanese Female Facial Expressions (JAFFE) database to classify 7 facial expressions. The system extracts features using 2D discrete cosine transform (DCT), local binary patterns (LBP), and histogram of oriented gradients (HOG). These features are used to create a hybrid feature vector for each image. A single hidden layer feedforward neural network is trained on the feature vectors using different learning algorithms to classify the expressions. Experimental results show that a neural network trained with gradient descent and adaptive learning rate achieves the highest average accuracy of 97.2% for classifying expressions in the JAFFE database.
A comparative analysis of classifiers in emotion recognition thru acoustic fea...Pravena Duplex
This document presents a comparative analysis of different classifiers for emotion recognition through acoustic features. It analyzes prosody features like energy and pitch as well as spectral features like MFCCs. Feature fusion, which combines prosody and spectral features, improves classification performance for LDA, RDA, SVM and kNN classifiers by around 20% compared to using features individually. Results on the Berlin and Spanish emotional speech databases show that RDA performs best as it avoids the singularity problem that affects LDA when dimensionality is high relative to the number of training samples.
IRJET- Prediction of Human Facial Expression using Deep LearningIRJET Journal
This document discusses predicting human facial expressions using deep learning. It begins with an introduction to emotion recognition, facial emotion recognition, and deep learning techniques. It then reviews related literature on facial expression recognition using methods like CNNs, active appearance models, and analyzing facial features. The proposed system and methodology are described, including face registration, feature extraction focusing on action units, and using classifiers like KNN, MLP, and SVM to classify emotions based on these features. The goal is to recognize seven basic emotions (neutral, joy, sadness, surprise, anger, fear, disgust) in still images using deep learning techniques.
STUDENT TEACHER INTERACTION ANALYSIS WITH EMOTION RECOGNITION FROM VIDEO AND ...IRJET Journal
1) The document discusses a study analyzing student-teacher interactions through emotion recognition from video and audio inputs.
2) It aims to understand how students' emotional states relate to comprehension by defining facial behaviors associated with emotions.
3) The study examines the usefulness of recognizing facial expressions and voice between a teacher and student in a classroom setting. It analyzes whether students' facial expressions convey emotions in relation to understanding.
A Study to Assess the Effectiveness of Planned Teaching Programme on Knowledg...ijtsrd
Suctioning is a common procedure performed by nurses to maintain gas exchange, adequate oxygenation and alveolar ventilation in critically ill patients under mechanical ventilation, and the aim of this research is to provide knowledge regarding maintaining airway patency with suctioning care, which will help improve the quality of nursing care and eventually lead to better results. The planned study is a pre-experimental study to assess the effectiveness of a planned teaching programme on knowledge regarding airway patency in patients on a mechanical ventilator among the B.Sc. internship students of a selected college of nursing at Moradabad: to assess the level of knowledge regarding maintaining airway patency in patients on a mechanical ventilator among B.Sc. nursing internship students, and to assess the effectiveness of the planned teaching programme in terms of knowledge regarding airway patency among B.Sc. nursing internship students. The purpose of this study is to examine the association between knowledge and effectiveness regarding airway patency among B.Sc. nursing internship students and their selected demographic variables. A pre-experimental study was conducted among 86 participants, selected by a non-probability convenient sampling method. A demographic proforma and a self-structured questionnaire were used to collect data from the B.Sc. internship students.
Nafees Ahmed | Sana Usmani "A Study to Assess the Effectiveness of Planned Teaching Programme on Knowledge Regarding Maintaining Airway Patency in Patients with Mechanical Ventilator" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-1 , December 2021, URL: https://www.ijtsrd.com/papers/ijtsrd47917.pdf Paper URL: https://www.ijtsrd.com/medicine/nursing/47917/a-study-to-assess-the-effectiveness-of-planned-teaching-programme-on-knowledge-regarding-maintaining-airway-patency-in-patients-with-mechanical-ventilator/nafees-ahmed
Speech Emotion Recognition Using Neural Networksijtsrd
Speech is the most natural and easy method for people to communicate, and interpreting speech is one of the most sophisticated tasks that the human brain performs. The goal of Speech Emotion Recognition (SER) is to identify human emotion from speech, since the tone and pitch of the voice frequently reflect underlying emotions. Librosa was used to analyse audio and music, SoundFile was used to read and write sampled sound file formats, and sklearn was used to create the model. The study examined the effectiveness of Convolutional Neural Networks (CNNs) in recognising spoken emotions. The network's input features are spectrograms of voice samples, and Mel-Frequency Cepstral Coefficients (MFCC) are used to extract characteristics from the audio. The authors' own voice dataset is used to train and test the algorithms, and the emotions of the speech (happy, sad, angry, neutral, shocked, disgusted) are determined based on the evaluation. Anirban Chakraborty, "Speech Emotion Recognition Using Neural Networks", Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6, Issue-1, December 2021, URL: https://www.ijtsrd.com/papers/ijtsrd47958.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/47958/speech-emotion-recognition-using-neural-networks/anirban-chakraborty
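Several of the works listed here rely on MFCC features. As an illustrative sketch only (the frame length, hop, filter count and FFT size below are assumptions, not settings from any of the cited papers), the standard pipeline of framing, windowing, power spectrum, mel filterbank, log compression and DCT can be written in NumPy/SciPy as:

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, frame_len=400, hop=160, n_fft=512, n_mels=26, n_coeffs=13):
    # Slice into overlapping frames and apply a Hamming window.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log mel energies, then a DCT to decorrelate into cepstral coefficients.
    log_mel = np.log(power @ fbank.T + 1e-10)
    return dct(log_mel, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```

With these defaults one 13-coefficient vector is produced per 25 ms frame at 16 kHz; libraries such as librosa compute the same representation with `librosa.feature.mfcc`.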
Speech Emotion Recognition by Using Combinations of Support Vector Machine (S...mathsjournal
Speech emotion recognition enables a computer system to record sounds and recognize the emotion of the speaker. We are still far from natural interaction between human and machine because machines cannot distinguish the emotion of the speaker. For this reason a new field of investigation has been established, namely speech emotion recognition systems. The accuracy of these systems depends on various factors, such as the type and number of emotion states and the classifier type. In this paper, the classification methods of C5.0, Support Vector Machine (SVM), and the combination of C5.0 and SVM (SVM-C5.0) are verified, and their efficiencies in speech emotion recognition are compared. The features utilized in this research include energy, Zero Crossing Rate (ZCR), pitch, and Mel-scale Frequency Cepstral Coefficients (MFCC). The results demonstrate that the proposed SVM-C5.0 classification method recognizes emotions between 5.5% and 8.9% more accurately than SVM or C5.0 alone, depending on the number of emotion states.
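The frame-level prosodic features named in that abstract (energy, zero-crossing rate, pitch) are straightforward to compute. The sketch below is generic, under assumed frame sizes, and its autocorrelation pitch estimate is deliberately crude; it is not the cited paper's implementation:

```python
import numpy as np

def frame_features(signal, sr, frame_len=400, hop=160):
    """Per-frame energy, zero-crossing rate and a crude autocorrelation pitch."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        # Short-time energy of the frame.
        energy = float(np.sum(frame ** 2))
        # ZCR: fraction of adjacent sample pairs whose signs differ.
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        # Pitch: lag of the autocorrelation peak inside a 60-400 Hz search range.
        ac = np.correlate(frame, frame, mode='full')[frame_len - 1:]
        lo, hi = sr // 400, sr // 60
        lag = lo + int(np.argmax(ac[lo:hi]))
        feats.append((energy, zcr, sr / lag))
    return np.array(feats)
```

For a 200 Hz tone at a 16 kHz sampling rate the ZCR comes out near 400 crossings per second (0.025 per sample) and the pitch column near 200 Hz, which is a quick sanity check for implementations like this.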
This paper introduces new features based on histograms of MFCC extracted from audio files to improve emotion recognition from speech. Experimental results on the Berlin and PAU databases using SVM and Random Forest classifiers show the proposed features achieve better classification results than current methods. Detailed analysis is provided on speech type (acted vs natural) and gender.
Human emotion recognition is an emerging research field of human-computer interaction based on facial gestures and is being used for real-time classification of cognitive-affective states from facial video data. Since computers have become an integral part of life, many researchers perform emotion recognition and classification based on audio and text, but these approaches offer limited accuracy and relevance in emotion classification. The authors therefore introduce and analyze a hybrid approach intended to outperform existing strategies, supported by selection of audio and video data characteristics for classification. The research uses SVM to classify data from the audiovisual SAVEE database, and the results show that the maximum classification accuracy of about 91.6% for audio data alone could be improved to 99.2% after applying the hybrid strategy.
Speech emotion recognition using 2D-convolutional neural networkIJECEIAES
This research proposes a speech emotion recognition model to predict human emotions using the convolutional neural network (CNN) by learning segmented audio of specific emotions. Speech emotion recognition utilizes the extracted features of audio waves to learn speech emotion characteristics; one of them is mel frequency cepstral coefficient (MFCC). Dataset takes a vital role to obtain valuable results in model learning. Hence this research provides the leverage of dataset combination implementation. The model learns a combined dataset with audio segmentation and zero padding using 2D-CNN. Audio segmentation and zero padding equalize the extracted audio features to learn the characteristics. The model results in 83.69% accuracy to predict seven emotions: neutral, happy, sad, angry, fear, disgust, and surprise from the combined dataset with the segmentation of the audio files.
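The segmentation-plus-zero-padding step that abstract describes, which equalizes segment lengths before a 2D-CNN, can be illustrated with a minimal NumPy helper; the segment length here is an arbitrary placeholder, not the paper's value:

```python
import numpy as np

def segment_and_pad(signal, seg_len):
    """Split a 1-D audio signal into fixed-length segments,
    zero-padding the tail so every segment has equal length."""
    n_segs = int(np.ceil(len(signal) / seg_len))
    padded = np.zeros(n_segs * seg_len, dtype=signal.dtype)
    padded[:len(signal)] = signal
    return padded.reshape(n_segs, seg_len)
```

Equal-length segments let every example be mapped to a fixed-size feature matrix, which is what a 2D-CNN input layer requires.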
This document summarizes a research paper on classifying speech using Mel frequency cepstrum coefficients (MFCC) and power spectrum analysis. The paper reviews different classifiers used for speech emotion recognition, including neural networks, Gaussian mixture models, and support vector machines. It proposes using MFCC and power spectrum features as inputs to an artificial neural network classifier to identify emotions in speech, such as anger, happiness, sadness, and neutral states. Testing is performed on emotional speech samples to evaluate the performance and limitations of the proposed speech emotion recognition system.
On the use of voice activity detection in speech emotion recognitionjournalBEEI
Emotion recognition through speech has many potential applications, however the challenge comes from achieving a high emotion recognition while using limited resources or interference such as noise. In this paper we have explored the possibility of improving speech emotion recognition by utilizing the voice activity detection (VAD) concept. The emotional voice data from the Berlin Emotion Database (EMO-DB) and a custom-made database LQ Audio Dataset are firstly preprocessed by VAD before feature extraction. The features are then passed to the deep neural network for classification. In this paper, we have chosen MFCC to be the sole determinant feature. From the results obtained using VAD and without, we have found that the VAD improved the recognition rate of 5 emotions (happy, angry, sad, fear, and neutral) by 3.7% when recognizing clean signals, while the effect of using VAD when training a network with both clean and noisy signals improved our previous results by 50%.
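The abstract above does not specify its VAD algorithm; a minimal energy-threshold sketch (an assumption, not the paper's actual VAD) conveys the idea of discarding low-energy frames before feature extraction:

```python
import numpy as np

def energy_vad(signal, frame_len=400, threshold_ratio=0.1):
    """Keep only non-overlapping frames whose short-time energy exceeds
    a fraction of the maximum frame energy; return them concatenated."""
    n = len(signal) // frame_len
    frames = signal[:n * frame_len].reshape(n, frame_len)
    energies = (frames ** 2).sum(axis=1)
    keep = energies > threshold_ratio * energies.max()
    return frames[keep].ravel()
```

Running feature extraction only on the retained samples is what allows VAD preprocessing to improve recognition rates, especially on noisy signals.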
Signal & Image Processing : An International Journal
Signal & Image Processing: An International Journal (SIPIJ) Vol.13, No.1, February 2022
DOI: 10.5121/sipij.2022.13104
ASERS-CNN: ARABIC SPEECH EMOTION RECOGNITION SYSTEM BASED ON CNN MODEL

Mohammed Tajalsir¹, Susana Muñoz Hernández² and Fatima Abdalbagi Mohammed³

¹Department of Computer Science, Sudan University of Science and Technology, Khartoum, Sudan
²Technical University of Madrid (UPM), Computer Science School (FI), Madrid, Spain
³Department of Computer Science, Sudan University of Science and Technology, Khartoum, Sudan
ABSTRACT
When two people are on the phone, although they cannot observe each other's facial expression and physiological state, it is possible to roughly estimate the speaker's emotional state from the voice. In medical care, if the emotional state of a patient, especially a patient with an expression disorder, can be known, different care measures can be taken according to the patient's mood to improve the quality of care. A system capable of recognizing the emotional state of a human being from speech is known as a speech emotion recognition (SER) system. Deep learning is one of the techniques most widely used in emotion recognition studies; in this paper we implement a CNN model for Arabic speech emotion recognition. We propose the ASERS-CNN model for Arabic Speech Emotion Recognition based on a CNN model. We evaluated our model using an Arabic speech dataset named the Basic Arabic Expressive Speech corpus (BAES-DB). In addition, we compare the accuracy of our previous ASERS-LSTM model and the new ASERS-CNN model proposed in this paper, and find that the newly proposed model outperforms the ASERS-LSTM model, achieving 98.18% accuracy.
KEYWORDS:
BAES-DB, ASERS-LSTM, Deep learning, Speech emotion recognition.
1. INTRODUCTION
Speech is composed of words that are meaningful in a situation and used for communication, to express thoughts and opinions. Emotions are mental states, such as happiness, love, fear, anger, or joy, that affect human behavior. It is easy for humans, using their available senses, to detect emotional states from a speaker's speech, but this is a very difficult task for machines. Research on speech emotion recognition generally starts with the study of phonetic features in the field of phonetics, and some studies make use of linguistic characteristics, for example special vocabulary, syntax, etc. In general, all studies have shown that the performance of automatic speech emotion recognition is inferior to the human ability to recognize emotions.
The Arabic language is spoken by more than 300 million people. There is therefore a need to develop Arabic speech emotion recognition systems, so that a computer can recognize not only what is spoken but also how it is spoken. However, Arabic speech emotion recognition is challenged by many factors, such as cultural effects and determining the suitable
Signal & Image Processing: An International Journal (SIPIJ) Vol.13, No.1, February 2022
features that classify the emotions. There are few Arabic speech datasets available for training Arabic speech emotion recognition models. The most difficult task related to the speech samples is finding reliable data: most datasets are not public, and most datasets related to sentiment analysis are simulated rather than real. It is very difficult to record people's real emotions. In some works related to detecting lying or stress, people can be forced to lie or be stressed, but this is very difficult for other emotions of interest such as happiness, sadness, and anger.
Deep learning is an area of machine learning research that moves machine learning closer to one of its original goals: artificial intelligence.
Few research works have focused on Arabic emotion recognition, such as [2], [3], [4], [5], [6], [7] and [8]. In [2], emotion recognition was performed using Arabic natural real-life utterances for the first time. A realistic speech corpus was collected from Arabic TV shows, and three emotions (happy, angry and surprised) were recognized. Low-level descriptors (LLDs), i.e. acoustic and spectral features, were extracted; 15 statistical functions and the delta coefficient were computed for every feature, and ineffective features were then removed using the Kruskal–Wallis non-parametric test, leading to a new database with 845 features. Thirty-five classifiers belonging to six classification groups (Trees, Rules, Bayes, Lazy, Functions and Meta) were applied to the extracted features. The sequential minimal optimization (SMO) classifier (Functions group) outperformed the others, giving 95.52% accuracy.
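A Kruskal–Wallis feature-selection step of this kind can be sketched as follows (a minimal illustration using SciPy; the variable names and the 0.05 significance threshold are our assumptions, not details taken from [2]):

```python
import numpy as np
from scipy.stats import kruskal

def filter_features(X, y, alpha=0.05):
    """Keep only feature columns whose values differ significantly
    across emotion classes according to the Kruskal-Wallis H-test."""
    keep = []
    classes = np.unique(y)
    for j in range(X.shape[1]):
        groups = [X[y == c, j] for c in classes]
        _, p = kruskal(*groups)
        if p < alpha:  # feature discriminates between the emotion classes
            keep.append(j)
    return keep

# toy illustration: feature 0 separates the classes, feature 1 is noise
rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2], 40)
X = np.column_stack([y + rng.normal(0, 0.1, 120),   # informative
                     rng.normal(0, 1, 120)])        # uninformative
print(filter_features(X, y))
```

Features that fail the test are simply dropped, shrinking the feature matrix before classification.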
In [3], the emotion recognition system was enhanced by a new two-phase approach. The first phase aims to remove from the original corpus the units that were misclassified by most classifiers, labelling each video unit misclassified by more than twenty-four methods as "1" and good units as "0"; the second phase removes all videos of type "1" from the original database and then retrains all classification models on the new database. The enhanced model improved the accuracy by 3% for all thirty-five classification models. The accuracy of the sequential minimal optimization (SMO) classifier improved from 95.52% to 98.04%.
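The first-phase flagging rule is essentially a majority vote over classifier errors, which can be sketched as follows (a minimal NumPy illustration of the ">24 of 35 classifiers" rule; the function and variable names are ours):

```python
import numpy as np

def flag_bad_units(errors, threshold=24):
    """errors: boolean matrix (n_units x n_classifiers), True where a
    classifier misclassified the unit.  A unit is flagged '1' (bad) when
    more than `threshold` classifiers got it wrong; '0' otherwise."""
    return (errors.sum(axis=1) > threshold).astype(int)

# toy example: 3 units judged by 35 classifiers
errors = np.zeros((3, 35), dtype=bool)
errors[0, :30] = True   # misclassified by 30 classifiers -> flagged bad
errors[2, :10] = True   # misclassified by only 10 -> kept
print(flag_bad_units(errors))  # → [1 0 0]
```

All units flagged "1" are then removed before the models are retrained.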
In [4], two neural architectures were used to develop an emotion recognition system for Arabic data on the KSUEmotions dataset: an attention-based CNN-LSTM-DNN model and a strong deep CNN model as a baseline. In the first emotion classifier, CNN layers are used to extract audio signal features, and bi-directional LSTM (BLSTM) layers are used to model the sequential phenomena of the speech signal; these are followed by an attention layer that extracts a summary vector, which is fed to a DNN layer that finally connects to a softmax layer. The results show that the attention-based CNN-LSTM-DNN approach can produce significant improvements (2.2% absolute) over the baseline system.
A semi-natural Egyptian Arabic speech emotion (EYASE) database is introduced in [5]; it was created from an Egyptian TV series. Prosodic, spectral and wavelet features, including pitch, intensity, formants, Mel-frequency cepstral coefficients (MFCC), long-term average spectrum (LTAS) and wavelet parameters, are extracted to recognize four emotions: angry, happy, neutral and sad. Several experiments were performed: emotion vs. neutral classification, arousal and valence classification, and multi-emotion classification, in both speaker-independent and speaker-dependent settings. The analysis finds that gender and culture affect SER. Furthermore, for the EYASE database, anger was the most readily detected emotion, while happiness was the most challenging. Arousal (angry/sad) recognition rates were shown to be superior to valence (angry/happy) recognition rates. In most cases, speaker-dependent SER performance exceeds speaker-independent performance.
In [6], five ensemble models (Bagging, AdaBoost, LogitBoost, Random Subspace and Random Committee) were employed, and their effect on a speech emotion recognition system was studied. Among all single classifiers, the highest accuracy for recognizing the happy, angry and surprised emotions in the Arabic Natural Audio Dataset (ANAD) was obtained by SMO with 95.52%. After applying the ensemble models to 19 single classifiers, the accuracy of the SMO classifier improved from 95.52% to 95.95%, the best enhanced result. The boosting technique with Naïve Bayes Multinomial as base classifier achieved the highest improvement in accuracy, 19.09%.
In [7], a system was built to automatically recognize emotion in speech using a corpus of phonetically balanced Arabic expressive sentences, and the influence of speaker dependency on the results was studied. The joy, sadness, anger and neutral emotions were recognized after extracting the cepstral features, their first and second derivatives, shimmer, jitter and duration. A multilayer perceptron neural network (MLP) was used to recognize emotion. The experiments show that the recognition rate can exceed 98% for intra-speaker classification but reaches only 54.75% for inter-speaker classification; the system's dependence on the speaker is thus obvious.
A natural Arabic audio-visual dataset was designed in [8], consisting of audio-visual recordings from the Algerian TV talk show "Red line". The dataset was recorded by 14 speakers, with 1,443 complete sentences of speech. The openSMILE feature extraction tool was used to extract a variety of acoustic features (energy, pitch, ZCR, spectral features, MFCCs, and Line Spectral Frequencies (LSP)) to recognize five emotions: enthusiasm, admiration, disapproval, neutral, and joy. The min, max, range, standard deviation and mean statistical functions were applied to all extracted features. The WEKA toolkit was used to run several classification algorithms. The experimental results show that the SMO classifier with the full feature set of energy, pitch, ZCR, spectral features, MFCCs and LSP (430 features) achieves the best classification result (0.48, measured as a weighted average of the f-measure), and that enthusiasm is the most difficult of the five emotions to recognize.
The rest of this paper is organized as follows: Section 2 briefly describes the methods and techniques used in this work. The results are detailed and discussed in Section 3. Finally, the conclusions are drawn in Section 4.
2. METHODS AND TECHNIQUES
Our methodology for recognizing emotion consists of the following phases: pre-processing the signal, feature extraction, training the model and, finally, testing the model, as shown in Figure 1.
Figure 1: The methodology for recognizing emotion
2.1. Dataset
The Basic Arabic Expressive Speech corpus (BAES-DB) is the dataset used in our experiments. The corpus consists of 13 speakers, 4 emotions and 10 sentences; in total it contains 520 sound files. Each file contains one of the ten sentences uttered in one of the four emotional states by one of the thirteen speakers. The four selected emotions are neutral, joy, sadness and anger. Each speaker recorded all ten sentences in a given condition before moving on to the next one. The first four speakers were recorded while sitting, whereas the other nine were standing.
2.2. Preprocessing
In the preprocessing step, pre-emphasis and silence removal are the two preprocessing algorithms that we apply in this paper, as in our previous work.
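These two steps can be sketched as follows (a minimal NumPy illustration; the filter coefficient, frame length and energy threshold are assumed values, not the exact ones used in our experiments):

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """High-pass filter the signal: y[t] = x[t] - alpha * x[t-1]."""
    return np.append(x[0], x[1:] - alpha * x[:-1])

def remove_silence(x, frame_len=400, threshold=1e-3):
    """Split the signal into frames and keep only frames whose
    short-time energy exceeds the threshold."""
    n = len(x) // frame_len
    frames = x[:n * frame_len].reshape(n, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    return frames[energy > threshold].ravel()

# toy signal: half a second of silence followed by half a second of tone
sr = 16000
x = np.concatenate([np.zeros(sr // 2),
                    0.5 * np.sin(2 * np.pi * 440 * np.arange(sr // 2) / sr)])
trimmed = remove_silence(pre_emphasis(x))
print(len(x), len(trimmed))  # → 16000 8000 (the silent half is dropped)
```

Pre-emphasis boosts the high-frequency content before feature extraction, while silence removal discards frames that carry no emotional information.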
2.3. Feature Extraction
Human beings can infer the emotional state of a speaker from the other party's voice. Subjectively, we know that when a person is angry, the speech rate increases, the loudness of the voice increases, and the pitch rises; when a person is sad, the speech rate slows down, the loudness decreases, the pitch falls, and so on. This shows that the human voice is significantly affected by the speaker's emotions. When people are in different emotional states, the sounds they produce have different acoustic characteristics, generally called features. We therefore first need to extract features from the speech in order to detect the speaker's emotions. In this paper, five feature types are investigated using the CNN architecture: Mel-frequency cepstral coefficient (MFCC) features, chromagram, Mel-scaled spectrogram, spectral contrast and tonal centroid features (tonnetz).
2.4. Convolutional Neural Network
A convolutional neural network (CNN) is one of the most popular algorithms for deep learning with images and video [10]. Like other neural networks, a CNN is composed of an input layer, an output layer, and many hidden layers in between.
2.4.1. Training CNN model
We implemented the CNN model for emotion classification using the Keras deep learning library with a TensorFlow backend on a laptop with an Intel(R) Core™ i7-3520 CPU @ 2.90 GHz, 8 GB of RAM and a 64-bit operating system. The ASERS-CNN model architecture proposed in this paper consists of 4 blocks, each containing a convolution, batch normalization, dropout and max-pooling layer, as shown in Figure 2.
Figure 2: ASERS-CNN model
The model was trained using the Adam optimization algorithm with a dropout rate of 0.2. The initial learning rate was set to 0.001 and the batch size to 32 for 50 epochs, with the categorical cross-entropy loss function. The accuracy and loss curves for the model are shown in Figure 3.
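Assuming the averaged features are treated as a one-dimensional input, the four-block architecture and the training configuration can be sketched in Keras as follows (the filter counts and kernel size are illustrative assumptions; only the block structure, dropout rate, optimizer, learning rate, batch size and loss come from the text above):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_asers_cnn(n_features=193, n_classes=4):
    """Four Conv1D blocks (convolution, batch norm, dropout, max-pooling)
    followed by a softmax classifier over the four emotions."""
    model = keras.Sequential([layers.Input(shape=(n_features, 1))])
    for filters in (64, 128, 128, 256):   # filter counts are assumed
        model.add(layers.Conv1D(filters, kernel_size=5, padding="same",
                                activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.Dropout(0.2))
        model.add(layers.MaxPooling1D(pool_size=2))
    model.add(layers.Flatten())
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_asers_cnn()
# model.fit(X_train, y_train, batch_size=32, epochs=50,
#           validation_data=(X_val, y_val))
```

Each max-pooling layer halves the temporal resolution, so the four blocks progressively compress the feature sequence before the final dense layer.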
Figure 3: (a) CNN model accuracy curve; (b) CNN model loss curve
3. RESULTS AND DISCUSSION
In order to evaluate the performance of the proposed CNN model, we used the Basic Arabic Expressive Speech corpus (BAES-DB), which contains 520 wav files for 13 speakers. For training and evaluating the model, we used four categorical emotions, angry, happy, sad and neutral, which represent the majority of the emotion categories in the database. For the low-level acoustic features, we extract 5 features: Mel-frequency cepstral coefficient (MFCC) features, chromagram, Mel-scaled spectrogram, spectral contrast and tonal centroid features (tonnetz). To determine whether the number of training epochs has any effect on the accuracy of emotion classification, we trained the model using 200 and 50 epochs. We also studied the effect of the number of features extracted from each wav file.
Table 1 summarizes the comparison between the three models, DNN, LSTM and CNN. With 5 features, the DNN achieves 93.34% accuracy before the preprocessing step and 97.78% after preprocessing; with 7 features, it achieves 92.95% before preprocessing and 98.06% after. When the number of epochs is decreased from 200 to 50, it achieves 97.04%. With 5 features, the CNN model achieves 98.18% before preprocessing and 98.52% after; with 7 features, it achieves 98.40% before preprocessing and 98.46% after. When the number of epochs is decreased from 200 to 50, it achieves 95.79%. With 5 features, the LSTM model achieves 96.81% before preprocessing and 97.44% after; with 7 features, it achieves 96.98% before preprocessing and 95.84% after. When the number of epochs is decreased from 200 to 50, it achieves 95.16%. As Table 1 shows, the best accuracy for the CNN model is obtained when we use 200 epochs with the two preprocessing steps and five extracted features.
8. Signal & Image Processing: An International Journal (SIPIJ) Vol.13, No.1, February 2022
52
Table 1: Comparison of the accuracy (%) of the CNN, DNN and LSTM models

Classifier | 200 epochs, 5 features (without / with preprocessing) | 200 epochs, 7 features (without / with preprocessing) | 50 epochs, 7 features (with preprocessing)
DNN        | 93.34% / 97.78%                                       | 92.95% / 98.06%                                       | 97.04%
CNN        | 98.18% / 98.52%                                       | 98.40% / 98.46%                                       | 95.79%
LSTM       | 96.81% / 97.44%                                       | 96.98% / 95.84%                                       | 95.16%
The results are compared with related works including (Klaylat et al., 2018a), (Klaylat et al., 2018b), (Schuller, Rigoll and Lang, 2014), (Shaw, 2016) and (Farooque et al., 2004), as shown in Table 2 below. It is obvious from Table 2 that our proposed CNN, which obtains 98.52%, outperforms the other state-of-the-art models. (Klaylat et al., 2018b) obtains better accuracy than our proposed LSTM, achieving 98.04% against 97.44% for our proposed LSTM. Our proposed DNN outperforms (Klaylat et al., 2018a), [12], [13] and [14].
Table 2: Comparison between our proposed models and related works

Ref      | Classifier                                      | Features                                                                                                  | Database         | Accuracy
[2]      | Thirty-five classifiers belonging to six groups | Low-level descriptors (LLDs): acoustic and spectral features                                              | Created database | 95.52%
[3]      | Thirty-five classifiers belonging to six groups | Low-level descriptors (LLDs): acoustic and spectral features                                              | Created database | 98.04%
[12]     | HMM, GMM                                        | Pitch and energy                                                                                          | -                | 78%
[13]     | ANN                                             | Pitch, energy, MFCC, formant frequencies                                                                  | Created database | 85%
[14]     | RFuzzy model                                    | SP-IN, TDIFFVUV and TDIFFW                                                                                | Created database | 90%
Our DNN  | DNN network                                     | MFCC features, chromagram, Mel-scaled spectrogram, spectral contrast and tonal centroid features (tonnetz) | BAES-DB          | 98.06%
Our CNN  | CNN network                                     | Same five features                                                                                        | BAES-DB          | 98.52%
Our LSTM | LSTM network                                    | Same five features                                                                                        | BAES-DB          | 97.44%
4. CONCLUSION
In this paper, we proposed the ASERS-CNN deep learning model for Arabic speech emotion recognition (ASER). We used an Arabic speech dataset named the Basic Arabic Expressive Speech corpus (BAES-DB) for our evaluation. With 5 features, the model achieves 98.18% accuracy before any preprocessing and 98.52% after preprocessing; with 7 features, it achieves 98.40% before preprocessing and 98.46% after. When we decrease the number of epochs from 200 to 50, it achieves 95.79%. We compared the accuracy of the proposed ASERS-CNN model with our previous LSTM and DNN models. The proposed ASERS-CNN model outperforms the previous LSTM and DNN models: it achieves 98.52% accuracy, while the DNN and LSTM achieve 98.06% and 97.44%, respectively. We also compared the accuracy of the proposed ASERS-CNN model with related works including (Klaylat et al., 2018a), (Klaylat et al., 2018b), (Schuller, Rigoll and Lang, 2014), (Shaw, 2016) and (Farooque et al., 2004), as shown in Table 2. It is clear from Table 2 that our proposed CNN outperforms the other state-of-the-art models.
REFERENCES
[1] F. Chollet, Deep Learning with Python. Manning Publications, 2017.
[2] S. Klaylat, Z. Osman, L. Hamandi, and R. Zantout, “Emotion recognition in Arabic speech,” Analog
Integr. Circuits Signal Process., vol. 96, no. 2, pp. 337–351, 2018.
[3] S. Klaylat, Z. Osman, L. Hamandi, and R. Zantout, “Enhancement of an Arabic Speech Emotion
Recognition System,” Int. J. Appl. Eng. Res., vol. 13, no. 5, pp. 2380–2389, 2018.
[4] Y. Hifny and A. Ali, “Efficient Arabic emotion recognition using deep neural networks,” pp. 6710–
6714, 2019.
[5] L. Abdel-Hamid, “Egyptian Arabic speech emotion recognition using prosodic, spectral and wavelet
features,” Speech Commun., vol. 122, pp. 19–30, 2020.
[6] S. Klaylat, Z. Osman, L. Hamandi, and R. Zantout, “Ensemble Models for Enhancement of an Arabic
Speech Emotion Recognition System,” vol. 70, no. January, pp. 299–311, 2020.
[7] I. Hadjadji, L. Falek, L. Demri, and H. Teffahi, “Emotion recognition in Arabic speech,” 2019 Int.
Conf. Adv. Electr. Eng. ICAEE 2019, 2019.
[8] H. Dahmani, H. Hussein, B. Meyer-Sickendiek, and O. Jokisch, “Natural Arabic Language Resources
for Emotion Recognition in Algerian Dialect,” vol. 2, no. October, pp. 18–33, 2019.
[9] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016.
[10] Mathworks, “Introducing Deep Learning with MATLAB,” Introd. Deep Learn. with MATLAB, p. 15,
2017.
[11] S. Hung-Il, “Chapter 1 - An Introduction to Neural Networks and Deep Learning,” Deep Learning for
Medical Image Analysis. pp. 3–24, 2017.
[12] B. Schuller, G. Rigoll, and M. Lang, “Hidden Markov Model-based Speech Emotion Recognition,”
no. August 2003, 2014.
[13] A. Shaw, “Emotion Recognition and Classification in Speech using Artificial Neural Networks,” vol.
145, no. 8, pp. 5–9, 2016.
[14] M. Farooque, S. Munoz-hernandez, C. De Montegancedo, and B. Monte, “Prototype from Voice
Speech Analysis,” 2004.