Human emotion recognition is an emerging research field in human-computer interaction, based on facial gestures, and is used for real-time classification of cognitive-affective states from facial video data. Since computers have become an integral part of life, many researchers approach emotion recognition and classification using audio and text data. However, these approaches offer limited accuracy and relevance in emotion classification. We therefore introduce and analyze a hybrid approach that can outperform existing strategies, supported by a selection of audio and video data characteristics for classification. The research uses an SVM to classify the audiovisual SAVEE database; the results show that the maximum classification accuracy of about 91.6% for audio data alone improves to 99.2% after applying the hybrid strategy.
Emotion recognition based on the energy distribution of plosive syllablesIJECEIAES
We usually encounter two problems during speech emotion recognition (SER): expression and perception problems, which vary considerably between speakers, languages, and sentence pronunciation. In fact, finding an optimal system that characterizes the emotions overcoming all these differences is a promising prospect. In this perspective, we considered two emotional databases: Moroccan Arabic dialect emotional database (MADED), and Ryerson audio-visual database on emotional speech and song (RAVDESS) which present notable differences in terms of type (natural/acted), and language (Arabic/English). We proposed a detection process based on 27 acoustic features extracted from consonant-vowel (CV) syllabic units: /ba/, /du/, /ki/, /ta/, common to both databases. We tested two classification strategies: multiclass (all emotions combined: joy, sadness, neutral, anger) and binary (neutral vs. others, positive emotions (joy) vs. negative emotions (sadness, anger), sadness vs. anger). These strategies were tested three times: i) on MADED, ii) on RAVDESS, iii) on MADED and RAVDESS. The proposed method gave better recognition accuracy in the case of binary classification. The rates reach an average of 78% for the multiclass classification, 100% for neutral vs. other cases, 100% for the negative emotions (i.e. anger vs. sadness), and 96% for the positive vs. negative emotions.
Speech emotion recognition using 2D-convolutional neural networkIJECEIAES
This research proposes a speech emotion recognition model to predict human emotions using a convolutional neural network (CNN) by learning segmented audio of specific emotions. Speech emotion recognition utilizes features extracted from audio waves to learn speech emotion characteristics; one of them is the mel frequency cepstral coefficient (MFCC). The dataset plays a vital role in obtaining valuable results in model learning; hence this research leverages a combination of datasets. The model learns the combined dataset with audio segmentation and zero padding using a 2D-CNN. Audio segmentation and zero padding equalize the extracted audio features so their characteristics can be learned. The model achieves 83.69% accuracy in predicting seven emotions (neutral, happy, sad, angry, fear, disgust, and surprise) from the combined dataset with segmentation of the audio files.
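The zero-padding step described above, which equalizes variable-length feature matrices so a 2D-CNN can consume fixed-size inputs, can be sketched as follows. The frame counts and the 13-coefficient dimension here are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def pad_features(mfcc: np.ndarray, target_frames: int) -> np.ndarray:
    """Zero-pad (or truncate) an (n_mfcc, n_frames) matrix to a fixed width."""
    n_mfcc, n_frames = mfcc.shape
    if n_frames >= target_frames:
        return mfcc[:, :target_frames]
    padded = np.zeros((n_mfcc, target_frames), dtype=mfcc.dtype)
    padded[:, :n_frames] = mfcc  # original frames on the left, zeros on the right
    return padded

# Two clips of different lengths end up with identical shapes, ready to batch:
short_clip = np.random.rand(13, 80)   # 13 MFCCs, 80 frames
long_clip = np.random.rand(13, 200)   # 13 MFCCs, 200 frames
batch = np.stack([pad_features(short_clip, 128), pad_features(long_clip, 128)])
```

After padding, `batch` has shape `(2, 13, 128)` and can be fed to a 2D-CNN as a stack of equally sized "images".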
This document presents an audio-visual emotion recognition system that uses multiple modalities and machine learning techniques. It extracts audio features like MFCCs and visual features like facial landmarks from video clips. It uses classifiers like CNNs and stacks their confidence outputs to predict emotions. The system achieves state-of-the-art performance on several databases according to experiments. It represents an improvement over previous work by combining audio, visual and classifier fusion approaches for multimodal emotion recognition.
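The classifier-fusion idea mentioned above can be sketched minimally: each modality's classifier emits a per-emotion confidence vector, and a fusion step combines them. The confidence values and the simple averaging rule below are hypothetical; the paper stacks confidences into a meta-classifier, for which averaging is only a stand-in:

```python
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]

def fuse_confidences(conf_audio: np.ndarray, conf_visual: np.ndarray) -> str:
    """Late fusion: stack per-modality confidence vectors and average them."""
    stacked = np.vstack([conf_audio, conf_visual])  # shape (2, n_emotions)
    fused = stacked.mean(axis=0)
    return EMOTIONS[int(np.argmax(fused))]

# Hypothetical outputs: both modalities lean toward "sad".
audio_conf = np.array([0.10, 0.20, 0.50, 0.20])
visual_conf = np.array([0.05, 0.15, 0.70, 0.10])
label = fuse_confidences(audio_conf, visual_conf)  # "sad"
```

In a trained stacking system, the `(2, n_emotions)` matrix would be flattened and passed to a second-level classifier instead of being averaged.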
A Study to Assess the Effectiveness of Planned Teaching Programme on Knowledg...ijtsrd
Suctioning is a common procedure performed by nurses to maintain gas exchange, adequate oxygenation, and alveolar ventilation in critically ill patients under mechanical ventilation. The aim of this research is to provide knowledge regarding maintaining airway patency with suctioning care, which will help improve the quality of nursing care and eventually lead to better outcomes. The planned study is a pre-experimental study to assess the effectiveness of a planned teaching programme on knowledge regarding airway patency in patients on mechanical ventilators among B.Sc. nursing internship students of a selected college of nursing at Moradabad. The objectives are: to assess the level of knowledge regarding maintaining airway patency in patients on mechanical ventilators among B.Sc. nursing internship students; to assess the effectiveness of the planned teaching programme in terms of knowledge regarding airway patency among B.Sc. nursing internship students; and to examine the association between knowledge regarding airway patency among B.Sc. nursing internship students and their selected demographic variables. A pre-experimental study was conducted among 86 participants, selected by a non-probability convenience sampling method. A demographic proforma and a self-structured questionnaire were used to collect data from the B.Sc. internship students.
Nafees Ahmed | Sana Usmani "A Study to Assess the Effectiveness of Planned Teaching Programme on Knowledge Regarding Maintaining Airway Patency in Patients with Mechanical Ventilator" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-1 , December 2021, URL: https://www.ijtsrd.com/papers/ijtsrd47917.pdf Paper URL: https://www.ijtsrd.com/medicine/nursing/47917/a-study-to-assess-the-effectiveness-of-planned-teaching-programme-on-knowledge-regarding-maintaining-airway-patency-in-patients-with-mechanical-ventilator/nafees-ahmed
Speech Emotion Recognition Using Neural Networksijtsrd
Speech is the most natural and easy method for people to communicate, and interpreting speech is one of the most sophisticated tasks that the human brain conducts. The goal of speech emotion recognition (SER) is to identify human emotion from speech; the tone and pitch of the voice frequently reflect underlying emotions. Librosa was used to analyse audio and music, soundfile was used to read and write sampled sound file formats, and sklearn was used to create the model. The current study examined the effectiveness of convolutional neural networks (CNN) in recognising spoken emotions. The network's input characteristics are spectrograms of voice samples. Mel-frequency cepstral coefficients (MFCC) are used to extract characteristics from audio. Our own voice dataset is utilised to train and test our algorithms. The emotions of the speech (happy, sad, angry, neutral, shocked, disgusted) are determined based on the evaluation. Anirban Chakraborty "Speech Emotion Recognition Using Neural Networks" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-6 | Issue-1 , December 2021, URL: https://www.ijtsrd.com/papers/ijtsrd47958.pdf Paper URL: https://www.ijtsrd.com/other-scientific-research-area/other/47958/speech-emotion-recognition-using-neural-networks/anirban-chakraborty
Emotion Detection from Voice Based Classified Frame-Energy Signal Using K-Mea...ijseajournal
Emotion detection is a new research area in health informatics and forensic technology. Despite some challenges, voice-based emotion recognition is gaining popularity: in situations where a facial image is not available, the voice is the only way to detect the emotional or psychiatric condition of a person. However, the voice signal is highly dynamic even within a short-time frame, so the voice of the same person can differ within a very subtle period of time. Therefore, this research considers two key criteria: first, the training data must be partitioned according to the emotional state of each individual speaker; second, rather than using the entire voice signal, short-time significant frames can be used, which are enough to identify the emotional condition of the speaker. In this research, cepstral coefficients (CC) are used as the voice feature, and a fixed-value k-means clustering method is used for feature classification. The value of k depends on the number of emotional states under evaluation; consequently, it does not depend on the size of the experimental dataset. In this experiment, three emotional conditions (happy, angry, and sad) were detected from eight female and seven male voice signals. This methodology significantly increased the emotion detection accuracy rate compared to some recent works and also reduced the CPU time for cluster formation and matching.
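The fixed-k idea above, with k set by the number of target emotions rather than by the dataset size, can be illustrated with a minimal one-dimensional k-means over per-frame energy values. The synthetic energies and the quantile initialization below are illustrative assumptions, not the paper's data or exact algorithm:

```python
import numpy as np

def kmeans_energy(values: np.ndarray, k: int, iters: int = 50) -> np.ndarray:
    """Cluster scalar frame energies into k groups; returns sorted centroids."""
    # Spread the initial centroids across the energy range via quantiles.
    centroids = np.quantile(values, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # Assign each frame energy to its nearest centroid.
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        # Recompute each centroid as the mean of its assigned energies.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = values[labels == j].mean()
    return np.sort(centroids)

# k = 3 mirrors the three target emotions (happy, angry, sad); data is synthetic.
rng = np.random.default_rng(1)
energies = np.concatenate([rng.normal(m, 0.1, 100) for m in (1.0, 3.0, 5.0)])
centroids = kmeans_energy(energies, k=3)
```

Because k is fixed by the emotion inventory, the clustering cost does not grow with the number of candidate cluster counts, which is consistent with the reduced CPU time the abstract reports.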
ASERS-CNN: ARABIC SPEECH EMOTION RECOGNITION SYSTEM BASED ON CNN MODELsipij
This document presents research on developing an Arabic speech emotion recognition system using a convolutional neural network (CNN) model. The researchers propose a model called ASERS-CNN and evaluate it on an Arabic speech dataset containing recordings of 4 emotions. Their results show the ASERS-CNN achieves 98.18% accuracy, outperforming their previous ASERS-LSTM model which achieved 97.44% accuracy. They also find that using 5 acoustic feature types and 50 training epochs leads to the best ASERS-CNN performance of 98.52% accuracy.
ASERS-CNN: Arabic Speech Emotion Recognition System based on CNN Modelsipij
When two people are on the phone, although they cannot observe each other's facial expressions and physiological states, it is possible to roughly estimate the speaker's emotional state from the voice. In medical care, if the emotional state of a patient, especially a patient with an expression disorder, can be known, different care measures can be taken according to the patient's mood to improve the quality of care. A system capable of recognizing the emotional states of human beings from their speech is known as a speech emotion recognition (SER) system. Deep learning is one of the techniques most widely used in emotion recognition studies; in this paper, we implement a CNN model for Arabic speech emotion recognition. We propose the ASERS-CNN model for Arabic speech emotion recognition based on a CNN model. We evaluated our model using an Arabic speech dataset named the Basic Arabic Expressive Speech corpus (BAES-DB). In addition, we compare the accuracy of our previous ASERS-LSTM model with the new ASERS-CNN model proposed in this paper, and find that the new proposed model outperforms the ASERS-LSTM model, achieving 98.18% accuracy.
Signal & Image Processing : An International Journalsipij
This document presents research on developing an Arabic speech emotion recognition system using a convolutional neural network (CNN) model. The researchers propose a model called ASERS-CNN and evaluate it on an Arabic speech dataset containing recordings of 4 emotions. Their best performing model achieves 98.52% accuracy using 5 acoustic features extracted from the speech and preprocessed with normalization and silence removal. They compare this result to their previous ASERS-LSTM model and other state-of-the-art methods, finding that ASERS-CNN outperforms ASERS-LSTM and existing approaches.
Literature Review On: "Speech Emotion Recognition Using Deep Neural Network"IRJET Journal
The document discusses speech emotion recognition using deep neural networks. It first provides an overview of SER and the challenges in the field. It then reviews 20 research papers on the topic, finding that most use deep neural network techniques like CNNs and DNNs for model building. The papers evaluated various datasets and algorithms, with accuracy ranging from 84% to 90%. Overall limitations identified included the need for more data, handling of multiple simultaneous emotions, and improving cross-corpus performance. The literature review contributes to knowledge in using machine learning for SER.
Analysis on techniques used to recognize and identifying the Human emotions IJECEIAES
Facial expression is a major channel of non-verbal communication in day-to-day life. Statistical analysis shows that only 7 percent of a message in communication is conveyed verbally, while 55 percent is transmitted by facial expression. Emotional expression has been a research subject of physiology since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is classified into six major emotions: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions, which involve the emotions, and the nature of speech play a foremost role in expressing these emotions. In 1970, researchers developed a system based on the anatomy of the face, named the Facial Action Coding System (FACS). Ever since the development of FACS, there has been rapid progress in the domain of emotion recognition. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help identify proper and suitable techniques, algorithms, and methodologies for future research directions. This paper presents an extensive analysis of the various recognition techniques used to address the complexity of recognizing facial expressions.
Recognition of emotional states using EEG signals based on time-frequency ana...IJECEIAES
Emotion recognition is of vast significance and has been a rapidly developing field of research in recent years. Applications of emotion recognition have left an exceptional mark in various fields, including education and research. Traditional approaches used facial expressions or voice intonation to detect emotions; however, facial gestures and spoken language can lead to biased and ambiguous results. This is why researchers have started to use the electroencephalogram (EEG), a well-defined technique for emotion recognition. Some approaches used standard, pre-defined methods from the signal processing area, and some worked with either fewer channels or fewer subjects when recording EEG signals. This paper proposes an emotion detection method based on time-frequency domain statistical features. A box-and-whisker plot is used to select the optimal features, which are later fed to an SVM classifier for training and testing on the DEAP dataset, in which 32 participants of different genders and age groups are considered. The experimental results show that the proposed method achieves 92.36% accuracy on the tested dataset, outperforming state-of-the-art methods.
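The box-and-whisker selection step above can be illustrated with a toy rule: compute each feature's quartile box per class and keep features whose interquartile boxes do not overlap between classes. The two synthetic feature distributions and the non-overlap criterion are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def box_stats(x: np.ndarray):
    """Quartiles and whisker limits as used in a box-and-whisker plot."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    return q1, med, q3, q1 - 1.5 * iqr, q3 + 1.5 * iqr

def separable(feat_class_a: np.ndarray, feat_class_b: np.ndarray) -> bool:
    """Keep a feature if the two classes' interquartile boxes do not overlap."""
    q1a, _, q3a, _, _ = box_stats(feat_class_a)
    q1b, _, q3b, _, _ = box_stats(feat_class_b)
    return bool(q3a < q1b or q3b < q1a)

# Hypothetical values of one time-frequency feature under two emotion classes:
rng = np.random.default_rng(0)
calm = rng.normal(0.0, 1.0, 200)
excited = rng.normal(5.0, 1.0, 200)
keep = separable(calm, excited)  # boxes are far apart, so keep this feature
```

Features passing such a screen would then be passed to the SVM stage described in the abstract.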
A Review Paper on Speech Based Emotion Detection Using Deep LearningIRJET Journal
This document reviews techniques for speech-based emotion detection using deep learning. It discusses how deep learning techniques have been proposed as an alternative to traditional methods for speech emotion recognition. Feature extraction is an important part of speech emotion recognition, and deep learning can help minimize the complexity of extracted features. The document surveys related work on speech emotion recognition using techniques like deep neural networks, convolutional neural networks, recurrent neural networks, and more. It examines the limitations of current approaches and the potential for deep learning to improve speech-based emotion detection.
Emotion Recognition based on Speech and EEG Using Machine Learning Techniques...HanzalaSiddiqui8
This research investigates emotion recognition using both speech and EEG data through machine learning techniques, achieving 97% accuracy. An artificial neural network model is designed to effectively extract features and learn representations from the integrated speech and EEG data, demonstrating the potential of this multimodal approach. The high accuracy attained underscores applications in human-computer interaction, healthcare, and more.
IRJET- Prediction of Human Facial Expression using Deep LearningIRJET Journal
This document discusses predicting human facial expressions using deep learning. It begins with an introduction to emotion recognition, facial emotion recognition, and deep learning techniques. It then reviews related literature on facial expression recognition using methods like CNNs, active appearance models, and analyzing facial features. The proposed system and methodology are described, including face registration, feature extraction focusing on action units, and using classifiers like KNN, MLP, and SVM to classify emotions based on these features. The goal is to recognize seven basic emotions (neutral, joy, sadness, surprise, anger, fear, disgust) in still images using deep learning techniques.
ANALYSING SPEECH EMOTION USING NEURAL NETWORK ALGORITHMIRJET Journal
The document proposes a deep learning model to identify speech emotion using neural network algorithms. It discusses using convolutional neural networks and LSTM to extract acoustic features from speech signals and accurately identify emotional states. The model aims to improve the accuracy and robustness of speech emotion recognition systems, with applications in healthcare, customer service, and human-computer interaction.
Predicting human emotions in real-world scenarios requires investigating human subjects. A significant number of psychological effects (feelings) must be produced to directly release human emotions. The development of affect theory suggests that one must be aware of one's sentiments and emotions to forecast one's behavior. The proposed line of inquiry focuses on developing a reliable model that maps neurophysiological data to actual feelings. Any change in emotional affect directly elicits a response in the body's physiological systems. This approach is based on the notion of Gaussian mixture models (GMM). The outcomes achieved are directly impacted by the statistical reaction following data processing, quantitative findings on emotion labels, and coincidental responses with training samples. The suggested method is evaluated against a state-of-the-art technique in terms of statistical parameters such as population mean and standard deviation. The proposed system determines an individual's emotional state after a minimum of six learning iterations using the Gaussian expectation-maximization (GEM) statistical model, in which the iterations tend toward zero error. Each of these improves predictions while simultaneously increasing the amount of value extracted.
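The GMM/EM machinery the passage refers to can be sketched in one dimension: expectation-maximization alternates between computing each component's responsibility for each data point and re-estimating the component parameters. The two-component data below are synthetic, and this toy loop is a generic EM sketch, not the paper's model:

```python
import math
import numpy as np

def em_gmm_1d(x: np.ndarray, iters: int = 25) -> np.ndarray:
    """EM for a two-component 1-D Gaussian mixture; returns sorted means."""
    mu = np.array([x.min(), x.max()])      # initial means at the extremes
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        dens = (pi / (sigma * math.sqrt(2 * math.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return np.sort(mu)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])
means = em_gmm_1d(x)  # approximately [-2, 2]
```

In the paper's setting, the mixture components would correspond to emotion-label clusters in the physiological feature space, and the iterations continue until the change in the estimates is negligible.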
This study monitored physiological and behavioral data from teachers editing e-learning materials to analyze differences between holistic and analytic cognitive styles. The INTERFACE methodology simultaneously recorded heart rate variability, skin conductance, video of participants, and interaction data. 32 participants completed a 2-hour task designing materials in the Moodle learning management system. Questionnaires assessed cognitive styles, and statistical analysis is ongoing to explore relationships between cognitive styles, physiological responses, and task performance. The study aims to better understand how cognitive styles influence the human-computer interaction and usability of e-learning authoring tools.
Signal Processing Tool for Emotion Recognitionidescitation
In the course of realizing modern-day robots, which not only perform tasks but also behave like human beings during their interaction with the natural environment, it is essential to impart to the robots knowledge of the underlying emotions in the spoken utterances of human beings, enabling them to be consistent, whole, complete, and perfect. To this end, it is essential for them to understand and identify human emotions. For this reason, emphasis is laid nowadays on the study of the emotional content of speech, and accordingly, speech emotion recognition engines have been proposed. This paper is a survey of the main aspects of speech emotion recognition, namely feature extraction and the types of features commonly used, selection of the most informative features from the original feature set, and classification of the features using different classification techniques, with information on the databases commonly used for speech emotion recognition.
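One of the simplest features such surveys cover is short-time energy, computed frame by frame over the signal. The sketch below is a generic illustration of this feature family, not a method from the survey; the signal, frame length, and hop size are all illustrative:

```python
import numpy as np

def short_time_energy(signal: np.ndarray, frame_len: int, hop: int) -> np.ndarray:
    """Per-frame energy: sum of squared samples in each sliding window."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.array([
        np.sum(signal[i * hop: i * hop + frame_len] ** 2)
        for i in range(n_frames)
    ])

# A burst of tone in the middle of silence shows up as a peak in the energy track.
sig = np.zeros(1600)
sig[600:1000] = np.sin(2 * np.pi * 220 * np.arange(400) / 8000)
energy = short_time_energy(sig, frame_len=200, hop=100)
```

Energy tracks like this, alongside pitch and cepstral features, are typical inputs to the feature-selection and classification stages the survey describes.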
Signal & Image Processing : An International Journal sipij
Signal & Image Processing : An International Journal is an Open Access peer-reviewed journal intended for researchers from academia and industry who are active in the multidisciplinary field of signal and image processing. The scope of the journal covers all theoretical and practical aspects of digital signal processing and image processing, from basic research to the development of applications.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Signal & Image processing.
ASERS-LSTM: Arabic Speech Emotion Recognition System Based on LSTM Modelsipij
The swift progress in the research field of human-computer interaction (HCI) has increased interest in speech emotion recognition (SER) systems. A speech emotion recognition system can identify the emotional states of human beings from their voice. There is substantial work on speech emotion recognition for various languages, but few Arabic SER systems have been implemented because of the shortage of available Arabic speech emotion databases; the most commonly considered languages for SER are English and other European and Asian languages. Several machine learning-based classifiers have been used by researchers to distinguish emotional classes: SVMs, RFs, the KNN algorithm, hidden Markov models (HMMs), MLPs, and deep learning. In this paper, we propose the ASERS-LSTM model for Arabic speech emotion recognition based on an LSTM model. We extracted five features from the speech: Mel-frequency cepstral coefficients (MFCC), chromagram, Mel-scaled spectrogram, spectral contrast, and tonal centroid features (tonnetz). We evaluated our model using an Arabic speech dataset named the Basic Arabic Expressive Speech corpus (BAES-DB). In addition, we constructed a DNN to classify the emotions and compared the accuracy of the LSTM and DNN models: the DNN achieves 93.34% accuracy and the LSTM 96.81%.
Music Emotion Classification based on Lyrics-Audio using Corpus based Emotion...IJECEIAES
Music has lyrics and audio, and both components can serve as features for music emotion classification. Lyric features were extracted from text data and audio features from the audio signal. For emotion classification, an emotion corpus is required for lyric feature extraction; Corpus-Based Emotion (CBE) has succeeded in increasing the F-measure for emotion classification on text documents. A music document has an unstructured format compared with an article text document, so it requires good preprocessing and conversion before classification. We used the MIREX dataset for this research. Psycholinguistic and stylistic features were used as lyric features. A psycholinguistic feature is one related to a category of emotion; in this research, CBE supports the extraction of psycholinguistic features. Stylistic features relate to the use of distinctive words in the lyrics, e.g. 'ooh', 'ah', 'yeah', etc. Energy, temporal, and spectrum features were extracted as audio features. The best result for music emotion classification was obtained by applying Random Forest to the lyric and audio features, with an F-measure of 56.8%.
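The stylistic-feature idea above (counting interjections such as 'ooh', 'ah', 'yeah') can be sketched in a few lines. This is a hedged illustration, not the paper's extractor; the interjection list and the normalization choice are assumptions:

```python
import re
from collections import Counter

# hypothetical set of stylistic interjections; the paper's list may differ
STYLISTIC = {"ooh", "ah", "yeah", "la", "na"}

def stylistic_features(lyrics):
    """Return the relative frequency of each stylistic token in the lyrics."""
    tokens = re.findall(r"[a-z']+", lyrics.lower())
    counts = Counter(t for t in tokens if t in STYLISTIC)
    total = len(tokens) or 1
    return {w: counts[w] / total for w in STYLISTIC}

feats = stylistic_features("Ooh yeah, ooh baby yeah yeah")
```

A vector of such frequencies, concatenated with psycholinguistic and audio features, is the kind of input a Random Forest classifier would then consume.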
PREPROCESSING CHALLENGES FOR REAL WORLD AFFECT RECOGNITIONCSEIJJournal
Real world human affect recognition requires immediate attention which is a significant aspect of human-computer interaction. Audio-visual modalities can make a significant contribution by providing rich contextual information. Preprocessing is an important step in which the relevant information is extracted. It has a crucial impact on prominent feature extraction and further processing. The main aim is to highlight the challenges in preprocessing real world data. The research focuses on experimental testing and comparative analysis for preprocessing using OpenCV, Single Shot MultiBox Detector (SSD), DLib, Multi-Task Cascaded Convolutional Neural Networks (MTCNN), and RetinaFace detectors. The comparative analysis shows that MTCNN and RetinaFace give better performance on real world data. The performance of facial affect recognition using a pre-trained CNN model is analysed with a lab-controlled dataset CK+ and a representative wild dataset AFEW. This comparative analysis demonstrates the impact of preprocessing issues on the feature engineering framework in real world affect recognition.
Efficient Speech Emotion Recognition using SVM and Decision TreesIRJET Journal
This document discusses efficient speech emotion recognition using support vector machines and decision trees. It summarizes a research paper that extracted speech features like variance, standard deviation, energy and pitch from an emotional speech corpus containing 535 speech segments expressing seven emotions. The extracted features were used to train and test an SVM classifier for emotion recognition. The classifier achieved an average accuracy of 85% across training and test sets at recognizing the seven emotions. Feature selection techniques were used to address the curse of dimensionality caused by the large number of extracted features.
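The features named above (variance, standard deviation, energy) are straightforward to compute per speech segment. A minimal pure-Python sketch, using an invented toy segment (pitch estimation is omitted, as the paper's pitch-tracking method is not specified):

```python
import math

def segment_features(samples):
    """Compute basic statistical features of one speech segment."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / n   # population variance
    std = math.sqrt(variance)
    energy = sum(s * s for s in samples)                   # short-time energy
    return {"variance": variance, "std": std, "energy": energy}

feats = segment_features([0.0, 1.0, -1.0, 0.5, -0.5])
```

Each segment yields one such feature vector; stacking the vectors for all 535 segments gives the matrix an SVM classifier would be trained on.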
A critical insight into multi-languages speech emotion databasesjournalBEEI
With increased interest in human-computer and human-human interactions, systems that deduce and identify emotional aspects of a speech signal have emerged as a hot research topic. Recent research is directed towards automated and intelligent analysis of human utterances. Although numerous studies have designed systems, algorithms, and classifiers in this field, the area is far from standardization yet. Considerable uncertainty still exists regarding aspects such as the most influential features, the better-performing algorithms, and the number of emotion classes. Among the influencing factors, the uniqueness of individual speech databases, such as the data collection method, is accepted as significant by the research community. A speech emotion database is essentially a repository of varied human speech samples collected and sampled using a specified method. This paper reviews 34 speech emotion databases for their characteristics and specifications, and critical insights into their limitations are also highlighted.
Development of depth map from stereo images using sum of absolute differences...nooriasukmaningtyas
This article proposes a framework for depth map reconstruction using stereo images. Fundamentally, this map provides important information commonly used in essential applications such as autonomous vehicle navigation, drone navigation, and 3D surface reconstruction. To develop an accurate depth map, the framework must be robust against the challenging regions of low texture, plain color, and repetitive pattern in the input stereo image. The development of this map requires several stages, starting with matching cost calculation, followed by cost aggregation, optimization, and refinement. Hence, this work develops a framework combining the sum of absolute differences (SAD) with two edge-preserving filters to increase robustness against the challenging regions. The SAD is convolved using a block matching technique to increase the efficiency of the matching process in low-texture and plain-color regions, while the two edge-preserving filters increase accuracy in repetitive-pattern regions. The results, obtained on the Middlebury standard dataset, show that the proposed method is accurate and capable of handling the challenging regions. The framework is also efficient, can be applied to 3D surface reconstruction, and is highly competitive with previously available methods.
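The SAD block matching step at the heart of this pipeline can be shown on a toy 1-D scanline. This is a simplified sketch under invented data (real stereo matching works on 2-D windows and adds the aggregation and refinement stages the abstract describes):

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left, right, x, win, max_d):
    """Best disparity for pixel x of a 1-D scanline via SAD block matching."""
    block = left[x:x + win]
    best_d, best_cost = 0, float('inf')
    for d in range(min(max_d, x) + 1):          # candidate shifts to the left
        cost = sad(block, right[x - d:x - d + win])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

left = [0, 0, 9, 7, 5, 0, 0, 0, 0, 0]
right = [9, 7, 5, 0, 0, 0, 0, 0, 0, 0]   # same scene shifted 2 px left
d = disparity(left, right, x=2, win=3, max_d=4)
```

On low-texture regions many candidate blocks produce near-identical SAD costs, which is exactly why the abstract pairs block matching with edge-preserving filtering.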
Model predictive controller for a retrofitted heat exchanger temperature cont...nooriasukmaningtyas
This paper aims to demonstrate the practical aspects of process control theory for undergraduate students at the Department of Chemical Engineering at the University of Bahrain. Both the ubiquitous proportional-integral-derivative (PID) controller and model predictive control (MPC), along with their auxiliaries, were designed and implemented in a real-time framework. The latter was realized by retrofitting an existing plate-and-frame heat exchanger unit that had been operated using an analog PID temperature controller. The upgraded control system consists of a personal computer (PC), a low-cost signal conditioning circuit, a National Instruments USB 6008 data acquisition card, and LabVIEW software. LabVIEW control design and simulation modules were used to design and implement the PID and MPC controllers. The performance of the designed controllers was evaluated while controlling the outlet temperature of the retrofitted plate-and-frame heat exchanger. The distinguished feature of the MPC controller, its handling of input and output constraints, was observed in real time. From a pedagogical point of view, realizing the theory of process control through practical implementation substantially enhanced the students' learning and the instructor's teaching experience.
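The discrete PID update the students implement in LabVIEW can be sketched in a few lines of Python. This is a hedged stand-in: the plant below is an invented first-order lag, not the actual heat-exchanger dynamics, and the gains are arbitrary:

```python
def simulate_pid(setpoint=1.0, kp=2.0, ki=1.5, kd=0.0, dt=0.05, steps=400):
    """PID loop on a toy first-order plant dy/dt = -y + u, a stand-in
    for the heat-exchanger outlet-temperature loop."""
    y, integral, prev_err = 0.0, 0.0, 0.0
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt                 # integral term accumulates error
        deriv = (err - prev_err) / dt        # backward-difference derivative
        u = kp * err + ki * integral + kd * deriv
        prev_err = err
        y += dt * (-y + u)                   # Euler step of the plant
        # an MPC would instead optimize u over a horizon subject to constraints
    return y

y_final = simulate_pid()
```

With integral action the loop drives the steady-state error to zero, which is why the output settles at the setpoint.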
Similar to A hybrid strategy for emotion classification
Signal & Image Processing : An International Journalsipij
This document presents research on developing an Arabic speech emotion recognition system using a convolutional neural network (CNN) model. The researchers propose a model called ASERS-CNN and evaluate it on an Arabic speech dataset containing recordings of 4 emotions. Their best performing model achieves 98.52% accuracy using 5 acoustic features extracted from the speech and preprocessed with normalization and silence removal. They compare this result to their previous ASERS-LSTM model and other state-of-the-art methods, finding that ASERS-CNN outperforms ASERS-LSTM and existing approaches.
Literature Review On: ”Speech Emotion Recognition Using Deep Neural Network”IRJET Journal
The document discusses speech emotion recognition using deep neural networks. It first provides an overview of SER and the challenges in the field. It then reviews 20 research papers on the topic, finding that most use deep neural network techniques like CNNs and DNNs for model building. The papers evaluated various datasets and algorithms, with accuracy ranging from 84% to 90%. Overall limitations identified included the need for more data, handling of multiple simultaneous emotions, and improving cross-corpus performance. The literature review contributes to knowledge in using machine learning for SER.
Analysis on techniques used to recognize and identifying the Human emotions IJECEIAES
Facial expression is a major channel of non-verbal language in day-to-day communication. Statistical analysis shows that only 7 percent of a message is conveyed verbally, while 55 percent is transmitted by facial expression. Emotional expression has been a research subject of physiology since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is classified into six major emotions: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions, together with the nature of speech, play a foremost role in expressing these emotions. Researchers later developed a system based on the anatomy of the face, named the Facial Action Coding System (FACS), in 1970. Ever since the development of FACS there has been rapid progress in the domain of emotion recognition. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help identify suitable techniques, algorithms, and methodologies for future research directions. In this paper, an extensive analysis of the various recognition techniques used to address the complexity of recognizing facial expressions is presented.
Recognition of emotional states using EEG signals based on time-frequency ana...IJECEIAES
The recognition of emotions is of vast significance and has been a rapidly developing field of research in recent years. The applications of emotion recognition have left an exceptional mark in various fields, including education and research. Traditional approaches used facial expressions or voice intonation to detect emotions; however, facial gestures and spoken language can lead to biased and ambiguous results. This is why researchers have started to use the electroencephalogram (EEG), a well-defined technique for emotion recognition. Some approaches used standard, pre-defined methods from the signal processing area, and some worked with either few channels or few subjects when recording EEG signals. This paper proposes an emotion detection method based on time-frequency domain statistical features. A box-and-whisker plot is used to select the optimal features, which are then fed to an SVM classifier for training and testing on the DEAP dataset, where 32 participants of different genders and age groups are considered. The experimental results show that the proposed method exhibits 92.36% accuracy on the tested dataset, outperforming state-of-the-art methods.
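The box-and-whisker selection step can be illustrated with Tukey's fences: values beyond Q1 - 1.5 IQR or Q3 + 1.5 IQR fall outside the whiskers. A minimal sketch on invented feature values (the paper's exact selection criterion is not spelled out, so treat this as one plausible reading):

```python
def quartiles(xs):
    """Return (Q1, median, Q3) using the median-of-halves convention."""
    s = sorted(xs)
    n = len(s)
    def med(a):
        m = len(a)
        return a[m // 2] if m % 2 else (a[m // 2 - 1] + a[m // 2]) / 2
    return med(s[:n // 2]), med(s), med(s[(n + 1) // 2:])

def tukey_fences(xs, k=1.5):
    """Whisker limits of a box plot: Q1 - k*IQR and Q3 + k*IQR."""
    q1, _, q3 = quartiles(xs)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

vals = [1, 2, 2, 3, 3, 3, 4, 4, 5, 40]     # 40 is an artifact-like outlier
lo, hi = tukey_fences(vals)
kept = [v for v in vals if lo <= v <= hi]
```

Filtering feature values this way before training tends to make the SVM's decision boundary less sensitive to recording artifacts.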
A Review Paper on Speech Based Emotion Detection Using Deep LearningIRJET Journal
This document reviews techniques for speech-based emotion detection using deep learning. It discusses how deep learning techniques have been proposed as an alternative to traditional methods for speech emotion recognition. Feature extraction is an important part of speech emotion recognition, and deep learning can help minimize the complexity of extracted features. The document surveys related work on speech emotion recognition using techniques like deep neural networks, convolutional neural networks, recurrent neural networks, and more. It examines the limitations of current approaches and the potential for deep learning to improve speech-based emotion detection.
Emotion Recognition based on Speech and EEG Using Machine Learning Techniques...HanzalaSiddiqui8
This research investigates emotion recognition using both speech and EEG data through machine learning techniques, achieving 97% accuracy. An artificial neural network model is designed to effectively extract features and learn representations from the integrated speech and EEG data, demonstrating the potential of this multimodal approach. The high accuracy attained underscores applications in human-computer interaction, healthcare, and more.
IRJET- Prediction of Human Facial Expression using Deep LearningIRJET Journal
This document discusses predicting human facial expressions using deep learning. It begins with an introduction to emotion recognition, facial emotion recognition, and deep learning techniques. It then reviews related literature on facial expression recognition using methods like CNNs, active appearance models, and analyzing facial features. The proposed system and methodology are described, including face registration, feature extraction focusing on action units, and using classifiers like KNN, MLP, and SVM to classify emotions based on these features. The goal is to recognize seven basic emotions (neutral, joy, sadness, surprise, anger, fear, disgust) in still images using deep learning techniques.
ANALYSING SPEECH EMOTION USING NEURAL NETWORK ALGORITHMIRJET Journal
The document proposes a deep learning model to identify speech emotion using neural network algorithms. It discusses using convolutional neural networks and LSTM to extract acoustic features from speech signals and accurately identify emotional states. The model aims to improve the accuracy and robustness of speech emotion recognition systems, with applications in healthcare, customer service, and human-computer interaction.
Predicting human emotions in a real-world scenario requires investigating human subjects. A significant number of psychological affects (feelings) must be produced to directly elicit human emotions. The development of affect theory suggests that one must be aware of one's sentiments and emotions in order to forecast one's behavior. The proposed line of inquiry focuses on developing a reliable model that maps neurophysiological data onto actual feelings. Any change in emotional affect directly elicits a response in the body's physiological systems. This approach is based on the notion of Gaussian mixture models (GMM). The statistical response after data processing, quantitative findings on emotion labels, and coincidental responses with training samples all directly impact the outcomes obtained. The proposed method is evaluated against a state-of-the-art technique in terms of statistical parameters such as population mean and standard deviation. The proposed system determines an individual's emotional state after a minimum of six iterations of learning using the Gaussian expectation-maximization (GEM) statistical model, in which the iterations tend toward zero error. Each iteration improves the predictions while increasing the amount of value extracted.
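The GMM/expectation-maximization machinery this abstract invokes can be shown in miniature. A hedged 1-D, two-component sketch on invented data (the paper's feature space and component count are not given; priors are held fixed for brevity, whereas a full GEM would also update them):

```python
import math

def em_gmm_1d(data, mu, iters=25):
    """EM for a two-component 1-D Gaussian mixture with equal, fixed priors."""
    var = [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) / math.sqrt(var[k])
                 for k in range(2)]
            z = p[0] + p[1]
            resp.append((p[0] / z, p[1] / z))
        # M-step: re-estimate means and variances (variance floored at 1e-3)
        for k in range(2):
            w = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / w
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / w, 1e-3)
    return mu

data = [0.0, 0.1, -0.1, 0.2, 5.0, 5.1, 4.9, 5.2]   # two toy emotion clusters
means = em_gmm_1d(data, mu=[1.0, 4.0])
```

Each EM iteration is guaranteed not to decrease the data likelihood, which matches the abstract's picture of iterations tending toward zero error.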
This study monitored physiological and behavioral data from teachers editing e-learning materials to analyze differences between holistic and analytic cognitive styles. The INTERFACE methodology simultaneously recorded heart rate variability, skin conductance, video of participants, and interaction data. 32 participants completed a 2-hour task designing materials in the Moodle learning management system. Questionnaires assessed cognitive styles, and statistical analysis is ongoing to explore relationships between cognitive styles, physiological responses, and task performance. The study aims to better understand how cognitive styles influence the human-computer interaction and usability of e-learning authoring tools.
Signal Processing Tool for Emotion Recognitionidescitation
In realizing modern-day robots, which not only perform tasks but also behave like human beings in their interaction with the natural environment, it is essential to impart to the robots knowledge of the underlying emotions in the spoken utterances of human beings. To this end, robots must understand and identify human emotions. For this reason, emphasis is laid nowadays on the study of the emotional content of speech, and speech emotion recognition engines have accordingly been proposed. This paper is a survey of the main aspects of speech emotion recognition: feature extraction and the types of features commonly used, selection of the most informative features from the original feature set, and classification of the features by different techniques, with reference to the databases commonly used for speech emotion recognition.
Control of a servo-hydraulic system utilizing an extended wavelet functional ...nooriasukmaningtyas
Servo-hydraulic systems have been extensively employed in various industrial applications. However, these systems are characterized by highly complex and nonlinear dynamics, which complicates their control design. In this paper, an extended wavelet functional link neural network (EWFLNN) is proposed to control the displacement response of the servo-hydraulic system. To optimize the controller's parameters, a recently developed optimization technique called the modified sine cosine algorithm (M-SCA) is exploited as the training method. The proposed controller has achieved remarkable results in tracking two different displacement signals and handling external disturbances. In a comparative study, the proposed EWFLNN controller attained the best control precision compared with other controllers, namely a proportional-integral-derivative (PID) controller, an artificial neural network (ANN) controller, a wavelet neural network (WNN) controller, and the original wavelet functional link neural network (WFLNN) controller. Moreover, compared to the genetic algorithm (GA) and the original sine cosine algorithm (SCA), the M-SCA showed better optimization results in finding the optimal values of the controller's parameters.
Decentralised optimal deployment of mobile underwater sensors for covering la...nooriasukmaningtyas
This paper presents the problem of sensing coverage of layers of the ocean in three-dimensional underwater environments. We propose distributed control laws to drive mobile underwater sensors to optimally cover a given confined layer of the ocean. Applying this algorithm, the mobile underwater sensors first adjust their depth to the specified value, then form a triangular grid across a given area, and finally move randomly to spread across the grid. These control laws rely only on local information; they are easily implemented and computationally effective, as they use simple consensus rules. Exchanging information only among neighbouring mobile sensors keeps the communication in the whole network to a minimum and makes this algorithm a practicable option for undersea use. The efficiency of the presented control laws is confirmed via mathematical proof and numerical simulations.
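The "simple consensus rules" mentioned above can be sketched as a local averaging update: each sensor moves its state toward the mean of its neighbors' states. A minimal illustration with invented depths and a ring topology (the paper's actual control laws also handle grid formation and are continuous-time):

```python
def consensus_step(depths, neighbors, alpha=0.5):
    """One local averaging update using only each sensor's neighbor states."""
    new = []
    for i, d in enumerate(depths):
        nbr_mean = sum(depths[j] for j in neighbors[i]) / len(neighbors[i])
        new.append(d + alpha * (nbr_mean - d))
    return new

depths = [10.0, 30.0, 50.0, 70.0]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}   # ring topology
for _ in range(30):
    depths = consensus_step(depths, ring)
spread = max(depths) - min(depths)
```

On a connected symmetric topology this update preserves the average and shrinks the disagreement geometrically, which is the convergence property the paper's proofs establish for its laws.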
Evaluation quality of service for internet of things based on fuzzy logic: a ...nooriasukmaningtyas
The development of internet of things (IoT) technology has made the sustainability of quality of service (SQoS) a major concern in terms of the efficiency, measurement, and evaluation of services, as in our smart home case study. This article deals with quality of service (QoS) based on several ambiguous linguistic and standard criteria. We used fuzzy logic to select the most appropriate and efficient services, and for this reason we have introduced a new paradigmatic approach to assessing QoS. In this regard, to measure SQoS, linguistic terms were collected to identify ambiguous criteria. This paper also collects the results of other work to compare traditional assessment methods and techniques in IoT, and it has been shown that traditional valuation methods and techniques cannot deal effectively with these metrics. Fuzzy logic is therefore a worthy method for providing a good measure of QoS under ambiguous linguistic criteria. The proposed model, which is constantly being improved, addresses all the main axes of QoS for a smart home. The results obtained also indicate that the model, with its fuzzy performance importance index (FPII), efficiently evaluates the multiple services of SQoS.
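The fuzzy treatment of an ambiguous linguistic criterion can be sketched with triangular membership functions. This is an illustrative toy, not the paper's FPII model: the "latency" criterion, the term shapes, and the centroid weights are all invented:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises over [a, b], falls over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# hypothetical linguistic terms for a latency criterion (milliseconds)
def qos_score(latency):
    good = tri(latency, -1, 0, 50)       # low latency -> strongly "good"
    poor = tri(latency, 30, 100, 101)    # high latency -> strongly "poor"
    # defuzzify with a weighted average of term scores (good=1.0, poor=0.0)
    z = good + poor
    return (good * 1.0 + poor * 0.0) / z if z else 0.5

s_fast = qos_score(10)
s_slow = qos_score(90)
```

Aggregating such per-criterion scores (weighted by importance) is the general shape of a fuzzy QoS index; the FPII itself would define its own terms and weights.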
Low power architecture of logic gates using adiabatic techniques
The growing significance of portable systems and the need to limit power consumption in high-density ultra-large-scale-integration chips have recently led to rapid and inventive progress in low-power design. The most effective technique for energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low-power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components of any digital circuit design, and improving the performance of the basic gates improves the performance of the whole system. Low-power designs of OR/NOR, AND/NAND, and XOR/XNOR gates using both approaches are presented, and their results are analyzed for power dissipation, delay, power-delay product, and rise time, then compared with other adiabatic techniques and with conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. The designs using the DC-DB PFAL technique outperform the modified PFAL technique at 10 MHz, with improvements of 65% for the NOR gate, 7% for the NAND gate, and 34% for the XNOR gate.
A review on techniques and modelling methodologies used for checking electrom...
The proper function of the integrated circuit (IC) in a hostile electromagnetic environment has been a serious concern throughout the decades of revolution in electronics, from discrete devices to today's integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, confronts design issues such as susceptibility to electromagnetic interference (EMI). Electronic control devices compute incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in automotive applications. In this paper, the authors present a non-exhaustive review of research work on the investigation of EMI in ICs and its prediction using various modelling methodologies and measurement setups.
Smart monitoring system using NodeMCU for maintenance of production machines
Maintenance is an activity that helps reduce risk, increase productivity, improve quality, and minimize production costs. Maintenance actions increase efficiency and enhance the safety and quality of products and processes. Achieving this requires a monitoring system that observes machine conditions over time, especially the machine parts that often experience problems. This paper presents a low-cost intelligent monitoring system using NodeMCU to continuously monitor machine conditions and provide warnings in the case of machine failure. Besides alerts, the monitoring system also writes historical data on machine conditions to the Google Cloud (Google Sheets), including which machines were down, the downtime, the issues that occurred, the repairs made, and the technician handling them. As a result, machine operators no longer lose a relatively long time calling the technician. Likewise, the technicians are assisted in carrying out machine maintenance activities and online reporting, so that errors that often occur due to human error do not happen again. The system reduced the technician-calling time and maintenance work-reporting time by up to 50%. The availability of online, real-time maintenance history will support further maintenance strategy.
Design and simulation of a software defined networking-enabled smart switch, f...
Using sustainable energy is the future of our planet; it has become not only economically efficient but also a necessity for the preservation of life on earth. Because of this necessity, smart grids have become a very important research topic. Much of the literature discusses this topic, and with the development of the internet of things (IoT) and smart sensors, smart grids are being developed even further. Software defined networking (SDN), on the other hand, is a technology that separates the control plane from the data plane of the network. It centralizes the management and orchestration of network tasks by using a network controller. The network controller is the heart of an SDN-enabled network, and it can control other networking devices using SDN protocols such as OpenFlow. A smart switching mechanism for the smart grid, called SDN-smgrid-sw, is modeled and controlled using SDN. We modeled the environment that interacts with the sensors for the sun and wind elements. An algorithm is modeled and programmed for smart, efficient power sharing that is managed centrally and monitored using the SDN controller. All of the smart grid elements (power sources) are connected to the IP network using IoT protocols.
Efficient wireless power transmission to remote the sensor in restenosis coro...
In this study, the researchers propose an alternative technique for designing an asymmetric four-coil resonance coupling module based on a series-to-parallel topology at the 27 MHz industrial, scientific and medical (ISM) band, chosen to avoid tissue damage, for constant monitoring of in-stent restenosis of the coronary artery. The design consists of two components: an external part comprising three planar coils placed outside the body, and an internal helical coil (the stent) implanted in the coronary artery within human tissue. The technique considers the output power and transfer efficiency of the overall system as well as the coil geometry, such as the number of turns per coil and the coil size. The results indicate an efficiency of 82% in air at a transmission distance of 20 mm, which allows the wireless power supply system to monitor the pressure within the coronary artery when the implanted load resistance is 400 Ω.
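The paper's coil parameters are not reproduced in the abstract. As background to the efficiency figure it reports, the standard two-coil resonant-link result can be sketched: the maximum achievable link efficiency depends only on the figure of merit k²Q₁Q₂ (coupling coefficient and coil quality factors). The numeric values below are assumed for illustration, not taken from the paper.

```python
import math

# Hedged sketch: maximum efficiency of a two-coil resonant inductive link,
#   eta_max = k^2*Q1*Q2 / (1 + sqrt(1 + k^2*Q1*Q2))^2
# This is the textbook bound, not the paper's four-coil analysis.
def max_link_efficiency(k, q1, q2):
    fom = k * k * q1 * q2          # figure of merit k^2 * Q1 * Q2
    return fom / (1.0 + math.sqrt(1.0 + fom)) ** 2

# illustrative (assumed) values: loose coupling, moderately high-Q coils
eta = max_link_efficiency(k=0.05, q1=200, q2=150)
```

The bound shows why multi-coil designs matter: raising the effective coupling or quality factors pushes efficiency toward 1.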
Grid reactive voltage regulation and cost optimization for electric vehicle p...
Expecting large electric vehicle (EV) usage in the future due to environmental issues, state subsidies, and incentives, the impact of EV charging on the power grid must be closely analyzed for power quality, stability, and infrastructure planning. When a large number of energy storage batteries are connected to the grid as a capacitive load, the power factor of the grid is inevitably reduced, causing power losses and voltage instability. In this work, a large-scale charging model of 18,000 EVs is implemented on the IEEE 33-bus network. Optimization methods are described to locate the nodes most affected by EV charging in terms of power losses and voltage instability, followed by optimizing the magnitude and duration of reactive power injection at the identified nodes. It is shown that power losses are reduced and voltage stability is improved in the grid, which also complements the reduction in EV charging cost. The results will be useful for EV charging station infrastructure planning, grid stabilization, and reducing EV charging costs.
Topology network effects for double synchronized switch harvesting circuit on...
Energy extraction takes place using several different technologies, depending on the type of energy and how it is used. The objective of this paper is to study the influence of network topology on a smart network based on piezoelectric materials using double synchronized switch harvesting (DSSH). Several network topologies for the DSSH circuit are presented: standard DSSH, independent DSSH, parallel DSSH, mono DSSH, and series DSSH. Simulations of a structure with embedded piezoelectric harvesters compare the different DSSH topologies to determine how to connect the DSSH circuits together and how to implement this circuit strategy accurately to maximize the total output power. The DSSH network topologies extract more power than the standard topology at the resonant frequency. The simulation results show that, with the same input parameters, the parallel DSSH topology produces 120% more energy than the series DSSH topology. In addition, mono-DSSH harvests 650% more energy than series DSSH and exceeds independent DSSH by 240%.
Improving the design of super-lift Luo converter using hybrid switching capac...
In this article, an improvement to the positive output super-lift Luo converter (POSLC) is proposed to obtain high gain at a low duty cycle. The design also reduces the stress on the switch and diodes, reduces the current through the inductors to cut losses, and increases efficiency. A hybrid switched unit composed of four inductors and two capacitors replaces the main inductor in the elementary circuit; it is charged in parallel from the same input voltage and discharged in series, so the output voltage increases with the number of components. The gain equation is modeled, and the boundary condition between continuous conduction mode (CCM) and discontinuous conduction mode (DCM) is derived. Passive components are designed to obtain a high output voltage (8 times the input at D = 0.5) and low ripple (about 0.004). The circuit is simulated and analyzed using MATLAB/Simulink. A maximum power point tracker (MPPT) controls the converter to extract the most benefit from the solar energy source.
Third harmonic current minimization using third harmonic blocking transformer
Zero sequence blocking transformers (ZSBTs) are used to suppress third harmonic currents in three-phase systems. In three-phase systems where single-phase loading is present, there is every chance that the load is unbalanced. If zero-sequence current flows due to unequal load currents, the ZSBT imposes a high impedance and the supply voltage at the load end varies, which is undesirable. This paper presents a third harmonic blocking transformer (THBT) that suppresses only the harmonic zero-sequence components. The constructional features using all windings on a single core, and construction using three single-phase transformers, are explained. The paper discusses the constructional features, full details of circuit usage, design considerations, and simulation results for different supply and load conditions. A comparison of the THBT with the ZSBT is made through simulation of four different cases.
Power quality improvement of distribution systems asymmetry caused by power d...
With the increase of non-linear loads in today's electrical power systems, power quality drops and the source voltage and frequency deteriorate if not properly compensated with an appropriate device. Filters are the most common technique employed to overcome this problem and improve power quality. In this paper, an improved filter optimization technique for the power system is presented, based on particle swarm optimization combined with an artificial neural network and applied to the unified power quality conditioner (PSO-ANN UPQC). Designing the particle swarm optimization and the artificial neural network together results in a very high-performance flexible AC transmission system (FACTS) controller, which is implemented in the system to compensate all types of power quality disturbances. The technique is very effective at minimizing the total harmonic distortion of source voltages and currents to within the limits permitted by IEEE-519. The work builds a power system model in MATLAB/Simulink to investigate the proposed optimization technique for improving the filter control circuit. The work also measures the power quality disturbances of the electric arc furnace of a steel factory and applies this filter technique to improve its power quality.
Studies enhancement of transient stability by single machine infinite bus sys...
Maintaining network synchronization is important for customer service. Fluctuations cause voltage instability, loss of synchronization in the power system, and other disturbances in the electrical system, including harmonic currents and voltage swell and sag. Proper tuning of the stabilizer parameters is essential for its validation. Many techniques have been developed to overcome instability problems and improve the performance of the power system. Here, a genetic algorithm was applied to optimize the parameters and suppress oscillation. The robust compensated single-machine infinite-bus system was simulated in MATLAB, which was also used for the optimization. The critical clearing time obtained through simulation indicates the maximum time a fault can persist in the system while stability is maintained. The improvement in system effectiveness is demonstrated.
Renewable energy based dynamic tariff system for domestic load management
To deal with the present power scenario, this paper proposes a model of an advanced energy management system that aims at peak clipping, peak-to-average ratio reduction, and cost reduction through the effective utilization of distributed generation. This helps manage conventional loads based on a flexible tariff system. The main contribution of this work is the development of a three-part dynamic tariff system based on the time of power use, the available renewable energy sources (RES), and consumers' load profiles. It incorporates consumers' choice to consume power from conventional and/or renewable energy sources during peak or off-peak hours. To validate the efficiency of the proposed model, we comparatively evaluated its performance against existing optimization techniques using the genetic algorithm and particle swarm optimization. A new optimization technique, hybrid greedy particle swarm optimization, is proposed based on the two aforementioned techniques. The proposed model with the improved tariff scheme is found to be superior for load management and consumers' financial benefit. This work helps maintain a healthy relationship between the utility sector and consumers, making the existing grid more reliable, robust, flexible, and cost effective.
Energy harvesting maximization by integration of distributed generation based...
The purpose of distributed generation systems (DGS) is to enhance distribution system (DS) performance; installing distributed generation (DG) units in the DS can bring economic, environmental, and technical benefits. These benefits are obtained only if the site and size of the DG units are properly determined. This paper studies and reviews the effect of connecting DG units in the DS on transmission efficiency, reactive power loss, and voltage deviation, as well as the economic point of view, considering the interest and inflation rates. The whale optimization algorithm (WOA) is introduced to find the best solution to the DG penetration problem in the DS. The WOA results are compared with the genetic algorithm (GA), particle swarm optimization (PSO), and the grey wolf optimizer (GWO). The proposed solution methodologies have been tested in MATLAB on the IEEE 33-bus standard system.
Intelligent fault diagnosis for power distribution system: comparative studies
Short circuit is one of the most common types of permanent fault in power distribution systems, so fast and accurate diagnosis of short circuit failures is very important for the power system to work effectively. In this paper, an enhanced support vector machine (SVM) classifier is investigated to identify ten short-circuit fault types: single line-to-ground faults (XG, YG, ZG), line-to-line faults (XY, XZ, YZ), double line-to-ground faults (XYG, XZG, YZG), and the three-line fault (XYZ). The performance of the enhanced SVM model is improved using three different versions of particle swarm optimization (PSO): classical PSO (C-PSO), time-varying acceleration coefficients PSO (T-PSO), and constriction factor PSO (K-PSO). Further, a pseudo-random binary sequence (PRBS)-based time domain reflectometry (TDR) method is used to obtain a reliable dataset for the SVM classifier. Experimental results on a two-branch distribution line show which variant of PSO is most effective for short-circuit fault diagnosis.
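The paper tunes an SVM with three PSO variants; the SVM and dataset are omitted here. As a hedged sketch of one of the named variants, the constriction-factor (K-PSO) velocity update of Clerc and Kennedy can be shown minimizing a stand-in objective (in the paper this objective would be the SVM's validation error); swarm size, iteration count, and search range are illustrative assumptions.

```python
import random

# Minimal constriction-factor PSO (K-PSO) sketch on a stand-in objective.
def pso(f, dim=2, n=20, iters=100, seed=1):
    random.seed(seed)
    c1 = c2 = 2.05                       # standard Clerc-Kennedy coefficients
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - (phi * phi - 4.0 * phi) ** 0.5)  # ~0.7298
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # constriction-factor velocity update (distinguishes K-PSO
                # from the inertia-weight classical PSO)
                vel[i][d] = chi * (vel[i][d]
                                   + c1 * r1 * (pbest[i][d] - pos[i][d])
                                   + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

# stand-in objective: a simple sphere function instead of SVM validation error
best, best_f = pso(lambda x: sum(v * v for v in x))
```

Swapping the objective for a cross-validated SVM error (and the position vector for hyperparameters such as C and gamma) recovers the hyperparameter-tuning setup the abstract describes.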
A deep learning approach based on stochastic gradient descent and least absol...
Between eighty-five and ninety percent of diabetic patients are affected by diabetic retinopathy (DR), an eye disorder that leads to blindness. Computational techniques can support the detection of DR using retinal images; however, it is hard to assess DR from the raw retinal image. This paper proposes an effective method for identifying DR from retinal images. In this work, the Wiener filter is first used to preprocess the raw retinal image. The preprocessed image is then segmented using the fuzzy c-means technique, and features are extracted from the segmented image using the grey level co-occurrence matrix (GLCM). After feature extraction, feature selection is performed using stochastic gradient descent and the least absolute shrinkage and selection operator (LASSO) for accurate identification during classification. The Inception v3 convolutional neural network (IV3-CNN) model is then used to classify the image as a DR or non-DR image. Applying the proposed method, the classification performance of the IV3-CNN model in identifying DR is studied: DR is identified with an accuracy of about 95%, and the processed retinal image is identified as mild DR.
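The GLCM step in the pipeline above is simple enough to sketch directly. The following computes a co-occurrence matrix for horizontally adjacent pixel pairs and two classic Haralick features (contrast and energy) on a toy 4-level image, not on fundus data; the offset and feature choice are illustrative.

```python
# Minimal grey level co-occurrence matrix (GLCM) for the offset (0, 1),
# i.e. each pixel paired with its right neighbour, plus two texture features.
def glcm(img, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):   # horizontally adjacent pairs
            m[a][b] += 1
    total = sum(sum(r) for r in m)
    return [[v / total for v in r] for r in m]  # normalise to probabilities

def contrast(p):
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(p):
    return sum(v * v for row in p for v in row)

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
P = glcm(img, 4)
```

In the paper's pipeline such features, computed over the segmented retinal image, form the vector that SGD/LASSO then prunes before classification.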
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations, covering their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found along the way, the talk demonstrates the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up to numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...
Energy efficiency has been important since the latter part of the last century. The main objective of this survey is to determine the level of energy efficiency knowledge among consumers. Two separate districts in Bangladesh were selected for a survey of households and showrooms, covering both energy use and sellers. The survey data yield regression equations from which energy efficiency knowledge can be predicted. The data are analyzed based on five important criteria. The initial target was to find factors that help predict a person's energy efficiency knowledge. The survey finds that energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and of energy-efficient technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and to place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh: low-education households indicate they primarily save electricity for the environment, while high-education households indicate they are motivated by environmental concerns.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Climate change's impact on the planet has forced the United Nations and governments to promote green energy and electric transportation. The deployment of photovoltaic (PV) and electric vehicle (EV) systems has gained stronger momentum due to their numerous advantages over fossil fuel alternatives, advantages that go beyond sustainability to financial support and stability. This paper introduces a hybrid PV-EV system to support industrial and commercial plants. It covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present, and presents the proposed design diagram that sets the priorities and requirements of the system. The proposed approach allows a setup to improve its power stability, especially during power outages. The information presented supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. A case study representing a dairy milk farm supports the theoretical work and highlights the benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODEL
As digital technology becomes more deeply embedded in power systems, protecting the communication networks of smart grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3) is a multi-tiered application layer protocol extensively used in supervisory control and data acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control. Because the interconnection of these networks makes them vulnerable to a variety of cyberattacks, robust intrusion detection systems (IDS) are necessary for early threat detection and mitigation. To address this, this paper develops a hybrid deep learning (DL) model specifically designed for intrusion detection in smart grids, combining a convolutional neural network (CNN) with long short-term memory (LSTM). We employed a recent intrusion detection dataset (DNP3), which focuses on unauthorized commands and denial of service (DoS) cyberattacks, to train and test our model. The experiments show that the CNN-LSTM method detects smart grid intrusions considerably better than other deep learning classifiers, improving accuracy, precision, recall, and F1 score and achieving a high detection accuracy of 99.50%.
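The paper's exact CNN-LSTM architecture and DNP3 feature set are not given in the abstract. The hybrid idea, a convolution extracting local patterns from a traffic-feature sequence and a recurrent unit aggregating them over time, can be sketched as a toy forward pass; all weights and the feature sequence below are made up, and the single tied-weight LSTM unit is a deliberate simplification of a real layer.

```python
import math

# Toy CNN-LSTM forward pass: 1-D convolution + ReLU over a packet-feature
# sequence, then a single simplified LSTM cell producing an intrusion score.
def conv1d(seq, kernel):
    k = len(kernel)
    return [max(0.0, sum(seq[i + j] * kernel[j] for j in range(k)))  # ReLU
            for i in range(len(seq) - k + 1)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_score(features, w=0.8, u=0.5, b=0.1):
    # single-unit LSTM with tied (assumed) weights for all four gates
    h = c = 0.0
    for x in features:
        f = sigmoid(w * x + u * h + b)       # forget gate
        i = sigmoid(w * x + u * h + b)       # input gate
        o = sigmoid(w * x + u * h + b)       # output gate
        g = math.tanh(w * x + u * h + b)     # candidate cell state
        c = f * c + i * g                    # cell-state update
        h = o * math.tanh(c)                 # hidden state
    return sigmoid(h)                        # intrusion probability

traffic = [0.1, 0.9, 0.8, 0.7, 0.1, 0.0]     # toy normalised feature sequence
score = lstm_score(conv1d(traffic, [0.5, 0.5, 0.5]))
```

A real model would learn these weights and stack wider convolutional and LSTM layers, but the data flow (local pattern extraction, then temporal aggregation) is the one the paper describes.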
Software Engineering and Project Management - Introduction, Modeling Concepts...
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Rainfall intensity duration frequency curve statistical analysis and modeling...
Using 41 years of data (1981-2020) from Patna, India, the study's goal is to analyze rainfall trends on a weekly, seasonal, and annual basis. First, the historical rainfall data set for Patna over this period was checked for quality, and the intensity-duration-frequency (IDF) relationship was derived by statistical analysis of the rainfall. Changes in the hydrologic cycle resulting from increased greenhouse gas emissions are expected to induce variations in the intensity, duration, and frequency of precipitation events; one strategy to lessen vulnerability is to quantify probable changes and adapt to them. Normal, log-normal, and Gumbel (EV-I) distributions were used, with durations of 1, 2, 3, 6, and 24 h and return periods of 2, 5, 10, 25, and 100 years. Mathematical correlations between rainfall and recurrence interval were also established.
Findings: The Gumbel approach produced the highest intensity values, whereas the other approaches produced values close to each other. The data indicate that 461.9 mm of rain fell during the monsoon season's 30th week, while the 29th week had the greatest average rainfall, 92.6 mm. The monsoon season saw the highest rainfall, 952.6 mm on average, and the annual rainfall averaged 1171.1 mm. Using Weibull's method, the study was extended to examine rainfall distribution at recurrence intervals of 2, 5, 10, and 25 years, and mathematical correlations between rainfall and recurrence interval were developed. Regression analysis further revealed that short-wave irradiation, wind direction, wind speed, pressure, relative humidity, and temperature all had a substantial influence on rainfall.
Originality and value: The results of the rainfall IDF curves can provide useful information to policymakers in making appropriate decisions in managing and minimizing floods in the study area.
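The Gumbel (EV-I) fit behind an IDF curve can be sketched concretely. The study's actual Patna series is not reproduced here, so a short synthetic annual-maximum sample stands in; the fit uses the standard method of moments, and the return level follows from inverting the Gumbel CDF.

```python
import math

# Hedged sketch: fit a Gumbel (EV-I) distribution to annual-maximum rainfall
# by the method of moments, then compute the T-year return level.
def gumbel_fit(sample):
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi        # scale parameter
    mu = mean - 0.5772 * beta                    # location (Euler-Mascheroni)
    return mu, beta

def return_level(mu, beta, T):
    # x_T solves F(x_T) = 1 - 1/T for the Gumbel CDF exp(-exp(-(x - mu)/beta))
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

annual_max_mm = [82, 95, 110, 74, 130, 88, 101, 97, 120, 85]  # assumed data
mu, beta = gumbel_fit(annual_max_mm)
x100 = return_level(mu, beta, 100)               # 100-year rainfall depth
```

Repeating this for each storm duration (1, 2, 3, 6, 24 h) and dividing depth by duration yields the intensity values from which the IDF curves are drawn.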
Digital Twins Computer Networking Paper Presentation.pptx
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Null Bangalore | Pentesters Approach to AWS IAM
# Abstract:
- Learn more about the real-world methods for auditing AWS IAM (Identity and Access Management) as a pentester. So let us proceed with a brief discussion of IAM as well as some typical misconfigurations and their potential exploits in order to reinforce the understanding of IAM security best practices.
- Gain actionable insights into AWS IAM policies and roles, using hands on approach.
# Prerequisites:
- Basic understanding of AWS services and architecture
- Familiarity with cloud security concepts
- Experience using the AWS Management Console or AWS CLI.
- For hands on lab create account on [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
# Scenario Covered:
- Basics of IAM in AWS
- Implementing IAM Policies with Least Privilege to Manage S3 Bucket
- Objective: Create an S3 bucket with least privilege IAM policy and validate access.
- Steps:
- Create S3 bucket.
- Attach least privilege policy to IAM user.
- Validate access.
- Exploiting IAM PassRole Misconfiguration
- Allows a user to pass a specific IAM role to an AWS service (EC2), typically used for service access delegation; then exploit the PassRole misconfiguration to gain unauthorized access to sensitive resources.
- Objective: Demonstrate how a PassRole misconfiguration can grant unauthorized access.
- Steps:
- Allow user to pass IAM role to EC2.
- Exploit misconfiguration for unauthorized access.
- Access sensitive resources.
- Exploiting IAM AssumeRole Misconfiguration with Overly Permissive Role
- An overly permissive IAM role configuration can lead to privilege escalation by creating a role with administrative privileges and allow a user to assume this role.
- Objective: Show how overly permissive IAM roles can lead to privilege escalation.
- Steps:
- Create role with administrative privileges.
- Allow user to assume the role.
- Perform administrative actions.
- Differentiation between PassRole vs AssumeRole
Try at [killercoda.com](https://killercoda.com/cloudsecurity-scenario/)
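The least-privilege and PassRole scenarios above hinge on how the policy documents are written. As a hedged illustration (the bucket name is a placeholder, not from the talk), here are the two policy shapes side by side, built as Python dicts so they can be serialized for the AWS CLI or console:

```python
import json

# A least-privilege policy of the kind the S3 scenario attaches to the IAM
# user: only the two object actions, only on one (placeholder) bucket.
least_privilege_s3 = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-pentest-bucket/*",
    }],
}

# Contrast: the PassRole misconfiguration hinges on a statement like this
# being too broad; "Resource": "*" lets the user pass ANY role to EC2,
# including roles with administrative privileges.
risky_passrole = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "*",
    }],
}

policy_json = json.dumps(least_privilege_s3, indent=2)
```

Scoping the `Resource` of `iam:PassRole` to specific role ARNs is the usual remediation for the escalation path the scenario demonstrates.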
Software Engineering and Project Management - Software Testing + Agile Method...
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Design and optimization of ion propulsion drone
Electric propulsion technology has been widely used in many kinds of vehicles in recent years, and aircraft are no exception. UAVs are electrically propelled but tend to produce a significant amount of noise and vibration; ion propulsion technology for drones is a potential solution to this problem, and it has been proven feasible in the earth's atmosphere. The study presented in this article covers the design of EHD thrusters and the power supply for ion propulsion drones, along with performance optimization of the high-voltage power supply for endurance in the earth's atmosphere.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
1. Indonesian Journal of Electrical Engineering and Computer Science
Vol. 21, No. 3, March 2021, pp. 1400~1406
ISSN: 2502-4752, DOI: 10.11591/ijeecs.v21.i3.pp1400-1406
Journal homepage: http://ijeecs.iaescore.com
A hybrid strategy for emotion classification
Hussah Nasser Aleisa
Department of Computer Sciences, CCIS, Princess Nourah bint Abdulrahman University, Riyadh, KSA
Article Info ABSTRACT
Article history:
Received Mar 25, 2020
Revised Aug 30, 2020
Accepted Oct 1, 2020
Human emotion recognition is an upcoming research field of human-computer interaction based on facial gestures and is being used for real-time analysis in classifying cognitive affective states from facial video data. Since computers have become an integral part of life, many researchers are using emotion recognition and classification of data based on audio and text. But these approaches offer limited accuracy and relevance in emotion classification. We therefore introduce and analyze a hybrid approach that could outperform the existing strategies, supported by the selection of audio and video data characteristics for classification. The research uses an SVM to classify data from the audio-visual SAVEE database, and the results show that the maximum classification accuracy of about 91.6% for audio data could be improved to 99.2% after applying the hybrid strategy.
Keywords:
Audio-video speech recognition in car database
Emotion classification
Emotion detection
Emotion recognition
Support vector machine
This is an open access article under the CC BY-SA license.
Corresponding Author:
Hussah Nasser Aleisa
Department of Computer Sciences
College of Computer and Information Sciences
Princess Nourah Bint Abdulrahman University, Riyadh, KSA
Email: haleisa2019@gmail.com
1. INTRODUCTION
Facial expressions are assumed to change whenever an emotion is experienced; therefore, emotion detection can be achieved by detecting the facial expression associated with it. Facial actions can be extracted from each facial expression. Changes in the positions of the eyes, mouth, and nose are determined by the movements of facial muscles, and computer programs capture the user's facial expressions and head movements through image capture, representing them as dots in a coordinate system. The changes are then analyzed as occurrences of facial actions. There are about 46 facial action units (FAUs) in the facial action coding system (FACS) developed by Ekman et al. Through emotion, a person is able to communicate and express feelings such as interests, wishes, targets, and requirements. Physiological responses accompany this expression and may change the person's voice; for instance, energy consumption may be higher for emotional events such as anger, which raises the vibration of the vocal cords and modifies the shape and rhythm of breathing. Therefore, emotion recognition commonly takes the human voice as the data that most researchers work with [1]. Recent studies have focused heavily on this kind of data to obtain better results, and researchers have developed many hybrid approaches for emotion classification based on speech, text, and images [2]. It is often assumed that emotion-related voice changes are independent of language and speaker, so when classifying emotion, researchers consider acoustic features such as the energy, pitch, and speed of emotional speech and their variants; however, these strong acoustic characteristics cannot individually determine emotions precisely and efficiently.
To improve the classification precision of emotions, extra voice features such as spectral and prosodic features are used, and video features are then applied as a complementary factor. This method helps enhance emotion classification precision to a higher extent. The aim of this research paper is to develop a mixed emotion classifier using both audio and video characteristics. For the analysis, we consider seven emotion classes: anger, happiness, fear, disgust, sadness, surprise, and neutral. The hybrid strategy estimates the effect of using video features on classification precision: emotions are first classified with respect to audio data only, and the results are then compared with classification based on both audio and video features.

The paper is organized as follows. Section 2 reviews existing work in the area of emotion recognition and classification. Section 3 describes the research methods, including the data collection, the related database, feature extraction, and the proposed hybrid method. Section 4 presents the experimental results and discussion. Finally, Section 5 concludes the paper and proposes directions for future research.
2. LITERATURE SURVEY
The use of audio/video (AV) signals, or a combination of such signals, to interpret human emotions is a common method to classify and sense human emotion. This paper aims at proposing a good alternative to the existing solutions by offering increased classification accuracy. There is a probability of encountering adverse emotions that add negativity; according to the authors in [3], words such as fright, over-stressing, sadness, and amazement give a negative sense to emotions. On examining vital physiological activities in humans, such as temperature sensing, ECG, blood and air pressure, and pulse oximetry, as studied by the authors of [4], the researchers proposed that emotion recognition can be accomplished based on the expressions and actions produced by human beings. They also identified that variations in an individual's emotions are responsible for variations in human voice characteristics, and it follows that a person's sentiments and emotions can be investigated through these audio-video characteristics. One technique is to determine the fundamental frequency when mining a person's emotions from speech [5]. The range depends on whether the person is male or female: since the male voice is lower-pitched than the female voice, the male frequency range is usually 80-160 Hz and the female range is 150-250 Hz, while small children up to 12 years of age have a fundamental frequency of 200-400 Hz. Predicting a person's emotion can also be accomplished with other speech characteristics [6], such as quickness of speech, its quality, and energy criteria, either singly or combined; a structure created from these speech characteristics is referred to as dialect-dependent speaker models. A few researchers have approached dialect determination with a completely new structure in which the speaker is treated as entirely exclusive and no prior knowledge about these features is applied; this anonymous perspective is under research, as mentioned by the authors in [7]. Current technology developments in the audio/speech area can enhance driver concentration through audio signaling while driving. If a driver is equipped with an audio interface, distractions can be avoided, but there may be noise interference; visual data can therefore be appended to improve the user interface by capturing, recording, and disseminating audio-visual data, which can be expensive. The AVICAR database [8] contains a research dataset of vehicular audio/visual data, but because of the time lag between the audio and video streams, synchronization is needed; since no specific protocol support exists, handling this situation is still a matter of exploring new ways.
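One simple way to handle such an audio-video time lag, absent protocol support, is nearest-timestamp alignment of the two streams. The following is a hedged sketch, not part of the AVICAR work; the frame rates and the 7 ms offset are invented for illustration:

```python
import numpy as np

def align_video_to_audio(video_ts, audio_ts):
    """For each video frame timestamp, return the index of the nearest
    audio frame timestamp (simple nearest-neighbor alignment)."""
    pos = np.searchsorted(audio_ts, video_ts)
    pos = np.clip(pos, 1, len(audio_ts) - 1)
    left, right = audio_ts[pos - 1], audio_ts[pos]
    # choose whichever neighbor is closer in time
    return np.where(video_ts - left <= right - video_ts, pos - 1, pos)

# 100 audio frames/s (10 ms hop) vs 25 fps video delayed by 7 ms
audio_ts = np.arange(0.0, 2.0, 0.01)
video_ts = np.arange(0.0, 2.0, 0.04) + 0.007
idx = align_video_to_audio(video_ts, audio_ts)
```

Each video frame is then paired with the audio frame `idx` points at, so the two feature streams advance together despite the lag.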
Such a system would offer safety to the driver and other passengers in life-threatening situations. Some researchers applied sentiment or emotion identification [9] and hidden Markov models (HMM) [10] to categorize emotions from audio signals, deriving results on four emotion classes: happy, angry, sad, and neutral. For creating and evaluating new models, the use of appropriate data is important. Many databases are available for emotion recognition, and a few are open source [11], such as the eNTERFACE'05 EMOTION audio-visual database containing emotional contexts. In [8], a speech-corpus database containing multi-channel audio-visual records (AVICAR) is provided by researchers at the University of Illinois (2003-2004). In [12, 13] there is the IEMOCAP database for multimodal information capture, maintained by the SAIL lab (University of Southern California).
The SAVEE audio-visual database contains data from native English male speakers aged 27-31 years, labeled KL, DC, DJ, and JK [14]. Emotion is designated in seven classes: anger, happiness, fear, surprise, sadness, disgust, and neutral. The text material consisted of 120 utterances per speaker, drawn from 15 TIMIT sentences per emotion, for an overall size of 480 utterances, and the data were recorded dynamically. Sixty painted markers on the actors' frontal faces were used for facial tracking. The distribution of the database shown in Figure 1 [15] depicts the emotion classes, with the columns denoting the number of video files containing emotional data. It is clearly visible that all emotions are represented equally apart from the neutral state.

Figure 1. SAVEE database emotion classes distribution
3. RESEARCH METHODS
3.1. Feature extraction
In this section we review the audio and video feature extraction methods. The audio feature extraction methods are summarized in Table 1; video feature extraction follows. Feature-detection functions from image-processing libraries could be used to extract the features of an image, but the dimensionality of the result is high. In the SAVEE database, faces are marked with small blue markers. These marks are used to identify the points that are essential and effective in determining a facial expression and to reduce the dimensionality of the extracted features. Color-tracking algorithms can be used to find these points. A sample database image is shown in Figure 2.
Table 1. Summary of audio feature extraction methods

Energy and related features: Energy is important for speech signals. To obtain the statistics of energy in speech, its value per frame has to be extracted; the resulting statistics of energy, such as the maximum value, minimum value, average, and standard deviation [6] over the whole speech sample, are obtained by evaluating the energy.

Pitch and related features: Pitch is an important feature in speech emotion recognition. The shape of the vocal cords and how they vibrate are affected by different emotional states. Since pitch depends on vocal-cord tension and the pressure under the larynx, it also contains information about emotion. The pitch signal is called the glottal waveform, and its maximum, minimum, average, and variation range differ across emotions.

Formants (bandwidths for the first four formants): Formant determination is based on the vocal tract, which is affected differently in different emotional states. For instance, the highest peak in the spectrum of the sound is the first formant frequency; in other words, a formant is a concentration of energy around a certain frequency. The linear predictive coding method is used for formant calculation [16].

Mel-frequency cepstrum coefficients: This feature is commonly used in speech processing and has a simple calculation. The Mel-frequency cepstrum has adequate resolution in the low-frequency region and strong resistance to noise, but the accuracy of emotion recognition based on it alone is not satisfactory [17].
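The per-frame energy statistics and autocorrelation-based pitch described in Table 1 can be sketched as follows. This is an illustrative numpy sketch, not the paper's implementation; the frame length, hop size, and the 60-400 Hz pitch search range are assumptions:

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def energy_stats(x, frame_len=1024, hop=512):
    """Per-frame energy summarized by max/min/mean/std (Table 1, energy column)."""
    e = np.sum(frame_signal(x, frame_len, hop) ** 2, axis=1)
    return {"max": e.max(), "min": e.min(), "mean": e.mean(), "std": e.std()}

def pitch_autocorr(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate pitch of one frame from the autocorrelation peak (pitch column)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# synthetic 200 Hz voiced-like tone, 0.5 s at 16 kHz
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
x = np.sin(2 * np.pi * 200 * t)

stats = energy_stats(x)
f0 = pitch_autocorr(frame_signal(x)[0], sr)   # close to 200 Hz
```

In practice toolboxes such as openSMILE or Praat (mentioned later in the paper) compute these features far more robustly.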
Figure 2. A sample SAVEE database image with face color markers
As shown in Figure 2, the marker on the edge of the nose (encircled in black) is taken as a reference. It is considered the center of the coordinate system, and the remaining coordinates are obtained relative to it. By extracting these features, i.e., using the colored points marked on the faces, a 130-dimensional set is obtained. This dimensional set covers only three speakers (JK, DC, DJ), because 65 colored points are detected on their faces. However, only 60 points can be detected and visualized on the fourth speaker's face (KL); therefore, the features extracted from his files have 120 dimensions. To equalize the feature dimensions, the differing points and their coordinates were removed from the other speakers' feature files. Figure 3 shows the points of difference between the fourth speaker and the others (here the second speaker, for example); in addition, the second speaker's face is marked with yellow circles for comparison with the fourth speaker's face.
Figure 3. Compare markers of two speakers in SAVEE database
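The coordinate-normalization step above, with the nose-edge marker as origin, can be sketched as follows. This is a hypothetical illustration; `marker_features` and the random pixel coordinates are not from the paper:

```python
import numpy as np

def marker_features(points, ref_index=0):
    """Express all tracked marker coordinates relative to the reference marker
    (the nose-edge point in Figure 2) and flatten into one feature vector."""
    pts = np.asarray(points, dtype=float)   # (n_markers, 2) pixel coordinates
    rel = pts - pts[ref_index]              # the reference marker becomes (0, 0)
    return rel.ravel()                      # (2 * n_markers,) feature vector

# 65 detected markers -> a 130-dimensional vector, as described in the text
rng = np.random.default_rng(0)
feats = marker_features(rng.uniform(0, 480, size=(65, 2)))
```

With 60 detectable markers (speaker KL) the same function yields a 120-dimensional vector, which is why the differing points are removed to equalize dimensions.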
3.2. Emotion classification
Many classification algorithms have been proposed by researchers owing to the wide popularity of and applications for emotion recognition via audio. Some of these algorithms are found in the following papers [10, 3, 18-20]: hidden Markov models (HMM), neural networks (NN), the maximum-likelihood Bayesian classifier (MLC), Gaussian mixture models (GMM), kernel regression, the k-nearest neighbors algorithm (KNN), and support vector machines (SVM) [21, 22]. SVM constructs, in a high-dimensional vector space, the hyperplane with the maximum distance to the training vectors. Although SVM is a simple and efficient machine learning algorithm, it is extensively used in classification and pattern recognition problems, and its performance advantage over other classifiers under similar conditions is most evident when training data is limited. Therefore, SVM is selected as the classifier for emotion classification in the problem under study. The hard-margin SVM requires linearly separable data, whereas its extended version, referred to as the soft-margin SVM, defines a penalty coefficient (C) for data items that violate the class margin. The non-linear SVM is based on a kernel function that separates the different classes; because explicit feature-space calculation can be expensive, and the feature space may have unlimited dimensions, the kernel is used to overcome this problem. With the RBF kernel function, the choice of two parameters is very important: once the penalty C for class violations and the constant σ in the kernel function [18, 19] are identified, the classifier can predict emotions comparatively accurately. SVM is generalized to more than two classes by the one-against-all (OAA) algorithm [15, 16].
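The soft-margin RBF SVM with OAA multi-class handling described above can be sketched from scratch. This is an illustrative numpy sketch under stated assumptions, not the paper's MATLAB implementation; the toy data, C=10, σ=1, and the simplified dual solver (bias folded into the kernel, no equality constraint) are all assumptions:

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """K(x, z) = exp(-||x - z||^2 / (2 sigma^2)); the +1 absorbs the bias term."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) + 1.0

def train_binary_svm(K, y, C, iters=500, lr=0.01):
    """Projected gradient ascent on the simplified SVM dual, 0 <= alpha <= C."""
    alpha = np.zeros(len(y))
    for _ in range(iters):
        alpha += lr * (1.0 - y * (K @ (alpha * y)))   # dual gradient
        alpha = np.clip(alpha, 0.0, C)                # soft-margin box constraint
    return alpha

def train_oaa(X, y, C=10.0, sigma=1.0):
    """One-against-all: one binary SVM per class."""
    K = rbf_kernel(X, X, sigma)
    return {c: train_binary_svm(K, np.where(y == c, 1.0, -1.0), C)
            for c in np.unique(y)}

def predict_oaa(models, X_train, y, X, sigma=1.0):
    """Score every class's binary SVM and pick the argmax."""
    K = rbf_kernel(X, X_train, sigma)
    classes = list(models)
    scores = np.stack([K @ (models[c] * np.where(y == c, 1.0, -1.0))
                       for c in classes])
    return np.array(classes)[np.argmax(scores, axis=0)]

# toy 3-class problem standing in for the 7 emotion classes
rng = np.random.default_rng(1)
centers = np.array([[0, 0], [5, 0], [0, 5]])
X = np.vstack([c + rng.normal(0, 0.5, size=(20, 2)) for c in centers])
y = np.repeat([0, 1, 2], 20)

models = train_oaa(X, y)
acc = (predict_oaa(models, X, y, X) == y).mean()
```

In a real experiment one would of course use a production SVM solver; the point here is only the role of C, σ, and the OAA decision rule.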
3.2.1. Emotion classification based on audio features
In this study seven main emotions are used: happiness, sadness, anger, fear, disgust, surprise, and neutral. Different emotional states create different variations in human voice features such as pitch, energy, and spectrum. Audio features are first extracted and then passed through a feature-selection method so that the more effective features can be selected; for normalization, the z-score method is used. Classification initially uses only audio features: 121 audio features including energy, pitch, formants, Mel-frequency cepstrum coefficients, the speed of speech, and the values associated with them. These features were selected because they are common in most work done in this area, and the experiments are repeated on multiple choices of audio features. A classification task usually involves separating the data into training and testing sets, so the classifications were done with 7-fold cross-validation and examined with different values of sigma (σ). The results of the experiments are investigated in Section 4.
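The z-score normalization and 7-fold split described above can be sketched as follows (a minimal numpy illustration, not the author's code; the random matrix stands in for the real 121 extracted features):

```python
import numpy as np

def zscore(X):
    """z-score normalization: zero mean, unit variance per feature column."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / np.where(sd == 0, 1.0, sd)   # guard constant columns

def k_fold_indices(n, k=7, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

# 121 audio features for, e.g., the 480 SAVEE utterances
X = np.random.default_rng(2).normal(5.0, 3.0, size=(480, 121))
Xn = zscore(X)
folds = k_fold_indices(len(Xn), k=7)
# each fold serves once as the test set while the other six train the SVM
```
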
3.2.2. Emotion classification based on audio-visual features (hybrid approach)
In the second phase, classification is done by combining both audio and video features; in other words, part of the work is common with the classification based on audio features only. In this phase, feature extraction must be done first, then dimension reduction, and finally normalization. The vectors created from the extracted features are used to train the SVM; the hybrid approach uses the audio feature vectors and the video feature vectors simultaneously. A model is created based on the SVM classifier, and multi-class classification is done by the OAA algorithm. 7-fold cross-validation is used for determining the training and testing data sets. The classification accuracy can be improved by appropriate measures, including changes to the selected features and checking other ways of performing feature extraction.
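Feature-level fusion in the hybrid approach amounts to concatenating the audio and video vectors per utterance before training the single SVM. A minimal sketch (the dimensions 121 and 130 are taken from the text; the random data is a stand-in for real features):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 480                                    # utterances in SAVEE
audio = rng.normal(size=(n, 121))          # 121 audio features per utterance
video = rng.normal(size=(n, 130))          # 130 marker-coordinate features

# feature-level fusion: one joint vector per utterance feeds the SVM
hybrid = np.hstack([audio, video])         # shape (480, 251)
```

Normalization and 7-fold cross-validation then proceed on `hybrid` exactly as in the audio-only phase.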
4. RESULTS AND DISCUSSION
The experiments were conducted in MATLAB, in which educational algorithms from the University of Rochester [17] were applied to extract the audio features. Techniques such as one-against-all were applied to improve the performance of the SVMs. Using different kernels gave excellent results for SVM problems, including the application of the polynomial kernel [15] in research, which improved classification accuracy by four percent (4%). Compared to other classification methods, the support vector machine (SVM) proved effective and popular. In this research, the visual features provided in the SAVEE database have been used. Several toolboxes make audio and video feature extraction easy, such as openSMILE and Praat for audio features and OpenCV for video features; these can be used instead of writing the algorithms of the feature-extraction section.

This study uses a limited number of features and has achieved good results compared with other emotion classifiers based on audio or audio-visual data. In the first stage, classification was done only with audio features under different conditions, and different and remarkable results were obtained. Table 2 shows the result of classification based on audio features only, considering σ=9 and three different sets of audio features. According to Table 2, if the selected audio features are energy and pitch, the classification accuracy is 75.35%. Adding formants to the audio feature set and retesting increases the classification accuracy to 83.21%. Next, classification is done with a new feature set that adds MFCCs and the speed of speech to the energy, pitch, and formant features; the results show that classification accuracy increases further, to about 91%.

Emotion classification was done again with the same audio features but different values of σ. The values considered for σ were 5, 6.5, 8.5, 9, and 10, of which 8.5 achieved the best result. Table 3 shows the overall results of the models with three values of sigma. The maximum and minimum classification accuracies for seven emotions with only audio features are 91.63%, for the model with energy, pitch, formants, Mel-frequency cepstrum coefficients, and speed as audio features and sigma 8.5, and 75.49%, for the model with energy and pitch as audio features and sigma 8.5, respectively. Finally, we wanted to test the impact of adding video features on the accuracy; thus, we repeated the experiments while changing the number of features and sigma with audio and visual features together. The results are summarized in Table 4.
Table 2. Audio classification for seven emotion types, sigma (σ) = 9

Class      Energy/Pitch  Energy/Pitch/Formant  Energy/Pitch/Formant/MFCCs
Sadness    64.97         84.94                 93.97
Happiness  73.98         76.87                 88.93
Anger      88.53         88.33                 93.08
Fear       74.55         85.01                 89.10
Disgust    56.37         70.89                 89.95
Neutral    84.01         87.31                 88.99
Surprise   84.99         89.13                 93.29
Accuracy   75.35         83.21                 90.57
Table 3. Accuracy with only audio features and different values of sigma (σ)

Sigma (σ)  Energy/Pitch  Energy/Pitch/Formant  Energy/Pitch/Formant/MFCCs
5          81.28         88.77                 89.69
6.5        77.49         86.5                  91.03
8.5        75.49         83.44                 91.63

Table 4. Model accuracy with audio-visual features and different values of sigma (σ)

Sigma (σ)  Energy/Pitch/VF  Energy/Pitch/Formant/VF  Energy/Pitch/Formant/MFCCs/VF
5.00       98.85            96.75                    90.11
6.50       99.26            97.76                    95.03
8.50       99.26            98.81                    98.11

(VF denotes the video features.)
The maximum classification accuracy of the seven emotions with the hybrid approach is 99.26%, achieved by the model with energy and pitch as audio features plus video features and sigma 6.5; classification accuracy under the same conditions but using only audio features was 77.49%. The minimum accuracy was 90.11%, obtained by the model with energy, pitch, formants, Mel-frequency cepstrum coefficients (MFCC), and speed as audio features plus video features and sigma 5; under the same conditions, classification accuracy with only audio features was 89.69%. Comparing the classifications based on only audio features with the hybrid approach (classification on audio-visual features) shows that the hybrid approach increases classification accuracy in all three models; therefore, the proposed hybrid approach produced more promising results. The authors of [23, 24] used audio signals and the SAVEE database for emotion recognition in their work. The features selected in that project are mainly related to energy, pitch, and statistical and spectral features, including MFCCs. They recognized emotions [25] using a linear kernel with the binary-tree classification strategies "one against one" (OAO) and "one against all" (OAA). The best results of that work and of this research, with the same amount of common data and the same audio features, are shown in Table 5. Comparing the results in Table 5, the good classification performance of the hybrid approach with audio-visual features can be seen.
Table 5. Comparing Sinith's project with the proposed hybrid approach

Class      Sinith's work  Hybrid approach
Anger      65             96.55
Happiness  45             98.78
Neutral    70             98.78
Sadness    65             97.5
Accuracy   61.25          97.9
The best accuracy in Sinith's work on the SAVEE database using a linear kernel and binary tree is 61.25%, while the proposed hybrid method, based on the same database, exhibits an accuracy of 97.90%. Another work, given by Chandni and team [10] to recognize emotion, uses hidden Markov models and the SAVEE database, referencing four classes. However, that work used only one audio feature to recognize emotions, MFCC, for which the emotion-recognition accuracy was 94.17%; with the same database and the same feature, the new hybrid approach raised the accuracy to 97.82%. The comparison and results are depicted in Table 6.

Table 6. Chandni's project vs. the hybrid approach

Class      Chandni's work  Hybrid approach
Anger      90              98.24
Happiness  100             99.09
Neutral    97              94.78
Sadness    90              99.18
Accuracy   94.17           97.82
5. CONCLUSION
This paper studied various emotion classification techniques and proposed a hybrid technique for the classification of human emotions, which is a most challenging task in real-time situations. We determined our results with a hybrid criterion that combined audio and video data in an SVM classifier; the improvement on the data we used was substantial, and ultimately the proposed approach outperformed with an accuracy of 99.26 percent, a result better than those of the research studies currently available. This study has produced an emotion recognition system that is independent of the speaker and the language used; moreover, the two kinds of audio features considered in the study are prosodic and spectral features, unlike existing research using different audio features for classification and emotion recognition. Our future plan is to investigate emotion recognition from a different perspective: the impact of dialect and language on emotion recognition. We also want to experiment with recognizing emotions and analyzing audio features in various other languages to check whether this could enhance the accuracy of the classifier. Another improvement would be studying the influence of accent on emotion expression and recognition for audio features. Future research is also concerned with the need to create specialized databases for different languages, from which the effective features in any chosen language could be analyzed.
ACKNOWLEDGEMENTS
This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program. I am thankful to the Research Unit for encouraging women researchers, giving them opportunities to do research in upcoming areas such as image processing, and enabling them to make their contributions.
REFERENCES
[1] Ververidis et al., "Emotional speech recognition: Resources, features, and methods," Speech Communication, vol. 48, no. 9, pp. 1162-1181, 2006.
[2] J. Bhaskar et al., "Hybrid approach for emotion classification of audio conversation based on text and speech mining," Procedia Computer Science, vol. 46, pp. 635-643, 2015.
[3] E. H. Jang et al., "Emotion classification based on physiological signals induced by negative emotions: Discrimination of negative emotions by machine learning," in 2012 9th IEEE International Conference on Networking, Sensing and Control (ICNSC), Beijing, 2012.
[4] C. Parlak and B. Diri, "Emotion recognition from the human voice," in 2013 21st Signal Processing and Communications Applications Conference (SIU), 2013.
[5] M. El Ayadi et al., "Survey on speech emotion recognition: Features, classification schemes, and databases," Pattern Recognition, vol. 44, no. 3, pp. 572-587, 2011.
[6] B. V. S. Pathak et al., "Extraction of pitch and formants and its analysis to identify three different emotional states of a person," International Journal of Computer Science, vol. 9, no. 4, pp. 296-299, 2012.
[7] L. Chen et al., "Speech emotion recognition: Features and classification models," Digital Signal Processing, vol. 22, no. 6, pp. 1154-1160, 2012.
[8] N. Rajitha et al., "Recognising audio-visual speech in vehicles using the AVICAR database," in Proceedings of the 13th Australasian International Conference on Speech Science and Technology, Melbourne, Vic, 2010.
[9] M. S. Sinith et al., "Emotion recognition from audio signals using support vector machine," in IEEE Recent Advances in Intelligent Computational Systems (RAICS), Trivandrum, 2015.
[10] G. Chandni et al., "An automatic emotion recognizer using MFCCs and hidden Markov models," in 2015 7th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), Brno, 2015.
[11] "eNTERFACE'05 EMOTION Database," [Online]. Available: http://www.enterface.net/enterface05/.
[12] C. Busso et al., "IEMOCAP: interactive emotional dyadic motion capture database," Language Resources and Evaluation, vol. 42, pp. 335-359, 2008.
[13] A. Metallinou et al., "Visual emotion recognition using compact facial representations and viseme information," in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, 2010.
[14] "SAVEE Database," [Online]. Available: http://kahlan.eps.surrey.ac.uk/savee/Database.html.
[15] M. Sidorov et al., "Feature and decision level audio-visual data fusion in emotion recognition problem," in 2015 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Colmar, 2015.
[16] N. Yang et al., "Speech-based emotion classification using multiclass SVM with hybrid kernel and thresholding fusion," in 2012 IEEE Spoken Language Technology Workshop (SLT), Miami, FL, 2012.
[17] Y. Pan et al., "Speech emotion recognition using support vector machine," International Journal of Smart Home, vol. 6, no. 2, pp. 101-108, 2012.
[18] E. Sopov and I. Ivanov, "Self-configuring ensemble of neural network classifiers for emotion recognition in the intelligent human-machine interaction," in 2015 IEEE Symposium Series on Computational Intelligence, Cape Town, 2015.
[19] S. Agrawal and S. Dongaonkar, "Emotion recognition from speech using Gaussian mixture model and vector quantization," in 2015 4th International Conference on Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), Noida, 2015.
[20] M. R. Mehmood and H. J. Lee, "Emotion classification of EEG brain signal using SVM and KNN," in 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Turin, Italy, 2015.
[21] N. R. Kanth and S. Saraswathi, "Efficient speech emotion recognition using binary support vector machines & multiclass SVM," in IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Madurai, 2015.
[22] A. Metallinou et al., "Context-sensitive learning for enhanced audiovisual emotion classification (extended abstract)," in 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), Xi'an, 2015.
[23] Y. Chavhan, M. L. Dhore and P. Yesaware, "Speech emotion recognition using support vector machine," International Journal of Computer Applications, vol. 1, pp. 6-9, 2010.
[24] M. S. Sinith et al., "Emotion recognition from audio signals using support vector machine," in IEEE Recent Advances in Intelligent Computational Systems (RAICS), Trivandrum, 2015.
[25] F. E. Gunawan et al., "Predicting the level of emotion by means of Indonesian speech signal," TELKOMNIKA (Telecommunication, Computing, Electronics and Control), vol. 15, no. 2, pp. 665-670, 2017.