With the growing use of computers and electronic devices, and of human-computer interaction across the broad spectrum of human life, controlling emotions and promoting positive emotional states have become more prominent concerns. As a user's negative emotions increase, his or her efficiency decreases markedly. Research has shown that color is one of the most influential basic factors in sight, identification, interpretation, perception and sensation, and that colors affect individuals' emotional states and can change them. In this paper, by learning the reactions of users with different personality types to each color, the relationship between users' emotional states, personality and color was modeled for the variable "emotional control". For learning, we used a memory-based system in which the user-interface color changes according to the positive and negative experiences of users with different personalities. A comparison of the tested methods demonstrated the superiority of memory-based learning on all three parameters: emotional control, enhancement of positive emotional states and reduction of negative emotional states. Moreover, the accuracy of the memory-based learning method was almost 70 percent.
This document describes a deep neural network model for predicting emotions in images based on high-level semantic features like objects and backgrounds. The model was trained on a new image emotion dataset consisting of over 10,000 images labeled with valence and arousal values obtained through crowdsourcing. The model uses features like object detection, background semantics, and color statistics as inputs to a feedforward neural network that predicts continuous valence and arousal values, providing a more nuanced representation of emotions than discrete categories. Experiments showed the model could effectively predict emotions in images.
The document describes the Empathic Companion, an animated interface agent that recognizes a user's affective states in real-time through physiological sensors and addresses the user's emotions through empathetic feedback. An exploratory study found that the Empathic Companion had a positive effect on reducing a user's stress levels when answering interview questions, though an overall positive impact was not clearly shown. The paper discusses related work on using physiological data to track interface impact, reflect user affect, adapt interfaces based on affect, address user affect with agents, learn predictive user models, and disambiguate dialogue acts.
This document discusses using a deep neural network with vectorized facial features for emotion recognition. It introduces a novel method of vectorizing facial landmarks extracted from images to represent facial expressions. A deep neural network is trained on the vectorized facial feature data to classify emotions. The network achieves 84.33% accuracy on an emotion recognition test set. Compared to convolutional neural networks, the proposed deep neural network requires less training time and data.
The document describes a deep learning model called Deep-Emotion that uses an attentional convolutional network for facial expression recognition. The model focuses on important facial regions like the eyes and mouth that are most relevant to detecting emotions. The model achieves state-of-the-art accuracy on several facial expression datasets, outperforming previous methods. Visualization techniques show the model learns to focus on different facial regions for different emotions.
IRJET - Sentimental Analysis on Audio and Video using Vader Algorithm - Monali ... (IRJET Journal)
This document presents a proposed system for performing sentiment analysis on audio and video reviews from social media platforms. The system first collects audio and video data from sites like YouTube and Facebook. It then separates the audio and video files, converts them to .wav format, and extracts text from the audio and video files. This extracted text is then analyzed using the VADER sentiment analysis algorithm to determine the sentiment polarity (positive, negative, neutral) expressed in the text. VADER is a lexicon-based approach that rates words based on sentiment and calculates overall sentiment scores. The proposed system aims to analyze sentiment in audio and video reviews to better understand user opinions expressed across various social media platforms.
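The lexicon-based scoring VADER relies on can be sketched in a few lines: each word carries a signed valence weight and the sentence polarity follows from the aggregate score. The tiny word list and thresholds below are invented for illustration; the real VADER lexicon has thousands of rated entries plus rules for negation, punctuation and capitalization.

```python
import string

# Toy lexicon in the spirit of VADER; the weights here are illustrative only.
LEXICON = {"great": 3.1, "good": 1.9, "love": 3.2,
           "bad": -2.5, "terrible": -2.1, "hate": -2.7}

def polarity(text: str) -> str:
    # Lowercase, strip punctuation, sum the valence of known words.
    words = [w.strip(string.punctuation) for w in text.lower().split()]
    score = sum(LEXICON.get(w, 0.0) for w in words)
    if score > 0.05:
        return "positive"
    if score < -0.05:
        return "negative"
    return "neutral"

print(polarity("I love this great phone"))        # positive
print(polarity("terrible battery, bad screen"))   # negative
```

In practice the `vaderSentiment` package wraps this idea in `SentimentIntensityAnalyzer.polarity_scores`, which returns the negative, neutral, positive and compound scores the system described above would consume.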
This document presents an audio-visual emotion recognition system that uses multiple modalities and machine learning techniques. It extracts audio features like MFCCs and visual features like facial landmarks from video clips. It uses classifiers like CNNs and stacks their confidence outputs to predict emotions. The system achieves state-of-the-art performance on several databases according to experiments. It represents an improvement over previous work by combining audio, visual and classifier fusion approaches for multimodal emotion recognition.
The document describes a new hierarchical deep learning algorithm for facial expression recognition (FER). The algorithm extracts appearance features from preprocessed LBP images using a convolutional neural network. It also extracts geometric features by tracking the coordinates of action unit landmarks; action units correspond to movements of the facial muscles involved in expressions. These two types of features are fused in a hierarchical structure. The algorithm combines the softmax outputs of each network by considering the second-highest predicted emotion. It also uses an autoencoder to generate neutral expression images to help extract dynamic features between neutral and emotional expressions. The algorithm achieved 96.46% accuracy on the CK+ dataset and 91.27% on the JAFFE dataset, outperforming other recent FER methods.
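The fusion step described above, where the two networks' softmax outputs are combined and the runner-up emotion is kept in play, can be sketched as follows. The equal fusion weight and the five-class label set are assumptions for the example, not the paper's exact configuration.

```python
# Late fusion of two networks' softmax vectors; 0.5 weight is illustrative.
EMOTIONS = ["anger", "happiness", "neutral", "sadness", "surprise"]

def fuse(appearance, geometric, w=0.5):
    # Weighted sum of the two probability vectors, element by element.
    combined = [w * a + (1 - w) * g for a, g in zip(appearance, geometric)]
    ranked = sorted(range(len(combined)), key=combined.__getitem__, reverse=True)
    # Return the winner and the runner-up, mirroring the
    # second-highest-emotion idea in the hierarchical combination.
    return EMOTIONS[ranked[0]], EMOTIONS[ranked[1]]

app = [0.10, 0.60, 0.05, 0.15, 0.10]   # appearance-network (LBP/CNN) softmax
geo = [0.20, 0.35, 0.05, 0.30, 0.10]   # geometric-network (landmark) softmax
print(fuse(app, geo))  # ('happiness', 'sadness')
```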
IRJET - Emotion based Music Recommendation System (IRJET Journal)
This document discusses an emotion-based music recommendation system that uses facial expression recognition to determine a user's mood and generate an appropriate playlist. It first discusses existing approaches to emotion detection in music, including models that represent emotions in two-dimensional spaces of arousal and valence. It then outlines the objectives of developing a new system using machine learning approaches to detect emotions from facial images in order to provide personalized music recommendations based on the user's mood. Finally, it reviews several related works that use techniques like Active Appearance Models, Bezier curve fitting, and support vector machines for facial expression analysis and emotion recognition from images.
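The two-dimensional arousal-valence representation mentioned above lends itself to a simple quadrant rule. This is a sketch using the conventional quadrant names, not any specific system's taxonomy.

```python
# Map a (valence, arousal) point in [-1, 1] x [-1, 1] to a mood quadrant.
def mood_quadrant(valence, arousal):
    if valence >= 0:
        return "happy/excited" if arousal >= 0 else "calm/content"
    return "angry/tense" if arousal >= 0 else "sad/depressed"

print(mood_quadrant(0.7, 0.6))    # happy/excited
print(mood_quadrant(-0.4, -0.5))  # sad/depressed
```

A recommender of the kind described would then index playlists by quadrant label rather than by raw coordinates.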
Predicting depression using deep learning and ensemble algorithms on raw twit... (IJECEIAES)
Social network and microblogging sites such as Twitter are widespread among all generations nowadays; people connect there and share their feelings, emotions, pursuits and more. Depression, one of the most common mental disorders, is an acute state of sadness in which a person loses interest in all activities. If not treated promptly, it can have dire consequences, including death. In this era of the virtual world, people are more comfortable expressing their emotions on such sites, which have become part and parcel of everyday life. The research put forth here therefore employs machine learning classifiers on a Twitter data set to detect whether a person's tweet indicates any sign of depression.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Evaluation of positive emotion in children mobile learning application (journalBEEI)
This document discusses the evaluation of positive emotion in children's mobile learning applications using a mixed methods approach. Data was collected from over 100 children across rural, suburban, and urban schools in Sabah, Malaysia using various evaluation methods, including assessment scores, EEG devices, facial expression analysis, an emotions scale, and interviews. The results showed that children were generally able to score highly on assessments after using the mobile application, with urban children scoring highest. EEG data also suggested high levels of positive emotion and learning effectiveness, particularly for suburban children. Overall, the evaluation methods demonstrated that the mobile learning application was effective at eliciting positive emotions in children.
This document describes an emotion-based music player that generates playlists based on a user's detected mood. It uses three main modules: an emotion extraction module that analyzes facial expressions from webcam images to determine mood, an audio feature extraction module that extracts data from songs, and an emotion-audio recognition module that maps the facial and audio features to select songs for the playlist. The system aims to reduce the effort of manually creating playlists by automatically generating ones tailored to the user's current emotional state. It works by classifying facial expressions and songs into categories like happy, sad, and angry to create playlists that match or influence the user's detected mood.
Connotative Feature Extraction For Movie Recommendation (CSCJournals)
Assessing emotional responses to the content of a film by exploring the film's connotative properties is difficult. Connotation represents the emotions described by audiovisual descriptors, so it can predict a user's emotional reaction, and connotative features can therefore be used for movie recommendation. There are various methodologies for recommending movies; this paper gives a comparative analysis of some of them. It introduces audio features useful in analyzing the emotions represented in movie scenes and shows how video features can be mapped to emotions, providing a methodology for mapping audio features to emotional states such as happiness, sleepiness, excitement, sadness, relaxation, anger, distress, fear, tension, boredom, comedy and fight. The movie's audio is used for connotative feature extraction, which is extended to recognize emotions, and some of the methods that can recommend movies based on a user's emotions are compared.
The document is a seminar report on an EMO player that generates playlists based on a user's facial expressions. It includes sections on introduction, literature review, current music systems and their limitations, the proposed system, Viola-Jones algorithm, experimental results, conclusion, applications, future scope, and references. The proposed system aims to automatically generate playlists based on detected facial expressions in order to reduce the effort required for manual playlist generation. It uses modules for emotion feature extraction from faces, audio feature extraction, emotion-audio integration, and a classifier to map emotions to music selections.
Expressing and recognizing human emotional behavior plays an important role in communication systems, and facial expression analysis is the most expressive way to display human emotion. Three types of facial emotion are recognized and classified: happy, sad and angry. Depending on the emotion, the music player plays an appropriate song, eliminating the time-consuming and tedious task of manually segregating or grouping songs into different lists and helping to generate a playlist suited to an individual's emotional features. This paper implements an efficient extraction of facial points using Bézier curves, which is well suited to mobile devices. To enhance audibility and cope with memory shortage, a group-play concept is also introduced: several Android phones communicate with each other over peer-to-peer connections using Wi-Fi Direct. Alongside the music player there are editing features such as audio trimming and precise voice recording, and the customized audio file can later be set as an alarm, ringtone or notification tone. Since all emotion recognition is done on real-time images, the system outperforms existing face recognition algorithms and music player applications.
Development of video-based emotion recognition using deep learning with Googl... (TELKOMNIKA JOURNAL)
Emotion recognition from images, videos or speech has been a hot research topic for some years, and the introduction of deep learning techniques such as convolutional neural networks (CNNs) has produced promising results. Human facial expressions are critical components in understanding one's emotions. This paper sheds light on recognizing emotions from videos using deep learning techniques. The methodology of the recognition process is described, and some of the video-based datasets used in scholarly work are examined. Results obtained from different emotion recognition models are presented along with their performance parameters. An experiment carried out on the fer2013 dataset in Google Colab for depression detection was 97% accurate on the training set and 57.4% accurate on the testing set.
Efficient Facial Expression and Face Recognition using Ranking Method (IJERA Editor)
Expression detection is useful as a non-invasive method of lie detection and behaviour prediction; however, facial expressions may be difficult to detect with the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity, and with the face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database. The three universally accepted principal emotions to be recognized are surprise, sadness and happiness, along with neutral.
Modeling the Adaption Rule in Context-aware Systems (ijasuc)
Context awareness is increasingly gaining applicability in interactive, ubiquitous mobile computing systems. Each context-aware application has its own set of behaviors with which it reacts to context modifications. This paper is concerned with context modeling and the development methodology for context-aware systems. We propose a rule-based approach and use an adaption tree to model the adaption rules of context-aware systems, illustrating the idea with an arithmetic game application.
Gamification for Elementary Mathematics Learning in Indonesia (IJECEIAES)
The purpose of this research is to combine multimedia elements with mathematics learning material in an interactive mathematics learning application. The research and design methodology used is the Game Development Life Cycle (GDLC), which consists of initiation, pre-production, production, testing and release. Content inside the game is built using gamification and expert system concepts. The result is an interactive learning game that supports students in understanding mathematics material, helping them learn mathematics in an interactive and engaging way and making the material easy to deliver.
NLP-based personal learning assistant for school education (IJECEIAES)
Computer-based knowledge and computation systems are becoming major sources of leverage for multiple industry segments. Hence, educational systems and learning processes across the world are on the cusp of a major digital transformation. This paper seeks to explore the concept of an artificial intelligence and natural language processing (NLP) based intelligent tutoring system (ITS) in the context of computer education in primary and secondary schools. One of the components of an ITS is a learning assistant, which can enable students to seek assistance as and when they need it, wherever they are. As part of this research, a pilot prototype chatbot was developed to serve as a learning assistant for the subject Scratch (Scratch is a graphical utility used to teach school children the concepts of programming). Using an open source natural language understanding (NLU) or NLP library and a Slack-based UI, student queries were input to the chatbot to retrieve the sought explanation as the answer. Through a two-stage testing process, the chatbot's NLP extraction and information retrieval performance were evaluated. The testing results showed that the ontology modelling for such a learning assistant was done relatively accurately, and that the assistant shows potential to be pursued as a cloud-based solution in future.
Predicting human emotions in real-world scenarios requires investigating human subjects. Many psychological effects require affects (feelings) to be produced, directly releasing human emotions, and the development of affect theory suggests that one must be aware of one's sentiments and emotions to forecast one's behavior. The proposed line of inquiry focuses on developing a reliable model that maps neurophysiological data to actual feelings: any change in emotional affect directly elicits a response in the body's physiological systems. The approach is built on the notion of Gaussian mixture models (GMMs). The achieved outcomes are directly affected by the statistical reaction after data processing, the quantitative findings on emotion labels, and coincidental responses with the training samples. The suggested method is evaluated against a state-of-the-art technique in terms of statistical parameters such as population mean and standard deviation. The proposed system determines an individual's emotional state after a minimum of six learning iterations of the Gaussian expectation-maximization (GEM) statistical model, in which the iterations tend toward zero error. Each of these elements improves predictions while simultaneously increasing the amount of value extracted.
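The GEM loop described above alternates an expectation step (how responsible is each Gaussian component for each observation?) with a maximization step (re-estimate each component's parameters from those responsibilities) until the parameters settle. A minimal one-dimensional, two-component sketch follows; real physiological features are multidimensional, so this only shows the shape of the iteration.

```python
import math, random

def pdf(x, mu, sigma):
    # Gaussian density N(x; mu, sigma^2)
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em(data, iters=20):
    mu = [min(data), max(data)]              # crude initialization at the extremes
    sigma, weight = [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            w = [weight[k] * pdf(x, mu[k], sigma[k]) for k in range(2)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate means, variances and mixing weights
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = max(math.sqrt(var), 1e-3)   # floor to avoid collapse
            weight[k] = nk / len(data)
    return mu

random.seed(0)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(5, 1) for _ in range(200)]
print(sorted(em(data)))  # two component means, near 0 and 5
```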
Depression and anxiety detection through the Closed-Loop method using DASS-21 (TELKOMNIKA JOURNAL)
The change in information and communication technology has brought many changes to daily life: the way humans interact is changing, and each form of communication can be expressed directly and instantly. Social media has contributed data in size, diversity, capacity and quality. Based on this, the idea was to observe and measure the tendency toward depression and anxiety through social media, using the Closed-Loop method on mined Facebook text posts. Through pre-processing stages, including text extraction using a Naïve Bayes machine learning model for text classification, early signs of depression and anxiety are measured using the DASS-21 parameters. In total, 22,934 Facebook posts were contributed as training and learning data, collected from July 2017 until July 2018. As a result, an analysis and mapping of the social demographics of users becomes available, covering the usual triggers of depression and anxiety such as grief, illness, household affairs, children's education and others.
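The Naïve Bayes text-classification step can be sketched as a multinomial model over word counts with Laplace smoothing. The four training posts and the two labels below are stand-ins for real annotated data, and equal class priors are assumed for simplicity.

```python
import math
from collections import Counter

# Invented two-label training set; a real system would use annotated posts.
TRAIN = [("i feel hopeless and tired of everything", "depressed"),
         ("cannot sleep sad and worthless", "depressed"),
         ("had a lovely walk with friends today", "normal"),
         ("great dinner and good laughs tonight", "normal")]

def fit(samples):
    counts = {"depressed": Counter(), "normal": Counter()}
    for text, label in samples:
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, vocab

def predict(text, counts, vocab):
    best, best_lp = None, -math.inf
    for label, c in counts.items():          # equal class priors assumed
        total = sum(c.values())
        # Log-likelihood of the words under this class, Laplace-smoothed.
        lp = sum(math.log((c[w] + 1) / (total + len(vocab)))
                 for w in text.split())
        if lp > best_lp:
            best, best_lp = label, lp
    return best

counts, vocab = fit(TRAIN)
print(predict("so tired and hopeless", counts, vocab))  # depressed
```

A pipeline like the one described would feed the predicted label into the DASS-21-based scoring rather than treating it as a final diagnosis.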
Analysis on techniques used to recognize and identify human emotions (IJECEIAES)
Facial expression is a major channel of non-verbal language in day-to-day communication. Statistical analysis shows that only 7 percent of a message in communication is conveyed verbally, while 55 percent is transmitted by facial expression. Emotional expression has been a research subject of physiology since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is classified into six major emotions: happiness, fear, anger, surprise, disgust and sadness. Facial expressions, which involve the emotions, and the nature of speech play a foremost role in expressing these emotions. Researchers later developed a system based on the anatomy of the face, named the Facial Action Coding System (FACS), in 1970, and ever since its development there has been rapid progress in the domain of emotion recognition. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help identify suitable techniques, algorithms and methodologies for future research directions. In this paper, an extensive analysis of the various recognition techniques used to address the complexity of recognizing facial expressions is presented.
An Approach of Human Emotional States Classification and Modeling from EEG (CSCJournals)
In this paper, a new approach is proposed to model emotional states from EEG signals with mathematical expressions based on wavelet analysis and a trust region algorithm. EEG signals are collected in different emotional states, and salient features are extracted through temporal and spectral analysis to indicate the dispersion that distinguishes the different states. The maximum classification accuracy of emotion is obtained with DWT analysis rather than FFT or statistical analysis, so DWT analysis is considered best suited for mathematical modeling of human emotions. The emotional states are modeled with different mathematical expressions using the coefficients obtained from the trust region algorithm, which can be compared with the sub-band wavelet coefficients of the different states. The proposed approach is verified with the adjusted R-square percentage and the sum of squared errors. The adjusted R-square percentages of the mathematically modeled states are 78.4% for relaxation and 77.18% for motor action, while for memory, pleasantness, enjoying music and fear they are 93%, 95.6%, 97.7% and 91.5% respectively. The proposed system is reliable and can be applied in practical, real-time implementations of human-emotion-based systems.
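The DWT features such pipelines favor come from splitting a signal into approximation and detail sub-bands. A single-level Haar transform is the simplest sketch of this: pairwise scaled averages form the approximation band, pairwise scaled differences the detail band. Real EEG pipelines use deeper decompositions and richer wavelets (e.g. db4), but the sub-band idea is the same.

```python
import math

def haar_dwt(signal):
    # One level of the Haar DWT over an even-length signal:
    # approximation = scaled pairwise sums, detail = scaled pairwise differences.
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

a, d = haar_dwt([4, 2, 5, 5])
print(a, d)  # approx ~[4.243, 7.071], detail ~[1.414, 0.0]
```

Features like the per-band energy or coefficient statistics would then feed the state modeling step described above.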
USER EXPERIENCE AND DIGITALLY TRANSFORMED/CONVERTED EMOTIONS (IJMIT JOURNAL)
The document describes a new model called Measuring User Experience using Digitally Transformed/Converted Emotions (MUDE) which measures two metrics of user experience (satisfaction and errors) using facial expressions and gestures captured by an Intel interactive camera. An experiment was conducted with 70 participants who used a software application while their facial expressions and gestures were recorded. The results from the camera were then compared to responses from a System Usability Scale questionnaire to determine if attitudes towards usability matched between the two methods. The study found consistency between the camera-captured emotions and questionnaire responses regarding usability. The MUDE model provides a new approach to evaluating user experience based on digitally measuring emotions expressed during interaction.
Emotion Detection and Depression Analysis in Chat Application (IRJET Journal)
This document summarizes a research paper on developing an emotion detection and depression analysis system within a chat application. It describes how the researchers created two machine learning models to detect emotions in conversations with 83% accuracy. It also analyzed identified emotions to detect signs of depression. The chat application has the potential to improve mental healthcare by providing a way for users to track their emotional well-being. The document reviews related work and outlines the methodology, results, and applications of the proposed system for emotion detection and depression analysis in text-based conversations.
A Review On Sentiment Analysis And Emotion Detection From Text (Yasmine Anino)
This document provides a review of sentiment analysis and emotion detection from text. It begins with definitions of sentiment analysis and emotion detection, noting they are related but distinct tasks. Sentiment analysis determines polarity (positive, negative, neutral) while emotion detection identifies specific emotion types. The document reviews levels of sentiment analysis (sentence, document, aspect) and common emotion models (dimensional, categorical). It also discusses the process of sentiment analysis and emotion detection, including datasets, preprocessing, feature extraction, and different approaches. Finally, it addresses challenges in the field, such as context, sarcasm, and ambiguity.
IRJET - Social Network Stress Analysis using Word Embedding Technique (IRJET Journal)
The document describes a proposed system for predicting stress levels of social network users by analyzing their comments on platforms like Facebook, Twitter, and Instagram. It uses word embedding techniques to translate words from comments into vectors, and then applies decision tree text classification on the vectors to predict whether the comments convey positive, negative, or neutral sentiment. The predicted sentiment analysis is then used to determine an overall stress status for each user. The system is intended to help identify stress early by analyzing what users share on social media, before depression or other issues arise. It aims to provide an easy-to-use web application and dashboard for viewing predicted stress statuses of social network users.
The document discusses research into developing computers that can interact with humans more naturally by perceiving emotions and sensory inputs like humans. Specifically, it discusses:
1) The BLUE EYES technology which aims to give computers human-like perceptual and sensory abilities to understand facial expressions and emotional states.
2) An emotion mouse which measures physiological signals through a mouse to determine a user's emotional state and build a personalized model to help computers adapt to individual users.
3) Prior research linking facial expressions, physiological measurements like galvanic skin response and temperature, and emotional states which provide a framework for the emotion mouse research.
Literature Review On: "Speech Emotion Recognition Using Deep Neural Network" (IRJET Journal)
The document discusses speech emotion recognition using deep neural networks. It first provides an overview of SER and the challenges in the field. It then reviews 20 research papers on the topic, finding that most use deep neural network techniques like CNNs and DNNs for model building. The papers evaluated various datasets and algorithms, with accuracy ranging from 84% to 90%. Overall limitations identified included the need for more data, handling of multiple simultaneous emotions, and improving cross-corpus performance. The literature review contributes to knowledge in using machine learning for SER.
Predicting depression using deep learning and ensemble algorithms on raw twit... (IJECEIAES)
Social network and microblogging sites such as Twitter are widespread among all generations nowadays; people use them to connect and share their feelings, emotions, pursuits, and so on. Depression, one of the most common mental disorders, is an acute state of sadness in which a person loses interest in all activities. If not treated immediately, it can lead to dire consequences, including death. In this era of the virtual world, people are more comfortable expressing their emotions on such sites, which have become part and parcel of everyday life. The research put forth here therefore employs machine learning classifiers on a Twitter dataset to detect whether a person's tweet shows any sign of depression.
Evaluation of positive emotion in children mobile learning application (journalBEEI)
This document discusses the evaluation of positive emotion in children's mobile learning applications using a mixed-methods approach. Data was collected from over 100 children across rural, suburban, and urban schools in Sabah, Malaysia using various evaluation methods, including assessment scores, EEG devices, facial expression analysis, an emotions scale, and interviews. The results showed that children were generally able to score highly on assessments after using the mobile application, with urban children scoring highest. EEG data also suggested high levels of positive emotion and learning effectiveness, particularly for suburban children. Overall, the evaluation methods demonstrated that the mobile learning application was effective at eliciting positive emotions in children.
This document describes an emotion-based music player that generates playlists based on a user's detected mood. It uses three main modules: an emotion extraction module that analyzes facial expressions from webcam images to determine mood, an audio feature extraction module that extracts data from songs, and an emotion-audio recognition module that maps the facial and audio features to select songs for the playlist. The system aims to reduce the effort of manually creating playlists by automatically generating ones tailored to the user's current emotional state. It works by classifying facial expressions and songs into categories like happy, sad, and angry to create playlists that match or influence the user's detected mood.
Connotative Feature Extraction For Movie Recommendation (CSCJournals)
Assessing emotional responses to the content of a film directly is difficult, so the film's connotative properties are explored instead. Connotation represents the emotions conveyed by audiovisual descriptors, so it can predict a user's emotional reaction, and connotative features can therefore be used for movie recommendation. There are various methodologies for movie recommendation, and this paper gives a comparative analysis of some of them. It introduces audio features useful for analyzing the emotions represented in movie scenes; video features can likewise be mapped to emotions. The paper provides a methodology for mapping audio features to emotional states such as happiness, sleepiness, excitement, sadness, relaxation, anger, distress, fear, tension, boredom, comedy, and fight. The movie's audio is used for connotative feature extraction, which is extended to recognize emotions. The paper also provides a comparative analysis of methods that can be used to recommend movies based on the user's emotions.
The document is a seminar report on an EMO player that generates playlists based on a user's facial expressions. It includes sections on introduction, literature review, current music systems and their limitations, the proposed system, Viola-Jones algorithm, experimental results, conclusion, applications, future scope, and references. The proposed system aims to automatically generate playlists based on detected facial expressions in order to reduce the effort required for manual playlist generation. It uses modules for emotion feature extraction from faces, audio feature extraction, emotion-audio integration, and a classifier to map emotions to music selections.
Expressing and recognizing the emotional behavior of humans plays an important role in communication systems. Facial expression analysis is the most expressive way to display human emotion. Three types of facial emotion are recognized and classified: happy, sad, and angry. Depending on the emotion, the music player plays a song accordingly, which eliminates the time-consuming and tedious task of manually segregating or grouping songs into different lists and helps generate an appropriate playlist based on an individual's emotional features. This paper implements an efficient extraction of facial points using Bézier curves, which is well suited for use on mobile devices. To enhance audibility and address memory shortage, a group-play concept is also introduced: multiple Android phones communicate with each other over a peer-to-peer connection using Wi-Fi Direct. Alongside the music player, editing features such as audio trimming and precise voice recording are provided; the customized audio file can later be set as an alarm, ringtone, or notification tone. Since emotion recognition is performed on real-time images, the system outperforms existing face recognition algorithms and music player applications.
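As a rough illustration of the curve-based facial-point extraction, here is a de Casteljau evaluation of a Bézier curve; the control points standing in for an upper-lip contour are hypothetical, not from the paper:

```python
def bezier_point(ctrl, t):
    # de Casteljau evaluation of a Bezier curve at parameter t in [0, 1]:
    # repeatedly interpolate between adjacent control points
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# hypothetical control points for an upper-lip contour
lip = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]
mid = bezier_point(lip, 0.5)   # midpoint of the fitted contour: (2.0, 1.5)
```

Fitting a handful of control points per facial feature, rather than storing every boundary pixel, is what makes this representation attractive for memory-constrained mobile devices.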
Development of video-based emotion recognition using deep learning with Googl... (TELKOMNIKA JOURNAL)
Emotion recognition using images, videos, or speech as input has been a hot research topic for some years. The introduction of deep learning techniques, e.g., convolutional neural networks (CNN), to emotion recognition has produced promising results. Human facial expressions are considered critical components in understanding one's emotions. This paper sheds light on recognizing emotions from videos using deep learning techniques. The methodology of the recognition process, along with its description, is provided. Some of the video-based datasets used in many scholarly works are also examined. Results obtained from different emotion recognition models are presented along with their performance parameters. An experiment was carried out on the fer2013 dataset in Google Colab for depression detection, which achieved 97% accuracy on the training set and 57.4% on the testing set.
Efficient Facial Expression and Face Recognition using Ranking Method (IJERA Editor)
Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However, these facial expressions may be difficult to detect with the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. With the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed on a standard database covering the universally accepted principal emotions to be recognized: surprise, sadness, and happiness, along with neutral.
Modeling the Adaption Rule in Context-aware Systems (ijasuc)
Context awareness is increasingly gaining applicability in interactive ubiquitous mobile computing systems. Each context-aware application has its own set of behaviors for reacting to context modifications. This paper is concerned with context modeling and the development methodology for context-aware systems. We propose a rule-based approach and use an adaption tree to model the adaption rules of context-aware systems. We illustrate the idea with an arithmetic game application.
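One way to picture the adaption tree is as nested attribute tests ending in behaviors. The context attributes and the arithmetic-game adaptations below are invented stand-ins, not the paper's actual rules:

```python
# Hypothetical adaption tree: internal nodes test a context attribute,
# leaves name the adapted behavior.
ADAPTION_TREE = ("location", {
    "classroom": ("noise", {"quiet": "enable_audio_hints",
                            "loud": "show_text_hints"}),
    "home": ("time", {"day": "normal_difficulty",
                      "night": "short_sessions"}),
})

def adapt(tree, context):
    # walk the tree, following the branch matching each context attribute
    if isinstance(tree, str):          # leaf: the adapted behavior
        return tree
    attribute, branches = tree
    return adapt(branches[context[attribute]], context)

action = adapt(ADAPTION_TREE, {"location": "classroom", "noise": "loud"})
# action == "show_text_hints"
```

The tree makes the rule set explicit and easy to extend: adding a new context attribute only adds a level of branching rather than a new tangle of if-statements.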
Gamification for Elementary Mathematics Learning in Indonesia (IJECEIAES)
The purpose of this research is to combine multimedia elements and mathematics learning material into an interactive mathematics learning application. The research and design methodology used is the Game Development Life Cycle (GDLC), which consists of initiation, pre-production, production, testing, and release. The content inside the game is built using gamification and expert system concepts. The result of this research is an interactive learning game that supports students in understanding mathematics material, helping them learn mathematics in an interactive and interesting way and delivering the material easily.
NLP-based personal learning assistant for school education (IJECEIAES)
Computer-based knowledge and computation systems are becoming major sources of leverage for multiple industry segments. Hence, educational systems and learning processes across the world are on the cusp of a major digital transformation. This paper explores the concept of an artificial intelligence and natural language processing (NLP) based intelligent tutoring system (ITS) in the context of computer education in primary and secondary schools. One of the components of an ITS is a learning assistant, which enables students to seek assistance as and when they need it, wherever they are. As part of this research, a pilot prototype chatbot was developed to serve as a learning assistant for the subject Scratch (Scratch is a graphical utility used to teach school children the concepts of programming). Using an open-source natural language understanding (NLU)/NLP library and a Slack-based UI, student queries were input to the chatbot to get the sought explanation as the answer. Through a two-stage testing process, the chatbot's NLP extraction and information retrieval performance were evaluated. The testing results showed that the ontology modelling for such a learning assistant was done relatively accurately, and the assistant shows potential to be pursued as a cloud-based solution in the future.
Predicting human emotions in real-world scenarios requires investigating human subjects. A significant number of psychological affects (feelings) must be produced to directly elicit human emotions, and the development of affect theory suggests that one must be aware of one's sentiments and emotions to forecast one's behavior. The proposed line of inquiry focuses on developing a reliable model that maps neurophysiological data to actual feelings: any change in emotional affect directly elicits a response in the body's physiological systems. The approach is based on Gaussian mixture models (GMM). The statistical response after data processing, quantitative findings on emotion labels, and coincident responses with training samples all directly impact the outcomes achieved. The suggested method is evaluated against a state-of-the-art technique in terms of statistical parameters such as population mean and standard deviation. The proposed system determines an individual's emotional state after a minimum of six learning iterations using the Gaussian expectation-maximization (GEM) statistical model, with the iterations converging toward zero error; each iteration improves the predictions while increasing the amount of value extracted.
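A minimal sketch of the Gaussian-mixture idea: fitting a two-component one-dimensional mixture with expectation-maximization on synthetic data. The data, component count, and feature meaning are invented for illustration; the paper's actual GEM procedure and physiological features are not reproduced here.

```python
import math, random

def em_gmm_1d(data, n_iter=50):
    # EM for a two-component 1-D Gaussian mixture (toy sketch)
    mu = [min(data), max(data)]          # crude initialization
    sigma = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            dens = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                    * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sigma[k] = math.sqrt(max(var, 1e-6))
    return mu, sigma, pi

random.seed(0)
# synthetic "low arousal" and "high arousal" feature values
data = ([random.gauss(0, 1) for _ in range(200)]
        + [random.gauss(10, 1) for _ in range(200)])
mu, sigma, pi = em_gmm_1d(data)   # means converge near 0 and 10
```

In practice each iteration refines the responsibilities and parameters jointly, which is the sense in which the iterative learning "converges toward zero error" on well-separated training samples.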
Depression and anxiety detection through the Closed-Loop method using DASS-21 (TELKOMNIKA JOURNAL)
The change of information and communication technology has brought many changes to daily life. The way humans interact is changing: each form of communication can be expressed directly and instantly, and social media has contributed data of growing size, diversity, capacity, and quality. Based on this, the idea was to observe and measure the tendency toward depression and anxiety through social media using the Closed-Loop method with text mining of Facebook posts. Through pre-processing stages including text extraction, and using a Naïve Bayes machine learning model for text classification, early signs of depression and anxiety are measured against the DASS-21 parameters. In total, 22,934 Facebook posts collected from July 2017 to July 2018 served as training and learning data. As a result, an analysis and mapping of users' social demographics and of common triggers of depression and anxiety, such as grief, illness, household affairs, children's education, and others, is made available.
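The Naïve Bayes text-classification step can be sketched with a tiny multinomial classifier using Laplace smoothing. The mini-corpus and labels below are invented English stand-ins for the Facebook/DASS-21 data:

```python
from collections import Counter, defaultdict
import math

def train_nb(docs):
    # docs: list of (text, label); count words per label for multinomial NB
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in docs:
        words = text.lower().split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def classify_nb(model, text):
    # pick the label maximizing log prior + smoothed log likelihoods
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total_docs)
        n = sum(word_counts[label].values())
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# invented mini-corpus mimicking the early-warning classification step
docs = [("i feel hopeless and tired", "at_risk"),
        ("everything is pointless lately", "at_risk"),
        ("great day with my family", "ok"),
        ("enjoyed the party tonight", "ok")]
model = train_nb(docs)
print(classify_nb(model, "feel tired and hopeless"))  # prints at_risk
```

The real pipeline additionally maps classified posts onto DASS-21 severity scales rather than a binary label.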
Analysis on techniques used to recognize and identifying the Human emotions (IJECEIAES)
Facial expression is a major channel of non-verbal language in day-to-day communication. Statistical analysis shows that only 7 percent of a message is conveyed through verbal communication, while 55 percent is transmitted by facial expression. Emotional expression has been a research subject of physiology since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is classified into six major emotions: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions, together with the nature of speech, play a foremost role in expressing these emotions. In the 1970s, researchers developed a system based on the anatomy of the face, named the Facial Action Coding System (FACS), and ever since its development there has been rapid progress in the domain of emotion recognition. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions. The results of this analysis will help identify proper and suitable techniques, algorithms, and methodologies for future research directions. The paper presents an extensive analysis of the various recognition techniques used to address the complexity of recognizing facial expressions.
An Approach of Human Emotional States Classification and Modeling from EEG (CSCJournals)
In this paper, a new approach is proposed to model emotional states from EEG signals with mathematical expressions based on wavelet analysis and the trust region algorithm. EEG signals are collected in different emotional states, and salient features are extracted through temporal and spectral analysis to indicate the dispersion that distinguishes the states. The maximum classification accuracy of emotion is obtained with DWT analysis rather than FFT or statistical analysis, so DWT analysis is considered best suited for mathematical modeling of human emotions. The emotional states are modeled with different mathematical expressions using the coefficients obtained from the trust region algorithm, which can be compared with the sub-band wavelet coefficients of the different states. The proposed approach is verified with the adjusted R-square percentage and the sum of squared errors. The adjusted R-square percentages of the mathematically modeled states are 78.4% for relax and 77.18% for motor action, while for memory, pleasant, enjoying music, and fear they are 93%, 95.6%, 97.7%, and 91.5%, respectively. The proposed system is reliable and can be applied to practical real-time implementation of human emotion based systems.
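For a sense of the DWT feature extraction, here is a single-level Haar discrete wavelet transform, the simplest member of the DWT family; the paper does not specify its wavelet, so this is only illustrative, and the sample values are invented:

```python
import math

def haar_dwt(signal):
    # one-level Haar DWT (len(signal) must be even): pairwise sums give the
    # approximation (low-pass) band, pairwise differences the detail band
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

# toy "EEG" samples; real use would cascade the transform over the
# approximation band to obtain multiple frequency sub-bands
approx, detail = haar_dwt([4.0, 2.0, 5.0, 7.0])
```

Statistics of the sub-band coefficients (energy, dispersion) are the kind of features such systems feed to a classifier or, as here, fit with mathematical expressions.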
USER EXPERIENCE AND DIGITALLY TRANSFORMED/CONVERTED EMOTIONS (IJMIT JOURNAL)
The document describes a new model called Measuring User Experience using Digitally Transformed/Converted Emotions (MUDE) which measures two metrics of user experience (satisfaction and errors) using facial expressions and gestures captured by an Intel interactive camera. An experiment was conducted with 70 participants who used a software application while their facial expressions and gestures were recorded. The results from the camera were then compared to responses from a System Usability Scale questionnaire to determine if attitudes towards usability matched between the two methods. The study found consistency between the camera-captured emotions and questionnaire responses regarding usability. The MUDE model provides a new approach to evaluating user experience based on digitally measuring emotions expressed during interaction.
Emotion Detection and Depression Analysis in Chat Application (IRJET Journal)
This document summarizes a research paper on developing an emotion detection and depression analysis system within a chat application. It describes how the researchers created two machine learning models to detect emotions in conversations with 83% accuracy. It also analyzed identified emotions to detect signs of depression. The chat application has the potential to improve mental healthcare by providing a way for users to track their emotional well-being. The document reviews related work and outlines the methodology, results, and applications of the proposed system for emotion detection and depression analysis in text-based conversations.
Emotion detection on social media status in Myanmar language (IJECEIAES)
Many social media platforms have emerged and provided services in recent years. Most people, especially in Myanmar, use them to express their emotions or moods, learn subjects, sell products, read up-to-date news, and communicate with each other. Emotion detection on social users is a critical task in opinion mining and sentiment analysis. This paper presents an emotion detection system for social media (Facebook) user statuses or posts written in the Myanmar (Burmese) language. Before the emotion detection process, user posts are pre-processed with segmentation, stemming, part-of-speech (POS) tagging, and stop word removal. The system then uses our pre-constructed Myanmar word-emotion lexicon, M-Lexicon, to extract emotion words from the segmented POS post. The system covers six types of emotion: surprise, disgust, fear, anger, sadness, and happiness. When more than two words with different emotion values are extracted, the system applies a naïve Bayes (NB) classifier to determine the emotion; the classifiers also classify the emotion of users based on their posts. The experiments show that the system achieves 85% accuracy with NB-based emotion detection and 86% with a recurrent neural network (RNN).
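The lexicon lookup step might be sketched as follows; the entries are invented English stand-ins for the Myanmar-language M-Lexicon:

```python
# Minimal sketch of lexicon-based emotion lookup. A real system would run
# this after segmentation, stemming, POS tagging, and stop-word removal,
# then fall back to a classifier when the lookup is ambiguous.
M_LEXICON = {"wonderful": "happiness", "terrified": "fear",
             "furious": "anger", "gross": "disgust",
             "heartbroken": "sadness", "unbelievable": "surprise"}

def detect_emotions(post_tokens):
    # return the emotion label of every token found in the lexicon
    return [M_LEXICON[t] for t in post_tokens if t in M_LEXICON]

print(detect_emotions(["today", "furious", "heartbroken"]))
# prints ['anger', 'sadness']
```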
ENHANCING THE HUMAN EMOTION RECOGNITION WITH FEATURE EXTRACTION TECHNIQUES (IAEME Publication)
This document discusses techniques for enhancing human emotion recognition through feature extraction. It begins by introducing the importance of recognizing human emotions through facial expressions. It then reviews related work applying techniques like deep learning, feature selection, and curriculum learning to emotion recognition. The document goes on to describe optimization techniques like ant colony optimization, particle swarm optimization, and genetic algorithms that can be used for feature extraction. It suggests these techniques may improve emotion recognition performance by selecting important features.
The document summarizes research on designing emotional experiences for mobile user interfaces. It describes:
1. Three empirical studies were conducted to identify emotions associated with smartphone use and design principles.
2. Positive and negative emotion sets were developed based on the studies. 30 positive emotions were identified as potential "emotional solutions".
3. 14 "general solutions" or high-level design principles were developed to deliver the emotional solutions. These include principles like "flow" and "superior quality".
4. 23 specific "UX solutions" were developed that embody the general solutions and can be applied to interface design with flexibility.
Emotional Multi-Agents System for Peer to peer E-Learning (Karlos Svoboda)
The document describes EMASPEL, an emotional multi-agent system for peer-to-peer e-learning. The system uses multiple agent types, including interface agents, emotional agents, curriculum agents, tutor agents, and emotional embodied conversational agents. Emotional agents analyze learners' facial expressions to recognize emotions, while embodied conversational agents express emotions through facial animations. Together this allows the system to personalize instruction based on learners' cognitive and emotional states.
This document discusses issues in sentiment analysis and emotion extraction from text. It provides an overview of different techniques used for emotion extraction like text mining, empirical studies, emotion extraction engines, and vector space models. It then analyzes the issues with each technique, such as only identifying the subject but not sentiment, inability to determine intensity, and difficulties with contradictory or symbolic text. The document concludes that combining the study of multiple techniques and parameters could help develop a more accurate system for sentiment analysis that is closer to realistic human emotion extraction from text.
This document summarizes research on the Blue Eyes technology, which aims to give computers human-like perceptual abilities. It discusses how Blue Eyes uses non-intrusive sensors like video cameras and microphones to identify a user's actions, physical state, emotions, and where they are looking. This information is analyzed to build a model of the user over time to help the computer adapt and create a more productive environment. The document also reviews related work on detecting emotions from facial expressions and touch input and explores using eye tracking for computer input.
ANALYSING SPEECH EMOTION USING NEURAL NETWORK ALGORITHM (IRJET Journal)
The document proposes a deep learning model to identify speech emotion using neural network algorithms. It discusses using convolutional neural networks and LSTM to extract acoustic features from speech signals and accurately identify emotional states. The model aims to improve the accuracy and robustness of speech emotion recognition systems, with applications in healthcare, customer service, and human-computer interaction.
Face expression recognition using Scaled-conjugate gradient Back-Propagation ... (IJMER)
Facial Expression Recognition System: A Digital Printing Application (ijceronline)
Similar to Recognizing emotional state of user based on learning method and conceptual memories (20)
Amazon products reviews classification based on machine learning, deep learni... (TELKOMNIKA JOURNAL)
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave reviews about the product. These reviews are very helpful for store owners and product manufacturers for the betterment of their work processes as well as product quality. An automated system is proposed in this work that operates on two datasets, D1 and D2, obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF) and bag of words (BoW), and global vectors (GloVe) and Word2vec, respectively. Four machine learning (ML) models, support vector machines (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB); two deep learning (DL) models, convolutional neural network (CNN) and long short-term memory (LSTM); and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML and DL models and BERT are evaluated using several performance evaluation measures. BERT turns out to be the best-performing model in the case of D1, with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% on word embedding features in the case of D2. The proposed model shows better overall performance on D2 than on D1.
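The TF-IDF feature extraction mentioned above can be sketched in a few lines; the toy review tokens are invented:

```python
import math
from collections import Counter

def tfidf(docs):
    # term frequency-inverse document frequency over a list of token lists:
    # tf = count/len(doc), idf = log(N / number of docs containing the word)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    n = len(docs)
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf})
    return out

# toy tokenized reviews
docs = [["good", "battery", "good"],
        ["bad", "battery"],
        ["good", "screen"]]
weights = tfidf(docs)
```

Words that appear in every document get an idf of zero, while distinctive words ("bad" above) are weighted up; these sparse vectors are then fed to the classifiers.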
Design, simulation, and analysis of microstrip patch antenna for wireless app... (TELKOMNIKA JOURNAL)
In this study, a microstrip patch antenna operating at 3.6 GHz was designed and evaluated. Rogers RT/Duroid 5880, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm, has been used as the substrate material and serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the antenna design. The goal of this study was wider bandwidth, a lower voltage standing wave ratio (VSWR), and lower return loss, with the main goal being higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the presented antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. The simulation also showed a side-lobe level of -28.8 dB, better than those of earlier works. Thus, the antenna is a solid contender for wireless technology and more robust communication.
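As a sanity check on the quoted design frequency and permittivity, the standard textbook width formula for a rectangular patch, W = c / (2 f0) * sqrt(2 / (eps_r + 1)), gives roughly a 33 mm patch. This formula is a general design rule, not necessarily the one the authors used:

```python
import math

C = 3e8  # speed of light (m/s)

def patch_width(f0, eps_r):
    # textbook width formula for a rectangular microstrip patch antenna
    return C / (2 * f0) * math.sqrt(2 / (eps_r + 1))

w = patch_width(3.6e9, 2.2)          # design values quoted above
print(round(w * 1000, 1), "mm")      # prints 32.9 mm
```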
Design and simulation an optimal enhanced PI controller for congestion avoida... (TELKOMNIKA JOURNAL)
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
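A toy closed-loop sketch of PI-based queue regulation follows; the gains and the pure-integrator plant are invented for illustration and are not the paper's linearized TCP/AQM model or the snake-optimized gains:

```python
def simulate(target=100.0, steps=200, kp=0.4, ki=0.1):
    # discrete PI controller: u[k] = kp * e[k] + ki * sum(e)
    # toy plant: the queue integrates the control signal each step
    queue, integral = 0.0, 0.0
    for _ in range(steps):
        error = target - queue
        integral += error
        queue += kp * error + ki * integral
        queue = max(queue, 0.0)        # queue length cannot go negative
    return queue

final_queue = simulate()               # settles near the target of 100
```

With these gains the closed loop is stable (damped oscillation), so the queue settles at its desired size, which is the behavior the paper's tuned enhanced PI controller aims for on the real TCP/AQM dynamics.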
Improving the detection of intrusion in vehicular ad-hoc networks with modifi... (TELKOMNIKA JOURNAL)
Vehicular ad-hoc networks (VANETs) are formed along the road by wireless-equipped vehicles. The security of such networks has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure them suffers from weaknesses in its membership authentication. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public keys of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated against the existing identity-based cryptosystem (EIBC). The results show that the MIBC recorded an IDR of 99.3% against 94.3% for the EIBC with 140 unregistered vehicles attempting to intrude on the network, and lower CT values of 1.17 ms against 1.70 ms. The MIBC can thus be used to improve the security of VANETs.
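The Lagrange interpolation primitive the MIBC builds on can be sketched over the rationals; a real identity-based scheme works in an elliptic-curve group or finite field, and the polynomial and shares here are hypothetical:

```python
from fractions import Fraction

def lagrange_at_zero(points):
    # Lagrange interpolation evaluated at x = 0: reconstructs f(0)
    # from enough (x, f(x)) points on a polynomial f
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(-xj, xi - xj)
        total += term
    return total

# shares of the hypothetical polynomial f(x) = 7 + 3x + 2x^2 (secret f(0) = 7)
shares = [(1, 12), (2, 21), (3, 34)]
print(lagrange_at_zero(shares))  # prints 7
```

Three points determine a degree-2 polynomial, so any three valid shares recover the secret while fewer reveal nothing, which is the property membership-authentication constructions exploit.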
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) theory toward the IB. The proper research emphasized that the most essential component in explaining IB adoption behavior is behavioral intention to use IB adoption. TAM is helpful for figuring out how elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that may justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to get sufficient theoretical support from the existing literature for the essential elements and their relationships in order to unearth new insights about factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
The smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure that is concerned in controlling the smart antenna pattern to accommodate specified requirements such as steering the beam toward the desired signal, in addition to placing the deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has some drawbacks like slow convergence speed besides high steady state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust its step size. Computer simulation outcomes illustrate that the given model has fast convergence rate as well as low mean square error steady state.
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos.The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
Cognitive radio is a smart radio that can change its transmitter parameter based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to a big data issue as many Internet of Things (IoT) devices are in the network. Based on previous research, most frequency spectrum was used, but some spectrums were not used, called spectrum hole. Energy detection is one of the spectrum sensing methods that has been frequently used since it is easy to use and does not require license users to have any prior signal understanding. But this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, the wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that the wavelet-based sensing has higher precision in detection and the interference towards primary user can be decreased.
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer, by flooding internet traffic to the server. As a result, the server cannot be accessed by legitimate users. A result of this attack causes enormous losses for a company because it can reduce the level of user trust, and reduce the company’s reputation to lose customers due to downtime. One of the services at the application layer that can be accessed by users is a web-based lightweight directory access protocol (LDAP) service that can provide safe and easy services to access directory applications. We used a deep learning approach to detect DDoS attacks on the CICDDoS 2019 dataset on a complex computer network at the application layer to get fast and accurate results for dealing with unbalanced data. Based on the results obtained, it is observed that DDoS attack detection using a deep learning approach on imbalanced data performs better when implemented using synthetic minority oversampling technique (SMOTE) method for binary classes. On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in multiclass when implemented using the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often effects the characteristics of either linear or nonlinear systems. Plus, the stabilization process may be deteriorated thus incurring a catastrophic effect to the system performance. As such, this manuscript addresses the concept of matching condition for the systems that are suffering from miss-match uncertainties and exogeneous disturbances. The perturbation towards the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in the proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty and the system with miss-matched uncertainty. Lastly, two-dimensional numerical expressions are provided to practice the proposed proposition. The outcome shows that matching condition has the ability to change the system to a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both their bit error rate (BER) and throughput output. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results of the improved system show a massive improvement in performance ver the conventional system and significant improvements with massive MIMO, including the best throughput and BER. When compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, but it still has around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
In this study, it is aimed to obtain two different asymmetric radiation patterns obtained from antennas in the shape of the cross-section of a parabolic reflector (fan blade type antennas) and antennas with cosecant-square radiation characteristics at two different frequencies from a single antenna. For this purpose, firstly, a fan blade type antenna design will be made, and then the reflective surface of this antenna will be completed to the shape of the reflective surface of the antenna with the cosecant-square radiation characteristic with the frequency selective surface designed to provide the characteristics suitable for the purpose. The frequency selective surface designed and it provides the perfect transmission as possible at 4 GHz operating frequency, while it will act as a band-quenching filter for electromagnetic waves at 5 GHz operating frequency and will be a reflective surface. Thanks to this frequency selective surface to be used as a reflective surface in the antenna, a fan blade type radiation characteristic at 4 GHz operating frequency will be obtained, while a cosecant-square radiation characteristic at 5 GHz operating frequency will be obtained.
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
A simple and low-cost fiber based optical sensor for iron detection is demonstrated in this paper. The sensor head consist of an unclad optical fiber with the unclad length of 1 cm and it has a straight structure. Results obtained shows a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity are achieved at 0.0328/ppm and 0.9824 respectively at the wavelength of 690 nm. With the same wavelength, other performance parameters are also studied. Resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm correspondingly. This iron sensor is advantageous in that it does not require any reagent for detection, enabling it to be simpler and cost-effective in the implementation of the iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can save robust accuracy while updating the new data and the new class data on the online training situation. The robustness accuracy arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) of the PSTOS-ELM. This method is suitable for applications that have data streaming and often have new class data. Our experiment compares the PSTOS-ELM accuracy and accuracy robustness while data is updating with the batch-extreme learning machine (ELM) and structural tolerance online sequential extreme learning machine (STOS-ELM) that both must retrain the data in a new class data case. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also can update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. It offer best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models in medical images stills deal with removing blurs in image edges which directly affects the edge indicator function, leads to not adaptively segmenting images and causes a wrong analysis of pathologies wich prevents to conclude a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving systems’ two-dimensional partial differential equations (PDE). The first PDE equation allows restoration using Euler’s equation similar to an anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with detected contours in the second equation that segments the image based on the first equation solutions. This approach allows developing a new algorithm which overcome the studied model drawbacks. Results of the proposed method give clear segments that can be applied to any application. Experiments on many medical images in particular blurry images with high information losses, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to other models already presented in previeous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Recognizing emotional state of user based on learning method and conceptual memories
1. TELKOMNIKA Telecommunication, Computing, Electronics and Control
Vol. 18, No. 6, December 2020, pp. 3033~3040
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v18i6.16756 3033
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
Recognizing emotional state of user based on learning method
and conceptual memories
Maytham N. Meqdad1, Fardin Abdali-Mohammadi2, Seifedine Kadry3
1,2 Department of Computer Engineering and Information Technology, Faculty of Engineering, Razi University, Iran
3 Department of Mathematics and Computer Science, Faculty of Science, Beirut Arab University, Lebanon
Article Info
Article history: Received May 20, 2020; Revised Jun 28, 2020; Accepted Jul 6, 2020

ABSTRACT
With the increased use of computers, electronic devices, and human-computer interaction across the broad spectrum of human life, the role of controlling emotions and increasing positive emotional states becomes more prominent. If a user's negative emotions increase, his/her efficiency decreases greatly as well. Research has shown that color is one of the most influential basic factors in sight, identification, interpretation, perception, and sensation; colors affect individuals' emotional states and can change them. In this paper, by learning the reactions of users with different personality types to each color, the relationship between the user's emotional states, personality, and colors was modeled for the variable "emotional control". For learning, we used a memory-based system in which the user interface color changes according to the positive and negative experiences of users with different personalities. Comparison of the tested methods demonstrated the superiority of memory-based learning on all three parameters: emotional control, enhancement of positive emotional states, and reduction of negative emotional states. Moreover, the accuracy of the memory-based learning method was almost 70 percent.
Keywords: Adaptive color interface; Coping strategy; Emotion; Human-computer interaction; Memory; Memory-based learning; Mood; Personality
This is an open access article under the CC BY-SA license.
Corresponding Author:
Seifedine Kadry,
Department of Mathematics and Computer Science, Faculty of Science,
Beirut Arab University,
Beirut, Lebanon.
Email: s.kadry@bau.edu.lb
1. INTRODUCTION
Due to the extensive use of computers and electronic devices in daily life, the role of emotion regulation and of increasing positive emotional states becomes more and more prominent. If users' negative emotions increase, their efficiency also decreases greatly [1]. For ordinary users this reduced effectiveness merely increases the number of errors they make, while for professional users responsible for sensitive tasks it may cause irreparable problems [1]. To enhance positive emotional states, one can control one's emotional states and steer them toward positive ones; in fact, emotional control can help prevent negative states [1, 2]. One use of emotional control is in educational and tutoring systems [3, 4], where it prevents fatigue and loss of interest in users [3].
Thus, the importance of emotional control has become more prominent with the increased use of computers and of human-computer interaction. At the same time, as the number of computer programs grows, applications' graphical interfaces play an ever more significant role because they communicate directly with users [5]. The graphical interface is considered one of the fundamental parts of
applications for human-computer interaction [5]. An adaptive interface that can automatically change and adapt itself to its task and to the ongoing interaction is one of the most important research topics [6]. Hence, emotions can be controlled by using adaptive graphical interfaces.
Colors are among the most influential basic factors in sight, identification, interpretation, perception, and sensation [7]. Colors affect emotional states [7-9] and can change them. Bright colors such as yellow and blue are linked to positive emotional states, while dark colors such as black and gray are associated with negative emotions [10]. One may therefore change emotional states through the user interface color [11]; and since emotional states can be changed by changing color, they can also be controlled and guided toward positive states. The main hypothesis of this article is: "emotional states can be controlled through a change in the user interface color".
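The bright-positive/dark-negative association can be sketched as a simple lookup. The specific colors and valence offsets below are illustrative assumptions for the sake of the example, not values measured in this paper:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical valence offsets per background color, following the cited
// observation that bright colors tend toward positive emotion and dark
// colors toward negative emotion [10]. The numbers are illustrative only.
const std::map<std::string, double> kColorValence = {
    {"yellow", +0.6}, {"blue", +0.4},
    {"gray",   -0.3}, {"black", -0.6},
};

// Predicted shift in a user's emotional state when the interface adopts
// the given background color; unknown colors produce no shift.
double emotionalShift(const std::string& color) {
    auto it = kColorValence.find(color);
    return it != kColorValence.end() ? it->second : 0.0;
}
```

Under the paper's hypothesis, an adaptive interface would pick colors with positive predicted shift for the current user rather than rely on a fixed table like this one.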
In this paper, we learn the reactions of users with different personality types to each color in order to control emotion, modeling the relationship between the user's emotional states, personality, and colors. For this purpose, we used a memory-based learning system in which the interface color changes according to the different positive and negative experiences of users with different personalities. To assess personality, we used the NEO five-factor inventory (NEO-FFI) [12]. In addition, we implemented a training tool in the C++ programming language to test the system.
A correct response in emotion-based human-computer interaction relies on two main parts: 1) emotion recognition and 2) response generation. Errors in producing an appropriate response therefore have two sources: errors arising from emotion recognition and errors made when generating the response. Since the main purpose of this study is to produce the correct experience-driven response in the user interface, the emotion recognition phase was removed to minimize errors, and the self-assessment manikin (SAM) non-verbal pictorial assessment test was used instead to determine the emotional state [13].
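As a rough illustration, SAM ratings of valence (unpleasant to pleasant) and arousal (calm to excited), each given on a 9-point pictorial scale, can be collapsed into a coarse state label. The midpoint threshold and the labels below are assumptions made for this sketch, not the paper's actual coding scheme:

```cpp
#include <cassert>
#include <string>

// Hypothetical mapping from SAM self-ratings (valence and arousal, each
// 1..9 with 5 as the scale midpoint) to a coarse emotional-state label.
std::string samToState(int valence, int arousal) {
    if (valence > 5) return arousal > 5 ? "excited-positive" : "calm-positive";
    if (valence < 5) return arousal > 5 ? "distressed" : "bored";
    return "neutral";  // valence at the midpoint
}
```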
The memory system is made up of two parts: episodic memory and semantic memory. Episodic memory stores the details of significant experienced events, while semantic memory represents an abstraction of the episodic memory contents. A usable memory system has the ability to learn and becomes more complete the more it is used. The memory model used in this study learns from actions and emotions. A typical learning system such as Q-learning, by contrast, learns only the best action in a given situation: conventional learning methods ignore useful details such as the time of the action, the intensity of the emotion, and the subject and object involved. In other words, Q-learning loses the details, while the memory-based learning model used in this article is built on the details of each experience. In this study, we attempted to control users' emotional states and lead them toward positive ones. For users with different personality traits, we used their prior experience to select user-friendly background colors and prevent negative emotional states from arising. In simple terms, in accordance with users' personalities and experiences, their emotional states may be controlled through the user interface color and steered toward balanced, positive states. One-way ANOVA was used for the assessment, with results revealing the significant effectiveness of the memory-based learning model.
This paper is organized as follows. First, we describe the related works. Then, we present the background
of the study, including memory, personality, mood, and SAM. In the fourth section, we present the
memory-based learning model and explain the details of this framework. We then describe the implementation
of a C++ programming-language learning tool developed to evaluate the framework, after which the results of
the evaluation are presented. In the last section, we present the discussion and the conclusion.
2. RELATED WORKS
Suk and Irtel [9] performed experiments to find the relation between emotional states and colors.
The purpose of their study was to describe emotional responses to color perception in terms of emotional
state dimensions. They derived a table revealing the relationship between colors and emotional states.
Their research also showed that color characteristics influence emotional states: unlike lightness and
chroma, hue has no effect on emotional states. The effects of personality were neglected in the study
conducted by Suk and Irtel.
Ou et al. [14] examined the relation between color and age. Their results showed that
users' age is associated with their color preferences: older adults prefer stronger colors with more chroma,
while younger people prefer chromatic colors. We have used a sample from a particular age range in this
study in order to obtain more appropriate results. Epps and Kaya [10] carried out studies examining
the emotional states associated with color vision. They found that bright colors like yellow
and blue are associated with positive emotions (like happiness), whereas dark colors like black and gray are
TELKOMNIKA Telecommun Comput El Control
Recognizing emotional state of user based on learning method and… (Maytham N. Meqdad)
associated with negative emotions (like sadness and anger), and that yellow and red are associated with more
anxiety than blue and green.
Sokolova et al. [15] investigated the relationship between emotion regulation and color
preference. Kurt and Osueke [7] showed that colors affect emotional states and vice versa. The main aim
of their study was to explore the effects of color psychology on individuals. Colors may change emotional
states and are in fact one of the basic functions in sight, identification, interpretation, perception, and senses.
They found that cold colors such as blue and green are relaxing, while warm colors such as red and yellow
are energizing.
Noori et al. [16] introduced an intelligent interface named the adaptive user interface based on users'
emotions (AUBUE), which adapts its colors to the emotional states of users. This interface recognizes
emotions from users' interactions with a keyboard and accordingly changes the background color in a way
that decreases the users' negative emotions. The interface consists of four sections: keyboard interpreter,
event interpreter, mood updater, and color selector. The keyboard interpreter analyzes the user's interactions
with the keyboard and converts them into a number of events. The event interpreter converts keyboard events
into fuzzy values and then maps them onto emotions; the Ortony, Clore, and Collins (OCC) model is used to
represent the emotions. The mood updater receives the list of active emotions and their intensities as input,
obtains the user's new mood based on the user's current mood and emotions, and sends it to the color selector.
Once the user's emotional state is determined, the color selector chooses the appropriate color mode, i.e.
a color that overrides the current mode. They showed that the use of colors can affect the control of emotional
states, but the approach faced some problems and the effectiveness of colors was not significant. Among
the problems were: 1) the low number of recognized emotional moods, 2) the lack of accuracy of the emotion
recognition procedure, 3) ignoring users' personality, and 4) the lack of learning from users' experience.
In this study, we seek to solve the problems posed by Noori et al. [16] and to assess the possibility of
controlling emotions using colors. To solve the first and second problems, we removed the emotional state
diagnostic phase and used SAM instead. Users have different expectations of color change: users with
different personalities expect different colors, and users with different feelings may have different color
expectations. People with different personalities may react differently to a particular event [17], and each
color has a different impact on people with different personalities [18-20]. Some studies have also dealt with
the effect of individuals' personality on their favorite colors. In fact, the basic problem in Noori et al. [16]
was the lack of a learning process in both the interactions and the choice of colors; adding the ability to learn
can have a significant effect on controlling emotions. So, to solve the third and fourth problems, we have
used a memory-based learning model that takes the user's personality into consideration. The aim of this
research is to control emotional states using a learning model that takes the users' personalities into account,
through the color of the user interface graphics while interacting with the software.
3. BACKGROUND
3.1. Memory
Short-term memory and long-term memory are the two large divisions of human memory. Short-term
memory holds the experienced events only temporarily: these events are either happening at the moment or
happened just a few seconds ago. Long-term memory is the same as short-term memory except that it keeps
events in our mind for a longer period of time (permanently). It can be divided into two types: declarative
memory and procedural memory.
Declarative memory keeps "explicit knowledge that we can express and are aware of". Procedural
memory keeps "the knowledge of how to do things", which is implicit. This study has been designed and
conducted based on the memory framework and system presented in [8]. Declarative memory, with its two
types of episodic memory and semantic memory, has been used in this framework. The following is a brief
explanation of these memories; for more comprehensive information, refer to [8].
− Episodic memory: stores the details of remarkable experienced events.
− Semantic memory: stores general knowledge. This part of memory has not been used in this paper.
− Semantic graphs: a kind of semantic network displaying an abstract of the episodic memory contents.
The semantic graph is conceived as a component added to semantic memory and is capable of learning.
− General graph: a kind of semantic graph that is an abstract of the semantic graphs and includes general
knowledge about all of the factors involved.
The intensity of the emotions that an individual feels during an experience can influence the retrieval
of that experience from episodic memory. Episodic memory can be considered a machine learning method in
which 1) there is a memory; 2) both right and wrong answers of the system are considered as experience; and
3) the addition of any retrieved experience affects other experiences. Episodic memory displays personal
ISSN: 1693-6930
TELKOMNIKA Telecommun Comput El Control, Vol. 18, No. 6, December 2020: 3033 - 3040
experience and does not include general information, whereas semantic memory includes general knowledge
abstracted from personal experiences (episodic memory). Abstraction of experience involves "cleaning up
a lot of the received details and keeping the important relations between them". Since the main purpose of this
study is to use experiences, only the part of memory that is capable of learning has been used; in other words,
the part of the semantic memory model that has no learning capability has not been used in this study.
3.2. Personality
Personality has been defined as a set of distinct and stable thoughts, emotions, and behaviors
that shows our adaptation to the world. McCrae and Costa (1990) defined personality traits as
the dimensions of individual differences in the tendency to show stable patterns of thought, feeling, and action.
They identified five powerful factors for understanding human personality traits. Based on this model, human
personality is divided into five main dimensions: extraversion, agreeableness, conscientiousness,
neuroticism, and openness to experience. These dimensions are commonly known as the Big Five personality
traits and can be summarized as follows:
− Extraversion (E): extroverts are socially oriented, but sociability is just one of their attributes. Loving
the people around them, preferring large groups and gatherings, and being active, courageous, and
talkative are among their attributes.
− Neuroticism (N): a general tendency toward negative emotions such as fear, sadness, confusion, anger,
guilt, and hatred constitutes neuroticism.
− Agreeableness (A): agreeable people are basically altruists, feeling compassion towards others, eager
to help them, and believing that others are mutually helpful.
− Openness (O): open people are curious about the inner and outer world, and their life is rich in terms of
experience. They are willing and eager to accept new ideas and unconventional values, and they experience
positive and negative emotions more deeply than inflexible individuals.
− Conscientiousness (C): conscientious people are purposeful, strong-willed, determined, precise, and reliable.
3.3. Mood
A person's mood is a state with higher durability than emotions and has many effects on
cognitive processes and learning. The PAD model has been used in this study to display and process
a person's mood; it defines a person's mood as their average mental condition across different situations.
Mood in PAD space has three dimensions, P (pleasure), A (arousal), and D (dominance), which together
describe and measure a specific emotional response. Pleasure (P) evaluates the pleasant-unpleasant quality
of an emotional experience, arousal (A) refers to physical activity and psycho-physiological changes, and
dominance (D) captures the sense of control, or lack of it, in a specific situation. Each dimension is a variable
that can take values between -1 and 1. If the variables are classified according to whether they are positive or
negative, eight modes can be defined, as presented in Table 1. An important question is whether these eight
emotional states mutually contradict one another; a previous study has shown that they do, i.e. the eight
emotional states are in mutual conflict with one another.
Table 1. Categorization of modes based on their dimensional signs
-P-A-D = Bored          +P+A+D = Exuberant
-P-A+D = Disdainful     +P+A-D = Dependent
-P+A-D = Anxious        +P-A+D = Relaxed
-P+A+D = Hostile        +P-A-D = Docile
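The octant classification of Table 1 can be sketched in a few lines. This is an illustrative snippet, not code from the study; the function and dictionary names are our own:

```python
# Map the sign pattern of a (P, A, D) point to one of the eight
# mood labels of Table 1 (each octant of PAD space is one mood).
MOODS = {
    ('+', '+', '+'): 'Exuberant', ('-', '-', '-'): 'Bored',
    ('+', '+', '-'): 'Dependent', ('-', '-', '+'): 'Disdainful',
    ('+', '-', '+'): 'Relaxed',   ('-', '+', '-'): 'Anxious',
    ('+', '-', '-'): 'Docile',    ('-', '+', '+'): 'Hostile',
}

def mood_label(p: float, a: float, d: float) -> str:
    """Classify a (P, A, D) point in [-1, 1]^3 by the signs of its dimensions."""
    signs = tuple('+' if v >= 0 else '-' for v in (p, a, d))
    return MOODS[signs]

print(mood_label(0.4, 0.7, 0.2))    # +P+A+D octant
print(mood_label(-0.3, 0.6, -0.1))  # -P+A-D octant
```

Note that a point exactly on a boundary (a zero coordinate) is arbitrarily counted as positive here; the paper does not specify how boundary values are handled.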
3.4. Self-assessment manikin
Several methods have been used for detecting emotional states and have been embedded in different
systems for testing. In general, three kinds of methods can be used to detect emotional states:
− Physiological: skin conductance, heart rate, blood pressure, electroencephalography (EEG), and so on.
− Psychological: verbal description of an emotion or emotional state, standardized checklists, questionnaires,
and so on.
− Behavioral: facial expression, voice modulation, gesture, posture, motor behavior, mouse and keyboard
use, and so on.
In other words, there are two ways to tag emotional states: automatically, by diagnostic tools, or manually,
by humans. SAM [13] is a method of tagging by humans, and it is often used alongside automated methods
to help determine the accuracy of their work.
The self-assessment manikin (SAM) is a test used for the evaluation of emotion, with a reliability coefficient
ranging between 0.55 and 0.78 and concurrent validity ranging between 0.56 and 0.78. SAM is a visual
representation of the PAD dimensions developed by Lang as an alternative to self-report scales [13]:
a functional visual scale for evaluating pleasure, arousal, and dominance. Each dimension is displayed as
a row of pictorial manikins on a 5-point scale, from which respondents choose the one matching what they
feel. The answer in each row rates one of the three PAD variables, which together identify an individual's
emotional state. As previously mentioned, the emotion detection phase was removed so as to minimize errors,
and the self-assessment manikin was used instead.
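Since SAM rates each PAD dimension on a 5-point pictorial scale while the PAD variables take values in [-1, 1], some rescaling is implied. The paper does not give its conversion, so the following is only an assumed linear rescaling for illustration:

```python
def sam_to_pad(pleasure: int, arousal: int, dominance: int) -> tuple:
    """Linearly rescale 5-point SAM ratings (1..5) to PAD values in [-1, 1].
    This mapping is an assumption, not taken from the paper."""
    def scale(r: int) -> float:
        if not 1 <= r <= 5:
            raise ValueError("SAM ratings are on a 1-5 scale")
        return (r - 3) / 2.0  # 1 -> -1.0, 3 -> 0.0, 5 -> +1.0
    return (scale(pleasure), scale(arousal), scale(dominance))

print(sam_to_pad(5, 4, 2))  # (1.0, 0.5, -0.5)
```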
4. METHOD
The learning-based memory framework developed in [8] is presented in Figure 1. This framework
includes 1) a meta-model, 2) an analyzer, 3) an evaluator, and 4) a memory modulator. The meta-model
includes episodic memory, semantic memory, semantic graphs, and the general semantic graph. The analyzer
is a perceptual categorization mechanism. The evaluator builds an understanding of the perceptions
(the analyzer's output) according to the memory contents (the meta-model). The memory modulator updates
the memories. The following describes the changes we applied. As mentioned above, the details of highlighted
events are saved in episodic memory. In the episodic memory module, four aspects of each experience are
recorded: (action_i, group_j, emotion_k, intensity). These four aspects mean that an individual with
personality group_j, shown background color i, has experienced emotional state k with a given intensity.
Using this method, the system is able to retrieve the user's previous experiences. As mentioned previously,
all memory modules presented in [8] have been used in this study. The four aspects of each recorded
experience are described in the following.
Figure 1. The framework of emotion understanding [8]
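As a rough illustration (not the authors' implementation), the four-aspect episodic record and its retrieval might look like the following; the class, field, and example values are hypothetical:

```python
from collections import namedtuple

# One episodic-memory record per experience, following the four aspects in
# the text: an individual of personality group_j, shown background color
# (action_i), experienced emotional state emotion_k with some intensity.
Experience = namedtuple('Experience', 'action group emotion intensity')

class EpisodicMemory:
    def __init__(self):
        self._store = []

    def record(self, action, group, emotion, intensity):
        self._store.append(Experience(action, group, emotion, intensity))

    def recall(self, group, emotion):
        """Retrieve all prior experiences of this personality group in this state."""
        return [e for e in self._store if e.group == group and e.emotion == emotion]

mem = EpisodicMemory()
mem.record('yellow', 'extraversion', 'Exuberant', 80)
mem.record('gray', 'extraversion', 'Bored', 35)
print(len(mem.recall('extraversion', 'Exuberant')))  # 1
```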
− Action
In this study, the actions are a set of background colors. Because of the wide range of possible colors,
we used the mapping between colors and emotional states from Suk and Irtel [9]. They obtained a set of colors
for every emotional state, but there are different color themes within each emotional state; in our view,
this diversity of colors within an emotional state is due to the user's personality. To elicit the user's feedback,
the set of colors related to each emotional state is shown in that emotional state. Because of the low number
of colors in certain emotional states, we made some changes to the colors based on the results of [7, 10] so as
to increase the number of colors to choose from. While training the tool, after receiving the user's emotional
state, the set of colors associated with that emotional state is displayed and the user is asked which color
represents that emotional state. But at the time of
the test, after determining the emotional state: if the emotional state is positive, color selection is done from
experience for that positive emotional state; but when the user is in a negative emotional state, color selection
is done from experience for the emotional state contrasting that negative state, meaning that an opposite color
is displayed so that it may create a positive emotional state. Table 2 shows the colors used in this study.
Table 2. Table of colors (one background color per sign combination of D, A, P)
D A P
+ + +
- + +
+ - +
- - +
+ + -
- + -
+ - -
- - -
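The color-selection rule just described (use experience directly for positive states; for negative states, use the PAD-opposite positive state) can be sketched as follows. The palette below is hypothetical, standing in for the swatches of Table 2:

```python
# Negative moods and their PAD opposites (all three signs flipped), per Table 1.
NEGATIVE = {'Bored', 'Disdainful', 'Anxious', 'Hostile'}
OPPOSITE = {'Bored': 'Exuberant', 'Disdainful': 'Dependent',
            'Anxious': 'Relaxed', 'Hostile': 'Docile'}

# Hypothetical palette: which background colors are linked to each positive
# mood (the real palette comes from Table 2 and refs [7, 9, 10]).
PALETTE = {'Exuberant': ['yellow'], 'Dependent': ['light blue'],
           'Relaxed': ['green'], 'Docile': ['beige']}

def colors_for(state: str) -> list:
    """Positive state: its own colors; negative state: the colors of the
    contrasting (sign-flipped) positive state."""
    target = OPPOSITE[state] if state in NEGATIVE else state
    return PALETTE[target]

print(colors_for('Anxious'))  # colors of the opposite state, Relaxed
```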
− Type
The classification in [8] was objective-based, while here it is personality-based: an experience of
an individual with a particular personality type may apply to all persons with the same personality type.
− Emotional state
The eight PAD emotional states explained in section 3.3 have been used in this study.
− Intensity
As mentioned earlier, the intensity of the user's emotional states can influence the retrieval of
an experience from episodic memory. This is why each user is asked about the intensity of the generated
emotional state and selects a number in the range 0-100 as the emotion intensity.
The retrieval operations also follow [8], with only minor changes. When the system interacts with
a user for whom it has a matching experience, it uses that experience exactly. But if the user is new and has
not already used the system, or has no experience for the generated state, the retrieval of experience occurs
based on the memory model [21].
In the semantic graph, emotional states (emotion_k) are the nodes, actions (action_i) are the edges,
and the edge weights are certainty factors (CF). The semantic graph allows us to compare candidate
experiences using the certainty factor [22, 23]. To calculate the certainty factor, we use the saving time,
the newness of the experience, and the emotion intensity. To calculate the overall certainty factor,
the certainty factors of all experiences that produce a given emotional state in an individual with a particular
personality type are combined. When more than one action is present, the action with the highest certainty
factor is selected.
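The paper does not spell out its CF formula, so the following sketch only illustrates the idea: combine saving time and emotion intensity into a certainty factor, sum the CFs per action, and pick the action with the highest total. The exponential decay and half-life are assumptions:

```python
import math
import time

def certainty_factor(intensity, saved_at, now, half_life=3600.0):
    """Illustrative CF: normalized intensity (0-100) decayed by the age of
    the experience. This is NOT the exact formula of [22, 23]."""
    recency = math.exp(-(now - saved_at) / half_life)
    return (intensity / 100.0) * recency

def best_action(experiences, now):
    """Among candidate (action, intensity, saved_at) tuples, pick the action
    whose summed certainty factor is highest."""
    totals = {}
    for action, intensity, saved_at in experiences:
        totals[action] = totals.get(action, 0.0) + certainty_factor(intensity, saved_at, now)
    return max(totals, key=totals.get)

now = time.time()
exps = [('yellow', 90, now - 60),      # strong, recent experience
        ('orange', 40, now - 7200)]    # weak, old experience
print(best_action(exps, now))  # 'yellow'
```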
The general semantic graph is also a kind of semantic graph: it is an abstraction of the semantic
graphs and includes general knowledge about all human beings with different personality types. The general
rule of abstraction is that if an experience is repeated in more than half of the semantic graphs, that experience
is added to the general semantic graph [24, 25]. Thus, semantic graph experiences may be used for all types
of people.
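The majority rule for promoting experiences into the general semantic graph might be sketched as follows (the edge representation and example data are hypothetical):

```python
def build_general_graph(semantic_graphs):
    """Promote an (emotion, action) experience to the general semantic graph
    if it appears in more than half of the per-personality semantic graphs."""
    counts = {}
    for graph in semantic_graphs:  # each graph: a set of (emotion, action) edges
        for edge in graph:
            counts[edge] = counts.get(edge, 0) + 1
    threshold = len(semantic_graphs) / 2
    return {edge for edge, n in counts.items() if n > threshold}

graphs = [
    {('Exuberant', 'yellow'), ('Relaxed', 'green')},
    {('Exuberant', 'yellow')},
    {('Exuberant', 'yellow'), ('Docile', 'beige')},
]
# Only the edge present in more than half (here, all 3) of the graphs survives.
print(build_general_graph(graphs))  # {('Exuberant', 'yellow')}
```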
5. EVALUATION
System evaluation is done based on the user evaluation method; we therefore used the users'
viewpoint for assessment in different parts. To evaluate the memory system and assess the accuracy of its
responses given the gained experiences, the user is asked how satisfied he/she is with the colors. After the
user has worked with the system, his/her feedback is collected through a questionnaire. A questionnaire is
used for several reasons:
− The system is made for users, and they should comment on the system and thus help the designers in
its construction and design.
− The users' structured responses to the designers' purposeful questions can be much more effective than
measurement methods, which are not free from errors.
− Instead of asking for complex feedback from experts, ordinary users can fill in the questionnaire.
− It is easier for users to compare the performance of multiple systems.
After working with the system, the users responded to the questionnaire, which included demographic
questions as well as questions eliciting feedback on emotional state control and on positive and
negative emotions.
5.1. Experiment
For evaluation, the tool was tested in three different modes:
− Base mode, without changing the color
− Color change via the AUBUE method
− Color change via memory-based learning
Users worked with the C++ programming-language learning tool. The total number of users in the test
was 48; each mode was tested by 16 participants, 8 women and 8 men. In addition, for testing the memory-
based learning method, some users needed to interact with the system first so that it could collect primitive
data; this was done by 30 users, 15 women and 15 men. The mean age of the users in the study was 23.5 years,
with an SD of 2 years. Each user typed at least 6 programs in C++ and worked with the educational tool in
one of the test modes. The users responded to the questionnaire items after they finished working with
the system.
5.2. Data analysis
In the questionnaire, all data were collected on a 7-point scale, with 1 representing the weakest
and 7 the highest degree. One-way ANOVA was used to analyze the data. The analysis covered the main
hypothesis of the research, i.e. emotional control, as well as the positive and negative emotional states;
in fact, emotional control should increase the positive emotional states and decrease the negative ones.
Three comparisons were made across the modes for the three variables of this study:
− Comparison of the base mode with color change by the AUBUE method
− Comparison of the base mode with color change by the memory-based learning model
− Comparison of color change by the AUBUE method with color change by the memory-based learning model
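For illustration, the one-way ANOVA F statistic over three such groups can be computed in pure Python. The scores below are hypothetical stand-ins, not the study's data:

```python
def one_way_anova_F(*groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 7-point questionnaire scores for the three modes
# (the study used n = 16 per mode; shorter lists here for illustration).
base   = [3, 2, 4, 3, 2, 3, 4, 3]
aubue  = [4, 3, 4, 3, 4, 4, 3, 4]
memory = [5, 6, 5, 6, 5, 5, 6, 5]

print(f"F = {one_way_anova_F(base, aubue, memory):.2f}")
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) indicates that the group means differ more than chance would suggest, which is the test behind the paper's significance claims.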
The accuracy of the memory model developed in [8] had been determined, but due to the changes made to
the model, the accuracy was measured again. Accuracy is the ratio of the number of correctly retrieved
actions to the total number of retrievals. Errors were determined based on users' feedback; in other words,
the user's dissatisfaction with the presented color is counted as an error. Ultimately, the accuracy obtained
for the memory model was 70.21%.
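The accuracy measure just described (satisfied retrievals over total retrievals) is simple to sketch; the satisfaction flags below are hypothetical, chosen only to land near the reported figure:

```python
def retrieval_accuracy(feedback):
    """Accuracy (%) = correctly retrieved actions / total retrievals, where
    a retrieval counts as correct when the user was satisfied with the color."""
    if not feedback:
        return 0.0
    return 100.0 * sum(feedback) / len(feedback)

# Hypothetical satisfaction flags (True = user satisfied with the color):
flags = [True] * 33 + [False] * 14
print(f"{retrieval_accuracy(flags):.2f}%")  # 70.21%
```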
5.3. Controlling the emotional states
The results of evaluating the emotional state control parameter are given in Table 3. The number of
participants in each method was 16, and the mean values obtained for emotional state control in the three
methods were 2.94, 3.56, and 5.31, respectively. This means that the degree of emotional state control is
greater for the memory-based learning model than for the other modes.
The color change method using AUBUE was better than the base method, but a comparison of
the first and second methods showed that the effect of the conditions was not significant for the emotional
control variable, i.e. the superiority was not considerable. The comparison of the first method with
the memory-based method, in contrast, was significant, with an effect size of 2.18; the memory-based method
is thus better than the base mode with significant efficacy.
In addition, the comparison between AUBUE and the memory-based method showed
the superiority of the memory-based method, with an effect size of 1.83. Overall, the comparisons showed
the superiority of the memory-based learning method on the emotional state control parameter over the other
methods, with noticeably high effectiveness.
Table 3. Descriptive results for evaluation of the emotional state control parameter
Method                     N    Mean
Base (no color change)     16   2.94
AUBUE                      16   3.56
Memory-based learning      16   5.31
6. CONCLUSION
In this study, an adaptive user interface was implemented so that the user interface color changes
in accordance with the user's emotion. Also, by learning the reactions of users with different personality
types to each color, the relationship between the user's emotional state, personality, and suitable color
has been modeled to control the emotions. A memory-based learning system was used to model the
user's experiences.
The memory system used here includes two parts, namely episodic memory and semantic memory.
Episodic memory saves the details of significant experienced events, while semantic memory stores
an abstract of the episodic memory contents. The memory system is capable of learning and improves
as it is used more often. The NEO-FFI was used for personality assessment, and the users' point of view,
collected via a questionnaire, was used for evaluation of the model.
REFERENCES
[1] M. Ben Ammar, M. Neji, A. Alimi, G. Gouardères, “The Affective Tutoring System,” Expert Systems with
Applications, vol. 37, no. 4, pp. 3013-3023, 2010.
[2] E. C. De Castro, R. R. Gudwin, “An Episodic Memory Implementation for a Virtual Creature, in Model-Based
Reasoning in Science and Technology: Abduction, Logic, and Computational Discovery,” Springer Berlin
Heidelberg: Berlin, Heidelberg, pp. 393-406, 2010.
[3] J. Dias, A. Paiva, “Agents with Emotional Intelligence for Storytelling. in Affective Computing and Intelligent
Interaction,” Berlin, Heidelberg: Springer Berlin Heidelberg, 2011.
[4] U. Faghihi, et al. “A Generic Episodic Learning Model Implemented in a Cognitive Agent by Means of Temporal
Pattern Mining,” Next-Generation Applied Intelligence, Berlin, Heidelberg: Springer Berlin Heidelberg, 2009.
[5] S. Fatahi, N. Ghasem-Aghaee, “Design and Implementation of an E-Learning Model by Considering Learner's
Personality and Emotions,” Advances in Electrical Engineering and Computational Science, Springer Netherlands:
Dordrecht, pp. 423-434, 2009.
[6] W. C. Ho, K. Dautenhahn, “Towards a Narrative Mind: The Creation of Coherent Life Stories for Believable Virtual
Agents,” Intelligent Virtual Agents, Berlin, Heidelberg: Springer Berlin Heidelberg, 2008.
[7] Z. Kasap, N. Magnenat-Thalmann, “Towards episodic memory-based long-term affective interaction with a human-
like robot,” 19th International Symposium in Robot and Human Interactive Communication, 2010.
[8] M. Ören, et al., “An emotion understanding framework for intelligent agents based on episodic and semantic
memories,” Autonomous Agents and Multi-Agent Systems, vol. 28, no. 1, pp. 126-153, 2014.
[9] M. N. Ghasem-Aghaee, T. I. Ören, “Emotive and cognitive simulations by agents: Roles of three levels of information
processing,” Cognitive Systems Research, vol. 13, no. 1, pp. 24-38, 2012.
[10] J. R. Millan, et al., “Noninvasive brain-actuated control of a mobile robot by human EEG,” IEEE Transactions on
Biomedical Engineering, vol. 51, no. 6, pp. 1026-1033, 2004.
[11] A. M. Nuxoll, “Episodic Learning, in Encyclopedia of the Sciences of Learning,” Springer US: Boston, pp. 1157-1159, 2012.
[12] P. Saulnier, E. Sharlin, S. Greenberg, “Using bio-electrical signals to influence the social behaviours of domesticated
robots,” 2009 4th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2009.
[13] M. A. M. Shaikh, H. Prendinger, M. Ishizuka, “A Linguistic Interpretation of the OCC Emotion Model for Affect
Sensing from Text,” Affective Information Processing, Springer London: London, pp. 45-73, 2009.
[14] J. M. Talarico, K. S. LaBar, D. C. Rubin, “Emotional intensity predicts autobiographical memory experience,”
Memory & Cognition, vol. 32, no. 7, pp. 1118-1132, 2004.
[15] B. P. Woolf, et al., “Affective Tutors: Automatic Detection of and Response to Student Emotion, in Advances,”
Intelligent Tutoring Systems, Springer Berlin Heidelberg: Berlin, Heidelberg, pp. 207-227, 2010.
[16] S. Fatahi, "Design and Implementation of a Model based on Emotion and Personality in Virtual Learning," MSc
Thesis, Department of Computer, University of Isfahan, 2008.
[17] M. M. Bradley and P. J. Lang, "Measuring emotion: The self-assessment manikin and the semantic differential,"
Journal of Behavior Therapy and Experimental Psychiatry, vol. 25, no. 1, pp. 49-59, 1994.
[18] L. C. Ou, M. R. Luo, P. L. Sun, N. C. Hu, and H. S. Chen, "Age effects on colour emotion, preference, and harmony,"
Color Research & Application, vol. 37, no. 2, pp. 92-105, 2012.
[19] M. V. Sokolova, A. Fernández-Caballero, L. Ros, J. M. Latorre, and J. P. Serrano, "Evaluation of color preference for
emotion regulation," Artificial Computation in Biology and Medicine: Springer, pp. 479-487, 2015.
[20] A. Kargere, “Color and Personality,” Red Wheel/Weiser, 1979.
[21] M. Gurven, C. von Rueden, M. Massenkoff, H. Kaplan, and M. Lero Vie, "How universal is the Big Five? Testing
the five-factor model of personality variation among forager–farmers in the Bolivian Amazon," Journal of personality
and social psychology, vol. 104, no. 2, pp. 1-17, 2013.
[22] H. Ahuja, R. Sivakumar, “Implementation of FOAF, AIISO and DOAP ontologies for creating an academic
community network using semantic frameworks,” International Journal of Electrical and Computer Engineering,
vol. 9, no. 5, pp. 4302-4310, 2019.
[23] D. K. Kim. “Enhancing code clone detection using control flow graphs,” International Journal of Electrical and
Computer Engineering, vol. 9, no. 5, pp. 3804-3812, 2019.
[24] L. Wei, L. X. Hua. “The Error Control Methods of Information System in Sensor Networks,” Bulletin of Electrical
Engineering and Informatics, vol. 4, no. 3, pp. 210-216, 2015.
[25] P. Vishal Tank, S.K. Hadia, “Creation of speech corpus for emotion analysis in Gujarati language and its evaluation
by various speech parameters,” International Journal of Electrical and Computer Engineering, vol. 10, no. 5,
pp. 4752-4758, 2020.