The document describes a new model called Measuring User Experience using Digitally Transformed/Converted Emotions (MUDE) which measures two metrics of user experience (satisfaction and errors) using facial expressions and gestures captured by an Intel interactive camera. An experiment was conducted with 70 participants who used a software application while their facial expressions and gestures were recorded. The results from the camera were then compared to responses from a System Usability Scale questionnaire to determine if attitudes towards usability matched between the two methods. The study found consistency between the camera-captured emotions and questionnaire responses regarding usability. The MUDE model provides a new approach to evaluating user experience based on digitally measuring emotions expressed during interaction.
Facial Expression Recognition System: A Digital Printing Application (ijceronline)
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of Engineering, Science and Technology, covering new teaching methods, assessment, validation, and the impact of new technologies, and it continues to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
Review of facial expression recognition system and used datasets (eSAT Journals)
Abstract: The human face is the main feature by which individuals are recognized, and it also conveys important information about a user's current state through different expressions. Automatic face and facial expression recognition therefore attracts researchers' interest in the biometric field; other areas that use such techniques include computer science, medicine, and psychology. A face recognition system usually consists of many internal tasks, of which face detection is the first. Because of the wide variation across human faces, detecting a face is a complex process, but different modeling methods make it possible to recognize faces and hence different facial expressions. This paper presents a literature review of the techniques and methods used for facial expression recognition, and also discusses the facial expression datasets available for research and for testing existing recognition methods. Keywords: Facial Expression, Face Detection, Feature Extraction, Recognition, Datasets.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
The seminar discusses affective computing and emotion-based computing: its objectives, the components of emotion, psychological theories of emotion, the A-V-S emotional model, and Electroencephalography (EEG).
Emotion-oriented computing: Possible uses and applications (André Valdestilhas)
This article discusses concepts for applying affective computing and computer vision to digital television. The proposal combines techniques such as capturing facial expressions through a video camera and using accelerometers in a ball and touch holograms to achieve a certain level of interactivity with the viewer. Some uses of the proposal are described, such as audience control and background content, among others. The article outlines numerous benefits of the ideas presented, which can be applied in a broad context, for example for the blind in video games.
Recognition of Facial Emotions Based on Sparse Coding (IJERA Editor)
This paper deals with the recognition of natural emotions from human faces, an interesting topic with a wide range of potential applications such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environment faced in real applications. To robustly recognize facial emotions in real natural situations, this paper proposes an approach called Extreme Sparse Learning (ESL), which can jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of the Extreme Learning Machine (ELM) with the reconstruction property of sparse representation to enable accurate classification when given noisy signals and imperfect data recorded in natural settings. Moreover, this work presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework can achieve state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
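The ELM component mentioned above is easy to illustrate: a randomly weighted hidden layer whose output weights are found in closed form by a ridge-regression solve. The sketch below is not the ESL implementation, and a toy XOR task stands in for facial-emotion features:

```python
import math
import random

random.seed(0)

def elm_train(X, y, n_hidden=20, reg=1e-3):
    """Extreme Learning Machine sketch: random, untrained hidden layer,
    then a closed-form ridge-regression solve for the output weights."""
    d = len(X[0])
    W = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [random.uniform(-1, 1) for _ in range(n_hidden)]

    def hidden(x):
        return [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
                for w, bi in zip(W, b)]

    H = [hidden(x) for x in X]
    # Solve (H^T H + reg*I) beta = H^T y by Gaussian elimination.
    n = n_hidden
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) + (reg if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    rhs = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0] * n
    for i in range(n - 1, -1, -1):
        beta[i] = (rhs[i] - sum(A[i][j] * beta[j]
                                for j in range(i + 1, n))) / A[i][i]

    def predict(x):
        return sum(bi * hi for bi, hi in zip(beta, hidden(x)))

    return predict

# Toy training set: XOR with labels -1 / +1.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [-1, 1, 1, -1]
model = elm_train(X, y)
preds = [1 if model(x) > 0 else -1 for x in X]
print(preds)
```

The key design point of ELM is that only `beta` is learned; the hidden weights stay random, which is what makes the closed-form solve possible.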
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Feature Extraction of Gesture Recognition Based on Image Analysis for Differe... (IJERA Editor)
Gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. Gesture is one of the human body languages popularly used in daily life: a communication system of hand movements and facial expressions, conveyed through actions and sight. This research focuses mainly on gesture extraction and finger segmentation in gesture recognition. In this paper, we use image analysis techniques to create an application coded in MATLAB, and use this application to segment and extract the fingers from one specific gesture. The paper aims to evaluate gesture recognition under different natural conditions, such as dark and glare conditions, different distances, and similar-object conditions, and collects the results to calculate the successful extraction rate.
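The segmentation step described above typically reduces to thresholding followed by connected-component labeling. A minimal grayscale sketch in Python (the paper uses MATLAB; the 5x5 intensity image here is hypothetical):

```python
from collections import deque

def segment(image, threshold):
    """Threshold a grayscale image and label connected foreground regions
    (4-connectivity) -- a minimal stand-in for hand/finger segmentation."""
    h, w = len(image), len(image[0])
    mask = [[1 if image[r][c] >= threshold else 0 for c in range(w)]
            for r in range(h)]
    labels = [[0] * w for _ in range(h)]
    regions = []           # size of each connected foreground region
    next_label = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not labels[r][c]:
                next_label += 1
                size = 0
                q = deque([(r, c)])
                labels[r][c] = next_label
                while q:       # breadth-first flood fill of one region
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
                regions.append(size)
    return regions

# Hypothetical 5x5 intensity image with two bright blobs.
img = [
    [10, 200, 210, 10, 10],
    [10, 220, 10, 10, 10],
    [10, 10, 10, 10, 180],
    [10, 10, 10, 190, 185],
    [10, 10, 10, 10, 10],
]
print(sorted(segment(img, 128)))  # region sizes: [3, 3]
```

In a real pipeline the threshold would come from skin-color statistics and the largest region would be kept as the hand before fingers are extracted.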
A new alley in Opinion Mining using Senti Audio Visual Algorithm (IJERA Editor)
People share their views about products and services over social media, blogs, forums, etc. Someone willing to spend resources and money on these products and services can learn about them from the past experiences of their peers. Opinion mining plays a vital role in tracking the growing interests of a particular community, social and political events, business strategies, marketing campaigns, etc. This data sits in unstructured form on the internet, but analyzed properly it can be of great use. Sentiment analysis focuses on polarity detection of emotions such as happy, sad, or neutral.

In this paper we propose an algorithm, Senti Audio Visual, for examining video as well as audio sentiments. A review in the form of video/audio may contain several opinions/emotions; the algorithm classifies the reviews with the help of Bayes classifiers into three classes: positive, negative, or neutral. It uses smiles, cries, gazes, pauses, pitch, and intensity as the relevant audio-visual features.
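A Bayes classifier over such audio-visual cues can be sketched as follows. The feature values (smiles per minute, pauses per minute, mean pitch) are hypothetical, and a Gaussian Naive Bayes stands in for whichever Bayes variant the authors use:

```python
import math
from collections import defaultdict

def train_gnb(samples):
    """Gaussian Naive Bayes over numeric audio-visual cues.
    samples: list of (feature_vector, label)."""
    by_label = defaultdict(list)
    for x, lab in samples:
        by_label[lab].append(x)
    model = {}
    total = len(samples)
    for lab, rows in by_label.items():
        n, d = len(rows), len(rows[0])
        means = [sum(r[j] for r in rows) / n for j in range(d)]
        # Floor the variance so a constant feature cannot divide by zero.
        varis = [max(sum((r[j] - means[j]) ** 2 for r in rows) / n, 1e-6)
                 for j in range(d)]
        model[lab] = (math.log(n / total), means, varis)
    return model

def classify(model, x):
    def log_post(entry):
        prior, means, varis = entry
        return prior + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, varis))
    return max(model, key=lambda lab: log_post(model[lab]))

# Hypothetical features per review clip: (smiles/min, pauses/min, mean pitch Hz).
train = [
    ((6.0, 1.0, 220.0), "positive"), ((5.0, 2.0, 210.0), "positive"),
    ((0.5, 6.0, 150.0), "negative"), ((1.0, 5.0, 160.0), "negative"),
    ((2.5, 3.0, 185.0), "neutral"),  ((3.0, 3.5, 190.0), "neutral"),
]
model = train_gnb(train)
print(classify(model, (5.5, 1.5, 215.0)))  # prints "positive"
```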
This paper introduces a new concept for establishing a human-robot symbiotic relationship. The system is based on knowledge-based image processing methodologies for model-based vision and on intelligent task scheduling for an autonomous social robot. The paper aims to develop an automatic translation of static gestures of alphabets and signs in American Sign Language (ASL), using a neural network with the backpropagation algorithm. The system works on images of bare hands to achieve the recognition task. For each individual sign, 10 sample images were considered, so 300 samples in total were processed. To compare the training set of signs with the considered sample images, both are converted into feature vectors. Experimental results reveal that the system can recognize the selected ASL signs with an accuracy of 92.00%. Finally, the system has been implemented by issuing ASL hand-gesture commands to a robot car named "Moto-robo".
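The backpropagation training described above can be sketched with a one-hidden-layer network. The 4-pixel "sign" vectors below are hypothetical stand-ins for the real hand-image feature vectors:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, n_hidden=4, lr=1.0, epochs=3000):
    """One-hidden-layer network trained with plain backpropagation
    (sigmoid units, squared error)."""
    d = len(samples[0][0])
    W1 = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in samples:
            # Forward pass.
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            o = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
            # Backward pass: output delta, then hidden deltas.
            do = (o - t) * o * (1 - o)
            dh = [do * W2[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
            for j in range(n_hidden):
                W2[j] -= lr * do * h[j]
                b1[j] -= lr * dh[j]
                for i in range(d):
                    W1[j][i] -= lr * dh[j] * x[i]
            b2 -= lr * do

    def predict(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return predict

# Two hypothetical signs as 2x2 binary images, three noisy samples each.
sign_a = [([1, 0, 1, 0], 0), ([1, 0, 1, 1], 0), ([1, 1, 1, 0], 0)]
sign_b = [([0, 1, 0, 1], 1), ([1, 1, 0, 1], 1), ([0, 1, 1, 1], 1)]
predict = train(sign_a + sign_b)
print([round(predict(x)) for x, _ in sign_a + sign_b])
```

The real system would use one output unit per sign class and far larger feature vectors, but the weight-update rule is the same.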
The study of attention estimation for child-robot interaction scenarios (journalBEEI)
One of the biggest challenges in human-agent interaction (HAI) is developing an agent, such as a robot, that can understand its partner (a human) and interact naturally. To realize this, the agent should be able to observe a human well and estimate his or her mental state. Towards this goal, this paper presents a method of estimating a child's attention, one of the more important human mental states, in a free-play child-robot interaction (CRI) scenario. To realize attention estimation in such a CRI scenario, we first developed a system that senses a child's verbal and non-verbal multimodal signals, such as gaze, facial expression, and proximity. The observed information was then used to train a Support Vector Machine (SVM) model to estimate the child's attention level. We investigated the accuracy of the proposed method by comparing it with a human judge's estimation and obtained some promising results, which we discuss here.
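The SVM at the core of such an attention estimator can be sketched with the Pegasos stochastic sub-gradient method. The multimodal features (gaze-on-robot ratio, proximity, expression score) and labels below are hypothetical, not the paper's data:

```python
import random

random.seed(2)

def pegasos(samples, lam=0.01, epochs=500):
    """Train a linear SVM by stochastic sub-gradient descent (Pegasos).
    samples: list of (feature_vector, label) with label in {-1, +1}."""
    d = len(samples[0][0])
    w = [0.0] * d
    b = 0.0
    t = 0
    for _ in range(epochs):
        for x, y in random.sample(samples, len(samples)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Sub-gradient of hinge loss + L2 regularizer (bias unregularized).
            w = [wi * (1 - eta * lam) for wi in w]
            if margin < 1:
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                b += eta * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Hypothetical per-window features: (gaze-on-robot ratio, closeness, expression).
# Label +1 = attentive, -1 = not attentive.
data = [
    ((0.9, 0.8, 0.7), 1), ((0.8, 0.9, 0.5), 1), ((0.7, 0.6, 0.8), 1),
    ((0.2, 0.3, 0.1), -1), ((0.1, 0.2, 0.3), -1), ((0.3, 0.1, 0.2), -1),
]
w, b = pegasos(data)
print([predict(w, b, x) for x, _ in data])
```

A production system would use a kernel SVM from an established library; the sketch only shows why a max-margin linear boundary suits such low-dimensional multimodal features.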
Analysis on techniques used to recognize and identifying the Human emotions (IJECEIAES)
Facial expression is a major channel of non-verbal communication in day-to-day life. Statistical analysis suggests that only 7 percent of a message in communication is carried verbally, while 55 percent is transmitted by facial expression. Emotional expression has been a research subject of physiology since Darwin's work on emotional expression in the 19th century. According to psychological theory, human emotion is classified into six major emotions: happiness, fear, anger, surprise, disgust, and sadness. Facial expressions and the nature of speech play a foremost role in expressing these emotions. Researchers later developed a system based on the anatomy of the face, the Facial Action Coding System (FACS), in the 1970s, and since its development there has been rapid progress in the domain of emotion recognition. This work is intended to give a thorough comparative analysis of the various techniques and methods that have been applied to recognize and identify human emotions; the results of this analysis will help identify suitable techniques, algorithms, and methodologies for future research directions. The paper presents an extensive analysis of the recognition techniques used to address the complexity of recognizing facial expressions.
Real Time Facial Emotion Recognition using Kinect V2 Sensor (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
A Comprehensive Survey on Human Facial Expression Detection (CSCJournals)
In recent years, recognition of human facial expressions has been a very active research area in computer vision. There have been several advances in the past few years in terms of face detection and tracking, feature extraction mechanisms, and the techniques used for expression classification. This paper surveys some of the work published since 2001. It gives a time-line view of the advances made in this field, the applications of automatic facial expression recognizers, the characteristics of an ideal system, the databases that have been used and the advances made towards their standardization, and a detailed summary of the state of the art. The paper also discusses facial parameterization using FACS Action Units (AUs) and advances in face detection, tracking, and feature extraction methods. Facial expression recognition plays an important role in human-computer interaction (HCI) systems, and multiple facial feature extraction methods have been devised that help in identifying faces and facial expressions.
DynEmo: a video database of natural facial expressions of emotions (ijma)
DynEmo is a database available to the scientific community (https://dynemo.upmf-grenoble.fr/). It contains dynamic and natural emotional facial expressions (EFEs) displaying subjective affective states rated by both the expresser and observers. Methodological and contextual information is provided for each expression, and the multimodal corpus meets psychological, ethical, and technical criteria. It is quite large, containing two sets of 233 and 125 recordings of EFEs of ordinary Caucasian people (ages 25 to 65; 182 females and 176 males) filmed in natural but standardized conditions. In Set 1, EFE recordings are associated with the affective state of the expresser (self-reported after the emotion-inducing task, using dimensional, action-readiness, and emotional-label items). In Set 2, EFE recordings are associated both with the affective state of the expresser and with the time line (continuous annotations) of observers' ratings of the emotions displayed throughout the recording. The time line allows any researcher interested in analysing non-verbal human behavior to segment the expressions into small emotion excerpts.
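Segmenting such a continuous annotation time line into emotion excerpts amounts to extracting maximal above-threshold runs. A sketch with hypothetical per-frame ratings and an assumed 25 fps frame rate (neither taken from the DynEmo documentation):

```python
def excerpts(ratings, threshold, fps=25):
    """Cut a continuous annotation time line into emotion excerpts:
    maximal runs of frames whose rating meets the threshold, returned
    as (start_s, end_s) spans in seconds."""
    spans = []
    start = None
    for i, r in enumerate(ratings):
        if r >= threshold and start is None:
            start = i                       # run opens
        elif r < threshold and start is not None:
            spans.append((start / fps, i / fps))
            start = None                    # run closes
    if start is not None:                   # run reaches end of recording
        spans.append((start / fps, len(ratings) / fps))
    return spans

# Hypothetical per-frame observer ratings of "amusement" on a 0-1 scale.
ratings = [0.1, 0.2, 0.8, 0.9, 0.7, 0.2, 0.1, 0.6, 0.9, 0.8]
print(excerpts(ratings, 0.5))  # [(0.08, 0.2), (0.28, 0.4)]
```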
Face expression recognition using Scaled-conjugate gradient Back-Propagation ... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning (mlaij)
In this study, we present a method for emotion recognition in Virtual Reality (VR) using pupillometry. We analyze pupil diameter responses to both visual and auditory stimuli via a VR headset and focus on extracting key features in the time-domain, frequency-domain, and time-frequency domain from VR-generated data. Our approach utilizes feature selection to identify the most impactful features using Maximum Relevance Minimum Redundancy (mRMR). By applying a Gradient Boosting model, an ensemble learning technique using stacked decision trees, we achieve an accuracy of 98.8% with feature engineering, compared to 84.9% without it. This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating multiple sensor data for realistic and emotionally resonant touch interactions. Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
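The greedy mRMR step used above can be sketched as follows. Absolute Pearson correlation stands in for the mutual-information criterion, and the pupillometry features and arousal targets are hypothetical:

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb) if va and vb else 0.0

def mrmr(features, target, k):
    """Greedy mRMR: at each step pick the feature with the highest
    relevance to the target minus its mean redundancy against the
    already-selected features."""
    selected = []
    while len(selected) < k:
        def score(f):
            rel = abs(pearson(features[f], target))
            red = (sum(abs(pearson(features[f], features[s]))
                       for s in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max((f for f in features if f not in selected), key=score)
        selected.append(best)
    return selected

# Hypothetical pupillometry features over 8 stimulus windows.
features = {
    "mean_diameter": [3.1, 3.4, 3.9, 4.2, 4.6, 4.9, 5.2, 5.6],
    "diameter_copy": [3.0, 3.5, 3.8, 4.3, 4.5, 5.0, 5.1, 5.7],  # redundant
    "blink_rate":    [12, 9, 15, 8, 14, 10, 13, 9],
}
arousal = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
print(mrmr(features, arousal, 2))
```

Note how the redundancy term skips `diameter_copy` even though its raw relevance is nearly as high as `mean_diameter`'s, which is exactly the point of mRMR over plain top-k relevance ranking.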
ENHANCING THE HUMAN EMOTION RECOGNITION WITH FEATURE EXTRACTION TECHNIQUES (IAEME Publication)
Human emotions are mental states that arise spontaneously rather than through conscious effort, and they are accompanied by physiological changes in the facial muscles that produce expressions. Non-verbal communication methods such as expressions, eye movements, and gestures are used in many human-computer interaction applications. Identifying emotions is not an easy task: the differences between facial emotions can be subtle, and there is a great deal of complexity and variability. Machine learning algorithms use a set of observable features to model the face. Automatic emotion recognition based on facial expression is an interesting research field that has been applied in several areas such as safety, health, and human-machine interfaces. Researchers in this field aim to develop techniques to interpret and code facial expressions and to extract these features in order to obtain better predictions by computer. Machine learning, one of the top emerging sciences, has an extensive range of applications. In this paper, optimization-based feature extraction techniques are used to enhance the recognition of human emotion from facial images. Optimization techniques such as Ant Colony Optimization, Particle Swarm Optimization, and the Genetic Algorithm are applied, and various metrics are used to evaluate the performance of the feature extraction techniques for emotion recognition.
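A genetic-algorithm wrapper for feature selection, as mentioned above, can be sketched with bitmask chromosomes. The per-feature relevance scores and redundancy penalty below are a hypothetical stand-in for the accuracy of an emotion classifier:

```python
import random

random.seed(3)

# Hypothetical relevance per facial feature and one redundant pair.
GAIN = {"f0": 0.9, "f1": 0.1, "f2": 0.8, "f3": 0.2, "f4": 0.7}
REDUNDANT = {frozenset(("f0", "f2")): 0.85}
NAMES = sorted(GAIN)
COST_PER_FEATURE = 0.05   # small penalty so subsets stay compact

def fitness(mask):
    chosen = [f for f, bit in zip(NAMES, mask) if bit]
    score = sum(GAIN[f] for f in chosen) - COST_PER_FEATURE * len(chosen)
    for pair, penalty in REDUNDANT.items():
        if pair <= set(chosen):
            score -= penalty
    return score

def genetic_select(pop_size=20, generations=40, p_mut=0.1):
    """GA feature selection: bitmask chromosomes, tournament selection,
    one-point crossover, bit-flip mutation, and elitism."""
    pop = [[random.randint(0, 1) for _ in NAMES] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                       # elitism keeps the champion
        while len(nxt) < pop_size:
            p1 = max(random.sample(pop, 3), key=fitness)   # tournament
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, len(NAMES))          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ 1 if random.random() < p_mut else bit
                     for bit in child]                     # mutation
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return [f for f, bit in zip(NAMES, best) if bit]

selected = genetic_select()
print(selected)
```

In a real wrapper the fitness function would retrain and score the emotion classifier on each candidate subset, which is why GA, PSO, and ACO variants all concentrate their effort on cheap fitness evaluation.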
MULTIMODAL COURSE DESIGN AND IMPLEMENTATION USING LEML AND LMS FOR INSTRUCTIO...IJMIT JOURNAL
Traditionally, teaching has been centered around classroom delivery. However, the onslaught of the
COVID-19 pandemic accelerated the use of technology and of new teaching and learning methodologies for course
delivery. We investigate and describe different modes of course delivery that maintain the integrity of
teaching and learning. This paper answers the following research questions: 1) Which course delivery
methods do our academic institutions use, and why? 2) How can instructors validate their institutions'
guidelines? 3) How should courses be taught to deliver student learning outcomes? Using the Learning Environment
Modeling Language (LEML), we investigate the design and implementation of courses for delivery in the
following environments: face-to-face, online synchronous, asynchronous, hybrid, and hyflex. A good
course design and implementation are key components of instructional alignment. Furthermore, we
demonstrate how to design, implement, and deliver courses in synchronous, asynchronous, and hybrid
modes and describe our proposed enhancements to LEML.
International Journal of Managing Information Technology (IJMIT) ** WJCI IndexedIJMIT JOURNAL
The International Journal of Managing Information Technology (IJMIT) is a quarterly open access peer-reviewed journal that publishes articles that contribute new results in all areas of the strategic application of information technology (IT) in organizations. The journal focuses on innovative ideas and best practices in using IT to advance organizations – for-profit, non-profit, and governmental. The goal of this journal is to bring together researchers and practitioners from academia, government, and industry to focus on understanding both how to use IT to support the strategy and goals of the organization and to employ IT in new ways to foster greater collaboration, communication, and information sharing both within the organization and with its stakeholders. The International Journal of Managing Information Technology seeks to establish new collaborations, new best practices, and new theories in these areas.
NOVEL R & D CAPABILITIES AS A RESPONSE TO ESG RISKS- LESSONS FROM AMAZON’S FU...IJMIT JOURNAL
Environmental, Social, and Governance (ESG) management is essential for transforming corporate
financial performance-oriented business strategies into Finance (F) + ESG optimization strategies to
achieve the Sustainable Development Goals (SDGs).
In this trend, the rise of ESG risks has divided firms into two categories. The former incorporates a
growth mindset that creates a passion for learning and urges the firm to improve itself by undertaking
Research and Development (R&D)-driven challenges, while the latter, characterized by risk aversion,
avoids highly uncertain R&D activities and seeks more manageable endeavors.
This duality underscores the complexity of corporate R&D strategies in addressing ESG risks and
necessitates the development of novel R&D capabilities for corporate R&D transformation strategies
towards F + ESG optimization.
Building on this premise, this paper conducts an empirical analysis, utilizing reliable firm-level data on ESG
risk and brand value, with a focus on 100 global R&D leader firms. It analyzes R&D and actions for ESG
risk mitigation, and assesses the development of new functions that fulfill F + ESG optimization through
R&D. The analysis also highlights the significance of network externality effects, with a specific focus on
Amazon, a leading R&D company, providing insights into the direction for transforming R&D strategies
towards F + ESG optimization.
The dynamics of stakeholder engagement in F + ESG optimization are illustrated with the example of
Amazon's activities. The analysis makes evident that Amazon's capacity for growth and scalability is
accelerating high-level research and development by gaining the trust of stakeholders in the "synergy
through R&D-driven ESG risk mitigation."
Finally, as examples of these initiatives, the paper discusses the Climate Pledge led by Amazon and the
transformation of Japan's management system.
A REVIEW OF STOCK TREND PREDICTION WITH COMBINATION OF EFFECTIVE MULTI TECHNI...IJMIT JOURNAL
It is important for investors to understand stock trends and market conditions before trading stocks; both
capabilities are essential for maximizing profit and minimizing losses, and without them investors will
suffer losses through ignorance of stock trends and market conditions. Technical analysis helps in
understanding stock price behavior with regard to past trends, the signals given by indicators, and the
major turning points of the market price. This paper reviews stock trend prediction using a combination of
effective multi-technical-indicator strategies, aiming to increase investment performance by taking into
account the global performance and the proposed combined multi-indicator strategy model.
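Combining technical indicators into a single trading signal can be sketched as follows. The particular indicators (a 5/20-period SMA crossover plus a 14-period RSI filter), the thresholds, and the synthetic price series are illustrative assumptions, not the specific strategy model the review proposes.

```python
def sma(prices, n):
    """Simple moving average; None until a full window is available."""
    return [sum(prices[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(prices))]

def rsi(prices, n=14):
    """Relative Strength Index over the last n price changes."""
    changes = [prices[i] - prices[i - 1] for i in range(1, len(prices))]
    gains = [max(c, 0) for c in changes[-n:]]
    losses = [max(-c, 0) for c in changes[-n:]]
    avg_gain, avg_loss = sum(gains) / n, sum(losses) / n
    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)

def combined_signal(prices):
    # Act only when trend (SMA crossover) and momentum (RSI) agree.
    fast, slow = sma(prices, 5)[-1], sma(prices, 20)[-1]
    momentum = rsi(prices)
    if fast > slow and momentum < 70:
        return "buy"
    if fast < slow and momentum > 30:
        return "sell"
    return "hold"

# Synthetic uptrend with periodic dips:
prices = [100 + 0.7 * i - (2.0 if i % 3 == 0 else 0.0) for i in range(40)]
signal = combined_signal(prices)
```

Requiring multiple indicators to agree is exactly the "effective multi-indicator" idea: each indicator filters out the others' false positives.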
INTRUSION DETECTION SYSTEM USING CUSTOMIZED RULES FOR SNORTIJMIT JOURNAL
These days the security provided by computer systems is a big issue, as there are constant threats of
cyber-attacks such as IP address spoofing, Denial of Service (DoS), and token impersonation. The security
provided by blue-team operations tends to be costly in large firms, as a large number of systems
need to be protected against these attacks. This leads firms to turn to less costly security
configurations such as IDS Suricata and IDS Snort. The main theme of the project is to improve the services
provided by Snort, a tool used to create a defense against cyber-attacks such as DDoS attacks,
which are carried out on both the physical and network layers. These attacks in turn result in loss of
extremely important data. The rules defined in this project will result in monitoring traffic, analyzing it,
and taking appropriate action not only to stop the attack but also to locate its source IP address. Beyond
Snort, the process uses tools such as Wireshark, Wazuh, and Splunk. The end product of this research is a
set of default rules for the Snort tool that not only provides better security than previous versions but
also gives the user the IP address of the attacker. The system integrates Wazuh with Snort to make it more
efficient than IDS Suricata, another intrusion detection system capable of detecting these types of attacks.
Splunk is another tool used in this project; it improves firewall efficiency with respect to the number of
bits passed for scanning and the number of bits scanned successfully. Wazuh is used because it is a strong
choice for traffic monitoring and incident response compared with its alternatives, and since the system
targets firms that handle large amounts of data, Splunk is also valuable for its efficiency at that scale.
Wireshark gives the IDS automation in its capability to capture and report malicious packets found during
the network scan. Together, these tools give the IDS the capability of a low-budget automated threat
detection system.
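As an illustration of the kind of customized rule the project describes, the following is a hedged sketch in Snort 2.x rule syntax. The port, packet counts, message text, and SID are illustrative assumptions, not the actual ruleset developed in this work.

```
# Illustrative custom rule (Snort 2.x syntax, hypothetical SID and thresholds):
# alert when one source sends more than 70 TCP SYN packets to HTTP within
# 10 seconds -- a possible SYN flood -- so the source IP appears in the alert.
alert tcp any any -> $HOME_NET 80 (msg:"Possible SYN flood - inspect source IP"; \
    flags:S; detection_filter:track by_src, count 70, seconds 10; \
    sid:1000001; rev:1;)
```

Because `track by_src` attributes the alert to the offending source address, the alert log itself surfaces the attacker's IP, which downstream tools such as Wazuh and Splunk can then correlate.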
MEDIATING AND MODERATING FACTORS AFFECTING READINESS TO IOT APPLICATIONS: THE...IJMIT JOURNAL
Although IoT seems to be the upcoming trend, it is still in its infancy, especially in the banking industry.
There is a clear gap in the literature, as only a few studies identify factors affecting readiness for IoT
applications in banks in general, and investigations of mediating and moderating factors are almost
negligible. Accordingly, this research investigates the main factors that affect employees' readiness for
IoT applications, while highlighting the mediating and moderating factors in the Egyptian banking sector.
The importance of Egypt stems from its high population and the steady steps it is taking towards technology
adoption. 479 valid questionnaires were collected from HR employees in banks. The data were statistically
analysed using regression and SEM. Results showed a significant impact of ‘Security’, ‘Networking’,
‘Software Development’ and ‘Regulations’ on readiness for IoT applications; thus, the readiness acceptance
level is high. ‘Security’ and ‘User Intention’ were proven to mediate the relationship between the research
variables and readiness for IoT applications, and only a partial moderation role was proven for
‘Efficiency’. The study contributes to the growing literature on IoT applications in general, and fills a
gap in the Egyptian banking context in particular. Finally, it provides decision makers at banks with
useful guidelines on how to optimally promote IoT applications among employees.
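A mediation relationship of the kind reported above (for instance, 'User Intention' mediating between a factor and readiness) can be tested with the classic product-of-coefficients decomposition. The sketch below uses synthetic data and ordinary least squares, not the study's survey data or its SEM output; the variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Hypothetical variables: X = a readiness factor (e.g. perceived security),
# M = user intention (mediator), Y = readiness to adopt IoT applications.
X = rng.normal(size=n)
M = 0.6 * X + rng.normal(scale=0.5, size=n)            # path X -> M
Y = 0.2 * X + 0.5 * M + rng.normal(scale=0.5, size=n)  # paths X, M -> Y

def ols(y, *cols):
    """OLS with intercept; returns the slope coefficients only."""
    A = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

(c_total,) = ols(Y, X)        # total effect of X on Y
(a,) = ols(M, X)              # path X -> M
c_direct, b = ols(Y, X, M)    # direct effect of X, and path M -> Y
indirect = a * b              # mediated (indirect) effect
# For linear OLS the decomposition holds exactly: total = direct + indirect.
```

A nonzero indirect effect alongside a reduced direct effect is the signature of (partial) mediation; SEM packages such as Smart PLS estimate the same paths simultaneously with significance tests.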
EFFECTIVELY CONNECT ACQUIRED TECHNOLOGY TO INNOVATION OVER A LONG PERIODIJMIT JOURNAL
ICT (Information and Communication Technology) companies are facing the dilemma of decreasing
productivity despite increasing research and development efforts. M&A (Mergers and Acquisitions) is being
considered as a breakthrough solution. Existing research has pointed out that M&A leads to
the emergence of new innovations. The purpose of this study was to discuss efficient ways of acquisition and
to resolve the dilemma of productivity decline by clarifying how the technology obtained through M&A
leads to the creation of new innovations. Hypothesis 1 was that the technology acquired through M&A is
utilized for innovation creation, Hypothesis 2 was that the acquired technology is utilized over a long
period of time, and Hypothesis 3 was that a long-term utilization has a positive impact on corporate
performance. The results, using sports prosthetics as a case study and using patents as a proxy variable,
confirmed all the hypotheses set. We have revealed that long-term utilization of technology obtained
through M&A is effective for creating new innovations.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of information technology and management
4th International Conference on Cloud, Big Data and IoT (CBIoT 2023)IJMIT JOURNAL
4th International Conference on Cloud, Big Data and IoT (CBIoT 2023) will act as a major forum for the presentation of innovative ideas, approaches, developments, and research projects in the areas of Cloud, Big Data and IoT. It will also serve to facilitate the exchange of information between researchers and industry professionals to discuss the latest issues and advancement in the area of Cloud, Big Data and IoT.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in Cloud, Big Data and IoT.
TRANSFORMING SERVICE OPERATIONS WITH AI: A CASE FOR BUSINESS VALUEIJMIT JOURNAL
Artificial Intelligence (AI) has rapidly become a critical technology for businesses seeking to improve
efficiency and profitability. One area where AI is proving particularly impactful is service operations
management, where it is used to create AI-powered service operations (AIServiceOps) that deliver
high-value services to customers. AIServiceOps involves the use of AI to automate and optimize various
business processes, such as customer service, sales, marketing, and supply chain management. The rapid
development of Artificial Intelligence has prompted many changes in the field of Information Technology
(IT) Service Operations; IT Service Operations driven by AI are AIServiceOps. AI has brought new vitality
and addressed many challenges to IT Service Operations. However, there is a literature gap on the business
value impact of AI-powered IT Service Operations. AIServiceOps can help IT build optimized business
resilience by creating value in complex and ever-changing environments as product organizations move
faster than IT can handle. This research paper therefore examines how AIServiceOps creates business value
and sustainability, in particular how AIServiceOps liberates IT staff from low-level, repetitive work and
traditional IT practices in favour of a continuously optimized process. One of the research objectives is
to compare traditional IT Service Operations with AIServiceOps. This paper provides a basis for how
enterprises can evaluate AIServiceOps and consider it a digital transformation tool. The paper presents a
case study of a company that implemented AIServiceOps and analyzes the resulting business outcomes. The
study shows that AIServiceOps can significantly improve service delivery, reduce response times, and
increase customer satisfaction. Furthermore, it demonstrates how AIServiceOps can deliver substantial cost
savings, such as reducing labor costs and minimizing downtime.
DESIGNING A FRAMEWORK FOR ENHANCING THE ONLINE KNOWLEDGE-SHARING BEHAVIOR OF ...IJMIT JOURNAL
The main objective of this paper is to identify the factors that influence academic staff's digital
knowledge-sharing behaviors in Ethiopian higher education. A structural equation model was used to validate
the research framework using survey data from 210 respondents. The collected data were analyzed using
Smart PLS software. The results of the study show that trust, self-motivation, and altruism are positively
related to attitude. Contrary to our expectations, knowledge technology negatively affects attitude.
However, reward systems and empowerment by leaders are significantly associated with knowledge-sharing
intentions. Knowledge-sharing intention, in turn, was significantly related to digital knowledge-sharing
behavior. The contributions of this study are twofold: the framework may serve as a roadmap for future
researchers and managers considering their strategy to enhance digital knowledge sharing in HEIs, and the
findings will benefit academic staff and university administrations by helping them enhance their
knowledge-sharing practices.
BUILDING RELIABLE CLOUD SYSTEMS THROUGH CHAOS ENGINEERINGIJMIT JOURNAL
Cloud computing systems need to be reliable so that they can be accessed and used for computing at any
given point in time. The complex nature of cloud systems is the motivation to conduct research in novel
ways of ensuring that cloud systems are built with reliability in mind. In building cloud systems, it is
expected that the cloud system will be able to deal with high demands and unexpected events that affect the
reliability and performance of the system.
In this paper, chaos engineering is considered a heuristic method for building reliable cloud systems.
Chaos engineering aims to expose weaknesses in production systems: it helps identify a system's weaknesses
and strengths when the system is subjected to unexpected knocks and shocks, and it gives system developers
and administrators insight into how the cloud system will behave when exposed to unexpected occurrences.
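The core chaos-engineering loop, inject a fault, observe the system, measure the impact, can be sketched as follows. The service, failure rates, and retry policy are illustrative assumptions, not a real cloud deployment.

```python
import random

random.seed(42)

class FlakyService:
    """Fake downstream service with an injected, tunable failure rate."""
    def __init__(self, failure_rate):
        self.failure_rate = failure_rate

    def call(self):
        if random.random() < self.failure_rate:
            raise ConnectionError("injected fault")
        return "ok"

def call_with_retries(service, retries=3):
    # Resilience mechanism under test: retry transient failures.
    for _ in range(retries):
        try:
            return service.call()
        except ConnectionError:
            continue
    return "failed"

def measure_availability(failure_rate, requests=2000):
    service = FlakyService(failure_rate)
    ok = sum(call_with_retries(service) == "ok" for _ in range(requests))
    return ok / requests

steady = measure_availability(0.05)   # baseline conditions
chaotic = measure_availability(0.50)  # chaos experiment: heavy fault injection
```

Comparing the two availability figures quantifies how gracefully the retry policy degrades under injected faults, which is the kind of insight a production chaos experiment aims to produce.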
NETWORK MEDIA ATTENTION AND GREEN TECHNOLOGY INNOVATIONIJMIT JOURNAL
This paper provides a novel empirical study of the relationship between network media attention and
green technology innovation and examines how network media attention can ease financing constraints. It
collected data from listed companies in China's heavy-pollution industry and performed rigorous
regression analysis in order to explore the environmental governance functions of the media.
It found that network media attention significantly promotes green technology innovation. By analyzing the
inner mechanism further, it found that network media attention can promote green innovation by easing
financing constraints. Besides, network media attention has a significant positive impact on green invention
patents while not affecting green utility model patents.
INCLUSIVE ENTREPRENEURSHIP IN HANDLING COMPETING INSTITUTIONAL LOGICS FOR DHI...IJMIT JOURNAL
Information Systems (IS) research advocates employing collaborative and loose-coupling strategies, rather than prescriptive and unilateral Information Technology (IT) governance mechanisms, to address contradictory issues arising from diversified actors' interests. Yet it rarely depicts how managers employ these strategies in Health Information System (HIS) implementation, particularly in resource-constrained settings where IS implementation activities rely heavily on the resources of multiple international organizations. This study explored how managers in resource-constrained settings employ collaborative IT governance mechanisms in the case of District Health Information System 2 (DHIS2) adoption, using an interpretative case study approach and the concept of institutional logics. The institutional logics concept was used to identify the major actors' logics underpinning DHIS2 adoption. The study depicted the importance of high-level officials' distance from the dominant systemic logic in considering new alternatives, and of employing inclusive IT governance mechanisms that separated resources from the system and facilitated stakeholders' collaboration in DHIS2 adoption based on their capacity and interest.
DEEP LEARNING APPROACH FOR EVENT MONITORING SYSTEMIJMIT JOURNAL
With an increasing number of extreme events and complexity, more alarms are being used to monitor
control rooms. Operators in the control rooms need to monitor and analyze these alarms to take suitable
actions to ensure the system's stability and security. Security is a major concern in the modern world, and
it is important to have rigid surveillance that guarantees protection from any sort of hazard. For
security, Closed-Circuit TV (CCTV) cameras are widely used for surveillance, but these cameras require a
person for supervision, and a human supervisor can tire at any point in time. We therefore need a system
that detects events automatically, and we propose a solution using YOLOv5. We took a data set and used the
Roboflow framework to augment the existing images into numerous variations, creating a grayscale copy, a
rotated copy, and a blurred copy of each image to obtain an enlarged data set. This work focuses on
providing a secure environment by using live CCTV footage as a source for detecting weapons. The YOLO
algorithm divides an image from the video into a grid system, and each grid cell detects objects within
itself.
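The grid mechanism mentioned above, YOLO dividing an image into an S x S grid with the cell containing an object's centre responsible for detecting it, can be sketched as follows. The frame size, bounding box, and grid size are illustrative values, not parameters from the paper.

```python
def responsible_cell(box, image_w, image_h, S=7):
    """box = (x_min, y_min, x_max, y_max) in pixels; returns the (row, col)
    of the S x S grid cell containing the box centre."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    col = min(int(cx / image_w * S), S - 1)
    row = min(int(cy / image_h * S), S - 1)
    return row, col

# A hypothetical weapon detection on a 640x480 CCTV frame:
cell = responsible_cell((420, 180, 520, 300), image_w=640, image_h=480)
```

During training, only the responsible cell's predictors are penalized for missing that object, which is what lets a single forward pass localize multiple objects across the frame.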
USER EXPERIENCE AND DIGITALLY TRANSFORMED/CONVERTED EMOTIONS
International Journal of Managing Information Technology (IJMIT) Vol.10, No.2, May 2018
DOI: 10.5121/ijmit.2018.10201
USER EXPERIENCE AND DIGITALLY TRANSFORMED/CONVERTED EMOTIONS

Mohammad Rafat Odeh1, Dr. Badie Sartawi2, Dr. Jad Najjar3

1 Information & Technology Center, Al-Quds Open University, Palestine
2 Faculty of Science & Technology, Department of Computer Science, Al-Quds University, Palestine
3 Faculty of Science & Technology, Department of Computer Science, Al-Quds University, Palestine
ABSTRACT
In natural human-human interaction, humans use speech alongside non-verbal cues such as facial expressions and gestures to express themselves. In human-computer interaction, the computer can likewise use a person's non-verbal cues to determine the user experience and usability of any software application. We introduce a new model called Measuring User Experience using Digitally Transformed/Converted Emotions (MUDE), which measures two metrics of user experience (satisfaction and errors) and compares them with SUS questionnaire results through an experiment that measures usability and user experience.
KEYWORDS
Usability, User Experience, Emotions, Facial Expressions, Gestures, MUDE Model.
1. INTRODUCTION
Many evaluation methods and techniques are used for testing user experience in order to produce effective and efficient software. One modern technique uses facial expressions and gestures to assess the user's interaction with and perception of the developed software; this technique depends on the user's face and gestures. Human facial expressions and body gestures are very significant means of communication between people, as they provide important information and help deliver people's communicative aims. Humans learn to recognize facial expressions long before they learn to communicate verbally (Harty, 2011). Faces and gestures not only provide us with the primary source of information about the people we are communicating with, such as their sex, but also serve extra communicative functions, such as how people manage the conversation and how they express themselves towards what is being said.
Faces and gestures are also excellent ways of showing confirmation or surprise in a conversation without saying a word. Therefore, facial expressions and gestures are the most effective way of communicating emotions (Harty, 2011).
2. METHODOLOGY
Our paper investigates a new approach for determining usability and user experience indicators for any application. We capture the expressed emotions, either positive or negative (digitally transformed/converted emotions), through facial expressions or hand-over-face gestures while the user works with application software; these emotions are used to measure two parameters of usability and user experience: satisfaction and errors.
The proposed methodology is a new approach: it maps the two usability and user experience parameters onto emotions. Each of these emotions has its specific facial expressions and specific hand-over-face gestures. In this way, we leave the traditional tools of usability engineering, and the user becomes the source of information.
An empirical experiment was carried out to show how users express their emotions in critical incidents. The experiment used an interactive camera manufactured by Intel that reads emotions instantly (https://software.intel.com/en-us/articles/developing-applications-using-intel-perceptual-computing-technology, 29/7/2017).
3. WHY BODY LANGUAGE
Human behavioural cues include emotional cues, which contain visual cues, including nonverbal cues; body language is one type of nonverbal cue. Body language, in turn, comprises facial expressions, hand gestures and other cues.
Allan Pease, in his book The Definitive Book of Body Language, defined body language as an outward reflection of a person's emotional condition. Pease and Pease (2004) confirmed that each gesture or movement can indicate a person's emotional state or feeling at a specific time. Body language is "a type of nonverbal communication that plays a central role in how humans communicate and empathize with each other. The ability to read nonverbal cues is essential for understanding, analyzing, and predicting the actions and intentions of others" (Mahmoud and Robinson, 2011).
Humans from different cultures and in various situations can communicate and interact with a certain level of accuracy when they observe both the face and the body (Mahmoud and Robinson, 2011).
4. EMOTIONS AND DIGITALLY TRANSFORMED/CONVERTED
EMOTIONS
Emotion is an essential ingredient of human life that is complex and hard to define by consensus. Emotions like joy, anger, disgust and a plethora of others add active motivation and richness to human experience (Brave and Nass, 2003).
Paul and Anne Kleinginna's research study (a categorized list of emotion definitions, with suggestions for a consensual definition), which collects many different definitions of emotion, states that emotion has two important aspects:
(a) emotion is a reaction to events related to the needs, goals, or concerns of the human; (b) emotion includes physiological, affective, behavioral, and cognitive components (Brave and Nass, 2003).
These two general aspects are extracted from the many definitions the Kleinginnas cite in their research study.
Many emotions appear on the human face and body; these emotions are conveyed through body language, facial expressions and gestures. We distinguish a subset of these emotions and call them digitally transformed/converted emotions: the emotions resulting from the interaction of a human with a PC, pocket computer, smartphone, or any other computing device. These digitally transformed/converted emotions carry the same aspects and characteristics as regular emotions that result from human-human interaction, and we can measure them with the same tools.
5. INFERRING EMOTIONS
Humans express their feelings and emotions using verbal and nonverbal cues. Nonverbal cues have their own channels for transferring feelings and emotions, such as text, audio, physiology and vision. One of the visual channels is body language, which includes hand gestures, body postures, facial expressions, etc.
Facial expressions are created by the contraction of facial muscles; the cycle of muscular relaxation and contraction deforms the face. Each facial deformation has its own feeling or emotional meaning. Recognizing a facial expression depends on many factors, such as the classification of facial motion and facial feature deformation into abstract classes based on visual information (Fasel and Luettin, 2002). Facial expressions are generated by different factors, such as verbal and nonverbal communication, mental state and physiological activity.
Figure 1. Sources of facial expression (adapted from Fasel and Luettin, 2002)
Beyond that, the face transmits messages about emotion, mood, age, etc. The face is therefore also a multi-message system (Ekman and Friesen, 1975).
6. PSYCHOLOGICAL APPROACHES TO FACIAL EXPRESSION ANALYSIS
6.1 Judgment-based approaches or "message judgment approaches"
The objective of this approach is to identify the message behind the facial expression. Each facial expression or mental state has a predefined class of emotions. A group of coders reaches consensus on the emotions inferred from a facial expression or mental state by averaging the users' responses, and this consensus is treated as the ground truth (Fasel and Luettin, 2002). A problem with judgment-based approaches is that they may be affected by the context in which the facial behavior is observed (Bettadapura, 2012). This approach corresponds to the facial affect (emotions) approach used by researchers in the automatic analysis of facial expressions. Ekman's six emotions approach, Plutchik's approach, Russell's approach and monitoring hand-over-face gestures are examples of judgment-based (facial affect) approaches.
6.2 Sign-based approaches or "vehicle-based approaches"
The basic idea behind vehicle-based approaches is to characterize the facial muscle movements and the temporary wrinkles the face creates into visual classes, where each movement is determined by its location and intensity. A complete system contains all possible forms of the face (Fasel and Luettin, 2002). This approach corresponds to the facial muscle action (action unit) approach used by researchers in the automatic analysis of facial expressions; examples are the Facial Action Coding System (FACS) and the Facial Animation Parameters (FAPs) of the MPEG-4 standard.
7. USER EXPERIENCE AND USABILITY
Some experts have defined the two terms through metaphors to clarify the contrast between them. They compared them to science (usability) vs. art (user experience), and to a freeway (usability) vs. a twisting mountain road (user experience). This metaphorical representation describes usability as "something that is usable as functional, simple and requires less mental effort to use. Thus, a freeway is usable since it has no oncoming traffic, enables you to get from point A to point B in a fast manner and has consistent signage, hence requiring little learnability". In contrast, for user experience, "a freeway is highly usable but it is boring when assessed. It is something that focuses on user experience depicted as highly emotional. Thus, a twisting mountain road is less usable but, because of its scenery, the smell of nature and the excitement of the climb, it conveys a pleasant user experience" (https://usabilitygeek.com/the-difference-between-usability-and-user-experience/).
Figure 2. The relationship between usability and user experience (adapted from Morville, 2004)
8. MEASURING USER EXPERIENCE USING DIGITALLY TRANSFORMED/
CONVERTED EMOTIONS (MUDE MODEL)
8.1 MUDE Model Components
The components of MUDE can be summarized as follows.
1- User Experience Parameters ("Usability Parameters")
Usability has the parameters learnability, satisfaction, errors, efficiency and memorability. User experience, on the other hand, has its own specific parameters: utility, usability, desirability and adoptability. Usability is thus a metric of user experience, and if we measure usability we can measure user experience. In other words, measuring learnability, satisfaction, errors, efficiency and memorability gives us user experience.
2- The Emotions Related to the User Experience Parameters
Based on Plutchik's wheel, we can map each user experience parameter to its expected or related emotions according to its meaning, as follows:
Table 1. MUDE Model mapping of user experience parameters

User Experience Parameter | Meaning of the Parameter | Related Primary Emotions
Satisfaction | How pleasant the software is to use | Joy
Learnability | How easy is it for users to accomplish basic tasks the first time they encounter the design? | Joy
Efficiency | Once users have learned the design, how quickly can they perform tasks? | Joy
Memorability | When users return to the design after a period of not using it, how easily can they reestablish proficiency? | Joy
Errors | Error occurrence | Anger, Surprise, Contempt, Disgust, Fear, Sadness
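The mapping in Table 1 can be expressed directly in code. The following is a minimal sketch in Python; the parameter and emotion names come from the table, while the dictionary and function names are our own illustration:

```python
# Mapping of MUDE user-experience parameters to their related primary
# emotions, as listed in Table 1.
PARAMETER_EMOTIONS = {
    "satisfaction": {"joy"},
    "learnability": {"joy"},
    "efficiency": {"joy"},
    "memorability": {"joy"},
    "errors": {"anger", "surprise", "contempt", "disgust", "fear", "sadness"},
}

def related_parameters(emotion):
    """Return the user-experience parameters that an observed emotion relates to."""
    return [name for name, emotions in PARAMETER_EMOTIONS.items()
            if emotion in emotions]
```

For example, an observed joy relates to the four positive parameters, while fear relates only to errors.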
3- Measuring Systems That Infer Emotions from Facial Expressions and Hand-over-Face Gestures
Integrating the three previous components gives us the MUDE model. The following table explains the mapping between the metrics, the expected emotions, the action units, the FAPs and Pease & Pease. We take satisfaction as an example of the mapping between components.
Table 2. MUDE Model mapping of the satisfaction parameter to its related emotion, FACS, FAPs and Pease & Pease

User Experience (Usability) Parameter: Satisfaction (how pleasant is it to use the software)
Related Emotion: Joy
Inferring methods:
- FACS: 6+12
- FAPs: open jaw (F3), lower t midlip (F4), raise b midlip (F5), stretch l corner lip (F6), stretch r corner lip (F7), raise l corner lip (F12), raise r corner lip (F13), close t l eyelid (F19), close t r eyelid (F20), close b l eyelid (F21), close b r eyelid (F22), raise l m eyebrow (F33), raise r m eyebrow (F34), lift l cheek (F41), lift r cheek (F42), stretch l corner lip o (F53), stretch r corner lip o (F54)
- Pease & Pease: PP3
Logs: Review the recording of the log file to see whether the user has learned the method of filling forms; in addition, we consider the post-questionnaire.
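Table 2 encodes joy, for the FACS method, as the combination of Action Units 6 and 12 (cheek raiser plus lip corner puller). A rule of that form can be sketched as below; this is a simplified illustration, and a real system would receive the detected action units from a facial-expression analyzer such as the Intel SDK example used later in the experiment:

```python
# FACS rule from Table 2: joy is signalled by AU6 (cheek raiser)
# together with AU12 (lip corner puller).
JOY_ACTION_UNITS = {6, 12}

def indicates_joy(detected_action_units):
    """Return True when every action unit encoding joy was detected."""
    return JOY_ACTION_UNITS.issubset(detected_action_units)
```

Extra detected action units (for example AU25, lips part) do not block the rule; only the absence of AU6 or AU12 does.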
9. SUS QUESTIONNAIRE MEETS INTEL INTERACTIVE GESTURES CAMERA: DESCRIPTION OF THE EXPERIMENT
The hypothesis of the experiment is that the attitudes toward usability expressed in users' emotions, captured by the Intel interactive gesture camera using the FAPs model, are consistent with the attitudes toward usability expressed by users in a paper-based SUS questionnaire. In addition, we complete the analysis process using SPSS 19 (Orfanou, Tselios, and Katsanos, 2015).
9.1 Participants
The experiment had 70 participants (44 males and 26 females), with ages ranging from 19 to 42. The academic background of the participants: 56 B.A. (16 Media and Television, 7 Computer Science, 2 IT, 1 MIS, 2 Mathematics, 1 Physics and 1 Political Science) and 14 M.A. (6 Computer Science, 1 Public Health and 7 MBA/Entrepreneurial).
9.2 Experiment Environment
The experiment was conducted in an unconstrained environment.
9.3 Procedure (Scenario)
Figure 2. Sketch explaining the procedure of the experiment
9.4 Apparatus
Figure 3. Experiment Layout
Figure 4. Summary of Questionnaire Data
Figure 5. Questionnaire Results (SUS Score)
The Intel creative interactive gesture camera is a perceptual computing camera designed to let users interact with computers in a more natural way through facial expressions, gestures and voice. We used this camera in our experiment to capture the emotions of participants. The camera's kit includes several examples for capturing emotions (https://software.intel.com/en-us/articles/developing-applications-using-intel-perceptual-computing-technology, 28/7/2017).
The Intel camera changes the way users interact with computers, making interaction more natural, intuitive, and immersive. Computers become able to perceive our actions through hand gestures, finger articulation, speech recognition, face tracking, augmented reality, and more. To support perceptual computing, Intel introduced the Intel® Perceptual Computing SDK, a library of pattern detection and recognition algorithms (https://software.intel.com/en-us/articles/developing-applications-using-intel-perceptual-computing-technology, 28/7/2017).
Figure 6. Intel Interactive Gestures Camera (reprinted from Doss and Raj, 2013)
We ran the facial-expression example to capture emotions; the captured emotions (Anger, Contempt, Disgust, Fear, Joy, Sadness and Surprise) are saved to a text file. We then took each participant's emotions and placed them in a Microsoft Excel file, after which we counted the positive and negative emotions for each participant. We then automatically analyzed the participants' usability attitudes based on the extracted emotions. The results are summarized in the following figure:
Figure 7. Summary of Camera Emotions Data
The above figure contains the seven emotion counts for each participant, together with the totals of these emotions.
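The counting step described above can be sketched as follows. We assume, as the text describes, that the camera example writes one captured emotion label per line to a text file; the function and variable names are our own illustration:

```python
from collections import Counter

EMOTIONS = ["Anger", "Contempt", "Disgust", "Fear", "Joy", "Sadness", "Surprise"]
POSITIVE_EMOTIONS = {"Joy"}

def count_emotions(lines):
    """Count the seven emotions in one participant's log, plus the
    positive and negative totals used in the analysis."""
    counts = Counter(line.strip() for line in lines if line.strip())
    per_emotion = {e: counts.get(e, 0) for e in EMOTIONS}
    positive = sum(v for e, v in per_emotion.items() if e in POSITIVE_EMOTIONS)
    negative = sum(v for e, v in per_emotion.items() if e not in POSITIVE_EMOTIONS)
    return per_emotion, positive, negative
```

Feeding the function the lines of one participant's text file yields the row of counts shown in the figure above.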
After collecting the raw camera data, we consolidate the camera data and the questionnaire data so they can be compared; both must share the same type (e.g., a 0-4 Likert scale). This step is important for comparing the data we extract from the camera (emotion counts) with the questionnaire data (Likert scale). We use SPSS 19 to convert the camera data to frequencies using 20th percentiles (Gay and Airasian, 2003). Based on these frequencies, we create ranges that represent the Likert-scale responses (1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree) for the six negative emotions that we extract (Anger, Contempt, Disgust, Fear, Sadness and Surprise). For the positive emotion that we have (Joy), we do the opposite: the responses become (5 = Strongly disagree, 4 = Disagree, 3 = Neutral, 2 = Agree, 1 = Strongly agree). The final step is to calculate a usability score for the camera data analogous to the questionnaire score. In the previous section, we calculated the SUS usability score by multiplying 4 (the 0-4 Likert maximum) by 10 (questions) to get a questionnaire score out of 40, then multiplying by 2.5 to rescale it out of 100. For the camera, we multiply 7 (emotions) by 4 (0-4 Likert) to get a score out of 28, then multiply by 3.57 to rescale it out of 100.
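The conversion described above can be sketched numerically. The paper performs the percentile step in SPSS 19; the sketch below is an equivalent computation (the function names are our own), binning each emotion's counts into a 0-4 scale at the 20th, 40th, 60th and 80th percentiles, reversing the scale for the positive emotion Joy, and rescaling the seven-emotion total to 100:

```python
import statistics

def counts_to_likert(counts, reverse=False):
    """Bin raw emotion counts into 0-4 using 20th-percentile cut points;
    reverse=True flips the scale for the positive emotion (Joy)."""
    cuts = statistics.quantiles(counts, n=5)  # 4 cut points at 20/40/60/80%
    levels = [sum(c > cut for cut in cuts) for c in counts]  # each in 0..4
    return [4 - level for level in levels] if reverse else levels

def camera_usability_score(likert_values):
    """Sum of seven 0-4 Likert values (max 28), rescaled to 0-100
    (the paper rounds the factor 100/28 to 3.57)."""
    return sum(likert_values) * 100 / 28
```

With all seven emotions at the maximum level of 4, the score is 28 x (100/28) = 100, mirroring the questionnaire's 40 x 2.5 = 100.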
The next figure shows the camera data after conversion to the Likert scale:
Figure 8. Camera data after conversion to the Likert scale
The next figure shows the usability score (out of 100) for the camera data:
Figure 9. The usability score of the camera data (out of 100)
10. DISCUSSION
The main aim of the experiment is to find a new valid tool that measures usability or user experience automatically, and that can be more sensitive and easier to apply than the paper-based SUS questionnaire. To achieve this goal, we use two valid tools, the Intel interactive gesture camera and the SUS (System Usability Scale) questionnaire, to measure the usability and user experience of the Al-Quds University website.
We infer that there is total agreement between the usability results of the questionnaire and the usability results of the camera. To show this agreement, we explore the quantitative and qualitative data for both the camera and the questionnaire. For the quantitative data, the next table contains basic statistical information about the SUS questionnaire usability score (out of 100) and the camera usability score (out of 100); participants (N) = 70.
Table 3. Basic statistical information for usability: SUS questionnaire vs. camera

Instrument | Mean | Median | Std. Deviation | Variance | Minimum | Maximum | Average
SUS Questionnaire Usability Score | 55.68 | 57.50 | 8.732 | 76.254 | 35 | 73 | 55.9
Camera Usability Score | 53.4990 | 53.5500 | 12.07197 | 145.733 | 24.99 | 82.11 | 53.5
The table above shows that the basic statistics for the SUS questionnaire usability score and the camera usability score are very close, so the camera can be a good alternative to the SUS questionnaire.
Cronbach's alpha, which indicates the reliability of the measurements, is estimated as 0.498 for the questionnaire data and 0.373 for the camera data. These values indicate the strength of the questionnaire and camera as instruments used in the evaluation (Harrati, Bouchrika, Tari, Ladjailia, 2016). The SUS questionnaire has been a valid measurement tool for more than 25 years (Brooke, 2013). On the other hand, the Intel interactive gestures camera and the application we used are built on the FAPs model, so the two tools we use are concurrently valid tools.
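Cronbach's alpha can be recomputed from raw item responses. A minimal sketch, using the standard formula (our own illustration; each inner list holds one item's scores across all participants, i.e., one SUS question or one emotion scale):

```python
import statistics

def cronbach_alpha(item_columns):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the per-participant totals)."""
    k = len(item_columns)
    sum_item_var = sum(statistics.variance(col) for col in item_columns)
    totals = [sum(row) for row in zip(*item_columns)]  # per-participant sums
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))
```

Two perfectly correlated items yield an alpha of 1.0.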
There are many ways to express SUS usability scores, such as acceptability ranges, adjective ratings and grade scales. In 2009, Bangor, Kortum and Miller studied 3,500 SUS scores, gathered over a decade, for different systems and technologies. They found a correlation between SUS scores and how people evaluate systems and products: people who evaluate systems and products use adjectives like good, excellent and poor. On the other hand, SUS uses a scoring system ranging from 0 to 100, which tempts readers to interpret SUS scores as percentages, but they are not percentages (Brooke, 2013). Using percentile scores gives a more meaningful interpretation of a SUS score (Sauro, 2011; Bangor, Kortum and Miller, 2008). The next figure shows the original image of the grade rankings of SUS scores from "Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale," by A. Bangor, P.T. Kortum, and J.T. Miller, 2009, Journal of Usability Studies.
Figure 10. Grade rankings of SUS scores (Bangor et al., 2009)
Applying the Bangor, Kortum, and Miller rankings to the results of our experiment (SUS and camera), we get the next figures:
Figure 11. SUS Questionnaire Score results of our experiment
Figure 12. Camera Score results of our experiment
Applying the Sauro and Lewis (2015) SUS score grading to the usability score results of the questionnaire and camera, we get the next figure:
Figure 13. The questionnaire and camera usability grading scores
In Figure 13, we see how the score grades (out of 100) are distributed for the camera and the questionnaire. We also notice that the camera is more sensitive than the questionnaire in capturing usability.
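The grade rankings can also be applied programmatically. The sketch below uses the simple school-style grade bands of Bangor, Kortum and Miller (2009); the cut points are as we recall them from that scale and should be verified against Figure 10 (the Sauro and Lewis (2015) curved scale uses finer, different boundaries):

```python
def sus_grade(score):
    """Map a 0-100 usability score to a letter grade using the
    school-style bands of Bangor, Kortum and Miller (2009)."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"
```

Both average scores in this experiment (55.9 for the questionnaire and 53.5 for the camera) fall into the same band, consistent with the agreement reported above.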
The qualitative data that we collected in the sessions with users indicate that the users' opinions support the usability results of the camera and the questionnaire. As mentioned, the average SUS questionnaire usability score is 55.9, and the camera's is 53.5. These averages indicate that usability is below average (68) according to Sauro (2011) and Bangor, Kortum and Miller (2008). In the interviews, the participants presented key comments about the Al-Quds University website. We asked the participants about the usability of the website by having them mention negative and positive points of the site. One participant stated: "There are many negative points, like moving between pages and updating critical information on e-class." Another participant stated: "I think the worst thing in the website is finding the desired link that I want to use, and the unclear icons used in the site to instruct the user." When we asked a participant about the degree of ease or difficulty of accessing information from the website, he said: "The degree of ease of access to information is below median." In general, the participants had the impression that the website has some complicated usability issues and that work should be done to improve the site.
11. CONCLUSIONS
Human-computer interaction no longer depends only on traditional input devices; it has exceeded the old limits and has become very close to human-human interaction. In human-human interaction, facial expressions and gestures are a basic part of interaction alongside speech. Facial expressions and gestures have likewise become a basic part of human-computer interaction. The keyboard, mouse and other input devices have become old interaction tools; the new interaction tools are facial expressions and gestures, like raising the right eyebrow, raising the left eyebrow, smiling, etc. These new interaction tools are also indicators for evaluating the usability or user experience of any computer application or program. This hidden relation between emotions and body language (facial expressions and gestures) gives researchers the opportunity to investigate the affective state of a human using a computing device without explicitly communicating with the user. We think that reading the nonverbal cues of the user makes the evaluation of any application more efficient and more accurate, while decreasing annoyance to the user. The emotions, or digitally transformed/converted emotions, that result from human-computer interaction have the same characteristics and aspects as the emotions that result from human-human interaction, and we can apply the same category-based approaches of emotions to digitally transformed/converted emotions. Therefore, methods for monitoring these digitally transformed/converted emotions can be developed, and the monitored emotions can be used to measure the usability of applications. Digitally transformed/converted emotions can be a good and clear indicator of the usability of any application.
We can infer digitally transformed/converted emotions in traditional ways from physiological behavior, through blood pressure, eyelid movement, etc., but these traditional methods have problems that make them unsuitable for this use. The alternative methods read emotions from body language (facial expressions and gestures); examples are the Facial Action Coding System (FACS), which is used both in psychology and in computer science, and the closest model to it, the MPEG Facial Animation Parameters, used only in computer science. Moreover, we can use hand-over-face gestures via the analysis method of Mahmoud and Robinson.
This research develops a new model (MUDE) based on the methods (FACS, FAPs and Pease & Pease) that infer the digitally transformed/converted emotions resulting from human-computer interaction. The MUDE model can be used to measure user experience through the emotions the user expresses before, during and after interacting with an application. This new model makes measurement natural and unobtrusive, not bothering users during the evaluation and measuring process. In addition, it provides the needed information without asking users. Furthermore, it is a more accurate and truthful way for the researcher to determine the usability of an application and the experience of its user.
The method can be used to measure human impressions not only in the HCI domain, but also in diverse domains: in coffee shops, to gauge customers' impressions of new drinks, or in airports, to take feedback from passengers about the services provided to them. We think this model will have a wide range of uses in different domains.
Besides the model, this research introduces a new experiment that implements the MUDE model and seeks a new valid tool for measuring usability or user experience automatically, one that can be more sensitive and easier to apply than the paper-based SUS questionnaire. To achieve this goal, we used two valid tools, the Intel interactive gesture camera and the SUS (System Usability Scale) questionnaire, to measure the usability and user experience of the Al-Quds University website. The qualitative data collected in the sessions with users indicate that the users' opinions support the usability results of the camera and the questionnaire. As mentioned, the average SUS questionnaire usability score is 55.9 and the camera's is 53.5; these averages indicate that usability is below average (68) according to Sauro (2011) and Bangor, Kortum and Miller (2008). The results therefore give us an indication that the camera can be a valid tool for user experience and usability, and that there is a new way to develop accurate and reliable methods.
This field of new interactions has principles and rules that should be studied in order to extract the accurate indications and meanings of facial expressions and human gestures, and to develop applications and programs capable of understanding these cues.
To fully understand this subject, we must have sufficient knowledge of models like Ekman's and the FAPs. We need to develop new models to enhance the overall operation cycle; we also have experimental proof that these new interaction tools are real sources of usability and user experience data.
12. FUTURE WORK AND LIMITATIONS
We concluded that human-computer interaction tools have advanced to become very close to human-human interaction. We should work on new models that contribute to the development of human-computer interaction, and work hard to develop new applications and programs that are compatible with these interactions. We need to concentrate on applications that use emotions extracted from facial expressions and gestures to evaluate usability and user experience without bothering the users. However, there are many limitations and constraints, such as the slow improvement of hardware in the domain of monitoring facial expressions and gestures; only a limited number of companies manufacture the monitoring hardware tools. Another limitation is that the facial expressions of users do not always indicate their real psychological state: in some situations the face shows a sadness expression while in reality the person is not sad. A third limitation is that facial movement is only one of several nonverbal cues; we need to consider all the nonverbal cues that occur to understand the real psychological state. A further limitation is that people's faces differ: some have expressive faces, while others have faces that are difficult to read, so hardware tools like the camera will not be completely accurate. The final limitation is that reading facial expressions and gestures is not acceptable to people in some societies and may be considered a controversial topic in some cultures.
On the other hand, there are many open roads for future work. The main one is improving the hardware of the Intel interactive gestures camera so that it becomes more accurate, specific and sensitive. Another is developing its applications, for example a new application that reports the usability and user experience of any application directly, compared with the SUS questionnaire. The third direction is working on the MUDE model itself, which needs further development and wider coverage of more areas, such as evaluating customer satisfaction with any service in a cafe or restaurant.
13. PROBLEMS
1- The research topic is very difficult and has a limited number of references.
2- The camera that we used (the Intel interactive gesture camera) took a long period to be imported (about 6 months).
3- Many people refused to participate in the experiment (especially women).
4- There is no budget for scientific research at Al-Quds University.
5- This research needs a full-time researcher to accomplish it.
REFERENCE
[1] Andreassi , J.(2000): Human behavior and Physiological response,Forth Edition. Lawrence Erlbaum
Association Inc. Publishers, USA.
[2] Bangor, A.&et al. (2009): Determining What Individual SUS Scores Mean: Adding an Adjective
Rating Scale. Journal of Usability Studies, 4, pp. 114-123.
[3] Shneiderman, B.& et al. (2002): Social and Psychological Influences on Computer User Frustration.
(Technical Research Report, HCIL-2002-19) .University of Maryland Human – Computer-Interaction
Lab. Maryland.
16. International Journal of Managing Information Technology (IJMIT) Vol.10, No.2, May 2018
16
[4] Bettadapura, V. (2012): Face Expression Recognition and Analysis: The State of the Art College of
Computing, Georgia Institute of Technology. Cornell University Library.
[5] Branco , P. (2006):Computer-based Facial Expression Analysis for Assessing User
Experience.University of Minho. Portugal.
[6] Brave, S., Nass, C. (2003): Emotion in Human–Computer Interaction. In: A.Sears, J.Jacko (Eds.)
Human –Computer Interaction Fundamentals (pp.53-67) .Taylor and Frances Group, USA.
[7] Brooke, J. (2013): SUS: A Retrospective. Journal of Usability Studies, 8, pp. 29-40.
[8] Brooke, J. (1990): SUS - A quick and dirty usability scale. US. (http://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html, 19/1/2016)
[9] Biyaye, M. (2009): Non-verbal Emotional Dictionary: Automatic generation of facial expressions from text input. Delft University of Technology, Holland.
[10] Damasio, A. (1994): Descartes’ Error: Emotion, Reason, and the Human Brain, First Edition. Putnam
Publishing Group, New York.
[11] Darwin, C. (1872):The expression of the emotions in man and animals, London, UK.(http://darwin-
online.org.uk/content/frameset?pageseq=1&itemID=F1142&viewtype=text,20/4/2015)
[12] Dumas, J., Redish, J. (1999): A Practical Guide to Usability Testing, Revised Edition. Intellect Books, England.
[13] Doss, S., Raj, C. (2013): Intel Interactive gestures camera, USA. (https://software.intel.com/en-us/articles/developing-applications-using-intel-perceptual-computing-technology, 19/6/2015)
[14] Ekman, P., Friesen, W. (1971): Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17, pp. 124-129.
[15] Ekman, P., Friesen, W. (1975): Unmasking the Face. A guide to Recognizing Emotions From Facial
Clues, First Edition. Prentice-Hall, USA.
[16] Ekman, P. (1982): Emotion in the Human Face, Second Edition. Cambridge University Press, USA.
[17] Ekman, P. (2003): Emotions Revealed: Recognizing Faces and Feelings to Improve Communication
and Emotional Life, First Edition. Times Book, USA.
[18] Ekman, P. et al. (2010): The manual of FACS. (https://www.slideshare.net/wooo17/manual-1st)
[19] Fasel, B. & Luttin, J. (2002): Automatic facial expression analysis: a survey. Pattern Recognition.
Elsevier Science.
[20] Findlater, L. et al. (2009): Ephemeral adaptation: The use of gradual onset to improve menu selection performance. Proceedings of ACM CHI '09, 4-9 April 2009, USA. pp. 1655-1664.
[21] Faulkner, L. (2006): History of Usability Profession. usabilitybok.org. (http://www.usabilitybok.org/what-is-usability/history-of-usability, 30/5/2015)
[22] Grandjean, D. et al. (2008): Conscious emotional experience emerges as a function of multilevel, appraisal-driven response synchronization. Elsevier Science. Consciousness and Cognition, 17, pp. 484-495.
[23] Guo, F.(2012): More Than Usability: The Four Elements of User Experience. uxmatters.com. USA.
(http://www.uxmatters.com/mt/archives/2012/04/more-than-usability-the-four-elements-of-user-
experience-part-i.php, 19/6/2015)
[24] Harrati, N. et al. (2016): Exploring User Satisfaction for E-learning Systems via Usage-Based Metrics and System Usability Scale Analysis. Elsevier Science. Computers in Human Behavior, 61, pp. 463-471.
[25] Hartson, H. et al. (1996): Remote evaluation: The Network as an Extension of the Usability Laboratory. In: Proceedings of the Conference on Human Factors in Computing Systems (CHI '96), 13-18 April 1996, ACM, Canada. pp. 228-235.
[26] Hartson, R., Pyla, P. (2012): The UX Book: Process and Guidelines for Ensuring a Quality User Experience. Morgan Kaufmann Publisher, USA.
[27] Harty, J. (2011): Finding usability bugs with automated tests. Communications of the ACM, 54, pp.
44-49.
[28] Hassenzahl, M. (2013): User Experience and Experience Design. In: M. Soegaard , R.Dam (Eds.).
The Encyclopedia of Human-Computer Interaction (Chapter 3).The Interaction Design Foundation,
Aarhus, Denmark.
[29] Highfield, R. et al. (2009): How your looks betray your personality. New Scientist, Magazine issue 2695, UK. (https://www.newscientist.com/article/mg20126957-300-how-your-looks-betray-your-personality/, 17/5/2015)
[30] ISO 9241-11 (1998): ISO 9241-11 standard. Iso.org, Geneva, Switzerland. (http://www.iso.org/iso/catalogue_detail.htm?csnumber=16883, 22/5/2015)
[31] ISO 9241-210 (2010): ISO 9241-210 standard. Iso.org, Geneva, Switzerland. (https://www.iso.org/obp/ui/#iso:std:iso:9241:-210:ed-1:v1:en, 12/6/2015)
[32] Kabac, M. et al. (2015): An Evaluation of the DiaSuite Toolset by Professional Developers. Proceedings of the 6th Workshop on Evaluation and Usability of Programming Languages and Tools (PLATEAU), October 2015. ACM, USA. pp. 9-16.
[33] Kleinginna, P., Kleinginna, A. (1981): A categorized list of emotion definitions, with suggestions for
a consensual definition. In: Motivation and Emotion, (pp. 345–379).Springer. USA.
[34] Krug, S. (2006): Don't Make Me Think: A common sense approach to web usability, Second Edition. New Riders Publishing, Berkeley, USA.
[35] Lewis, H. (2012): Body Language: A Guide for Professionals, Third Revised Edition. SAGE Publications, India.
[36] Lewis, J., Sauro, J. (2009): The Factor Structure of the System Usability Scale. In: M. Kurosu (Ed.), Human Centered Design, HCII 2009. Springer-Verlag, Berlin, Heidelberg. pp. 94-103.
[37] Lewis, J. et al. (2015): Investigating the Correspondence Between UMUX-LITE and SUS Scores. In: Design, User Experience and Usability: Design Discourse (DUXU 2015). Springer, USA. pp. 204-211.
[38] Lewis, J. et al. (2013): UMUX-LITE - When There's No Time for the SUS. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13), 27 April-2 May 2013. ACM, France. pp. 2099-2102.
[39] Mahmoud, M., Robinson, P. (2011): Interpreting Hand-Over-Face Gestures. University of
Cambridge, UK
[40] Maslow, A. (2012): Toward a Psychology of Being, Third Edition. John Wiley and Sons, Canada.
[41] Mifsud, J. (2011): The Difference (and Relationship) Between Usability and User Experience. (http://usabilitygeek.com/the-difference-between-usability-and-user-experience, 19/6/2015)
[42] Moffatt, K. et al. (2008): Hover or tap? Supporting pen-based menu navigation for older adults. In: Proceedings of the 10th International ACM SIGACCESS Conference on Computers and Accessibility, 13-15 October 2008. ACM, Canada. pp. 51-58.
[43] Morville, P. (2004): User Experience Design. semanticstudios.com, Michigan, USA. (http://semanticstudios.com/user_experience_design/, 19/6/2015)
[44] Nielsen, J. (1993): Usability Engineering, First Edition. Morgan Kaufmann Publisher, USA.
[45] Orfanou, K. et al. (2015): Perceived Usability Evaluation of Learning Management Systems: Empirical Evaluation of the System Usability Scale. The International Review of Research in Open and Distributed Learning (IRRODL). Athabasca University, Canada. (http://www.irrodl.org/index.php/irrodl/article/view/1955/3262, 19/6/2016)
[46] Pease, A., Pease, B. (2004): The Definitive Book of Body Language: How To Read Others' Thoughts By Their Gestures, First Edition. Pease International, Australia.
[47] Ruoti, S., Seamons, K. (2016): Standard Metrics and Scenarios for Usable Authentication. In: Symposium on Usable Privacy and Security (SOUPS) 2016, 22-24 June 2016, Colorado. (https://www.usenix.org/system/files/conference/soups2016/way_2016_paper_ruoti_metrics.pdf, 19/7/2016)
[48] Russell, J., Fernandez-Dols, J. (1997): The Psychology of Facial Expression, First Edition. Cambridge University Press, UK.
[49] Sauro, J. (2011): Measuring Usability with the System Usability Scale (SUS). Measuringu.com, Denver, Colorado, USA. (http://www.measuringu.com/sus.php, 19/1/2016)
[50] Sauro, J. (2013): A Brief History of Usability. Measuringu.com, Denver, Colorado, USA. (https://measuringu.com/usability-history/, 30/5/2015)
[51] Shneiderman, B. (2002): Leonardo's Laptop: Human Needs and the New Computing Technologies, First Edition. MIT Press, USA.
[52] Shah, K. et al. (2014): A Survey on Human Computer Interaction Mechanism Using Finger Tracking. In: International Journal of Computer Trends and Technology (IJCTT). (https://pdfs.semanticscholar.org/6fb2/5d71f9d7d56d1c89a92ce5eab0b46f41e164.pdf, 16/5/2015)
[53] Soegaard, M. (2012): The History of Usability: From Simplicity to Complexity. Smashing Magazine, Germany. (http://www.smashingmagazine.com/2012/05/the-history-of-usability-from-simplicity-to-complexity/, 18/7/2015)
[54] Tullis, T., Albert, B. (2008): Measuring the User Experience: Collecting, Analyzing, and Presenting
Usability Metrics, First Edition. Morgan Kaufmann Publisher, USA.
[55] The Moving Picture Experts Group (MPEG) Website (2015): mpeg.chiariglione.org (http://mpeg.chiariglione.org/, 30/15/2015)
[56] Usability (2016):Usability.org (http://www.usability.org/, 19/1/2016)
[57] UXPA (2014): User Experience Professionals Association. uxpa.org.
USA.(http://uxpa.org/resources/definitions-user-experience-and-usability, 12/6/2015)
[58] Valstar, M. (2008): Timing is everything: A spatio-temporal approach to the analysis of facial actions. Imperial College of Science, Technology and Medicine, University of London, UK.
[59] Wikiversity (2015): wikiversity.org (https://en.wikiversity.org/wiki/Motivation_and_emotion/Book/2014/Plutchik%27s_wheel_of_emotions, 22/5/2015)
[60] Wikipedia Usability (2015): Usability. (https://en.wikipedia.org/wiki/Usability, 19/6/2015)
[61] Wikipedia Emotions (2015): Emotion
(https://en.wikipedia.org/wiki/Emotion#Ancient_Greece_and_Middle_Ages, 22/5/2015)
[62] Wikipedia Appraisal (2015): Appraisal Theory. (https://en.wikipedia.org/wiki/Appraisal_theory, 22/5/2015)
[63] Wikipedia FACS (2015): Facial Action Coding
System.(https://en.wikipedia.org/wiki/Facial_Action_Coding_System, 12/5/2015)
[64] Zhao, Y. (2012): Human Emotion Recognition from Body Language of the Head using Soft Computing Techniques. Ottawa-Carleton Institute for Computer Science (OCICS), Faculty of Engineering, University of Ottawa, Canada.