Implicit Human-Computer Interaction - Lecture 11 - Next Generation User Interfaces (4018166FNR)
1. 2 December 2005
Next Generation User Interfaces
Implicit Human-Computer Interaction
Prof. Beat Signer
Department of Computer Science
Vrije Universiteit Brussel
http://www.beatsigner.com
2. Beat Signer - Department of Computer Science - bsigner@vub.ac.be - December 5, 2016
Implicit Human-Computer Interaction
Over the last decade, we have seen a clear trend towards smart environments and living spaces, where sensors and information processing are embedded into everyday objects, as foreseen in Mark Weiser's vision of ubiquitous computing, with the goal of simplifying the use of technology
In Implicit Human-Computer Interaction (IHCI), we try to use contextual factors (e.g. various sensor inputs) to build human-centred anticipatory user interfaces based on naturally occurring human interactive behaviour
Context-aware computing can be used to design implicit human-computer interaction
3. Implicit Human-Computer Interaction …
Implicit Human-Computer Interaction (IHCI) is orthogonal to (traditional) explicit HCI
implicit communication channels (incidental interaction) can help in building more natural human-computer interaction
[https://www.interaction-design.org/encyclopedia/context-aware_computing.html]
4. Context
Context-aware systems often focus on location as the only contextual factor
However, even if location is an important factor, it is only one context dimension
Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves.
A.K. Dey, 2000
5. Example: Navigation
Various contextual factors can be taken into account when designing the interface of a car navigation system
current location (GPS)
traffic information
daylight
- automatically adapt screen brightness
weather
current user task
- e.g. touch is disabled while driving and only voice input can be used
…
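The adaptation logic described above can be sketched as a simple mapping from contextual factors to interface settings; the function name, context keys and setting values below are hypothetical illustrations, not part of any real navigation system.

```python
# Hypothetical sketch of context-driven interface adaptation for a
# car navigation system; all names and values are illustrative.

def adapt_interface(context):
    """Derive interface settings from contextual factors."""
    settings = {"brightness": "day", "touch_enabled": True, "voice_only": False}
    # Daylight: automatically adapt screen brightness.
    if not context.get("daylight", True):
        settings["brightness"] = "night"
    # Current user task: disable touch while driving, voice input only.
    if context.get("driving", False):
        settings["touch_enabled"] = False
        settings["voice_only"] = True
    return settings

print(adapt_interface({"daylight": False, "driving": True}))
```

Note that the user never explicitly requests these changes; the interface adapts implicitly as the sensed context changes.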
6. Everyday Examples
Systems that take user actions as input and try to output an action that proactively anticipates what the user needs
simple motion detectors at doors that open the door automatically to allow humans with shopping carts to pass through
escalators that move slowly when not in use but speed up when they sense a person passing the beginning of the escalator
smartphones and tablets automatically changing between landscape and portrait mode based on their orientation
smart meeting rooms that keep track of the number of people in a meeting room and alter the temperature and light appropriately
…
7. Exercise: Context-aware Digital Signage
8. Contextual Factors
Human factors
user
social environment
task
…
Physical environment
location
infrastructure
conditions
…
9. From Sensor Input to Context
How do we compute the perceived context from a single or multiple sensor inputs?
machine learning techniques?
rule-based solutions?
…
How should we model context?
e.g. generic context models without an application-specific notion of context
How to trigger implicit interactions based on context?
How to author new context elements?
relationships with sensor input, existing context elements as well as application logic
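A rule-based solution, one of the options listed above, can be sketched as a set of conditions over raw sensor readings that yield perceived context elements; the sensors, thresholds and context labels below are assumptions chosen for illustration.

```python
# Illustrative rule-based mapping from raw sensor inputs to a
# perceived context; sensors, thresholds and labels are assumptions.

def perceive_context(sensors):
    """Apply simple rules to one or multiple sensor inputs."""
    context = set()
    # Combine two sensor inputs into a single context element.
    if sensors.get("motion", 0) > 0.5 and sensors.get("people_count", 0) >= 2:
        context.add("meeting in progress")
    # Single-sensor rules.
    if sensors.get("ambient_light", 100) < 10:
        context.add("dark room")
    if sensors.get("noise_db", 0) > 70:
        context.add("noisy environment")
    return context

ctx = perceive_context({"motion": 0.9, "people_count": 4, "ambient_light": 5})
print(sorted(ctx))
```

Machine learning techniques would replace the hand-written thresholds with learned models, at the cost of less easily explainable behaviour.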
10. User-Context Perception Model (UCPM)
Musumba and Nyongesa, 2013
11. Things Going Wrong
What if the implicit interaction with a system goes wrong?
is it really wrong system behaviour or is the user just not aware of all the factors taken into account (awareness mismatch)?
The quality of implicit human-computer interaction as perceived by the user is directly related to the awareness mismatch
Fully-automated vs. semi-automated systems
sometimes it might be better not to fully automate the interaction since wrong implicit interactions might result in a very bad user experience
keep the user in the loop
12. Intelligibility
Improved system intelligibility might increase a user's trust, satisfaction and acceptance of implicit interactions
Users may ask the following questions (Lim et al., 2009)
What: What did the system do?
Why: Why did the system do W?
Why Not: Why did the system not do X?
What If: What would the system do if Y happens?
How To: How can I get the system to do Z, given the current context?
Explanations should be provided on demand only, in order to avoid information overload
feedback is easier for rule-based solutions than for machine learning-based approaches
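The point that rule-based solutions make such feedback easier can be illustrated with a sketch that answers "Why" and "Why Not" questions directly from the rule definitions; the rules, names and wording here are hypothetical.

```python
# Minimal sketch of on-demand "Why"/"Why Not" explanations for a
# rule-based context-aware system; rules and wording are illustrative.

RULES = [
    ("dim screen", lambda ctx: ctx.get("daylight") is False),
    ("disable touch", lambda ctx: ctx.get("driving") is True),
]

def why(action, ctx):
    """Explain whether and why an action was (not) triggered."""
    for name, condition in RULES:
        if name == action:
            if condition(ctx):
                return f"'{action}' was triggered because its rule condition held."
            return f"'{action}' was not triggered because its rule condition did not hold."
    return f"No rule is associated with '{action}'."

print(why("disable touch", {"driving": True}))
```

A machine learning-based system has no such explicit rule to point at, which is why generating these explanations is considerably harder there.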
13. Context Modelling Toolkit (CMT)
Multi-layered context modelling approach
seamless transition between end users, expert users and programmers
Beyond simple "if this then that" rules
reusable situations
Client-server architecture
server: context reasoning based on Drools
client: sensor input as well as applications
[Figure: CMT workflow connecting end users, expert users and programmers via templates, filled-in templates, situations, facts, functions, actions and rules]
Trullemans and Signer, 2016
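The idea of reusable situations that go beyond simple "if this then that" rules can be sketched as follows; note that this is NOT the actual CMT API (which is Drools-based), just a hypothetical illustration of composing named situations into rules.

```python
# Hypothetical sketch of reusable situations composed into rules, in
# the spirit of multi-layered context modelling; not the real CMT API.

def situation(name, predicate):
    """A named, reusable condition over the current facts."""
    return {"name": name, "holds": predicate}

in_meeting = situation("in meeting", lambda facts: facts.get("people", 0) >= 2)
after_hours = situation("after hours", lambda facts: facts.get("hour", 12) >= 18)

def rule(situations, action):
    """A rule fires its action when all of its situations hold."""
    def evaluate(facts):
        if all(s["holds"](facts) for s in situations):
            return action
        return None
    return evaluate

# The same situation can be reused across several rules.
mute_phone = rule([in_meeting], "mute phone")
quiet_mode = rule([in_meeting, after_hours], "enable quiet mode")
print(mute_phone({"people": 3}))
```

Because situations are first-class and named, end users can combine them without knowing how the underlying facts are derived from sensors.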
14. Context Modelling Toolkit (CMT) …
Trullemans and Signer, 2016
15. HCI and HCII in Smart Environments
Smart meeting room in the WISE lab
16. Some Guidelines for Implicit HCI
Always first investigate what users want/have to do
as a second step, see what might be automated
use context-awareness as a source to make things easier
The definition of a feature space with factors that will influence the system helps in realising context-aware implicit interactions
find parameters that are characteristic for a context to be detected and find means to measure those parameters
Always try to minimise the awareness mismatch
increase intelligibility by providing information about the used sensory information (context) in the user interface
17. Some Guidelines for Implicit HCI …
Designing proactive applications and implicit HCI is a very difficult task because the system has to anticipate what the user wants
always investigate whether a fully-automated solution is best or whether the user should be given some choice (control)
18. Affective Computing
Computing that takes into account the recognition, interpretation, modelling, processing and synthesis of human affects (emotions)
Implicit human-computer interaction can be based on recognised human emotions
Rosalind W. Picard
19. Emotions
External events
behaviour of others, change in a current situation, …
Internal events
thoughts, memories, sensations, …
Emotions are episodes of coordinated changes in several components (neurophysiological activation, motor expression, subjective feelings, action tendencies and cognitive processes) in response to external or internal events of major significance to the organism.
Klaus R. Scherer, Psychological Models of Emotion, 2000
20. Emotion Classification
Different models to classify emotions
Discrete models treat emotions as discrete and different constructs
Ekman’s model
…
Dimensional models characterise emotions via dimensional values
Russell’s model
Plutchik’s model
PAD emotional state model
…
21. Ekman’s Emotions Model
Theory of the universality of six basic facial emotions
anger
fear
disgust
surprise
happiness
sadness
Discrete categories can be used as labels for emotion recognition algorithms
multiple existing databases rely on Ekman’s model
22. Russell’s Circumplex Model of Affect
Emotions are mapped to two dimensions
valence (x-axis)
- intrinsic attractiveness or aversiveness
arousal (y-axis)
- reactiveness to a stimulus
23. Plutchik’s Wheel of Emotions
Three-dimensional "extension" of Russell’s circumplex model
8 basic emotions
joy vs. sadness
trust vs. disgust
fear vs. anger
surprise vs. anticipation
8 advanced emotions
optimism (anticipation + joy)
love (joy + trust)
submission (trust + fear)
25. PAD Emotional State Model
Representation of emotional states via three numerical dimensions
pleasure-displeasure
arousal-nonarousal
dominance-submissiveness
Example
anger is a quite unpleasant, quite aroused and moderately dominant emotion
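The three numerical dimensions make PAD easy to work with computationally: emotions become points in a 3D space and a measurement can be labelled with the nearest known emotion. The coordinate values below are rough illustrative choices, not values taken from the literature.

```python
# Sketch of the PAD model: emotions as points in the pleasure/arousal/
# dominance space; coordinates are illustrative, not from the literature.
import math

PAD = {
    "anger":   (-0.5, 0.6, 0.3),   # quite unpleasant, quite aroused, moderately dominant
    "fear":    (-0.6, 0.6, -0.4),
    "joy":     (0.8, 0.5, 0.4),
    "sadness": (-0.6, -0.4, -0.3),
}

def closest_emotion(p, a, d):
    """Label a PAD measurement with the nearest known emotion."""
    return min(PAD, key=lambda e: math.dist(PAD[e], (p, a, d)))

print(closest_emotion(-0.5, 0.55, 0.2))
```

Dimensional models thus support simple distance-based reasoning that discrete models such as Ekman's do not offer directly.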
26. Self-Assessment of PAD Values
The Self-Assessment Manikin (SAM) is a language-neutral form that can be used to assess the PAD values
each row represents five values for one of the dimensions
- pleasure
- arousal
- dominance
27. Emotion Recognition
Emotions can be manifested via different modalities
acoustic features (voice pitch, intonation, etc.)
verbal content (speech)
visual facial features
body pose and gestures
biosignals (physiological monitoring)
- pulse, heart rate, …
In general, artificial intelligence algorithms are used for an accurate recognition of emotions
Potential multimodal fusion of multiple modalities
improve emotion recognition accuracy by observing multiple modalities
28. Acoustic Feature Recognition
Behaviour and evolution of acoustic features over time is meaningful for emotion detection
Typical features
intonation
intensity
pitch
duration
29. Speech-based Emotion Recognition
Recognition of emotions from speech content (e.g. via a speech recogniser) is based on typical methods such as
bag of words (unigrams)
n-gram language models
Typical features
emotion dictionaries
lattices
orthography (punctuation, capitalisation, emoticons)
WordNet
syntax
semantic roles
world knowledge
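The combination of a bag of words (unigrams) with an emotion dictionary can be illustrated with a toy classifier; the word lists here are made up for demonstration and real systems would use much larger dictionaries and learned weights.

```python
# Toy bag-of-words classifier with an emotion dictionary; the word
# lists are made up for demonstration purposes.
from collections import Counter

EMOTION_WORDS = {
    "joy": {"great", "happy", "wonderful", "love"},
    "anger": {"hate", "terrible", "furious", "awful"},
}

def classify(text):
    """Count emotion-dictionary hits in the bag of words (unigrams)."""
    bag = Counter(text.lower().split())
    scores = {e: sum(bag[w] for w in words) for e, words in EMOTION_WORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(classify("what a wonderful and happy day"))
```

Features such as n-grams, syntax or semantic roles would extend this beyond isolated unigrams, for instance to handle negation ("not happy").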
30. Facial Emotion Recognition
Find face parts
use orientation or prominent features such as the eyes and the nose
Extract facial features
geometry-based
appearance-based (textures)
Classification through
support vector machines
neural networks
fuzzy logic systems
active appearance models
31. Facial Action Coding System (FACS)
Used to describe changes, contractions or relaxations of the muscles of the face
Based on so-called Action Units (AUs)
description for a component movement or facial action
a combination of AUs leads to facial expressions
- e.g. sadness = AU 1+4+15
http://www.cs.cmu.edu/~face/facs.htm
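The mapping from AU combinations to expressions is essentially a lookup, as sketched below; only the sadness combination (AU 1+4+15) comes from the slide, while the other entries are common textbook examples rather than an authoritative FACS table.

```python
# Illustrative lookup from combinations of FACS Action Units (AUs) to
# facial expressions; only sadness = AU 1+4+15 is from the slide, the
# other combinations are common textbook examples.

EXPRESSIONS = {
    frozenset({1, 4, 15}): "sadness",     # inner brow raiser + brow lowerer + lip corner depressor
    frozenset({6, 12}): "happiness",      # cheek raiser + lip corner puller
    frozenset({1, 2, 5, 26}): "surprise",
}

def expression_for(active_aus):
    """Map a set of detected Action Units to a facial expression."""
    return EXPRESSIONS.get(frozenset(active_aus), "unknown")

print(expression_for({1, 4, 15}))
```

In practice, a recogniser first detects individual AUs from facial features and only then combines them into expression labels.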
32. Body Pose and Gestures
Body language carries rich emotional information
body movement, gestures and posture
relative behaviour (e.g. approach/depart, looking/turning away)
Detailed features can be extracted from motion capture
Biosignals
Different emotions lead to different biosignal activities
anger: increased heart rate and skin temperature
fear: increased heart rate but decreased skin temperature
happiness: decreased heart rate and no change in skin temperature
Advantages
hard to control deliberately (fake)
can be continuously processed
Disadvantages
user has to be equipped with sensors
Challenge
wearable biosensors
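The heart-rate/skin-temperature patterns listed above translate directly into a rule-based sketch. The signed deltas and thresholds are simplified assumptions; a real system would use classifiers trained on continuous sensor streams.

```python
def emotion_from_biosignals(heart_rate_delta, skin_temp_delta):
    """Map signed changes in heart rate and skin temperature to the
    three emotion patterns from the slide (rule-based sketch)."""
    if heart_rate_delta > 0 and skin_temp_delta > 0:
        return "anger"      # both increased
    if heart_rate_delta > 0 and skin_temp_delta < 0:
        return "fear"       # heart rate up, temperature down
    if heart_rate_delta < 0 and skin_temp_delta == 0:
        return "happiness"  # heart rate down, temperature unchanged
    return "unknown"

print(emotion_from_biosignals(12, -0.4))  # fear
```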
Emotiv EPOC Neuroheadset
Non-invasive EEG device
14 sensors
Integrated gyroscope
Wireless
Low cost
Average sensor sensitivity
mainly due to the non-invasive sensors
Emotiv EPOC Neuroheadset …
From Signals to Labelled Emotions
Five potential channels
visual: face
visual: body movement
acoustic: speech content
acoustic: acoustic features
physiological: heart rate, blood pressure, temperature, GSR, EMG
Associating emotion descriptors
machine learning problem
SVMs, HMMs, NNs?
rely on only single modality or fusion of multiple modalities?
associate emotion descriptors before or after fusing the
modalities?
- i.e. feature- or decision-level fusion?
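The difference between the two fusion strategies can be sketched in a few lines: either concatenate per-modality features before a single classification step, or classify each modality separately and combine the decisions (here by majority vote). The classifier and labels below are placeholders.

```python
from collections import Counter

def feature_level_fusion(face_feat, speech_feat, bio_feat, classify):
    """Concatenate per-modality feature vectors, then classify once."""
    return classify(face_feat + speech_feat + bio_feat)

def decision_level_fusion(decisions):
    """Classify each modality separately, then majority-vote
    over the per-modality emotion labels."""
    return Counter(decisions).most_common(1)[0][0]

# Decision-level example with hypothetical per-modality labels
print(decision_level_fusion(["joy", "joy", "neutral"]))  # joy
```

Feature-level fusion lets the classifier exploit correlations between modalities, while decision-level fusion tolerates a missing or failed modality more gracefully.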
Synthesis of Emotions
Intelligent agents support
social interactions with
users (showing emotions)
real life (robots)
virtual reality (virtual agents)
"Characters with a brain"
reason about environment
understand and express emotion
communicate via speech and gesture
applications
- e-learning
- robots and digital pets
- …
Kismet, MIT AI Lab
Virtual Characters
Virtual character with human behaviour that
supports face-to-face human-machine interaction
Basic physical behaviour
walking, grasping
Non-verbal expressive behaviour
gestures, facial expression (emotion), gaze
Spontaneous and reactive behaviour
responsiveness to events
Max Headroom, 1987
Video: Text-driven 3D Talking Head
Effectors in Emotion Synthesis
Facial expressions
emotion categories have associated facial action programs
Facial Action Coding System (FACS)
Gestures
deictic, iconic, …
timing and structure are important
Gaze
roles of gaze: attention, dialogue regulation, deictic reference
convey intentions, cognitive and emotional state
Head movement
during conversation head is constantly in motion
nods for affirmation, shakes for negation, …
EmoVoice Framework
Real-time recognition of
emotions from acoustic
speech properties
uses features from pitch,
energy, duration, voice
quality and spectral
information
uses the Open Sound
Control (OSC) protocol
mirroring of emotions to the
user
- http://www.informatik.uni-augsburg.de/lehrstuehle/hcm/projects/tools/emovoice/
[screenshots: joy, sadness, anger]
Homework
Read the following paper that is available
on PointCarré (papers/Weiser 1991)
M. Weiser, The Computer for the 21st Century, ACM Mobile
Computing and Communications Review, July 1991
References
M. Weiser, The Computer for the 21st Century,
ACM Mobile Computing and Communications
Review, July 1991
http://dx.doi.org/10.1145/329124.329126
A. Schmidt, Context-Awareness, Context-Aware User
Interfaces and Implicit Interactions
https://www.interaction-design.org/encyclopedia/context-aware_computing.html
G.W. Musumba and H.O. Nyongesa, Context Awareness
in Mobile Computing: A Review, International Journal of
Machine Learning and Applications, 2(1), 2013
http://dx.doi.org/10.4102/ijmla.v2i1.5
References …
B.Y. Lim, A.K. Dey and D. Avrahami, Why and
Why Not Explanations Improve the Intelligibility of
Context-aware Intelligent Systems, Proceedings of CHI
2009, Boston, USA, April 2009
https://doi.org/10.1145/1518701.1519023
S. Trullemans and B. Signer, A Multi-layered Context
Modelling Approach for End Users, Expert Users and
Programmers, Proceedings of SERVE 2016,
International Workshop on Smart Ecosystems cReation
by Visual dEsign, Bari, Italy, June 2016
http://beatsigner.com/publications/trullemans_SERVE2016.pdf
References …
J.A. Russell, A Circumplex Model of Affect,
Journal of Personality and Social Psychology,
39(6), 1980
https://www2.bc.edu/james-russell/publications/Russell1980.pdf
R.W. Picard, Affective Computing, MIT Technical Report
No. 321, 1995
http://affect.media.mit.edu/pdfs/95.picard.pdf
Expressive Text-driven 3D Talking Head
http://www.youtube.com/watch?v=TMxKcbQcnK4