11. References
Outline: Motivation · Related works · Background · System description · Demo · Future plan
Cassell, J., Sullivan, J., Churchill, E., & Prevost, S. (Eds.). (2000). Embodied conversational agents. MIT Press.
Cassell, J., & Thórisson, K. R. (1999). The power of a nod and a glance: Envelope vs. emotional feedback in animated conversational agents. Applied Artificial Intelligence, 13(4-5), 519-538.
Yalçın, Ö. N., & DiPaola, S. (2018). A computational model of empathy for interactive agents. Biologically Inspired Cognitive Architectures, 26.
Yalçın, Ö. N. (2020). Empathy framework for embodied conversational agents. Cognitive Systems Research, 59, 123-132.
Yalçın, Ö. N. (2019). Evaluating empathy in artificial agents. arXiv preprint arXiv:1908.05341.
Yalçın, Ö. N., & DiPaola, S. (2019, August). M-Path: A conversational system for the empathic virtual agent. In Biologically Inspired Cognitive Architectures Meeting (pp. 597-607). Springer, Cham.
Yalçın, Ö. N., & DiPaola, S. (2019). Evaluating levels of emotional contagion with an embodied conversational agent. In Proceedings of the Annual Conference of the Cognitive Science Society.
Yalçın, Ö. N. (2018, October). Modeling empathy in embodied conversational agents. In Proceedings of the 2018 International Conference on Multimodal Interaction (pp. 546-550). ACM.
Nixon, M., DiPaola, S., & Bernardet, U. (2018, August). An eye gaze model for controlling the display of social status in believable virtual humans. In 2018 IEEE Conference on Computational Intelligence and Games (CIG) (pp. 1-8). IEEE.
Yee, N., Bailenson, J. N., & Rickertsen, K. (2007, April). A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1-10). ACM.
12. References (continued)
Gratch, J., Rickel, J., André, E., Cassell, J., Petajan, E., & Badler, N. (2002). Creating interactive virtual humans: Some assembly required. IEEE Intelligent Systems, 17(4), 54-63.
Groom, V., Nass, C., Chen, T., Nielsen, A., Scarborough, J. K., & Robles, E. (2009). Evaluating the effects of behavioral realism in embodied agents. International Journal of Human-Computer Studies, 67(10), 842-849.
Schulman, D., Bickmore, T., & Sidner, C. (2011, March). An intelligent conversational agent for promoting long-term health behavior change using motivational interviewing. In 2011 AAAI Spring Symposium Series.
Schulman, D., & Bickmore, T. (2009, April). Persuading users through counseling dialogue with a conversational agent. In Proceedings of the International Conference on Persuasive Technology (p. 25). ACM.
Utami, D., Bickmore, T., Nikolopoulou, A., & Paasche-Orlow, M. (2017, August). Talk about death: End of life planning with a virtual agent. In International Conference on Intelligent Virtual Agents (pp. 441-450). Springer, Cham.
Wang, C., Bickmore, T., Bowen, D. J., Norkunas, T., Campion, M., Cabral, H., ... & Paasche-Orlow, M. (2015). Acceptability and feasibility of a virtual counselor (VICKY) to collect family health histories. Genetics in Medicine, 17(10), 822.
Bickmore, T., & Ring, L. (2010, September). Making it personal: End-user authoring of health narratives delivered by virtual agents. In International Conference on Intelligent Virtual Agents (pp. 399-405). Springer, Berlin, Heidelberg.
Kimani, E., Trinh, H., Pusateri, A., Paasche-Orlow, M. K., & Magnani, J. W. (2018, November). Managing chronic conditions with a smartphone-based conversational virtual agent. In IVA (pp. 119-124).
13. References (continued)
Bickmore, T., Rubin, A., Yeksigian, C., Sawdy, M., & Simon, S. R. (2018, November). User gaze behavior while discussing substance use with a virtual agent. In Proceedings of the 18th International Conference on Intelligent Virtual Agents (pp. 353-354). ACM.
Ring, L., Bickmore, T., & Pedrelli, P. (2016). An affectively aware virtual therapist for depression counseling. In ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) workshop on Computing and Mental Health.
Shamekhi, A., Murali, P., Parmar, D., & Bickmore, T. (2019, July). Stagecraft for scientists: Exploring novel interaction formats for virtual presenter agents. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents (pp. 10-12). ACM.
Editor's Notes
Even though we have texting and voice calls to communicate with others, we are still eager to meet people in person, or at least see them through video calls. Why do people still go to in-person appointments for therapy and counseling when they could talk over a voice call instead of meeting face to face? Nonverbal behaviour is meaningful for building personal relationships: we can feel emotional and social interaction from physical appearance, which is not possible in voice or text.
For example, we can recognize confusion or boredom from someone's facial expression while talking. It is therefore reasonable to embody agents with natural communication capabilities, creating a familiar basis for interacting with humans. Bodily feedback such as gestures and gaze has been found to indicate shared contextual ground between the user and the agent (Cassell & Thórisson, 1999), and interactions with virtually embodied agents were perceived more positively than those without one (Yee et al., 2007).
An embodied agent is different from other voice and chat bots: it has a human-like physical appearance, with facial expressions, gaze, head and body gestures, and verbal behaviours driven by AI.
Justine Cassell, then a professor at the MIT Media Lab (Gesture and Narrative Language group), co-edited the book "Embodied Conversational Agents" in 2000.
It describes research in all aspects of the design, implementation, and evaluation of embodied conversational agents, as well as details of specific working systems.
Many of the chapters are written by multidisciplinary teams of psychologists, linguists, computer scientists, artists, and interface-design researchers, addressing how humans perceive embodied agents.
A great deal of research on embodied agents is under way around the world, for example at USC and MIT. Timothy Bickmore has been researching relational agents for health counseling applications from 2013 to now. Co-presenting author.
Tanya: an affectively aware agent for depression counseling.
camram: counsels patients at home about medication adherence, stress management, advanced care planning, and spiritual support, and provides referrals to palliative care services when needed.
Gabe: aims to provide age-appropriate health promotion and to identify and reduce health risks.
The USC Institute for Creative Technologies is also working on virtual interactive training agents for veterans (2015-present). Some of the agents are:
Motivational Interviewing Novice Demonstration (MIND): therapist-client interaction with a simulated veteran, using a multiple-choice-style progression through a therapy session.
CLOVR: an empathetic listener that analyzes the user's emotional state and responds adaptively, while also providing conversational feedback loops and non-verbal behavior (ongoing, 2017 to present; no link available).
Immersive Naval Officer Training System (INOTS) Counseling: a laptop training application used to teach interpersonal skills to United States (US) Navy junior leaders.
Beyond this, the iVizLab in SIAT is also working on creating effective, human-like embodied agents for real-life applications. Projects include:
Engagement with Artificial Intelligence through Natural Interaction Models,
An Eye Gaze Model for Controlling the Display of Social Status in Believable Virtual Humans,
Evaluating Levels of Emotional Contagion in an Embodied Conversational Agent,
A computational model of empathy for interactive agents,
Empathy Framework for Embodied Conversational Agents,
Evaluating Empathy in Artificial Agents.
https://ivizlab.org/publications/ from 2015 to present
(1) Client-Side Implementation: this covers all front-end client components, such as the 3D model, animation, and lip synchronization. (2) Server-Side Implementation: this covers all server-side components, such as the dialogue system, Speech-to-Text (STT) recognition, and Text-to-Speech (TTS) synthesis. The user interacts with the client interface, where the system continuously processes user input from the device microphone and obtains the desired output from the server.
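The continuous capture-send-play cycle described above can be sketched as a simple loop. This is an illustrative sketch only: the actual client runs in Unity 3D, and the functions passed in for microphone capture, the server round trip, and playback are hypothetical stand-ins, not part of any real API.

```python
def client_loop(mic_utterances, send_to_server, play_audio):
    """Client-side event loop: read audio captured from the device
    microphone, forward each utterance to the server, and play back
    the synthesized reply (which the lip-sync layer would consume).

    mic_utterances -- iterable of recorded utterances (bytes)
    send_to_server -- callable(bytes) -> bytes, the server round trip
    play_audio     -- callable(bytes), hands audio to the playback layer
    """
    for utterance in mic_utterances:
        if not utterance:            # skip empty captures / silence
            continue
        reply_audio = send_to_server(utterance)
        play_audio(reply_audio)
```

In the running system the iterable would be an endless microphone stream, and `play_audio` would feed Unity's audio source so the lip-sync component can animate the 3D model against it.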
The client side will be implemented in Unity 3D; Adobe Fuse and Maya will be used to create the 3D model and animation, and SALSA LipSync Pro will handle real-time lip synchronization. The server side will be implemented on the IBM Watson Cloud Platform, using the Speech to Text, Text to Speech, Watson Assistant, and Voice services of the IBM Watson cloud. A conversation dialogue flow will need to be created for Watson Assistant.
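Under this design, one server-side conversational turn might look like the following sketch. The call shapes (`recognize`, `message`, `synthesize`) follow the IBM Watson Python SDK (`ibm_watson` package: SpeechToTextV1, AssistantV2, TextToSpeechV1), but the service clients are injected so the flow can be exercised without credentials; authentication, service URLs, and the Assistant dialogue flow itself are omitted, and the indexing into the JSON results assumes a typical single-result response.

```python
def handle_turn(audio_bytes, stt, assistant, tts, assistant_id, session_id):
    """One conversational turn on the server: user audio in,
    agent reply text plus synthesized reply audio out."""
    # 1. Speech to Text: transcribe the user's utterance.
    stt_result = stt.recognize(
        audio=audio_bytes, content_type="audio/wav"
    ).get_result()
    user_text = stt_result["results"][0]["alternatives"][0]["transcript"]

    # 2. Watson Assistant: run the text through the dialogue flow.
    reply = assistant.message(
        assistant_id=assistant_id,
        session_id=session_id,
        input={"message_type": "text", "text": user_text},
    ).get_result()
    reply_text = reply["output"]["generic"][0]["text"]

    # 3. Text to Speech: synthesize the reply for playback and lip sync.
    reply_audio = tts.synthesize(reply_text, accept="audio/wav").get_result().content
    return reply_text, reply_audio
```

The returned audio is what the client loop plays back, closing the microphone-to-speech round trip described above.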