Influence of Upper Body Pose Mirroring in
Human-Robot Interaction
Luis A. Fuente¹, Hannah Ierardi¹, Michael Pilling² and Nigel T. Crook¹
¹Department of Computing and Communication Technologies
²Department of Psychology
Oxford Brookes University, Oxford OX3 0BP, England
{lfuente-fernandez, 11092652, mpilling, ncrook}@brookes.ac.uk
Abstract. This paper explores the effect of upper body pose mirroring
in human-robot interaction. A group of participants is used to evaluate
how imitation by a robot affects people’s perception of their conversation
with it. A set of twelve questions about the participants’ university expe-
rience serves as a backbone for the dialogue structure. In our experimen-
tal evaluation, the robot reacts in one of three ways to the human upper
body pose: ignoring it, displaying its own upper body pose, and mirroring
it. The manner in which the robot behaviour influences human appraisal
is analysed using the standard Godspeed questionnaire. Our results
show that robot body mirroring/non-mirroring influences the perceived
humanness of the robot. The results also indicate that body pose mirror-
ing is an important factor in facilitating rapport and empathy in human
social interactions with robots.
Keywords: body-pose mirroring, empathy, rapport, anthropomorphism
1 Introduction
Mirroring is a natural social behaviour demonstrated by humans whereby a par-
ticipant in a social interaction will often tend to subconsciously mirror another’s
body posture. There is significant evidence from psychological studies that peo-
ple in groups have a tendency to engage in this mirroring behaviour [1, 2]. People
are often not conscious of the fact that they are mirroring someone’s body pose
or that someone is mirroring them [3]. These studies have also experimentally
shown that this non-verbal synchrony in conversation is preserved over time and
has a positive influence in creating rapport, increasing empathy, and facilitating
social interaction [4].
In this article, we investigate the effect that upper body pose mimicry has
on how humans perceive robotic systems. The proposed system recognises upper
body poses from camera images and produces upper body gestures (torso, head
and arms) in the humanoid Nao robot. The robot’s text-to-speech output is also
used to achieve natural communication. A set of twelve predefined questions is
considered to engage the participants in communication with the robot in one of
the following three different conditions: the robot mirrors the upper body pose of
a human, the robot generates non-mirroring human-like upper body poses, and
the robot adopts a static body pose. The Godspeed questionnaire [12] is used
to measure the five key concepts of human-robot interaction and to evaluate the
effect of body pose mirroring in human-robot interaction (HRI).
The rest of this paper is structured as follows: Section 2 reviews existing
studies on the influence of behaviour mimicry in human perception of robotic
systems, Section 3 includes a description of the experimental setup, the method-
ology and the evaluation method, Section 4 presents statistical findings, and
Section 5 concludes with a brief discussion of the results and future work.
2 Related Work
Several recent studies have assessed the influence that a robot’s non-verbal be-
haviours have on the way humans perceive and interact with robots. Salem et al.
[5] found that human beings have a tendency to anthropomorphise more (to like
the robot more), report greater shared reality and show increased intention for
future interaction with robots when they used bodily gestures with speech, as
opposed to speaking using a static pose. The same authors also suggested that
a robot’s use of gesture with speech tends to enhance people’s performance on
robot-guided tasks [6]. Similarly, Riek et al. [7] demonstrated that the manner in
which robots execute bodily gestures can have a major influence on the degree
to which people are willing to cooperate with them. Further, some robot gesture
combinations (e.g. gazing and pointing) also increase a person’s tendency to
reproduce the behaviour of a robot, resulting in entrainment [8]. In this sense,
Kim et al. [9] stated that it is possible to use gesture manipulations to influence
the perceived personality of social robots, and Gratch et al. [10] showed that contingent non-
verbal behaviours (i.e. behaviours tightly coupled to what the human speaker is
doing) can create rapport with the human participant.
The use of mirroring behaviours by virtual characters and robots has been
shown to improve empathy and create rapport with the humans that interact
with them. Gonsior et al. [11] studied the impact on human-robot interaction of
a robot that mirrors facial expressions. In their study, the human participants
engaged in a communicative task with the robot under one of three experimental
conditions: the robot displayed no facial expressions, the robot mirrored the par-
ticipant’s facial expression, and the robot displayed facial expressions according
to its internal model which indirectly mirrored the participant’s facial expres-
sion. Each participant completed two post-experiment questionnaires. The first
evaluated for empathy and subjective performance, and the second consisted
of the five Godspeed questionnaires [12]. Their results indicated that mirroring
conditions received higher ratings than neutral conditions. These results have
also been supported by Kanda et al. [17], who indicated that cooperative
gestures from a robot (i.e. gestures synchronised with the human participant)
increase the human’s impression of the robot’s reliability and sympathy.
Similarly, Bailenson and Yee [14] found that an embodied artificial agent that
mimicked a human participant’s head movements was perceived as being more
persuasive and received more positive trait ratings than non-mimicking agents.
On the other hand, Riek et al. [15] did not find that head gesture mirroring had
a noticeable impact on creating rapport between a human and a robot. However,
the authors acknowledge that the small sample size and other possible factors
concerning their experimental setup may have influenced this result.
3 Method
To date, we have not been able to find any research that evaluates the effect
that upper body mirroring during human-robot interaction has on the anthro-
pomorphism, animacy, likeability, perceived intelligence and perceived safety of
the robot. This study seeks to investigate this through a series of experiments in
which participants engage in spoken interactions with a robot. The empathy be-
tween the participants and humanoid robot was examined under three different
conditions:
A The robot mirrors the user’s body poses during the interaction with occa-
sional head nodding.
B The robot produces pre-programmed non-mirroring gestures with occasional
head nodding during the interaction.
C The robot remains static for the duration of the interaction apart from occa-
sional head nodding.
3.1 Hypotheses
Given the importance of mimicry in human-human communication, we wondered
whether it might also be important in human-robot communication. Thus, our
hypotheses are as follows:
H1 The participants will rate the likeability and perceived safety of the robot
more highly in condition A than in conditions B and C. This is motivated by
the work in [1], which showed that posture sharing and rapport are positively
correlated in humans and that this correlation holds over time, promoting a
sense of safety and encouraging each participant in the conversation.
H2 The participants will rate the anthropomorphism, animacy and perceived
intelligence more positively in conditions A and B than in C. This hypoth-
esis is prompted by the idea that people will show the most appreciation
for a robot that mimics their upper-body and head gestures in real time.
This motivation comes from the work in [17], whose authors ran an experiment
with the WowWee Alive Chimpanzee robot, capable of making head nods and
facial expressions as well as detecting human head nods. They found that
temporal-cooperative behaviours lead to a more positive interaction with
robots and enable better human-robot rapport when compared with a robot
that does not employ such behaviours.
3.2 Experimental Validation
In order to test the proposed hypotheses, an experiment was designed in which
human participants engage in a spoken interaction with the humanoid Nao
robot. The Nao robot is set up to allow upper body
motion only, and a depth camera is used to track body poses and movements.
During the interaction, the Nao robot speaks to the participant while acting
according to one of the three possible conditions. It also nods at random inter-
vals throughout the duration of the experiment. Subjects are divided into three
different groups depending on the following conditions applied:
A Mirroring: The robot mirrors the participant’s upper body pose. The pose of
the participant is estimated using a depth camera. The output of the depth
camera is first normalised and then processed to extract the rotational angles
of each shoulder and elbow in the participant’s body (eight angles in total).
These are then scaled and mapped onto the corresponding joint angles for
the robot. It is relevant to note that the Nao robot does not continuously
mirror the participant’s body pose, since this would lead to unrealistic
copycat behaviour. Instead, the robot intermittently mirrors the participant’s
body pose once the participant has held that pose for a certain period of
time (∼5 s).
B No-Mirroring: The robot produces pre-programmed human-like non-mirroring
gestures (see Figures 1(b)-(e)). These gestures are produced only while the
robot is asking the participant a question.
C Static: The robot remains static with no body movements.
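The condition A pipeline described above (track the pose, extract and clamp the joint angles, and mirror only once the participant has held a pose for ∼5 s) can be sketched as follows. This is a minimal sketch under stated assumptions: the joint-limit values, the `MirrorController` class and its tolerance are illustrative, not the authors' implementation.

```python
import time

# Illustrative joint limits in radians for one arm (assumed values, not from the paper).
JOINT_LIMITS = {
    "LShoulderPitch": (-2.0857, 2.0857),
    "LShoulderRoll": (-0.3142, 1.3265),
    "LElbowYaw": (-2.0857, 2.0857),
    "LElbowRoll": (-1.5446, -0.0349),
}

def clamp(value, low, high):
    """Scale-and-clamp step: keep a human joint angle inside the robot's range."""
    return max(low, min(high, value))

class MirrorController:
    """Intermittent mirroring: copy a pose only after it has been held ~5 s."""

    def __init__(self, dwell_seconds=5.0, tolerance=0.1):
        self.dwell_seconds = dwell_seconds
        self.tolerance = tolerance   # max per-joint drift to still count as "same pose"
        self.candidate = None        # pose currently being held by the participant
        self.held_since = None

    def update(self, pose, now=None):
        """Feed one tracked pose (dict of joint name -> angle); return a clamped
        target pose once the participant has held it long enough, else None."""
        now = time.monotonic() if now is None else now
        if self.candidate is None or any(
            abs(pose[j] - self.candidate[j]) > self.tolerance for j in pose
        ):
            self.candidate = dict(pose)   # new pose: restart the dwell timer
            self.held_since = now
            return None
        if now - self.held_since >= self.dwell_seconds:
            self.held_since = now         # avoid re-triggering on every frame
            return {
                j: clamp(a, *JOINT_LIMITS[j])
                for j, a in self.candidate.items()
                if j in JOINT_LIMITS
            }
        return None
```

In the real system the pose would arrive from the depth-camera tracker every frame (eight shoulder and elbow angles across both arms), and the returned target pose would be sent to the robot's joint controllers.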
Fig. 1. Predefined movements of the Nao robot during the experiments: (a) the
resting/static pose; (b)-(e) the predefined upper body robotic gestures.
It should be noted that head nodding was included to reduce the impact of
any bias arising from the movement of the robot in conditions A and B. It was
deemed that without the head nodding, the outcome of the experiment might
be influenced by the fact that the robot is animated in condition A while the
participant is speaking and not in condition B, as opposed to being influenced
by the different types of whole body movement in each case (i.e. mirroring as
opposed to non-mirroring poses). The importance of the head nodding in condition
C is twofold. First, it allows participants to address the questions regarding
animacy in the post-experiment questionnaire. (Notably, participants waited for
the robot to move first before engaging in non-verbal communication with it.)
Second, it eases the comparison of condition C with the remaining conditions,
since a complete lack of movement in the robot negatively affects the
participant’s reaction to and rapport with it. In all
cases the robot waits until the participant finishes speaking before asking the
next question in the sequence.
3.3 Experimental Setup
Forty test subjects took part in this experiment, of which 9 were female and 31
were male. The participants were students from Oxford Brookes University and
had no previous experience in the robotics field. They were between the ages of 19
and 35 (mean of 21.67 and s.d. of 3.05). The distribution of the subjects over the
experimental conditions was 17 for A, 15 for B, and 8 for C.³ Participants were
not informed about the purpose of this study, but were instead advised that the
experiment was to evaluate an automated student advisory system which seeks
to use interactive humanoid robots. Additionally, they were also briefed about
the layout of the experiment room and robot design.
Fig. 2. Layout of the quiet room during the participants’ interaction with the robot:
(a) experimental room setup; (b) schematic layout (control area, Kinect depth camera,
customised chair for the Nao robot, and the participant’s location ∼155 cm away).
A quiet room with controlled lighting conditions was chosen for the experi-
ment with the layout shown in Figure 2. During the experiment, each participant
was seated facing the Nao robot which was located on a customised chair so that
the head of the robot was approximately at eye-level with the participant. The
robot was strapped into the seat facing the participant in a hands-in-lap resting
position (see Figure 1(a)). This restricts the lower-body motion of the robot but
allows it to move its torso, arms and head. Since the task rating relies on the
ability of the robot to effectively mimic the participants, a depth camera was
preferred over the robot’s head camera. The depth camera was placed behind
the robot, angled downwards to capture the movement of the participants. Each
participant was seated sufficiently far (∼150 cm) from the robot’s chair so that
their body was in full view of the depth camera and the robot. The experimental
setup was identical for each condition.

³ The stillness of the robot in condition C led the participants to remain largely
motionless themselves, which caused problems in the motion capture and impeded
the mirroring. These participants were not considered in the analysis.
3.4 Experimental Procedure
Prior to the experiment, the instructor gave the participant a brief introduction
on the task and described the one-to-one interaction with the robot. It is impor-
tant to note that there was no visual or physical contact between the participant
and the instructor (who was also in the experimental room but hidden from the
participant), so that the participant was essentially alone with the robot. To
start the experiment, the robot introduces itself so that the participant can
become familiar with its voice, shape and movements. From this point, the
one-to-one interaction between the participant and the Nao robot begins,
structured around the set of twelve predefined questions (see Table 1). Under
conditions A and B, the robot is also animated whilst asking the questions,
before returning to the neutral hands-in-lap pose. The questions were
determined in advance and centred on the participant’s experience at the
university in order to prompt emotional engagement with the robot. They also
struck a balance between a subject the participant could emotionally connect
with and one that steered away from being unnecessarily invasive. The questions
were identical for each participant.
Table 1. Sequence of questions asked by the robot
Q1 What subject are you studying?
Q2 When did you start on your course?
Q3 What have you enjoyed most about your course?
Q4 What did you enjoy least about your course?
Q5 Tell me about a challenging piece of work you have done during your studies.
Q6 What do you like doing outside of your studies?
Q7 What would you like to do after university?
Q8 What is your preferred mode of learning: lecture or practicals?
Q9 Tell me about a particular experience you had working in a group
Q10 Do you enjoy group work, or do you prefer working alone?
Q11 What made you want to choose your course?
Q12 What would you like to do after university?
During the experiment, the robot waited until the participant finished speak-
ing before asking the next question. It was anticipated that some participants
would speak for longer than others, which may bias the results obtained due to
the substantial variability in the participants’ exposure to the experimental
conditions. To minimise this, the robot asked the questions in sequential order
within a five-minute session limit. After the experiment was
completed by either the robot going through all the questions or reaching the
experiment time limit, participants were led to an isolated room, asked to fill
in a paper-based questionnaire to evaluate the interaction with the Nao robot,
and advised to avoid communication with the participants who had not yet
completed the experiment.
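The session logic described above (ask the twelve questions in order, wait for the participant to finish speaking, stop at the five-minute limit) can be sketched as follows. The `ask` and `wait_for_silence` callbacks are hypothetical stand-ins for the robot's text-to-speech and end-of-speech detection, not the authors' code.

```python
import time

# The twelve predefined questions from Table 1.
QUESTIONS = [
    "What subject are you studying?",
    "When did you start on your course?",
    "What have you enjoyed most about your course?",
    "What did you enjoy least about your course?",
    "Tell me about a challenging piece of work you have done during your studies.",
    "What do you like doing outside of your studies?",
    "What would you like to do after university?",
    "What is your preferred mode of learning: lecture or practicals?",
    "Tell me about a particular experience you had working in a group.",
    "Do you enjoy group work, or do you prefer working alone?",
    "What made you want to choose your course?",
    "What would you like to do after university?",
]

def run_session(ask, wait_for_silence, time_limit_s=300.0, clock=time.monotonic):
    """Ask the predefined questions in order until either the list is
    exhausted or the five-minute session limit is reached."""
    start = clock()
    asked = []
    for question in QUESTIONS:
        if clock() - start >= time_limit_s:
            break                  # session time limit reached
        ask(question)              # robot speaks (animated in conditions A and B)
        wait_for_silence()         # wait until the participant stops talking
        asked.append(question)
    return asked
```

Checking the elapsed time before each question, rather than interrupting mid-answer, matches the protocol above: the robot always waits for the participant to finish speaking before moving on.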
3.5 Questionnaire
A common approach to evaluating the human perception of robots is to use a
post-experiment questionnaire. Several such questionnaires exist in the
literature, and significant work has been done to assure their reliability and
validity [16]. In this study, we have chosen the Godspeed questionnaire to
evaluate the participant’s interaction with the Nao robot. It has already been
tested and validated in the context of social robotics and therefore represents
a suitable measure of human-robot interaction. It combines a set of five
questionnaires based on semantic differential scales as a standardised metric
for the five key concepts in HRI:
– Anthropomorphism: rates the user’s impression of the robot on five semantic
differentials.
– Animacy: rates the user’s impression of the robot on six semantic differen-
tials.
– Likeability: rates the user’s impression of the robot on five semantic differ-
entials.
– Perceived Intelligence: rates the user’s impression of the robot on five se-
mantic differentials.
– Perceived Safety: rates the emotional state of the user on three semantic
differentials.
As recommended, the semantic differentials were randomised and the cate-
gories removed so as to hide the different concepts and hence mask the particular
areas the participants were meant to be evaluating.
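The randomisation and rescoring step described above can be sketched as follows; the per-concept item counts come from the list above, while the function names and the flattening scheme are illustrative assumptions.

```python
import random

# Number of semantic-differential items per Godspeed concept (as listed above).
SCALES = {
    "anthropomorphism": 5,
    "animacy": 6,
    "likeability": 5,
    "perceived_intelligence": 5,
    "perceived_safety": 3,
}

def build_item_order(rng):
    """Flatten all (scale, item-index) pairs and shuffle them, so the printed
    questionnaire hides which concept each semantic differential belongs to."""
    items = [(scale, i) for scale, n in SCALES.items() for i in range(n)]
    rng.shuffle(items)
    return items

def subscale_means(item_order, responses):
    """Map the participant's responses (in presentation order, 1-5 Likert)
    back onto the hidden scales and average within each scale."""
    ratings = {scale: [] for scale in SCALES}
    for (scale, _), rating in zip(item_order, responses):
        ratings[scale].append(rating)
    return {scale: sum(r) / len(r) for scale, r in ratings.items()}
```

The shuffled `item_order` is kept alongside each participant's answer sheet so that the five concept scores can be recovered after the experiment.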
4 Results
We conducted a Principal Component Analysis (PCA) for all the semantic
differentials in the Godspeed questionnaire in order to obtain the minimum
number of dependent variables which explain the subjects’ responses. The PCA identified
three underlying dimensions in which the robot is collectively perceived. A cor-
relation threshold of 0.5 was set to determine the extent to which each semantic
differential significantly loads onto any one of the factors. The first factor (F1)
is related to the perceived “affability” of the robot. The semantic differentials
that load heavily onto this factor were unkind, unfriendly, awful, foolish,
incompetent, unpleasant and dislike, which mostly concern the perceived
affective qualities of the robot. The second factor (F2) is strongly
related to the perceived “humanness” of the robot. The semantic differentials
that load significantly on this were the mechanical, artificial, stagnant, fake, ma-
chinelike, moving rigidly, unconscious and dead differentials. Finally, the third
factor (F3) emerging from this analysis was related to the perceived “responsive-
ness” of the robot based on the semantic differential item loadings of anxious,
unintelligent, apathetic, moving rigidly and agitated.
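The analysis step above (extract principal components from the questionnaire items, then keep the differentials whose absolute loading exceeds 0.5) can be sketched with NumPy. The data here is synthetic, not the study's responses, and the helper names are illustrative.

```python
import numpy as np

def pca_loadings(X, n_components=3):
    """Component loadings for questionnaire data X (participants x items),
    from the eigendecomposition of the item correlation matrix."""
    corr = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)           # ascending order
    order = np.argsort(eigvals)[::-1][:n_components]  # keep the top components
    # Loadings = eigenvector scaled by sqrt(eigenvalue) (usual PCA convention).
    return eigvecs[:, order] * np.sqrt(eigvals[order])

def significant_items(loadings, item_names, threshold=0.5):
    """For each factor, the items whose absolute loading exceeds the threshold."""
    return [
        [name for name, w in zip(item_names, col) if abs(w) > threshold]
        for col in loadings.T
    ]
```

With three correlated "affability-like" items and one independent item, the first factor would collect the correlated items and the second the remaining one, mirroring how F1-F3 were read off the loading matrix.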
Fig. 3. Mean values of the PCA solution factor for three conditions: Mirror (A), Non-
Mirror (B) and Static (C) movements
Based on the three identified principal components and on the participants’
scores on the semantic differentials, three factor scores were then produced
for each participant. The across-participant means of these scores are shown
in Figure 3 separately for the three conditions. An analysis of variance
(ANOVA) was conducted to compare the differences in means between the three
group conditions. This analysis yielded F = 0.916, p = 0.41 for affability,
F = 3.01, p = 0.06 for humanness, and F = 1.623, p = 0.21 for responsiveness. Thus,
the factor F1 (affability) showed no meaningful differentiation between the three
conditions. Only for factor F2 (perceived humanness of the robot) was there any
statistically notable difference between the ratings of the three condition groups.
As can be seen in Figure 3, the humanness factor (F2) only rates positively in
condition A on average; this indicates that mirroring appears to have a positive
influence on the anthropomorphic perception of the robot. The responsiveness
factor (F3) also showed some differentiation across the conditions (though these
were statistically less pronounced than that for humanness), suggesting that
likeability and animacy in robotic entities are closely connected with movement,
and that anthropomorphic gesturing positively influences rapport in HRI. These
results indicate that robot movement (i.e. mirroring/non-mirroring) is an
important factor in inducing empathy and engaging human-robot communication,
in contrast to condition C, in which the participants remained largely static
waiting for the Nao to move. These results are also perhaps surprising, since
participants perceived animacy equally whether the robot was static or moving
with predefined human-like poses. However, it is arguable that the disparity in
the number of participants between these two conditions may have influenced the
results. Mean values and total scores for the perceived five key
concepts in HRI, as derived from the Godspeed questionnaires, are depicted in
Figure 4.
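The between-groups comparison reported above reduces to a one-way ANOVA on the factor scores of conditions A, B and C: the F statistic is the between-group mean square over the within-group mean square. A minimal sketch with synthetic scores (not the study's data):

```python
def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA over k groups of scores.
    Assumes non-zero within-group variance."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    k = len(groups)
    n = len(all_scores)
    # Between-group sum of squares: group sizes times squared mean deviations.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within
```

Small F values (as found for affability) mean the group means differ no more than the within-group noise would predict; a larger F with a small p (as approached for humanness) indicates a genuine difference between conditions.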
5 Conclusion
The goal of this study was to evaluate the psychological implications of upper
body pose mirroring in human-robot social interactions. Three different exper-
imental conditions of upper body mimicry were implemented in the humanoid
Nao robot and measured in terms of the five key concepts of HRI. In general,
the results support the initial hypothesis H1 by displaying a trend towards a
perceived greater humanness of the robot in the mirroring condition compared
to the other two conditions. Likewise, the results have shown that non-mirroring
body poses may also influence the participants’ empathy towards the robot,
which is indicative of rapport during the interaction.
Fig. 4. Mean values of the five Godspeed aspects for the three proposed conditions:
Mirror (A), Non-Mirror (B) and Static (C) movements on a 5-item Likert scale from
1 (strongly disagree) to 5 (strongly agree).
Higher anthropomorphism, animacy and perceived intelligence for conditions
A and B than for C (hypothesis H2) is only partially revealed, which denotes a
certain correlation between upper-body mimicry and the perceived humanness
(anthropomorphism) of a robot (conditions B and C resulted in similar
participant ratings on this factor). This may be explained by the fact that
anthropomorphism might not be based only on robot movement and human-like
gesturing but also on alternative communicative components and social factors.
It is also arguable that the human-like appearance of the Nao robot itself may
bias its social acceptability, which is closely related to being human-like, as
well as the elicitation of empathetic behaviours in the participants even when
it remains static, as likeability is rated equally highly in each of the three conditions.
Future work will re-evaluate these insights on inducing rapport through upper
body mimicry by extending the number of participants and using additional
humanoid robots, in order to provide new understanding of the use of robotic
entities as companion systems.
References
1. LaFrance, M.: Nonverbal synchrony and rapport: Analysis by the cross-lag panel
technique, Social Psychology Quarterly, 42, 66–70 (1979)
2. LaFrance, M., Broadbent, M.: Posture sharing as a nonverbal indicator, Group and
Organizational Studies, 1, 328–333 (1976)
3. Chartrand, T. L., Maddux, W., Lakin, J.: Beyond the perception-behavior link:
The ubiquitous utility and motivational moderators of nonconscious mimicry, Un-
intended thought II: The new unconscious, Oxford University Press, 334–361 (2004)
4. Chartrand, T. L., Bargh, J. A.: The chameleon effect: The perception-behavior link
and social interaction, Journal of Personality and Social Psychology, 76, 6, 893–910,
(1999)
5. Salem, M., Eyssel, F., Rohlfing, K., Kopp, S., Joublin, F. L.: To Err is Human(-
like): Effects of Robot Gesture on Perceived Anthropomorphism and Likeability,
International Journal of Social Robotics, 5, 3, 313–323 (2013)
6. Salem, M.: Conceptual Motorics: Generation and Evaluation of Communicative
Robot Gesture, Logos Verlag Berlin GmbH (2013)
7. Riek, L. D., Rabinowitch, T. C., Bremner, P., Pipe, A. G., Fraser, M., Robinson P.:
Cooperative gestures: Effective signaling for humanoid robots, ACM/IEEE Inter-
national Conference on Human-Robot Interaction (HRI), 61–68 (2010)
8. Iio, T., Shiomi, M., Shinozawa, K., Akimoto, T., Shimohara, K., Hagita, N.: Inves-
tigating entrainment of people’s pointing gestures by robot’s gestures using a WOZ
Method, International Journal of Social Robotics, 3, 405–414 (2011)
9. Kim, H., Kwak, S. S., Kim, M.: Personality Design of Sociable Robots by con-
trol of gesture design factors, IEEE Symposium on Robot and Human Interactive
Communication (ROMAN), 494–499 (2008)
10. Gratch, J., Wang, N., Gerten, J., Fast, E., Duffy, R.: Creating rapport with virtual
agents, Intelligent Virtual Agents, Springer Berlin Heidelberg, 125–138 (2007)
11. Gonsior, B., Sosnowski, S., Mayer, C., Blume, J., Radig, B., Wollherr, D., Kuhn-
lenz, K.: Improving aspects of empathy and subjective performance for HRI through
mirroring facial expressions, IEEE International Workshop on Robot and Human In-
teractive Communication (ROMAN), 350–356 (2011)
12. Bartneck, C., Croft, E., Kulic, D.: Measuring the anthropomorphism, animacy,
likeability, perceived intelligence and perceived safety of robots, Metrics for HRI
Workshop, Technical Report, 471, 37–44 (2008).
13. Kanda, T., Kamasima, M., Imai, M., Ono, T., Sakamoto, D., Ishiguro, H., Anzai,
Y.: A humanoid robot that pretends to listen to route guidance from a human,
Autonomous Robots, 22, 1, 87–100 (2007)
14. Bailenson, J.N., Yee, N.: Digital chameleons: Automatic assimilation of nonverbal
gestures in immersive virtual environments, Psychological Science, 16, 10, 814–819
(2005)
15. Riek, L.D., Paul P.C., Robinson, P.: When my robot smiles at me: Enabling human-
robot rapport via real-time head gesture mimicry, Journal on Multimodal User
Interfaces, 3, 99–108 (2010)
16. MacDorman, K. F.: Subjective ratings of robot video clips for human likeness,
familiarity, and eeriness: An exploration of the uncanny valley, ICCS/CogSci Long
Symposium: Toward Social Mechanisms of Android Science, 26–29 (2006)
17. Kanda, T., Kamasima, M., Imai, M., Ono, T., Sakamoto, D., Ishiguro, H., Anzai,
Y.: A humanoid robot that pretends to listen to route guidance from a human,
Autonomous Robots, 22, 1, 87–100 (2007)

More Related Content

Similar to ICSR15 Paper

The study of attention estimation for child-robot interaction scenarios
The study of attention estimation for child-robot interaction scenariosThe study of attention estimation for child-robot interaction scenarios
The study of attention estimation for child-robot interaction scenariosjournalBEEI
 
humanrobotinteraction-180527054931 (1) (1).pdf
humanrobotinteraction-180527054931 (1) (1).pdfhumanrobotinteraction-180527054931 (1) (1).pdf
humanrobotinteraction-180527054931 (1) (1).pdfHeenaSyed6
 
Human robot interaction
Human robot interactionHuman robot interaction
Human robot interactionPrakashSoft
 
diplomarbeit-alesiaivanova-clear
diplomarbeit-alesiaivanova-cleardiplomarbeit-alesiaivanova-clear
diplomarbeit-alesiaivanova-clearAlessya Ivanova
 
CONSIDERATION OF HUMAN COMPUTER INTERACTION IN ROBOTIC FIELD
CONSIDERATION OF HUMAN COMPUTER INTERACTION IN ROBOTIC FIELD CONSIDERATION OF HUMAN COMPUTER INTERACTION IN ROBOTIC FIELD
CONSIDERATION OF HUMAN COMPUTER INTERACTION IN ROBOTIC FIELD ijcsit
 
How women think robots perceive them – as if robots were men
How women think robots perceive them – as if robots were men How women think robots perceive them – as if robots were men
How women think robots perceive them – as if robots were men Matthijs Pontier
 
Interaction between abstract agents: Increasing the readability of causal e...
Interaction between abstract agents:  Increasing the readability of causal  e...Interaction between abstract agents:  Increasing the readability of causal  e...
Interaction between abstract agents: Increasing the readability of causal e...Mindtrek
 
ACII 2011, USA
ACII 2011, USAACII 2011, USA
ACII 2011, USALê Anh
 
Stafford - Ph.D. thesis abstract
Stafford - Ph.D. thesis abstractStafford - Ph.D. thesis abstract
Stafford - Ph.D. thesis abstractRebecca Stafford
 
[Seminar] seunghyeong 200724
[Seminar] seunghyeong 200724[Seminar] seunghyeong 200724
[Seminar] seunghyeong 200724ivaderivader
 
Agent-Based Modeling for Sociologists
Agent-Based Modeling for SociologistsAgent-Based Modeling for Sociologists
Agent-Based Modeling for SociologistsSimone Gabbriellini
 
A Corpus Based Analysis Of The Application Of Concluding Transition Signals ...
A Corpus Based Analysis Of The Application Of  Concluding Transition Signals ...A Corpus Based Analysis Of The Application Of  Concluding Transition Signals ...
A Corpus Based Analysis Of The Application Of Concluding Transition Signals ...Darian Pruitt
 
Modeling the Dynamics of Gaze-Contingent Social Behaviors in Human-Agent Inte...
Modeling the Dynamics of Gaze-Contingent Social Behaviors in Human-Agent Inte...Modeling the Dynamics of Gaze-Contingent Social Behaviors in Human-Agent Inte...
Modeling the Dynamics of Gaze-Contingent Social Behaviors in Human-Agent Inte...Elisabeth André
 
Temporal Reasoning Graph for Activity Recognition
Temporal Reasoning Graph for Activity RecognitionTemporal Reasoning Graph for Activity Recognition
Temporal Reasoning Graph for Activity RecognitionIRJET Journal
 
A deep dive into the mechanics of human-robot interaction
A deep dive into the mechanics of human-robot interactionA deep dive into the mechanics of human-robot interaction
A deep dive into the mechanics of human-robot interactionReputationGuards2
 
User-Defined-Gesture-for-Flying-Objects
User-Defined-Gesture-for-Flying-ObjectsUser-Defined-Gesture-for-Flying-Objects
User-Defined-Gesture-for-Flying-ObjectsPuchin Chen
 
Eye tracking measures for anthropomorphism in HRI
social interaction [4].
In this article, we investigate the effect that upper body pose mimicry has on how humans perceive robotic systems. The proposed system recognises upper body poses from camera images and reproduces upper body gestures (torso, head and arms) on the humanoid Nao robot. The robot's text-to-speech output is also used to achieve natural communication. A set of twelve predefined questions is used to engage the participants in conversation with the robot under one of three conditions: the robot mirrors the upper body pose of
a human; the robot generates non-mirroring human-like upper body poses; or the robot adopts a static body pose. The Godspeed questionnaire [12] is used to measure the five key concepts of human-robot interaction (HRI) and to evaluate the effect of body pose mirroring on HRI.

The rest of this paper is structured as follows: Section 2 reviews existing studies on the influence of behaviour mimicry on human perception of robotic systems, Section 3 describes the experimental setup, the methodology and the evaluation method, Section 4 presents the statistical findings, and Section 5 concludes with a brief discussion of the results and future work.

2 Related Work

Several recent studies have assessed the influence that a robot's non-verbal behaviours have on the way humans perceive and interact with robots. Salem et al. [5] found that people tend to anthropomorphise robots more (to like them more), report greater shared reality and show increased intention for future interaction when the robots use bodily gestures with speech, as opposed to speaking in a static pose. The same authors also suggested that a robot's use of gesture with speech tends to enhance people's performance on robot-guided tasks [6]. Similarly, Riek et al. [7] demonstrated that the manner in which robots execute bodily gestures can have a major influence on the degree to which people are willing to cooperate with them. Further, some robot gesture combinations (e.g. gazing and pointing) also increase a person's tendency to reproduce the behaviour of a robot, resulting in entrainment [8]. In this vein, Kim et al. [9] stated that it is possible to use gesture manipulations to influence the perceived personality of social robots, and Gratch et al. [10] showed that contingent non-verbal behaviours (i.e. behaviours tightly coupled to what the human speaker is doing) can create rapport with the human participant.
The use of mirroring behaviours by virtual characters and robots has been shown to improve empathy and create rapport with the humans that interact with them. Gonsior et al. [11] studied the impact on human-robot interaction of a robot that mirrors facial expressions. In their study, the human participants engaged in a communicative task with the robot under one of three experimental conditions: the robot displayed no facial expressions, the robot mirrored the participant's facial expression, or the robot displayed facial expressions according to its internal model, which indirectly mirrored the participant's facial expression. Each participant completed two post-experiment questionnaires: the first evaluated empathy and subjective performance, and the second consisted of the five Godspeed questionnaires [12]. Their results indicated that the mirroring conditions received higher ratings than the neutral condition. These results are also supported by Kanda et al. [17], who indicated that cooperative gestures from a robot (i.e. gestures synchronised with the human participant) increase the human's impression of the robot's reliability and sympathy. Similarly, Bailenson and Yee [14] found that an embodied artificial agent that mimicked a human participant's head movements was perceived as being more
persuasive and received more positive trait ratings than non-mimicking agents. On the other hand, Riek et al. [15] did not find that head gesture mirroring had a noticeable impact on creating rapport between a human and a robot. However, the authors acknowledge that the small sample size and other possible factors concerning their experimental setup may have influenced this result.

3 Method

To date, we have not been able to find any research that evaluates the effect that upper body mirroring during human-robot interaction has on the anthropomorphism, animacy, likeability, perceived intelligence and perceived safety of the robot. This study seeks to investigate this through a series of experiments in which participants engage in spoken interactions with a robot. The empathy between the participants and the humanoid robot was examined under three different conditions:

A The robot mirrors the user's body poses during the interaction, with occasional head nodding.
B The robot produces pre-programmed non-mirroring gestures, with occasional head nodding, during the interaction.
C The robot remains static for the duration of the interaction, apart from occasional head nodding.

3.1 Hypotheses

Given the importance of mimicry in human-human communication, we wondered whether it might also be important in human-robot communication. Our hypotheses are as follows:

H1 The participants will rate the likeability and perceived safety of the robot more highly in condition A than in conditions B and C. This is motivated by the work in [1], which showed that posture sharing and rapport are positively correlated in humans and that this correlation holds over time, promotes a sense of safety, and encourages each participant during conversation.
H2 The participants will rate the anthropomorphism, animacy and perceived intelligence more positively in conditions A and B than in C.
This hypothesis is prompted by the idea that people will show the most appreciation for a robot that mimics their upper-body and head gestures in real time. The motivation comes from the work in [17], which ran an experiment with the WowWee Alive Chimpanzee robot, capable of making head nods and facial expressions as well as detecting human head nods. It found that temporally cooperative behaviours lead to a more positive interaction with robots and enable better human-robot rapport when compared with a robot that does not employ such behaviours.
3.2 Experimental Validation

To evaluate the proposed hypotheses, an experiment was designed in which human participants engage in a spoken interaction with the humanoid Nao robot. The Nao robot is set up to allow upper body motion only, and a depth camera is used to track body poses and movements. During the interaction, the Nao robot speaks to the participant while acting according to one of the three possible conditions. It also nods at random intervals throughout the duration of the experiment. Subjects are divided into three different groups according to which of the following conditions is applied:

A Mirroring: The robot mirrors the participant's upper body pose. The pose of the participant is estimated using a depth camera. The output of the depth camera is first normalised and then processed to extract the rotational angles of each shoulder and elbow in the participant's body (eight angles in total). These are then scaled and mapped onto the corresponding joint angles of the robot. It is relevant to note that the Nao robot does not continually mirror the participant's body pose, since this would lead to unrealistic copycat behaviour. Instead, the robot intermittently mirrors the participant's body pose after the participant has remained in that pose for a certain period of time (∼5 s).
B No-Mirroring: The robot produces pre-programmed human-like non-mirroring gestures (see Figures 1(b)-(e)). These only take place while the robot is asking the participant the questions.
C Static: The robot remains static, with no body movements.

Fig. 1. Predefined movements of the Nao robot during the experiments. Figure (a) represents the resting/static pose and figures (b)-(e) illustrate the predefined upper body robotic gestures.

It should be noted that head nodding was included to reduce the impact of any bias arising from the movement of the robot in conditions A and B.
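The intermittent mirroring in condition A can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: only the ∼5 s dwell time comes from the text, while the pose tolerance and the `angle_stream`/`send_to_robot` interfaces are assumptions.

```python
# Sketch of the intermittent mirroring in condition A.
DWELL_SECONDS = 5.0    # how long a pose must be held before it is mirrored
POSE_TOLERANCE = 0.15  # radians; larger per-joint changes count as a new pose

def stable_pose_monitor(angle_stream, send_to_robot,
                        dwell=DWELL_SECONDS, tolerance=POSE_TOLERANCE):
    """Mirror a participant's pose only after it has been held for ~5 s.

    angle_stream yields (timestamp, angles) pairs, where angles is the list of
    eight joint rotations (two per shoulder and elbow) already normalised and
    scaled to the robot's joint ranges. send_to_robot drives the Nao's joints
    (a hypothetical interface standing in for the real motion API).
    """
    held_pose, held_since = None, None
    for t, angles in angle_stream:
        changed = held_pose is None or max(
            abs(a - b) for a, b in zip(angles, held_pose)) > tolerance
        if changed:
            held_pose, held_since = angles, t   # new pose: restart dwell timer
        elif t - held_since >= dwell:
            send_to_robot(held_pose)            # pose held long enough: mirror
            held_since = t                      # avoid re-sending every frame
```

Keying the mirroring to pose stability, rather than streaming every camera frame to the joints, is what avoids the unrealistic copycat behaviour mentioned above.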
It was deemed that, without the head nodding, the outcome of the experiment might be influenced by the fact that the robot is animated in condition A while the participant is speaking but not in condition B, rather than by the different types of whole body movement in each case (i.e. mirroring as opposed to non-mirroring poses). The importance of the head nodding in condition C is twofold. First, it allows participants to address the questions regarding
animacy in the post-experiment questionnaire. Surprisingly, it was noted that participants waited for the robot to move first before engaging in non-verbal communication with it. Second, it eases the comparison of condition C with the remaining conditions. The lack of movement in the robot itself negatively affects the participant's reaction to and rapport with the robot. In all cases the robot waits until the participant finishes speaking before asking the next question in the sequence.

3.3 Experimental Setup

Forty test subjects took part in this experiment, of which 9 were female and 31 were male. The participants were students from Oxford Brookes University and had no previous experience in the robotics field. They were between the ages of 19 and 35 (mean 21.67, s.d. 3.05). The distribution of the subjects over the experimental conditions was 17 for A, 15 for B, and 8 for C³. Participants were not informed about the purpose of this study, but were instead advised that the experiment was to evaluate an automated student advisory system which seeks to use interactive humanoid robots. Additionally, they were also briefed about the layout of the experiment room and the robot design.

Fig. 2. Layout of the quiet room during the participants' interaction with the robot: (a) experimental room setup; (b) schematic layout (Kinect depth camera behind the Nao robot on its customised chair, participant seated 155 cm away, with a camera and control area to one side).

A quiet room with controlled lighting conditions was chosen for the experiment, with the layout shown in Figure 2. During the experiment, each participant was seated facing the Nao robot, which was placed on a customised chair so that the head of the robot was approximately at eye level with the participant. The robot was strapped into the seat facing the participant in a hands-in-lap resting position (see Figure 1(a)). This restricts the lower-body motion of the robot but allows it to move its torso, arms and head.
³ The lack of movement of the participants, due to the still position of the robot body in condition C, led to problems in the motion capture that impeded the mirroring. These participants were not considered in the analysis.

Since the task rating relies on the ability of the robot to effectively mimic the participants, a depth camera was
preferred over the robot's head camera. The depth camera was placed behind the robot, angled downwards to capture the movement of the participants. Each participant was seated sufficiently far (∼150 cm) from the robot's chair that his or her body was in full view of the depth camera and the robot. The experimental setup was identical for each condition.

3.4 Experimental Procedure

Prior to the experiment, the instructor gave the participant a brief introduction to the task and described the one-to-one interaction with the robot. It is important to note that there was no visual or physical contact between the participant and the instructor (who was also in the experimental room but hidden from the participant), so that the participant was essentially alone with the robot. To start the experiment, the robot introduces itself in order to allow the participant to become familiar with its voice, shape and movements. From this point, the one-to-one interaction between the participant and the Nao robot begins. It consists of an interaction through a set of twelve predefined questions (see Table 1). Under conditions A and B, the robot is also animated whilst asking the questions, before returning to the neutral hands-in-lap pose. The questions were determined in advance and centred on the participant's experience at the university in order to prompt an emotional engagement with the robot. They also struck a balance between being a subject that the participant could emotionally connect with and steering away from being unnecessarily invasive. The questions were identical for each participant.

Table 1. Sequence of questions asked by the robot

Q1  What subject are you studying?
Q2  When did you start on your course?
Q3  What have you enjoyed most about your course?
Q4  What did you enjoy least about your course?
Q5  Tell me about a challenging piece of work that you have done during your studies
Q6  What do you like doing outside of your studies?
Q7  What would you like to do after university?
Q8  What is your preferred mode of learning: lectures or practicals?
Q9  Tell me about a particular experience you had working in a group
Q10 Do you enjoy group work, or do you prefer working alone?
Q11 What made you want to choose your course?
Q12 What would you like to do after university?

During the experiment, the robot waited until the participant finished speaking before asking the next question. It was anticipated that some participants would speak for longer than others, which might bias the results owing to the substantial variability in the participants' exposure to the experimental conditions. To minimise this, the robot asked the questions in sequential
order during the five minutes that the experiment lasted. After the experiment was completed, either by the robot going through all the questions or by reaching the time limit, participants were led to an isolated room, asked to fill in a paper-based questionnaire evaluating the interaction with the Nao robot, and advised to avoid communication with the participants who had not yet completed the experiment.

3.5 Questionnaire

A common approach to evaluating the human perception of robots is to use a post-experiment questionnaire. Several of these exist in the literature, and significant work has been done to ensure their reliability and validity [16]. In this study, we have chosen the Godspeed questionnaire to evaluate the participants' interaction with the Nao robot. It has already been tested and validated in the context of social robotics and therefore represents a suitable measure of human-robot interaction. It combines a set of five questionnaires based on semantic differential scales as a standardised metric for the five key concepts in HRI:

– Anthropomorphism: rates the user's impression of the robot on five semantic differentials.
– Animacy: rates the user's impression of the robot on six semantic differentials.
– Likeability: rates the user's impression of the robot on five semantic differentials.
– Perceived Intelligence: rates the user's impression of the robot on five semantic differentials.
– Perceived Safety: rates the emotional state of the user on three semantic differentials.

As recommended, the semantic differentials were randomised and the category labels removed so as to hide the different concepts and hence mask the particular areas the participants were meant to be evaluating.

4 Results

We conducted a Principal Component Analysis (PCA) over all the semantic differentials in the Godspeed questionnaire in order to obtain the minimum number of dependent variables which explain the subjects' responses.
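The factor extraction and the between-condition comparison used in this section can be sketched as follows. This is an illustrative NumPy reconstruction under stated assumptions, not the authors' analysis code: only the 0.5 loading threshold and the one-way ANOVA design come from the text, while details such as standardisation and the absence of factor rotation are our choices.

```python
import numpy as np

def pca_loadings(responses, n_factors=3, threshold=0.5):
    """Group questionnaire items by the principal component they load onto.

    responses: (participants x items) matrix of semantic-differential scores.
    Returns, for each of the first n_factors components, the indices of the
    items whose absolute loading exceeds the threshold (0.5 in the paper).
    """
    X = responses - responses.mean(axis=0)
    X = X / X.std(axis=0, ddof=1)                  # standardise each item
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigvals = s ** 2 / (len(X) - 1)
    # Loadings: correlation of each item with each component.
    loadings = Vt.T[:, :n_factors] * np.sqrt(eigvals[:n_factors])
    return [np.where(np.abs(loadings[:, k]) > threshold)[0]
            for k in range(n_factors)]

def one_way_anova_f(*groups):
    """Between-groups one-way ANOVA F statistic, as used to compare the
    factor scores of conditions A, B and C (p-values need an F CDF)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_x) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

Since the loadings here are item-component correlations, the 0.5 cut-off keeps only items sharing at least 25% of their variance with a factor.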
The PCA identified three underlying dimensions in which the robot is collectively perceived. A correlation threshold of 0.5 was set to determine whether a semantic differential significantly loads onto any one of the factors. The first factor (F1) relates to the perceived "affability" of the robot. The semantic differentials which loaded heavily onto it were the unkind, unfriendly, awful, foolish, incompetent, unpleasant and dislike differentials, which mostly concern the perceived affective qualities of the robot. The second factor (F2) is strongly related to the perceived "humanness" of the robot. The semantic differentials
that loaded significantly on it were the mechanical, artificial, stagnant, fake, machinelike, moving rigidly, unconscious and dead differentials. Finally, the third factor (F3) emerging from this analysis was related to the perceived "responsiveness" of the robot, based on the loadings of the anxious, unintelligent, apathetic, moving rigidly and agitated items.

Fig. 3. Mean values of the PCA solution factors for the three conditions: Mirror (A), Non-Mirror (B) and Static (C).

Based on the three identified principal components and on the participants' scores on the semantic differentials, three factor scores were then produced for each participant. The across-participant means of these scores are shown in Figure 3, separately for the three conditions. An analysis of variance (ANOVA) was conducted to compare the differences in means between the three condition groups, yielding F = 0.916, p = 0.41 for affability, F = 3.01, p = 0.06 for humanness, and F = 1.623, p = 0.21 for responsiveness. Thus, the factor F1 (affability) showed no meaningful differentiation between the three conditions. Only for factor F2 (the perceived humanness of the robot) was there any statistically notable difference between the ratings of the three condition groups. As can be seen in Figure 3, the humanness factor (F2) on average rates positively only in condition A; this indicates that mirroring appears to have a positive influence on the anthropomorphic perception of the robot. The responsiveness factor (F3) also showed some differentiation across the conditions (though less pronounced statistically than that for humanness), suggesting that likeability and animacy in robotic entities are closely connected with movement, and that anthropomorphic gesturing positively influences rapport in HRI. These results suggest that the robot's movement behaviour (i.e.
mirroring/non-mirroring) is an important factor in inducing empathy and engaging human-robot communication, in contrast to the initial experiments in which the participants remained fully static, waiting for the Nao to move. These results are also perhaps surprising, since participants perceived animacy equally whether the robot was static or moving through predefined human-like poses. However, it is arguable that the disparity in the number of participants between these two conditions may have influenced the results. Mean values and total scores for the five perceived key
concepts in HRI, as derived from the Godspeed questionnaires, are depicted in Figure 4.

5 Conclusion

The goal of this study was to evaluate the psychological implications of upper body pose mirroring in human-robot social interactions. Three different experimental conditions of upper body mimicry were implemented on the humanoid Nao robot and measured in terms of the five key concepts of HRI. In general, the results support the initial hypotheses by displaying a trend towards a perceived greater humanness of the robot in the mirroring condition compared to the other two conditions. Likewise, the results have shown that non-mirroring body poses may also influence the participants' empathy towards the robot, which is indicative of rapport during the interaction.

Fig. 4. Mean values of the five Godspeed aspects for the three proposed conditions: Mirror (A), Non-Mirror (B) and Static (C), on a 5-item Likert scale from 1 (strongly disagree) to 5 (strongly agree).

Higher anthropomorphism, animacy and perceived intelligence for conditions A and B than for C (hypothesis H2) is only partially revealed, which denotes a certain correlation between upper-body mimicry and the perceived humanness (anthropomorphism) of a robot (conditions B and C resulted in similar participant ratings on this factor). This may be explained by the fact that anthropomorphism might not be based only on robot movement and human-like gesturing but also on other communicative components and social factors. It is also arguable that the human-like appearance of the Nao robot itself may bias its social acceptability, which is closely related to being human-like, as well as the elicitation of empathetic behaviours in the participants even when it remains static, since likeability is rated equally highly in each of the three conditions.
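For completeness, the questionnaire mechanics behind Figure 4 (randomised presentation that hides the five categories, then per-concept averaging of the 1-5 ratings) can be sketched as follows. The item identifiers and groupings are hypothetical placeholders; only the per-concept item counts come from Section 3.5.

```python
import random

# Hypothetical grouping of the randomised questionnaire items back into the
# five Godspeed concepts (item ids are placeholders; the real questionnaire
# names its semantic differentials, e.g. fake/natural, machinelike/humanlike).
CONCEPT_ITEMS = {
    "anthropomorphism": ["a1", "a2", "a3", "a4", "a5"],
    "animacy": ["n1", "n2", "n3", "n4", "n5", "n6"],
    "likeability": ["l1", "l2", "l3", "l4", "l5"],
    "perceived intelligence": ["i1", "i2", "i3", "i4", "i5"],
    "perceived safety": ["s1", "s2", "s3"],
}

def presentation_order(rng=random):
    """Randomised item order shown to participants, masking the categories."""
    items = [i for group in CONCEPT_ITEMS.values() for i in group]
    rng.shuffle(items)
    return items

def concept_means(ratings):
    """Average a participant's 1-5 ratings (dict item -> score) per concept."""
    return {concept: sum(ratings[i] for i in items) / len(items)
            for concept, items in CONCEPT_ITEMS.items()}
```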
Future work will re-evaluate the insights gained on inducing rapport through upper body mimicry by extending the number of participants and using additional humanoid robots, in order to provide new understanding of the use of robotic entities as companion systems.
References

1. LaFrance, M.: Nonverbal synchrony and rapport: Analysis by the cross-lag panel technique, Social Psychology Quarterly, 42, 66–70 (1979)
2. LaFrance, M., Broadbent, M.: Posture sharing as a nonverbal indicator, Group and Organizational Studies, 1, 328–333 (1976)
3. Chartrand, T. L., Maddux, W., Lakin, J.: Beyond the perception-behavior link: The ubiquitous utility and motivational moderators of nonconscious mimicry, Unintended thought II: The new unconscious, Oxford University Press, 334–361 (2004)
4. Chartrand, T. L., Bargh, J. A.: The chameleon effect: The perception-behavior link and social interaction, Journal of Personality and Social Psychology, 76, 6, 893–910 (1999)
5. Salem, M., Eyssel, F., Rohlfing, K., Kopp, S., Joublin, F.: To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likeability, International Journal of Social Robotics, 5, 3, 313–323 (2013)
6. Salem, M.: Conceptual Motorics: Generation and Evaluation of Communicative Robot Gesture, Logos Verlag Berlin GmbH (2013)
7. Riek, L. D., Rabinowitch, T. C., Bremner, P., Pipe, A. G., Fraser, M., Robinson, P.: Cooperative gestures: Effective signaling for humanoid robots, ACM/IEEE International Conference on Human-Robot Interaction (HRI), 61–68 (2010)
8. Iio, T., Shiomi, M., Shinozawa, K., Akimoto, T., Shimohara, K., Hagita, N.: Investigating entrainment of people's pointing gestures by robot's gestures using a WOZ method, International Journal of Social Robotics, 3, 405–414 (2011)
9. Kim, H., Kwak, S. S., Kim, M.: Personality design of sociable robots by control of gesture design factors, IEEE Symposium on Robot and Human Interactive Communication (RO-MAN), 494–499 (2008)
10. Gratch, J., Wang, N., Gerten, J., Fast, E., Duffy, R.: Creating rapport with virtual agents, Intelligent Virtual Agents, Springer Berlin Heidelberg, 125–138 (2007)
11. Gonsior, B., Sosnowski, S., Mayer, C., Blume, J., Radig, B., Wollherr, D., Kühnlenz, K.: Improving aspects of empathy and subjective performance for HRI through mirroring facial expressions, IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 350–356 (2011)
12. Bartneck, C., Croft, E., Kulic, D.: Measuring the anthropomorphism, animacy, likeability, perceived intelligence and perceived safety of robots, Metrics for HRI Workshop, Technical Report, 471, 37–44 (2008)
13. Kanda, T., Kamasima, M., Imai, M., Ono, T., Sakamoto, D., Ishiguro, H., Anzai, Y.: A humanoid robot that pretends to listen to route guidance from a human, Autonomous Robots, 22, 1, 87–100 (2007)
14. Bailenson, J. N., Yee, N.: Digital chameleons: Automatic assimilation of nonverbal gestures in immersive virtual environments, Psychological Science, 16, 10, 814–819 (2005)
15. Riek, L. D., Paul, P. C., Robinson, P.: When my robot smiles at me: Enabling human-robot rapport via real-time head gesture mimicry, Journal on Multimodal User Interfaces, 3, 99–108 (2010)
16. MacDorman, K. F.: Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley, ICCS/CogSci Long Symposium: Toward Social Mechanisms of Android Science, 26–29 (2006)
17. Kanda, T., Kamasima, M., Imai, M., Ono, T., Sakamoto, D., Ishiguro, H., Anzai, Y.: A humanoid robot that pretends to listen to route guidance from a human, Autonomous Robots, 22, 1, 87–100 (2007)