3. Hypotheses Made
1. Gaze patterns can distinguish between human-human and human-robot interaction scenarios.
2. Differences in gaze patterns between human-human and human-robot interaction scenarios correlate with the participants' initial capital of anthropomorphism (ICA), where the ICA is measured as the sum of the ratings (responses) given by the participants to the questions of the pre-questionnaire.
3. Gaze patterns can distinguish between high-cognitive and low-cognitive tasks.
4. Cognitive priming will have an effect on the difference between ICA and adaptive anthropomorphic perception (AAP), where the AAP is measured from the post-questionnaire in the same way as the ICA.
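Since both ICA and AAP are defined as the sum of a participant's questionnaire ratings, the measure reduces to a simple sum. A minimal sketch; the rating scale and the example responses below are invented for illustration, not taken from the study:

```python
def questionnaire_score(ratings):
    """Sum a participant's ratings; the same rule yields ICA (pre) and AAP (post)."""
    return sum(ratings)

# Hypothetical 5-point Likert responses to a pre-questionnaire (invented values)
pre_responses = [4, 3, 5, 2, 4]
ica = questionnaire_score(pre_responses)  # 18
```

The post-questionnaire responses would be scored the same way to obtain the AAP.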
9. Experiment Details
Three kinds of scenarios, each with a high-cognitive and a low-cognitive version:
1. “pick up the brown toy” – low-cognitive
   “pick up your favorite toy” – high-cognitive
2. “point at the noise” – low-cognitive
   “point at the crying baby” – high-cognitive
3. “show me some movements” – low-cognitive
   “dance on this song” – high-cognitive
10. Experiment Details
The same commands are used in both the ROBOT and the HUMAN videos:

               Scenario 1                  Scenario 2                   Scenario 3
Low Cognitive  “pick up the brown toy”     “point at the noise”         “show me some movements”
High Cognitive “pick up the favorite toy”  “point at the crying baby”   “dance on this song”

NOTE: The ONLY difference between the low-cognitive and high-cognitive conditions is the initial audio command. (The original slide also marks the BETWEEN and WITHIN factors of the design.)
13. Why not “dance on this song”?
• Whether the command is “dance on this song” or “show me some movements”, the viewer will always observe the body parts and the movements, nothing else, so gaze patterns cannot separate the two cognitive conditions in this scenario.
14. Results and discussion
• Hypothesis 1: Gaze patterns can distinguish between human-human and human-robot interaction scenarios.
HEAD: F(1, 36) = 6.60, p < .05
LEGS: F(1, 36) = 2.89, p < .1
The results weakly validate our hypothesis.
Human: a high-cognitive source by default; hence more head fixations.
Robot: receives more fixations on the legs because of the novelty effect.
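The F values reported above come from a one-way ANOVA. A minimal pure-Python sketch of how the statistic is formed; the fixation counts below are invented for illustration and are not the study's data:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group over within-group mean square."""
    values = [x for g in groups for x in g]
    grand_mean = sum(values) / len(values)
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    # Within-group sum of squares: observations vs. their own group mean
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_between = len(groups) - 1
    df_within = len(values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within), df_between, df_within

# Hypothetical fixation counts for two small groups (invented values)
f_stat, df1, df2 = one_way_anova_f([[1, 2, 3], [2, 3, 4]])  # F = 1.5, df = (1, 4)
```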
17. Results and discussion
• Hypothesis 2: Differences in gaze patterns between human-human and human-robot interaction scenarios correlate with the participants' ICA.
• In simple words: for participants whose ICA is low, the gaze patterns should differ significantly between the robot videos and the human videos, and vice versa.
Pearson correlation coefficient = -0.42, p < .01
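The reported coefficient is a standard Pearson correlation between ICA and the gaze-pattern difference. A minimal sketch; the paired values below are invented for illustration, not the study's data:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical ICA scores vs. robot-vs-human gaze differences (invented values);
# a negative r would mirror the direction reported in the study
r = pearson_r([10, 14, 18, 22], [0.9, 0.6, 0.5, 0.1])
```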
20. Results and discussion
• Hypothesis 4: Cognitive priming will have an effect on the difference between ICA and AAP.
• In simple words: the audio command (which primes the context), given at the beginning of the video, induces the anthropomorphic attitude more in the high-cognitive case than in the low-cognitive case.
Example:
ICA1 | “pick up the favorite toy” | AAP1
ICA2 | “pick up the brown toy” | AAP2
then | ICA1 – AAP1 | >> | ICA2 – AAP2 |
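The inequality above compares absolute pre/post shifts in the questionnaire scores. A minimal sketch; all scores below are invented for illustration and do not come from the study:

```python
def priming_effect(ica, aap):
    """Absolute shift between the pre- (ICA) and post-questionnaire (AAP) scores."""
    return abs(ica - aap)

# Invented scores for one high-cognitive and one low-cognitive participant
high_shift = priming_effect(28, 19)  # high-cognitive command: larger shift
low_shift = priming_effect(27, 25)   # low-cognitive command: smaller shift
hypothesis_4_holds = high_shift > low_shift  # True for these invented numbers
```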
23. Summary
• Four hypotheses.
• Three different interaction scenarios.
• Human vs. Robot videos.
• Low vs. High Cognitive conditions.
• Two questionnaires (pre and post).
• 40 participants.