HCID2014: In interfaces we trust? End-user interactions with smart systems. Dr Simone Stumpf, City University London

There are many cutting-edge systems that learn from users and do something smart as a result. These systems are often reasonably reliable but they do make mistakes. This talk gives an overview of research that investigates what matters to trust as users interact and how we could design interfaces to support users better.

  1. In interfaces we trust? End-user interactions with smart systems. Dr Simone Stumpf. @DrSimoneStumpf #HCID2014
  2. A slippery term called trust. Trust is the trustor's dependency on the reliability, truth, or ability of a trustee. Risk or uncertainty means that trust may be misplaced. We use a number of cues to assess trustworthiness, which shape trusting intentions and actions.
  3. Who do you trust more? Approachability and dominance have been shown to matter in trust judgements. [Oosterhof & Todorov]
  4. Which do you trust more? Research on trust in tools and systems is in its infancy.
  5. Some systems nowadays are smart. They use implicit and explicit feedback to learn how to behave, using complex algorithms and statistical machine learning approaches. They might make decisions automatically without user control; they are autonomous. They might personalise themselves to a user as that user interacts with the system, instead of following static pre-set rules. (A toy sketch of this kind of feedback-driven learning follows the slide transcript.)
  6. Why do we trust in these smart systems? Previous research has indicated that the following aspects seem to matter: reliability of the suggestions, especially their predictability and perceived accuracy; understanding of the process by which the system makes suggestions; and expectations of the system and personal attitudes towards trust. [Dzindolet et al. IJCHS 2003]
  7. Explaining makes the system transparent. [Diagram: end user and intelligent agent, linked by feedback and explanation; the end user forms a mental model of the agent.] What are the effects of explanations on building (correct) mental models? [Stumpf et al. Pervasive Intelligibility 2012]
  8. Building a research prototype. Figure 1: users could debug by saying why the current song was a good or bad choice. Figure 2: participants could debug by adding guidelines on the type of music the station should or should not play, via a wide range of criteria. [Kulesza et al. CHI 2012]
  9. Researching a music recommender. [Kulesza et al. CHI 2013] Between-group study design on the depth of explanations about how the system works, free use over a week from home, then assessment. Deeper explanations helped to build a more correct mental model. Explanations also helped with user satisfaction and with success in adapting playlists. Usage of the system on its own did not help; in fact, it can cause persistent incorrect mental models.
  10. How much do we need to explain? [Diagram: end user and intelligent agent, linked by feedback and explanation; the end user forms a mental model of the agent.] How sound and complete do explanations need to be?
  11. Researching a music recommender. [Kulesza et al. VL/HCC 2013] Lab-based between-group study varying the soundness and completeness of explanations, then assessment. Explanations could be made less sound, but reducing soundness led to users losing trust in the system. High levels of both combined are best for building correct mental models and for user satisfaction; completeness has more influence.
  12. Which parts of the system need explaining? [Diagram: end user and intelligent agent, linked by feedback and explanation; the end user forms a mental model of the agent.] How can we explain system behaviour in the best way? The algorithm? The features? The process?
  13. Lo-fi and hi-fi research prototypes. [Stumpf et al. IJCHS 2009] [Kulesza et al. VL/HCC 2010] [Kulesza et al. TiiS 2011]
  14. Explaining smart components. The process is best understood when explained through rules, with keyword-based explanations second. People struggle to understand machine learning algorithms, e.g. negative weights. The features the system uses are better understood than its process. Similarity-based explanations are not well understood. Preference for explanation style is individual to each user. (A sketch of a keyword-based explanation follows the slide transcript.)
  15. Explaining smart suggestions. Showing how confident the system is in the correctness of a suggestion gives the user a cue to its trustworthiness. Carefully balance the amount of explanation against its usefulness and the cost of assessing trust. Indicating the prevalence of system suggestions relative to what the user has provided is also useful. (A sketch of a simple confidence cue follows the slide transcript.)
  16. The way forward. What is a good way to measure trust? How can we personalise explanations? How do explanations differ in high-risk versus low-risk systems? What further cues can we give users to assess the trustworthiness of a smart system? How can we prevent disuse or misuse?
  17. Thank you. Questions? http://www.city.ac.uk/people/academics/simone-stumpf Simone.Stumpf.1@city.ac.uk @DrSimoneStumpf #HCID2014
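
A note on slide 5: as a toy illustration of a system that learns from both implicit and explicit feedback, the sketch below adjusts per-feature weights for a music recommender. Everything in it (the Song and SimpleRecommender classes, the step sizes, the feature strings) is hypothetical and invented for illustration; it is not the prototype or algorithm from the talk.

# A minimal sketch (hypothetical) of a recommender that learns per-feature
# weights from user feedback instead of following static pre-set rules.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Song:
    title: str
    features: list  # e.g. ["genre:indie", "tempo:fast"]

@dataclass
class SimpleRecommender:
    weights: defaultdict = field(default_factory=lambda: defaultdict(float))

    def score(self, song: Song) -> float:
        # Higher score = more likely to be recommended next.
        return sum(self.weights[f] for f in song.features)

    def explicit_feedback(self, song: Song, liked: bool) -> None:
        # Explicit feedback (thumbs up / thumbs down) moves weights strongly.
        step = 1.0 if liked else -1.0
        for f in song.features:
            self.weights[f] += step

    def implicit_feedback(self, song: Song, skipped: bool) -> None:
        # Implicit feedback (skip vs. full listen) only nudges the weights.
        step = -0.2 if skipped else 0.2
        for f in song.features:
            self.weights[f] += step

rec = SimpleRecommender()
song = Song("Example track", ["genre:indie", "tempo:fast"])
rec.implicit_feedback(song, skipped=False)  # user listened all the way through
rec.explicit_feedback(song, liked=True)     # user pressed thumbs up
print(rec.score(song))                      # roughly 2.4 for this toy example

Explicit signals move the weights strongly while implicit signals only nudge them, which is one simple way of combining the two kinds of feedback the slide mentions.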
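A note on slide 14: one way to produce the keyword-based explanations it mentions is to surface the words that contributed most, for and against, to a linear classifier's decision. The sketch below is only an illustration under that assumption (the weights, labels, and explain_keywords helper are all made up), not the study's actual explanation design; the negative weights are the part of such explanations that slide 14 notes people struggle with.

# Hypothetical keyword-based explanation for a linear text classifier:
# show which words pushed the prediction towards or away from a label.

def explain_keywords(message_words, weights, top_n=3):
    """Return the words with the largest positive and negative contributions."""
    contributions = {w: weights.get(w, 0.0) for w in message_words}
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    supporting = [w for w, v in ranked[:top_n] if v > 0]
    opposing = [w for w, v in ranked[-top_n:] if v < 0]
    return (f"Filed under 'Work' mainly because of: {', '.join(supporting)}. "
            f"Words that counted against this choice: {', '.join(opposing)}.")

# Illustrative weights for the label 'Work'; negative weights count against it.
weights = {"meeting": 2.1, "deadline": 1.7, "invoice": 0.9,
           "birthday": -1.4, "party": -1.1}
print(explain_keywords(["meeting", "deadline", "party", "hello"], weights))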
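A note on slide 15: surfacing the system's confidence can be as simple as translating a predicted probability into a plain-language qualifier next to the suggestion. The thresholds, wording, and render_suggestion helper below are invented for illustration, assuming the underlying classifier exposes a probability.

# Hypothetical rendering of a suggestion together with the system's confidence,
# so the user gets a cue about how much to rely on it.

def render_suggestion(label: str, probability: float) -> str:
    if probability >= 0.9:
        qualifier = "very confident"
    elif probability >= 0.7:
        qualifier = "fairly confident"
    else:
        qualifier = "unsure - please double-check"
    return f"Suggested: {label} ({probability:.0%}, {qualifier})"

print(render_suggestion("Move to 'Work' folder", 0.93))      # very confident
print(render_suggestion("Move to 'Personal' folder", 0.55))  # unsure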
