Scalable Exploration of Relevance Prospects to Support Decision Making


Presented at IntRS 2016 - Interfaces and Human Decision Making for Recommender Systems, workshop at RecSys 2016

Citation: Verbert, K., Seipp, K., He, C., Parra, D., Wongchokprasitti, C., & Brusilovsky, P. (2016). Scalable Exploration of Relevance Prospects to Support Decision Making. Proceedings of the Joint Workshop on Interfaces and Human Decision Making for Recommender Systems co-located with ACM Conference on Recommender Systems (RecSys 2016), Boston, MA, USA, September 16, 2016.


1. Scalable Exploration of Relevance Prospects to Support Decision Making. Katrien Verbert, KU Leuven; Karsten Seipp, KU Leuven; Chen He, KU Leuven; Denis Parra, PUC Chile; Chirayu Wongchokprasitti, University of Pittsburgh; Peter Brusilovsky, University of Pittsburgh. IntRS Workshop at RecSys 2016, Boston, MA, USA.
2. INTRODUCTION. Recommender Systems: Introduction & Motivation. (* Danboard (Danbo), Amazon's cardboard robot, represents a recommender system in these slides.)
3. Recommender Systems (RecSys). Systems that help people (or groups) find relevant items in a crowded item or information space (McNee et al. 2006).
4. Challenges of RecSys Addressed Here. Traditionally, RecSys research has focused on producing accurate recommendation algorithms. In this research, we address these challenges: (1) HCI: implementing visualizations that enhance user acceptance of, trust in, and satisfaction with the suggested items; (2) Recommendation tasks: supporting exploration of recommendations, not only rating prediction or top-N ranking.
5. RELATED WORK ON INTERACTIVE RECSYS. Previous research related to this work / motivating results from the TalkExplorer study.
6. PeerChooser – CF movies. O'Donovan, J., Smyth, B., Gretarsson, B., Bostandjiev, S., & Höllerer, T. (2008, April). PeerChooser: visual interactive recommendation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 1085-1088). ACM.
7. SmallWorlds – CF social. Gretarsson, B., O'Donovan, J., Bostandjiev, S., Hall, C., & Höllerer, T. (2010, June). SmallWorlds: visualizing social recommendations. In Computer Graphics Forum (Vol. 29, No. 3, pp. 833-842). Blackwell Publishing Ltd.
8. TasteWeights – Hybrid Recommender. Bostandjiev, S., O'Donovan, J., & Höllerer, T. (2012, September). TasteWeights: a visual interactive hybrid recommender system. In Proceedings of the sixth ACM conference on Recommender systems (pp. 35-42). ACM.
9. He, C., Parra, D., & Verbert, K. (2016). Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities. Expert Systems with Applications, 56, 9-27.
10. Our previous work: TalkExplorer. Verbert, K., Parra, D., & Brusilovsky, P. (2016). Agents vs. users: visual recommendation of research talks with multiple dimensions of relevance. ACM Transactions on Interactive Intelligent Systems, 6(2), 1-42.
11. TalkExplorer - I. Entities: tags, recommender agents, users.
12. TalkExplorer - II. Canvas area: intersections of different entities. The view shows clusters of talks associated with only one entity (a recommender or a user) and clusters formed by the intersection of several entities.
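The clustering idea can be made concrete with a small sketch. The snippet below partitions talks by the exact set of entities each is linked to, yielding both single-entity clusters and intersection clusters like those on the canvas; all names and data are hypothetical, not TalkExplorer's actual code.

```python
# Sketch: group talks into TalkExplorer-style clusters by the exact
# set of entities (tags, agents, users) associated with each talk.
# Data and entity names are hypothetical.
from collections import defaultdict

talk_entities = {
    "talk_1": frozenset({"agent_A"}),
    "talk_2": frozenset({"agent_A", "user_bob"}),
    "talk_3": frozenset({"agent_A", "user_bob"}),
    "talk_4": frozenset({"tag_HCI"}),
}

clusters = defaultdict(list)
for talk, entities in talk_entities.items():
    clusters[entities].append(talk)

for entities, talks in clusters.items():
    kind = "intersection" if len(entities) > 1 else "single-entity"
    print(f"{kind} cluster {sorted(entities)}: {talks}")
```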
13. TalkExplorer - III. Items: talks explored by the user.
14. Our Assumptions. • Items that are relevant in more than one aspect could be more valuable to users. • Displaying multiple aspects of relevance visually is important for users during item exploration.
15. Results of Studies I & II. • Two user studies: a controlled study (Study I) and a field study (Study II). • Effectiveness increases with intersections of more entities. • Effectiveness was not affected in the field study (Study II).
16. Study Results: Challenges. • However, the exploration distribution was affected. • Drawbacks: not intuitive (users do not often explore intersections) and not scalable (the visualization quickly becomes cluttered).
17. INTERSECTIONEXPLORER (IE): A SCALABLE MATRIX-BASED INTERACTIVE RECOMMENDER.
18. IntersectionExplorer (IE).
19. IntersectionExplorer.
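To illustrate what "matrix-based" means here: IE-style interfaces can be thought of as operating on a boolean item-by-entity membership matrix. The sketch below is a minimal illustration under assumed data and names, not the actual IE implementation; it prints such a matrix and the entity intersections that can be read off it.

```python
# Minimal sketch (assumed names, not the actual IE code): the boolean
# item-by-entity membership matrix behind a matrix-based set
# visualization, plus the entity intersections it exposes.
from itertools import combinations

# Hypothetical data: entities (agents, users, tags) associated with
# each conference talk.
memberships = {
    "talk_1": {"agent_A", "user_bob"},
    "talk_2": {"agent_A", "tag_HCI"},
    "talk_3": {"agent_A", "user_bob", "tag_HCI"},
    "talk_4": {"tag_HCI"},
}
entities = sorted(set().union(*memberships.values()))

# Rows = items, columns = entities; 'x' marks membership.
print("item    " + "  ".join(entities))
for item, assoc in memberships.items():
    print(item + "  " + "  ".join("x" if e in assoc else "." for e in entities))

# Each combination of columns is an intersection a user can explore;
# count the items it contains.
for r in range(2, len(entities) + 1):
    for combo in combinations(entities, r):
        items = [t for t, a in memberships.items() if set(combo) <= a]
        if items:
            print(" & ".join(combo), "->", len(items), "item(s)")
```

Representing intersections as matrix rows rather than overlapping visual clusters is what keeps the display compact as the number of entities grows, which matches the deck's "scalable" framing.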
20. Research questions. • RQ1: Under which condition may a scalable visualisation increase user acceptance of recommended items? • RQ2: Does a scalable set visualisation increase perceived effectiveness of recommendations? • RQ3: Does a scalable set visualisation increase user trust in recommendations? • RQ4: Does a scalable set visualisation improve user satisfaction with a recommender system?
21. Evaluation: Intersections & Effectiveness. What do we call an "intersection"? We used the number of explorations of intersections and their effectiveness, defined as: Effectiveness = # bookmarked items / # explorations.
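As a minimal sketch of this metric (the log format and field names are assumptions for illustration, not the study's instrumentation):

```python
# Sketch of the effectiveness metric from slide 21, computed over a
# hypothetical exploration log.
explorations = [
    {"intersection": ("agent_A", "user_bob"), "bookmarked": True},
    {"intersection": ("agent_A",), "bookmarked": False},
    {"intersection": ("agent_A", "tag_HCI"), "bookmarked": True},
    {"intersection": ("agent_A",), "bookmarked": False},
]

def effectiveness(log):
    """Effectiveness = # bookmarked items / # explorations."""
    if not log:
        return 0.0
    return sum(e["bookmarked"] for e in log) / len(log)

print(f"effectiveness = {effectiveness(explorations):.2f}")  # 0.50
```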
22. Research Platform. The studies were conducted using Conference Navigator, a conference support system: http://halley.exp.sis.pitt.edu/cn3/
23. CN3 baseline interface. The CN3 baseline interface shows four ranked lists provided by four recommenders.
24. Evaluation setup. • Within-subjects study with 20 users. • Mean age: 32.9 years; SD: 6.32; female: 3. • Baseline: exploration of recommendations in CN3. • Second condition: exploration of recommendations in IE. • Data from two conferences: EC-TEL 2014 (172 items) and EC-TEL 2015 (112 items).
25. STUDY RESULTS. Description and analysis of the results of the user study.
26. Effectiveness. Effectiveness = # successes / # explorations. Effectiveness was higher when agents were combined with another entity.
27. Yield. Yield = # bookmarks / # items explored. Yield was higher when agents were combined with another entity.
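A small sketch of how yield could be computed per condition from an exploration log, grouping explorations by whether an agent was combined with another entity; the log format and names are hypothetical.

```python
# Sketch of the yield metric from slide 27, split by whether the
# agent was combined with another entity. Data is hypothetical.
from collections import defaultdict

explorations = [
    {"entities": ("agent_A",), "bookmarked": False},
    {"entities": ("agent_A", "user_bob"), "bookmarked": True},
    {"entities": ("agent_A", "tag_HCI"), "bookmarked": True},
    {"entities": ("agent_B",), "bookmarked": False},
]

# Yield = # bookmarks / # items explored.
groups = defaultdict(lambda: [0, 0])  # key -> [bookmarks, explored]
for e in explorations:
    key = "agent combined" if len(e["entities"]) > 1 else "agent alone"
    groups[key][0] += e["bookmarked"]
    groups[key][1] += 1

for key, (bookmarks, explored) in groups.items():
    print(f"{key}: yield = {bookmarks / explored:.2f}")
```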
28. Combining different perspectives. Comparing different numbers of perspectives (users, agents, tags): Pearson's correlation showed a positive correlation between the number of perspectives in an exploration and yield (r = 1.0, n = 3, p = .015).
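For reference, a test of this shape can be run with scipy; the yield values below are illustrative stand-ins, not the study's data. Note that with only three perspective levels (n = 3), the correlation rests on very few points.

```python
# Sketch of the correlation test from slide 28: Pearson's r between
# the number of perspectives in an exploration and the yield at that
# level. Yield values are hypothetical, not the study's data.
from scipy.stats import pearsonr

n_perspectives = [1, 2, 3]
yields = [0.21, 0.44, 0.70]  # illustrative yield per level

r, p = pearsonr(n_perspectives, yields)
print(f"r = {r:.3f}, p = {p:.3f}")
```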
29. Time. Median time (mm:ss) and steps of each task with IE and CN3.
30. Subjective feedback. Questionnaire results with statistical significance. Differences for the aspects "Fun" and "Choice satisfaction" were not significant after the Bonferroni-Holm correction.
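The Holm-Bonferroni step-down procedure referenced here can be sketched in a few lines; the aspect names and p-values below are hypothetical, chosen only so the outcome mirrors the slide's statement.

```python
# Sketch of the Holm-Bonferroni step-down correction from slide 30,
# applied to hypothetical questionnaire p-values.
def holm_bonferroni(pvals, alpha=0.05):
    """Return which hypotheses remain significant after Holm's
    step-down procedure."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    significant = [False] * len(pvals)
    for rank, i in enumerate(order):
        # Compare the k-th smallest p-value against alpha / (m - k).
        if pvals[i] <= alpha / (len(pvals) - rank):
            significant[i] = True
        else:
            break  # step-down: all larger p-values also fail
    return significant

aspects = ["Trust", "Perceived effectiveness", "Fun", "Choice satisfaction"]
pvals = [0.004, 0.010, 0.030, 0.040]  # illustrative, not the study's
for aspect, sig in zip(aspects, holm_bonferroni(pvals)):
    print(f"{aspect}: {'significant' if sig else 'not significant'}")
```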
31. CONCLUSIONS & FUTURE WORK.
32. Answering the research questions. RQ1: Under which condition may a scalable visualisation increase user acceptance of recommended items? • User acceptance of recommended items increased with the number of sources used. • Human-generated data, such as bookmarks of other users or tags, in addition to agent-generated recommendations resulted in a significant increase in effectiveness and yield. • Our data suggest that giving users insight into how recommendations relate to the bookmarks and tags of community members increases user acceptance. • We thus recommend combining automated and personal sources whenever possible.
33. Answering the research questions. RQ2: Does a scalable set visualisation increase perceived effectiveness of recommendations? We observed an increase in both perceived effectiveness (expressed in the questionnaire) and actual effectiveness (how frequently users bookmarked a recommended paper).
34. Answering the research questions. RQ3: Does a scalable set visualisation increase user trust in recommendations? Subjective data show that user trust increased with the set-based visualisation of recommendations.
35. Answering the research questions. RQ4: Does a scalable set visualisation improve user satisfaction with a recommender system? Overall, user satisfaction was higher when using the visualisation, suggesting this to be a key feature of the approach.
36. Simplicity vs. Effectiveness. • Users require more time to set their first bookmark in IE than in CN3. • After this 'training phase', operational efficiency does not differ. • Analysis of subjective data indicates that users perceived IE to be more effective and its recommendations more trustworthy than those given by CN3. • In addition, users perceived items resulting from their use of IE to be of higher quality and found the overall experience more satisfying.
37. Limitations & Future Work. • Limitations: a low number of participants (n=20); participants had a high degree of visualisation expertise (mean: 4.05, SD: 0.86). • Future work: analyze results from a larger-scale study at the Digital Humanities conference 2016; apply our approach to other domains (fusion of data sources or recommendation algorithms); consider other factors that interact with user satisfaction.
38. THANKS! QUESTIONS?

