A Federated Search and Social Recommendation Widget

2nd International Workshop on Social Recommender Systems at CSCW 2011, Hangzhou, China.


Transcript

  • 1. A FEDERATED SEARCH AND SOCIAL RECOMMENDATION WIDGET Sten Govaerts, Sandy El Helou, Erik Duval and Denis Gillet. Saturday 19 March 2011
  • 2. WHAT’S A WIDGET ?!?
  • 7. CONTEXT • Personal Learning Environment: • customizable • re-use, creation & mashup of tools, resources • enable users to access content • in different contexts
  • 10. ARCHITECTURE
  • 20. DEMO...
  • 24. PAGE RANK • Rank of node i: a node is important if and only if many other important nodes point to it. [Figure: web sites as nodes, hyperlinks as edges]
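The PageRank idea on this slide can be sketched as a simple power iteration over a hyperlink graph. This is a minimal illustrative version, not the presenters' implementation; the toy graph and damping factor are assumptions.

```python
# Minimal PageRank power iteration over a toy hyperlink graph.
# A node's rank grows when many high-rank nodes point to it.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping a node to the list of nodes it points to."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}  # teleport term
        for src, targets in links.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # dangling node: spread its mass evenly
                for v in nodes:
                    new[v] += damping * rank[src] / n
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
# C is linked from both A and B, so it ends up with the highest rank
```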
  • 25. OUR CASE • [Figure: a graph linking users (Sten, Sandy, Erik) to resources (R1–R5) via relation types: saved/shared, liked, disliked, and friend connections]
  • 26. NOW FOR MULTI-DIRECTIONAL, PERSONALIZED & CONTEXTUAL RANKING • A node is important to a particular set of nodes (representing the target user and the context) if and only if many important nodes connected to this root set, via important relation types, point to it
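One common way to realize the definition above is personalized PageRank: teleportation jumps back to the root set (target user + context) instead of a uniform distribution, and each edge carries a weight per relation type. This is a hedged sketch of that general technique; the edge weights, node names, and damping factor are illustrative assumptions, not the authors' exact algorithm.

```python
# Personalized, relation-weighted PageRank sketch: rank mass teleports
# back to the root set, so importance is relative to the target user.

def personalized_rank(edges, root_set, damping=0.85, iters=50):
    """edges: list of (src, dst, weight); root_set: nodes to personalize on."""
    nodes = {n for s, d, _ in edges for n in (s, d)} | set(root_set)
    out_w = {v: 0.0 for v in nodes}
    for s, d, w in edges:
        out_w[s] += w
    rank = {v: 1.0 / len(nodes) for v in nodes}
    for _ in range(iters):
        new = {v: 0.0 for v in nodes}
        for s, d, w in edges:  # push rank along weighted edges
            new[d] += damping * rank[s] * w / out_w[s]
        leaked = sum(damping * rank[v] for v in nodes if out_w[v] == 0)
        teleport = (1.0 - damping) + leaked  # dangling mass goes to roots
        for r in root_set:
            new[r] += teleport / len(root_set)
        rank = new
    return rank

# Illustrative edges: relation types become weights (values are made up).
edges = [("Sten", "R1", 1.0),    # Sten saved/shared R1
         ("Sten", "Sandy", 0.5), # friend connection
         ("Sandy", "R2", 1.0),   # Sandy liked R2
         ("Erik", "R3", 1.0)]    # Erik liked R3
ranks = personalized_rank(edges, root_set=["Sten"])
# R1 (directly connected to Sten) outranks R3 (only reachable from Erik)
```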
  • 27. EVALUATION • 15 PhD students at K.U. Leuven and EPFL. • What? • usability • user satisfaction • usefulness
  • 28. FIRST PHASE • current media search tools: Google & YouTube • understanding recommendations: 6/15 from like/dislike
  • 36. SECOND PHASE • only 14 participants (one less) • open questions • usefulness of recommendations: 11/14 pro • user satisfaction: System Usability Scale (SUS) & MS Desirability Toolkit • SUS score: 66.25% • 2 groups: K.U. Leuven: high (75%); EPFL + one K.U. Leuven: low (50%)
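The SUS scores above come from the standard System Usability Scale scoring rule (ten 1–5 Likert items; odd items contribute score − 1, even items 5 − score, and the sum is scaled by 2.5 to a 0–100 range). A minimal sketch, with made-up example responses rather than the study's data:

```python
# Standard SUS scoring: odd-numbered items are positively worded,
# even-numbered items negatively worded, so their scales are flipped.

def sus_score(responses):
    """responses: list of 10 answers, each on a 1..5 scale, in item order."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)  # index 0 = item 1
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```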
  • 38. WHY THE DIFFERENT SUS? • 1st phase by 2 interviewers
  • 41. WHY THE DIFFERENT SUS? • issues: • distraction from unrelated widgets’ UI updates • layout too dense • height of widgets too small • K.U. Leuven students had prior experience with iGoogle • not evaluating the widget but the whole experience...
  • 42. DESIRABILITY...
  • 55. RECOMMENDATIONS EVALUATION • compare recommendations to their favourite tool: Google • 2 groups with different queries • Precision at N = (# relevant items returned in top-N list) / (# total items returned) • Google: more relevant results, less variation in results • Google: avg. precision at 10 = 65% • widget: avg. precision at 10 = 50%
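The precision-at-N metric used in this comparison can be sketched in a few lines: the fraction of relevant items among the top-N returned results. The ranked list and relevance judgments below are invented for illustration.

```python
# Precision at N: of the first N results returned, what fraction
# were judged relevant?

def precision_at_n(results, relevant, n=10):
    """results: ranked list of item ids; relevant: set of relevant ids."""
    top = results[:n]
    if not top:
        return 0.0
    return sum(1 for item in top if item in relevant) / len(top)

ranked = ["r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10"]
judged_relevant = {"r1", "r2", "r4", "r6", "r9"}  # 5 of the top 10
print(precision_at_n(ranked, judged_relevant, n=10))  # -> 0.5
```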
  • 60. FIXES...
  • 64. FUTURE WORK • evaluation in a larger-scale, real-world setting (university + business) • evaluate user satisfaction with the widget and not the container • evaluate the recommendations further (based on use) • make recommendations transparent
  • 65. MORE VISUAL SEARCH...
  • 66. THANK YOU! QUESTIONS?...