
Interactive Recommender Systems

Interactive Recommender Systems: Bridging the gap between predictive algorithms and interactive user interfaces.
Invited talk at UFMG, Brasil. March 2017.

More on this topic:
Chen He, Denis Parra, and Katrien Verbert. 2016. Interactive recommender systems. Expert Syst. Appl. 56, C (September 2016), 9-27. DOI=http://dx.doi.org/10.1016/j.eswa.2016.02.013

  1. 1. Interactive Recommender Systems: Bridging the gap between predictive algorithms and interactive user interfaces. Denis Parra, Ph.D. in Information Sciences, Assistant Professor, CS Department, School of Engineering, Pontificia Universidad Católica de Chile. UFMG, March 29th 2017
  2. 2. Outline • Brief Personal Introduction • Computer Science at PUC Chile • Projects at SocVis Lab • Overview of Recommender Systems • Interactive Recommender Systems • Summary & Current & Future Work March 29th, 2017 D.Parra ~ UFMG– Invited Talk 2
  3. 3. 1-slide Geography Class: Chile • One third of the 16 million Chileans live in Santiago, the capital • But Chile is a looong country (4,000 km): the north is hot and dry, the south (Patagonia) is very cold. (Map labels: Very Hot!, Very Cold!, My hometown! Valdivia, Santiago, PUC Chile)
  4. 4. Personal Introduction 1/3 • B.Eng. and Engineering in Informatics from Universidad Austral de Chile (2004), Valdivia, Chile • Ph.D. in Information Sciences at University of Pittsburgh (2008-2013), Pittsburgh, PA, USA March 29th, 2017 D.Parra ~ UFMG– Invited Talk 4
  5. 5. Personal Introduction 2/3 • In 2009 I did an internship at Trinity College Dublin with researcher Alexander Troussov (IBM) • In 2010 I did another internship at Telefonica I+D, Barcelona, with Xavier Amatriain (now VP at Quora)
  6. 6. Personal Introduction 3/3 • 2013: Moved back to Santiago, Chile • Department of CS, School of Engineering, PUC. March 29th, 2017 D.Parra ~ UFMG– Invited Talk 6
  7. 7. DCC, Engineering, PUC Chile • DCC: Departamento de Ciencia de la Computación • Programs: BEng, Engineering title, Master, PhD • Research Areas: – Databases and Semantic Web – Information Technologies – Machine Learning and Computer Vision (GRIMA) – Software Engineering – Educational Technologies, MOOCs http://dcc.ing.puc.cl
  8. 8. Academic activities (2017) • Research topics: Recommender Systems/Personalization, Visualization, SNA. • Teaching: Data Mining, Recommender Systems, Information Visualization, SNA. • Leading the Social Computing and Visualization (SocVis) Lab. March 29th, 2017 D.Parra ~ UFMG– Invited Talk 8
  9. 9. SocVis Lab http://www.socvis.cl March 29th, 2017 D.Parra ~ UFMG– Invited Talk 9
  10. 10. People, Publications, News (ND) March 29th, 2017 D.Parra ~ UFMG– Invited Talk 10
  11. 11. Projects at SocVis • Mood-based music artists recommendation – Collaboration with J. O’Donovan (UCSB) – Student: Raimundo Herrera • IR on evidence-based Medicine – Help doctors on answering clinical questions – Student: I. Donoso, collaboration Epistemonikos • Artwork Recommendation – Collaboration with online artwork store UGallery – Students: P. Messina & V. Dominguez March 29th, 2017 D.Parra ~ UFMG– Invited Talk 11
  12. 12. Recommender Systems Class • Recommender Systems at PUC Chile http://web.ing.puc.cl/~dparra/classes/recsys-2016-2/ March 29th, 2017 D.Parra ~ UFMG– Invited Talk 12
  13. 13. INTRODUCTION TO RECSYS Recommender Systems * Danboard (Danbo): Amazon’s cardboard robot; in these slides it represents a recommender system *
  14. 14. Recommender Systems (RecSys) Systems that help (groups of) people to find relevant items in a crowded item or information space (McNee et al. 2006)
  15. 15. Why do we care about RecSys? • RecSys have gained popularity due to several domains & applications that require people to make decisions among a large set of items. March 29th, 2017 D.Parra ~ UFMG– Invited Talk 15
  16. 16. A lil’ bit of History • The first recommender systems were built at the beginning of the 90’s (Tapestry, GroupLens, Ringo) • Online contests such as the Netflix Prize (2006-2009) drew attention to recommender systems beyond Computer Science
  17. 17. The Recommendation Problem • The most popular way of presenting the recommendation problem is rating prediction: given a user × item rating matrix (User 1: Item 1 = 1, Item 2 = 5, …, Item m = 4; User 2: Item 1 = 5, Item 2 = 1, …, Item m = ?; User n: Item 1 = 2, Item 2 = 5, …, Item m = ?), predict the missing entries (?) • How good is my prediction?
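The "how good" question is typically answered with an error metric such as RMSE over held-out ratings; a minimal sketch with made-up numbers (not data from the talk):

```python
import numpy as np

# Held-out true ratings vs. the recommender's predictions (made-up numbers).
y_true = np.array([4.0, 2.0, 5.0, 3.0])
y_pred = np.array([3.5, 2.5, 4.0, 3.0])

# Root Mean Squared Error: penalizes large prediction errors quadratically.
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
```

Lower RMSE means the predicted ratings are closer to the held-out ground truth.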
  18. 18. Recommendation Methods • Without covering all possible methods, the two most typical classifications on recommender algorithms are Classification 1 Classification 2 - Collaborative Filtering - Content-based Filtering - Hybrid - Memory-based - Model-based March 29th, 2017 D.Parra ~ UFMG– Invited Talk 18
  19. 19. Collaborative Filtering (User-based KNN) • Step 1: Finding Similar Users (Pearson Corr.) (Figure: example rating vectors for the Active user and User_1, User_2, User_3)
  20. 20. Collaborative Filtering (User-based KNN) • Step 1: Finding Similar Users (Pearson Corr.): sim(u,n) = \frac{\sum_{i \in CR_{u,n}} (r_{u,i} - \bar{r}_u)(r_{n,i} - \bar{r}_n)}{\sqrt{\sum_{i \in CR_{u,n}} (r_{u,i} - \bar{r}_u)^2}\,\sqrt{\sum_{i \in CR_{u,n}} (r_{n,i} - \bar{r}_n)^2}}, where CR_{u,n} is the set of items co-rated by users u and n. Example similarities to the active user: user_1 = 0.4472136, user_2 = 0.49236596, user_3 = -0.91520863
  21. 21. Collaborative Filtering (User-based KNN) • Step 2: Ranking the items to recommend (Figure: ratings of the Active user, User_1 and User_2 on candidate items Item 1 and Item 2)
  22. 22. Collaborative Filtering (User-based KNN) • Step 2: Ranking the items to recommend: pred(u,i) = \bar{r}_u + \frac{\sum_{n \in neighbors(u)} sim(u,n) \cdot (r_{n,i} - \bar{r}_n)}{\sum_{n \in neighbors(u)} sim(u,n)}
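The two steps above can be sketched in a few lines of Python. This is a toy illustration with made-up ratings (not code from the talk); the Pearson means are taken over co-rated items, and the prediction centers each neighbor's rating on their overall mean:

```python
import numpy as np

# Toy user-item rating matrix (made-up data); np.nan marks unrated items.
R = np.array([
    [5, 4, 4, np.nan],   # active user: rating for the last item is unknown
    [1, 2, 1, 2],
    [5, 4, 4, 3],
    [2, 5, np.nan, 5],
])

def pearson(u, v):
    """Step 1: Pearson correlation over the items co-rated by u and v."""
    co = ~np.isnan(u) & ~np.isnan(v)
    du, dv = u[co] - u[co].mean(), v[co] - v[co].mean()
    denom = np.sqrt((du ** 2).sum()) * np.sqrt((dv ** 2).sum())
    return float((du * dv).sum() / denom) if denom > 0 else 0.0

def predict(R, user, item, k=2):
    """Step 2: mean-centered, similarity-weighted average over k nearest neighbors."""
    sims = np.array([pearson(R[user], R[n]) if n != user else -np.inf
                     for n in range(len(R))])
    neigh = [n for n in np.argsort(-sims)[:k] if not np.isnan(R[n, item])]
    num = sum(sims[n] * (R[n, item] - np.nanmean(R[n])) for n in neigh)
    den = sum(abs(sims[n]) for n in neigh)
    return float(np.nanmean(R[user]) + (num / den if den > 0 else 0.0))
```

For example, `predict(R, 0, 3)` fills the active user's missing rating from the two most similar neighbors.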
  23. 23. Pros/Cons of CF PROS: • Very simple to implement • Content-agnostic • More accurate than other techniques such as content-based filtering. There is also Item-based KNN. CONS: • Sparsity • Cold-start • New items
  24. 24. Content-Based Filtering • Can be traced back to techniques from IR, where the User Profile represents a query. user_profile = {w_1, w_2, …, w_m}, weighted using TF-IDF Doc_1 = {w_1, w_2, …, w_m} Doc_2 = {w_1, w_2, …, w_m} Doc_3 = {w_1, w_2, …, w_m} Doc_n = {w_1, w_2, …, w_m}
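A minimal content-based sketch of this idea, assuming scikit-learn is available; the mini-catalog, titles, and ratings below are hypothetical, not data from the talk:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# Hypothetical document catalog (titles are illustrative only).
docs = [
    "neural networks for collaborative filtering",
    "matrix factorization methods for recommendation",
    "deep learning in computer vision",
    "graph based recommendation with random walks",
]
ratings = {0: 5, 1: 4}  # the user liked docs 0 and 1

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)  # rows are L2-normalized TF-IDF vectors

# User profile: rating-weighted average of the liked documents' vectors.
profile = sum(r * X[i] for i, r in ratings.items()) / sum(ratings.values())

# Score all docs against the profile (dot product; ranking is what matters).
scores = linear_kernel(profile, X).ravel()
unseen = [i for i in range(len(docs)) if i not in ratings]
best = max(unseen, key=lambda i: scores[i])
```

Here `best` picks the unseen document sharing the most TF-IDF weight with the profile; the profile vector is not re-normalized, which leaves the ranking unchanged since all scores share the same scale.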
  25. 25. PROS/CONS of Content-Based Filtering PROS: • New items can be matched without previous feedback • It can exploit also techniques such as LSA or LDA • It can use semantic data (ConceptNet, WordNet, etc.) CONS: • Less accurate than collaborative filtering • Tends to overspecialization March 29th, 2017 D.Parra ~ UFMG– Invited Talk 25
  26. 26. Hybridization • Combine previous methods to overcome their weaknesses (Burke, 2002) March 29th, 2017 D.Parra ~ UFMG– Invited Talk 26
  27. 27. C2. Model/Memory Classification • Memory-based methods use the whole dataset in training and prediction. User- and Item-based CF are examples. • Model-based methods build a model during training and only use this model during prediction, which makes prediction much faster and more scalable
  28. 28. Model-based: Matrix Factorization Latent vector of the item Latent vector of the user SVD ~ Singular Value Decomposition March 29th, 2017 D.Parra ~ UFMG– Invited Talk 28
  29. 29. PROS/CONS of MF and latent factor models PROS: • So far, state of the art in terms of accuracy (these methods won the Netflix Prize) • Performance-wise, the best option nowadays: slow at training time, O((m+n)^3), compared to correlation, O(m^2 n), but linear at prediction time, O(m+n) CONS: • Recommendations are obscure: how to explain that certain “latent factors” produced the recommendation?
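The latent-factor idea can be sketched with a plain truncated SVD. This is a toy illustration that naively imputes missing ratings with item means so the SVD can run; production systems typically learn the factors from observed entries only (e.g. with SGD or ALS):

```python
import numpy as np

# Toy rating matrix (made-up); 0 marks a missing rating.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Naive imputation: replace missing entries with each item's mean rating.
mask = R > 0
item_means = R.sum(axis=0) / mask.sum(axis=0)
R_filled = np.where(mask, R, item_means)

# Truncated SVD: keep k latent factors.
U, s, Vt = np.linalg.svd(R_filled, full_matrices=False)
k = 2
user_factors = U[:, :k] * s[:k]   # latent vector of each user
item_factors = Vt[:k].T           # latent vector of each item

# Predicted rating = dot product of the user and item latent vectors.
R_hat = user_factors @ item_factors.T
```

The "obscure" con above is visible here: the k columns of `user_factors` have no predefined meaning, so explaining why `R_hat[u, i]` is high is not straightforward.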
  30. 30. Other paradigms and techniques • Recommendation as a graph problem: – Model the problem as diffusion or link prediction – Personalized PageRank (Kamvar et al, 2010), (Santos et al 2016) • Recommendation as a ranking problem: – Rather than predicting ratings, predict a Top-N list – Learning-to-rank approaches developed in the IR community – Karatzoglou et al. (2013), Shi et al. (2014), Macedo et al. (2015) March 29th, 2017 D.Parra ~ UFMG– Invited Talk 30
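The graph paradigm above can be illustrated with a small personalized PageRank computed by power iteration; the bipartite user-item graph below is made up, not taken from the cited papers:

```python
import numpy as np

# Toy user-item interaction graph (hypothetical data).
nodes = ["u1", "u2", "u3", "i1", "i2", "i3", "i4"]
idx = {n: k for k, n in enumerate(nodes)}
edges = [("u1", "i1"), ("u1", "i2"), ("u2", "i2"), ("u2", "i3"),
         ("u3", "i3"), ("u3", "i4")]

A = np.zeros((len(nodes), len(nodes)))
for a, b in edges:
    A[idx[a], idx[b]] = A[idx[b], idx[a]] = 1.0

P = A / A.sum(axis=0)      # column-stochastic transition matrix
v = np.zeros(len(nodes))
v[idx["u1"]] = 1.0         # restart distribution: all mass on the target user

alpha = 0.85               # damping factor, as in standard PageRank
r = v.copy()
for _ in range(200):       # power iteration to (practical) convergence
    r = alpha * (P @ r) + (1 - alpha) * v

# Recommend unseen items ranked by their personalized PageRank score.
seen = {i for u, i in edges if u == "u1"}
recs = sorted((n for n in nodes if n.startswith("i") and n not in seen),
              key=lambda n: r[idx[n]], reverse=True)
```

Items closer to the target user's neighborhood accumulate more random-walk mass, so `recs` favors items liked by users who share items with `u1`.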
  31. 31. (Important) RecSys Topics Not Covered in this Presentation • Learning to rank • Graph-based methods • Context-aware recommenders • Recommendation as next-item-in-sequence prediction • User-centric evaluation frameworks • Multi-armed Bandits • Reinforcement Learning • ... You need to take Professor Santos’ course :)
  32. 32. Rethinking the Recommendation Problem • User feedback is scarce: need for exploiting different sources of user preference and context March 29th, 2017 D.Parra ~ UFMG– Invited Talk 32
  33. 33. Rethinking the Recommendation Problem • Ratings are scarce: need for exploiting other sources of user preference • User-centric recommendation takes the problem beyond ratings and ranked lists: evaluate user engagement and satisfaction, not only RMSE/MAP March 29th, 2017 D.Parra ~ UFMG– Invited Talk 33
  34. 34. Rethinking the Recommendation Problem • Ratings are scarce: need for exploiting other sources of user preference • User-centric recommendation takes the problem beyond ratings and ranked lists: evaluate user engagement and satisfaction, not only RMSE/MAP • Several other dimensions to consider in the evaluation: novelty of the results, diversity, coverage (user and catalog), trust • Study the effect of interface characteristics: controllability, transparency, explainability.
  35. 35. My Take on RecSys Research (2009 ~) March 29th, 2017 D.Parra ~ UFMG– Invited Talk 35
  36. 36. My Work on RecSys • In my research I have contributed to RecSys by: – Utilizing other sources of user preference (Social Tags) – Exploiting implicit feedback for recommendation and for mapping explicit feedback – Studying interactive interfaces: the effect of visualizations and user interaction on user satisfaction, perception of trust and accuracy. • Nowadays: Focus on interactive exploratory interfaces for recommender systems
  37. 37. This is not only My work :) • Dr. Peter Brusilovsky, University of Pittsburgh, PA, USA • Dr. Alexander Troussov, IBM Dublin and TCD, Ireland • Dr. Xavier Amatriain, TID / Netflix / Quora • Dr. Christoph Trattner, NTNU, Norway • Dr. Katrien Verbert, KU Leuven, Belgium • Dr. Leandro Balby-Marinho, UFCG, Brasil
  38. 38. VISUALIZATION + USER CONTROLLABILITY Part of this work with Katrien Verbert
  39. 39. Human Factors in RecSys • Transparency and Explainability: Konstan et al. (2000), Tintarev and Masthoff (2010) • Frameworks to evaluate RecSys user studies: ResQue (Pu et al., 2010), Knijnenburg et al. (2012) • Controllability and Inspectability: O’Donovan (2008), Knijnenburg et al. (2010, 2012), Hijikata (2012), Ekstrand et al. (2015) • Visualization and Interfaces: O’Donovan (2008 - ..), Verbert et al. (2013), Parra et al. (2014), Loepp et al. (2014, 2017)
  40. 40. Visualization & User Controllability • Motivation: Can user controllability and explainability improve user engagement and satisfaction with a recommender system? • Specific research question: How might intersections of contexts of relevance (of recommendation algorithms) be better represented to improve user experience with the recommender?
  41. 41. Traditional RecSys Interface MovieLens: example of traditional recommender list March 29th, 2017 D.Parra ~ UFMG– Invited Talk 41
  42. 42. Explanations and Control • GoodReads: Book recommender system (Figure labels: Options of User Control, Explainability, Recommendations of books)
  43. 43. PeerChooser (2008) Controllability in CF March 29th, 2017 D.Parra ~ UFMG– Invited Talk 43 O’Donovan et al. “PeerChooser: Visual Interactive Recommendation” (2008)
  44. 44. SmallWorlds: Expanded Explainability March 29th, 2017 D.Parra ~ UFMG– Invited Talk 44 Gretarsson et al. “SmallWorlds: Visualizing social recommendations” (2010)
  45. 45. TasteWeights: Hybrid Control and Inspect Bostandjiev et al. “TasteWeights: A Visual Interactive Hybrid Recommender System” (2012) Controllability: Sliders that let users control the importance of preferences and contexts Inspectability: lines that connect recommended items with contexts and user preferences
  46. 46. IUI 2017 • Loepp et al. (2017) March 29th, 2017 D.Parra ~ UFMG– Invited Talk 46
  47. 47. More Details? Check our survey March 29th, 2017 D.Parra ~ UFMG– Invited Talk 47 He, C., Parra, D., & Verbert, K. (2016). Interactive recommender systems: a survey of the state of the art and future research challenges and opportunities. Expert Systems with Applications, 56, 9-27.
  48. 48. Visualization & User Controllability • Motivation: Can user controllability and explainability improve user engagement and satisfaction with a recommender system? • Specific research question: How overlapping contexts of relevance (of recommendation algorithms) might be better represented for user experience with the recommender? • Our scenario: Conference articles March 29th, 2017 D.Parra ~ UFMG– Invited Talk 48
  49. 49. Research Platform • The studies were conducted using Conference Navigator, a Conference Support System • Our goal was recommending conference talks Program Proceedings Author List Recommendations http://halley.exp.sis.pitt.edu/cn3/ March 29th, 2017 D.Parra ~ UFMG– Invited Talk 49
  50. 50. TalkExplorer – IUI 2013 • Adaptation of Aduna Visualization to CN • Main research question: Does fusion (intersection) of contexts of relevance improve user experience? March 29th, 2017 D.Parra ~ UFMG– Invited Talk 50
  51. 51. TalkExplorer - I Entities Tags, Recommender Agents, Users March 29th, 2017 D.Parra ~ UFMG– Invited Talk 51
  52. 52. TalkExplorer - II • Canvas Area: Intersections of Different Entities (Figure labels: Recommender, Recommender, User, Cluster with intersection of entities, Cluster (of talks) associated to only one entity)
  53. 53. TalkExplorer - III Items Talks explored by the user March 29th, 2017 D.Parra ~ UFMG– Invited Talk 53
  54. 54. Our Assumptions • Items which are relevant in more than one aspect could be more valuable to the users • Displaying multiple aspects of relevance visually is important for users in the process of item exploration
  55. 55. TalkExplorer Studies I & II • Study I – Controlled Experiment: Users were asked to discover relevant talks by exploring the three types of entities: tags, recommender agents and users. – Conducted at Hypertext and UMAP 2012 (21 users) – Subjects familiar with Visualizations and RecSys • Study II – Field Study: Users were left free to explore the interface. – Conducted at LAK 2012 and ECTEL 2013 (18 users) – Subjects familiar with visualizations, but not much with RecSys
  56. 56. Evaluation: Intersections & Effectiveness • What do we call an “Intersection”? • We used #explorations on intersections and their effectiveness, defined as: Effectiveness = March 29th, 2017 D.Parra ~ UFMG– Invited Talk 56
  57. 57. Results of Studies I & II • Effectiveness increases with intersections of more entities • Effectiveness wasn’t affected in the field study (study 2) • … but exploration distribution was affected March 29th, 2017 D.Parra ~ UFMG– Invited Talk 57
  58. 58. More Details About TalkExplorer • Verbert, K., Parra, D., Brusilovsky, P., & Duval, E. (2013). Visualizing recommendations to support exploration, transparency and controllability. In Proceedings of the 2013 International Conference on Intelligent User Interfaces (pp. 351-362). ACM. • Verbert, K., Parra, D., & Brusilovsky, P. (2016). Agents Vs. Users: Visual Recommendation of Research Talks with Multiple Dimensions of Relevance. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(2), 11.
  59. 59. SETFUSION: VENN DIAGRAM FOR A USER-CONTROLLABLE INTERFACE
  60. 60. SetFusion – IUI 2014 March 29th, 2017 D.Parra ~ UFMG– Invited Talk 60
  61. 61. SetFusion I Traditional Ranked List: Papers sorted by relevance. It combines 3 recommendation approaches.
  62. 62. SetFusion - II Sliders: Allow the user to control the importance of each data source or recommendation method. Interactive Venn Diagram: Allows the user to inspect and filter recommended papers. Actions available: - Filter the item list by clicking on an area - Highlight a paper by mousing over a circle - Scroll to a paper by clicking on a circle - Indicate bookmarked papers
  63. 63. Study: iConference • A laboratory within-subjects study. 40 subjects ($12/hour, avg. 1.5 hours). • In the preference-elicitation phase, people had no limit on papers; under the RecSys interfaces, the minimum was 15. • In bookmarking, subjects could pick items relevant to a) themselves, b) themselves and others, and c) only to others.
  64. 64. Study: Population and General Stats (Non-controllable vs. Controllable): # Total bookmarks 638 vs. 625; # Average bookmarks/user 15.95 vs. 15.63; # Average rating 2.48±0.089 vs. 2.46±0.076. Gender: Female 17, Male 23. Age: 31.75±6.5. Native Speaker: Yes 10, No 30. Subject Occupation: Information Sc. (16), Library Sc. (9), Comp. Sc. (6), Telecomm (3), other (6). PCA on 15 pre-questionnaire questions yielded 4 Factors (User Characteristics): Expertise in domain, Engaged with iSchools, Trusting Propensity, Experience w/RecSys; Dropped: Experience w/CN
  65. 65. Study 2: Results (1) Variables Comment User Engagement Significant Talks explored, clicks (nbr. actions) , time spent on task All significantly higher in controllable interface User Experience Significant MAP Significantly higher in controllable interface User Characteristics Significant Trusting prop.:increases use of Venn diagram and MAP Native speaker: Decreases time spent on task Gender: Being male increases use of sliders Age: Each additional year decreases use of sliders Trusting propensity confirms results of previous studies March 29th, 2017 D.Parra ~ UFMG– Invited Talk 65
  66. 66. Rating per method – Effect of Visuals March 29th, 2017 D.Parra ~ UFMG– Invited Talk 66
  67. 67. Gender Differences on SetFusion? March 29th, 2017 D.Parra ~ UFMG– Invited Talk 67
  68. 68. Study: Results (2) Post-session surveys (Controllable vs. Non-controllable): Understandability 4.05±0.09*** vs. 2.95±0.16; Satisfaction with interface 4.28±0.09*** vs. 3.4±0.16; Confidence of not missing relevant talks 3.9±0.11*** vs. 3.13±0.15; Intention: I would use it again 4.23±0.09*** vs. 3.45±0.15; Intention: I would recommend the system to colleagues 4.28±0.09*** vs. 3.48±0.16. Controllable only: The Venn diagram visualization was useful to identify talks recommended by a specific method or by a combination of recommendation methods 4.35±0.11; The Venn diagram visualization supported explainability 4.08±0.13; Satisfaction due to ability to control 4.05±0.12; Perception of Control with Sliders 4.03±0.13
  69. 69. Study : Results (3) Non-control Controllable Both None Which interface did you prefer? 0 36 4 0 Non-control Controllable None Both Which interface would you suggest to implement permanently in CN? 1 33 1 5 “I like the Venn diagram especially because most papers I was interested in fell in the same intersections, so it was pretty easy to find and bookmark” “I thought the controllable one adds unnecessary complication if the list is not very long” “I prefer the sliders (over Venn diagram) because I have used a system before to control search results with a similar widget, so it was more familiar to me.” March 29th, 2017 D.Parra ~ UFMG– Invited Talk 69
  70. 70. Study Takeaways • User Engagement: Controllable interface significantly drives more user engagement (objective and subjective metrics) • User Experience: Controllable interface improves user experience by allowing user to interactively control ranking (MAP) and improving explainability. • User characteristics: Trusting propensity affects positively engagement and experience, engagement with iSchools shows the opposite. Males have a tendency to prefer sliders over Venn diagram to control and filter. March 29th, 2017 D.Parra ~ UFMG– Invited Talk 70
  71. 71. More Details on SetFusion? • Effect of other variables: gender, age, experience in the domain, familiarity with the system • Check our paper in the IJHCS, “User-controllable Personalization: A Case Study with SetFusion”: a controlled laboratory study comparing SetFusion with a traditional ranked list
  72. 72. Study 2 – UMAP 2013 • Field Study: let users freely explore the interface - ~50% (50 users) tried the SetFusion recommender - 28% (14 users) bookmarked at least one paper - Users explored 14.9 talks and bookmarked 7.36 talks on average. Distribution of bookmarks per method or combination of methods: A 15 (16%), AB 7 (7%), ABC 9 (9%), AC 26 (27%), B 18 (19%), BC 4 (4%), C 17 (18%)
  73. 73. Hybrid RecSys: Visualizing Intersections Clustermap Venn diagram • Clustermap vs. Venn Diagram March 29th, 2017 D.Parra ~ UFMG– Invited Talk 73
  74. 74. TalkExplorer vs. SetFusion • Comparing distributions of explorations In studies 1 and 2 over TalkExplorer we observed an important change in the distribution of explorations. March 29th, 2017 D.Parra ~ UFMG– Invited Talk 74
  75. 75. TalkExplorer vs. SetFusion • Comparing distributions of explorations across the field studies: - In TalkExplorer, 84% of the explorations over intersections were performed over clusters of 1 item - In SetFusion, it was only 52%, compared to 48% (18% + 30%) over multiple intersections; the difference is not statistically significant
  76. 76. Summary & Conclusions • We showed that intersections of several contexts of relevance help to discover relevant items • The visual paradigm used can have a strong effect on user behavior: we need to keep working on visual representations that promote exploration without increasing the cognitive load over the users March 29th, 2017 D.Parra ~ UFMG– Invited Talk 76
  77. 77. Limitations & Future Work • Apply our approach to other domains (fusion of data sources or recommendation algorithms) • For SetFusion, find alternatives to scale the approach to more than 3 sets, potential alternatives: – Clustering and – Radial sets • Consider other factors that interact with the user satisfaction: – Controllability by itself vs. minimum level of accuracy March 29th, 2017 D.Parra ~ UFMG– Invited Talk 77
  78. 78. Current Work on Interfaces • MoodPlay – With Ivana Andjelkovic & John O’Donovan (UCSB) March 29th, 2017 D.Parra ~ UFMG– Invited Talk 78 Andjelkovic, I., Parra, D., & O'Donovan, J. (2016, July). Moodplay: Interactive Mood-based Music Discovery and Recommendation. In Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization (pp. 275-279). ACM.
  79. 79. MoodPlay • https://www.youtube.com/watch?v=eEdo32oOmcE March 29th, 2017 D.Parra ~ UFMG– Invited Talk 79
  80. 80. Emotion Models • Russell’s model of emotions (1980) • GEMS (2008)
  81. 81. Moods and Music: the GEMS model March 29th, 2017 D.Parra ~ UFMG– Invited Talk 81
  82. 82. System Architecture March 29th, 2017 D.Parra ~ UFMG– Invited Talk 82
  83. 83. Hybrid Recommendation Approach March 29th, 2017 D.Parra ~ UFMG– Invited Talk 83
  84. 84. User Study • Conducted on Mechanical Turk, 4 conditions March 29th, 2017 D.Parra ~ UFMG– Invited Talk 84
  85. 85. Interactions March 29th, 2017 D.Parra ~ UFMG– Invited Talk 85
  86. 86. Interaction Stats March 29th, 2017 D.Parra ~ UFMG– Invited Talk 86
  87. 87. Diversity Consumption March 29th, 2017 D.Parra ~ UFMG– Invited Talk 87
  88. 88. User Prior Mood and Artist Mood March 29th, 2017 D.Parra ~ UFMG– Invited Talk 88
  89. 89. Post-Study Survey March 29th, 2017 D.Parra ~ UFMG– Invited Talk 89 Accuracy
  90. 90. Post-Study Survey March 29th, 2017 D.Parra ~ UFMG– Invited Talk 90 Diversity
  91. 91. Post-Study Survey March 29th, 2017 D.Parra ~ UFMG– Invited Talk 91 Confusing Interface
  92. 92. Post-Study Survey March 29th, 2017 D.Parra ~ UFMG– Invited Talk 92 Easy to use
  93. 93. CONCLUSIONS (& CURRENT) & FUTURE WORK
  94. 94. Challenges in Interactive RecSys • Objectives • Controllability • Context-aware recommendations • Privacy • Visualization Techniques • Interaction Techniques • Conversation Interfaces • Evaluation Methodology March 29th, 2017 D.Parra ~ UFMG– Invited Talk 94
  95. 95. Future Work • Opportunities for using new devices (sensors on smartphones, EEG) • Although new devices can capture many new types of data, there is still a lot to be done with data we already produce but haven’t consumed (user logs on social web sites, etc.) 3/29/17 D. Parra, FuturePDtalk, UMAP 2016 95
  96. 96. MoodPlay in the Chilean news
  97. 97. MoodPlay in the Chilean news Moodplay as therapy?
  98. 98. Moodplay as therapy? • S. Koelsch. A neuroscientific perspective on music therapy. Annals of the New York Academy of Sciences, 1169(1):374–384, 2009. • Music can help modulate certain mental states.
  99. 99. Previous work: MIT Mood Meter • http://moodmeter.media.mit.edu/ 3/29/17 D. Parra, FuturePDtalk, UMAP 2016 99
  100. 100. Input Data: from Social Networks? • Michelle Zhou’s personality profile 3/29/17 D. Parra, FuturePDtalk, UMAP 2016 100
  101. 101. Visual emotion detection • https://github.com/auduno/clmtrackr 3/29/17 D. Parra, FuturePDtalk, UMAP 2016 101
  102. 102. Using EEG (BCI) EMOTIV http://emotiv.com/epoc/ NEUROSKY http://neurosky.com/biosensors/
  103. 103. Heatmaps to Moodplay 3/29/17 D. Parra, FuturePDtalk, UMAP 2016 103
  104. 104. THANKS! dparra@ing.puc.cl
  105. 105. EpistAid • Epistemonikos: Evidence-based Medicine • Physicians answer clinical questions March 29th, 2017 D.Parra ~ UFMG– Invited Talk 105
  106. 106. EpistAid 2 • Process of building evidence matrices is really slow March 29th, 2017 D.Parra ~ UFMG– Invited Talk 106
  107. 107. EpistAid: IUI to support physicians • Study 1: Relevance Feedback to find missing papers faster, off-line evaluation • Study 2: Study with physicians at PUC March 29th, 2017 D.Parra ~ UFMG– Invited Talk 107
