Narrative-Driven Recommendation for Complex Leisure Needs

  1. Narrative-Driven Recommendation for Complex Leisure Needs
     Marijn Koolen
     Royal Netherlands Academy of Arts and Sciences, Humanities Cluster
     Google CSR Workshop, London, 2019-08-28
  2. Overview
     1. Scenario: Narrative-Driven Recommendation
     2. Analyzing Forum Discussions for 4 Leisure Domains
     3. Lessons for System Design
  3. 1. Narrative-Driven Recommendation
  4. History of NDR
     ● Evolved from the Social Book Search evaluation campaign (INEX/CLEF 2011-2016)
     ● Evolved into analyzing forum requests as complex information needs
       ○ Books: Koolen et al. (CIKM 2012, ECIR 2014, ECIR 2015)
       ○ Movies and Books: Bogers et al. (iConference 2018)
       ○ Games: Bogers et al. (iConference 2019)
       ○ Books, Movies, Games, Music: Bogers et al. (in preparation)
     ● Scenario: Narrative-Driven Recommendation
       ○ Bogers & Koolen (ACM RecSys 2017, KaRS 2018)
  5. Scenario
     ● NDR (Bogers & Koolen 2017, 2018) is a complex scenario:
       ○ Narrative description of desired aspects of items
       ○ User preference info (user profile or example items)
     ● Related to Conversational Search & Recommendation
       ○ But human-directed, with complex and often vaguely expressed (latent) needs
       ○ Interactions with conversational agents tend to be simpler (Kang et al. 2017)
         ■ E.g. more concrete aspects (genre, creator, title, year, …)
  6. Search-Recommendation Continuum
     ● Some requests are pure search
       ○ “sci-fi books about space traders”
     ● Some requests are almost pure recommendation
       ○ “Something as good as David Copperfield”
     ● The majority of requests mix search and recommendation
       ○ “historical fiction set in 17th c. England that I’ll like based on my profile”
  7. Continuum and Latent Interests
     ● In the book domain, latent factors relate to the amount of reading experience (Koolen et al. 2015)
       ○ Novice readers ask for recommendations based on example books (interests are latent)
       ○ Experienced readers describe detailed content aspects (interests are known)
       ○ As forum conversations develop: start from latent factors and discuss examples to tease out more concrete aspects of interest
     ● How does this work in other leisure domains?
     ● And what relevance aspects do discussion forum users mention?
  8. 2. Analyzing Forum Discussions
  9. Comparing Domains
     ● We developed a relevance aspect model for leisure needs
       ○ Grounded in actual forum requests
     ● Data from a range of discussion forums
       ○ Books: 503 requests (LibraryThing forums)
       ○ Movies: 538 requests (IMDB forums)
       ○ Games: 521 requests (reddit)
       ○ Music: 589 requests (reddit)
  10. 3. Lessons for System Design
  11. Capturing Needs - Conversational Recommenders
     ● A complex narrative is difficult to interpret algorithmically
       ○ Possible interaction: conversational models for iterative structuring
     ● Kang et al. (RecSys 2017) examine queries in conversational movie recommendation
       ○ Many follow-up queries for refinement (clarify, constrain) and reformulation
       ○ Objective: genre (“superhero movies”), deep features (“movies with open endings or plot twists”)
       ○ Subjective: emotion (“sad movie”), quality (“interesting characters, clever plot”), familiarity (“what would you recommend to a fan of Big Lebowski?”)
  12. The Effectiveness of User Reviews
     ● User reviews are highly effective
       ○ for Narrative-Driven Recommendation (Bogers & Koolen 2018) ...
       ○ ... and also for many search tasks (Koolen et al. 2012, Koolen 2014, Koolen et al. 2015)
     ● Why?
       ○ Written in the language of the user (same as the request)
       ○ Discuss a broad range of aspects ...
       ○ ... including the reading/watching/listening/playing experience!
     ● Potential for a conversational mode:
       ○ Background model for how people talk about books/films/games/music
       ○ “This review makes me want to buy this film.”
         i. “What aspect of this review triggered your interest?”
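The retrieval idea behind this slide — matching a narrative request against the text of items' user reviews rather than catalogue metadata — can be sketched as a toy BM25 index over concatenated reviews. This is a minimal illustrative sketch (item names, reviews, and the `ReviewIndex` class are invented here), not the Social Book Search setup itself.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

class ReviewIndex:
    """Toy BM25 index where each item is represented by its concatenated user reviews."""

    def __init__(self, item_reviews, k1=1.2, b=0.75):
        self.k1, self.b = k1, b
        self.docs = {item: tokenize(" ".join(revs)) for item, revs in item_reviews.items()}
        self.tf = {item: Counter(toks) for item, toks in self.docs.items()}
        self.N = len(self.docs)
        self.avgdl = sum(len(t) for t in self.docs.values()) / self.N
        # document frequency: in how many items' review text a term occurs
        self.df = Counter(term for tf in self.tf.values() for term in tf)

    def score(self, item, query_terms):
        s, dl = 0.0, len(self.docs[item])
        for term in query_terms:
            f = self.tf[item][term]
            if f == 0:
                continue
            idf = math.log(1 + (self.N - self.df[term] + 0.5) / (self.df[term] + 0.5))
            s += idf * f * (self.k1 + 1) / (f + self.k1 * (1 - self.b + self.b * dl / self.avgdl))
        return s

    def search(self, request, k=3):
        q = tokenize(request)
        return sorted(self.docs, key=lambda item: self.score(item, q), reverse=True)[:k]
```

Because reviews use the same everyday vocabulary as forum requests, even this crude lexical match can connect a narrative need to an item that catalogue metadata would miss.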
  13. Explanations From Reviews
     ● Reviews as a source of explanations
       ○ “Several reviewers say this book/film changed their views.”
     ● Lu et al. (RecSys 2018) used adversarial sequence-to-sequence learning to generate explanations from user reviews
       ○ Generate a review for a new item based on the user’s own reviews of consumed items
     ● Possible interaction:
       ○ User mentions examples, system generates recommendations + explanations
       ○ User points out aspects of the explanations they’re interested in, system refines recommendations and explanations
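A simpler, extractive stand-in can illustrate the interaction: pick the review sentence that best overlaps with the aspects the user said they care about. Note that Lu et al. *generate* explanation text with an adversarial seq2seq model; the sketch below (function name and data hypothetical) only selects existing sentences.

```python
import re

def word_set(text):
    """Lowercased word set, for crude lexical overlap."""
    return set(re.findall(r"[a-z']+", text.lower()))

def pick_explanation(reviews, liked_aspects):
    """Extractive stand-in for review-based explanation: return the review
    sentence with the largest word overlap with the user's stated aspects."""
    aspects = word_set(liked_aspects)
    sentences = [s.strip()
                 for review in reviews
                 for s in re.split(r"(?<=[.!?])\s+", review)
                 if s.strip()]
    return max(sentences, key=lambda s: len(word_set(s) & aspects))
```

In the interaction loop sketched on the slide, the user's reaction to the selected sentence ("that aspect interests me") would feed back as new `liked_aspects` for the next refinement round.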
  14. Experience, Appeal and Impact
     ● How to extract information about experience or impact?
       ○ Saricks (2005) identified appeal elements based on style, characterization, plot and pace
     ● We are currently developing an impact model (for reviews in Dutch)
       ○ Identify and extract expressions of impact from user reviews (with Peter Boot, in preparation)
         i. “I couldn’t put the book down and finished it in one go.”
         ii. “I really started to understand the main character.”
       ○ Impact type: emotional (not just binary sentiment) and cerebral (changing your views, motivating you, bringing up memories)
       ○ Impact cause: e.g. style, narrative, reflection
       ○ On 400K reviews, we find an inverse relationship between narrative absorption and stylistic/reflective impact
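A rule-based first pass at this kind of impact extraction can be sketched with cue patterns per impact type. The labels and English patterns below are purely illustrative; the actual model targets Dutch reviews and is far richer than a keyword list.

```python
import re

# Illustrative cue patterns per impact type (hypothetical, English-only).
IMPACT_PATTERNS = {
    "narrative_absorption": [r"couldn't put (it|the book) down",
                             r"in one (go|sitting)",
                             r"lost track of time"],
    "emotional": [r"\bcried\b", r"\bmoved me\b", r"heart[- ]?breaking"],
    "reflective": [r"changed (my|their) (views?|mind)",
                   r"made me (think|reflect)"],
}

def detect_impact(sentence):
    """Return the set of impact types whose cue patterns match the sentence."""
    s = sentence.lower()
    return {label
            for label, patterns in IMPACT_PATTERNS.items()
            if any(re.search(p, s) for p in patterns)}
```

Running such a detector over a large review corpus is one way to get the per-review impact labels behind a finding like the absorption-vs-reflection relationship on the slide, though a learned classifier would generalize far beyond fixed cues.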
  15. Conclusions
     ● Narrative-Driven Recommendation is a challenging task
       ○ Complex but common recommendation/search need
       ○ Requires a combination of data sources and algorithms to solve
       ○ A conversational mode is useful for identifying and ranking relevance aspects
       ○ User-generated content is essential for good performance
     ● Next steps: other domains, more advanced NLU analysis, interaction models
  16. References
     ● Bogers & Koolen (RecSys 2017). Defining and Supporting Narrative-driven Recommendation
     ● Bogers & Koolen (KaRS 2018). “I’m looking for something like …”: Combining Narratives and Example Items for Narrative-driven Book Recommendation
     ● Bogers et al. (iConference 2018). "What was this Movie About this Chick?" - A Comparative Study of Relevance Aspects in Book and Movie Discovery
     ● Bogers et al. (iConference 2019). "Looking for an amazing game I can relax and sink hours into..." - A Study of Relevance Aspects in Video Game Discovery
     ● Bogers & Petras (iConference 2015). Tagging vs. Controlled Vocabulary: Which is More Helpful for Book Search?
     ● Bogers & Petras (iConference 2017). An In-depth Analysis of Tags and Controlled Metadata for Book Search
     ● Kang et al. (RecSys 2017). Understanding How People Use Natural Language to Ask for Recommendations
     ● Koolen et al. (CIKM 2012). Social Book Search: Comparing Topical Relevance Judgements and Book Suggestions for Evaluation
     ● Koolen (ECIR 2014). “User reviews in the search index? That’ll never work!”
     ● Koolen et al. (ECIR 2015). Looking for Books in Social Media: An Analysis of Complex Search Requests
     ● Kula (CBRecSys 2015). Metadata Embeddings for User and Item Cold-start Recommendations
     ● Lu et al. (RecSys 2018). Why I like it: Multi-task Learning for Recommendation and Explanation
     ● Reuter (JASIST 2007). Assessing Aesthetic Relevance: Children's Book Selection in a Digital Library
     ● Saricks (2005). Readers' Advisory Service in the Public Library
     ● Weston et al. (IJCAI 2011). WSABIE: Scaling Up To Large Vocabulary Image Annotation
  17. Thank You!
     ● Acknowledgements: collaborative work with Toine Bogers, Peter Boot, Maria Gäde, Jaap Kamps, Vivien Petras, Mette Skov
     ● Questions?