Best Practices in Recommender System Challenges

Recommender System Challenges such as the Netflix Prize, KDD Cup, etc. have contributed vastly to the development and adoptability of recommender systems. Each year a number of challenges or contests are organized covering different aspects of recommendation. In this tutorial and panel, we present some of the factors involved in successfully organizing a challenge, whether for reasons purely related to research, industrial challenges, or to widen the scope of recommender systems applications.

Published in: Technology, Education

Transcript

  • 1. Recommender Systems Challenges: Best Practices Tutorial & Panel. ACM RecSys 2012, Dublin, September 10, 2012
  • 2. About us
    o Alan Said - PhD Student @ TU-Berlin
      - Topics: RecSys Evaluation
      - @alansaid
      - URL: www.alansaid.com
    o Domonkos Tikk - CEO @ Gravity R&D
      - Topics: Machine Learning methods for RecSys
      - @domonkostikk
      - http://www.tmit.bme.hu/tikk.domonkos
    o Andreas Hotho - Prof. @ Uni. Würzburg
      - Topics: Data Mining, Information Retrieval, Web Science
      - http://www.is.informatik.uni-wuerzburg.de/staff/hotho
  • 3. General Motivation
    "RecSys is nobody's home conference. We come from CHI, IUI, SIGIR, etc." - Joe Konstan, RecSys 2010
    RecSys is our home conference - we should evaluate accordingly!
  • 4. Outline
    o Tutorial
      - Introduction to concepts in challenges
      - Execution of a challenge
      - Conclusion
    o Panel: experiences of participating in and organizing challenges
      - Yehuda Koren
      - Darren Vengroff
      - Torben Brodt
  • 5. What is the motivation for RecSys Challenges? Part 1
  • 6. Setup - information overload: users, the content of the service provider, and the recommender in between
  • 7. Motivation of stakeholders
    o User: find relevant content, easy navigation, serendipity, discovery
    o Service provider: increase revenue, target the user with the right content, engage users, facilitate the goals of stakeholders, get recognized
  • 8. Evaluation in terms of the business: business reporting, online evaluation (A/B tests), and casting the problem into a research problem
  • 9. Context of the contest
    o Selection of metrics
    o Domain dependent
    o Offline vs. online evaluation
    o IR-centric evaluation: RMSE, MAP, F1
  • 10. Latent user needs
  • 11. RecSys Competition Highlights (the slide compares several contests along dimensions such as: large scale, organization, metric (RMSE, MAP@500), prize, 3-stage setup, selection by review, runtime limits, real traffic and revenue increase vs. offline evaluation, metadata availability, larger dimensions, no ratings)
  • 12. Recurring Competitions
    o ACM KDD Cup (2007, 2011, 2012)
    o ECML/PKDD Discovery Challenge (2008 onwards)
      - 2008 and 2009: tag recommendation in social bookmarking (incl. an online evaluation task)
      - 2011: video lectures
    o CAMRa (2010, 2011, 2012)
  • 13. Does size matter?
    o Yes! – real-world users
    o In research – to some extent
  • 14. Research & Industry
    Important for both:
    o Industry has the data, and research needs data
    o Industry needs better approaches, but this costs
    o Research has ideas but has no systems and/or data to do the evaluation
    Don't exploit participants. Don't be too greedy.
  • 15. Running a Challenge Part 2
  • 16. Standard Challenge Setting
    o The organizer defines the recommender setting, e.g. tag recommendation in BibSonomy
    o Provide data:
      - with features, or
      - raw data, or
      - let participants construct their own data
    o Fix the way the evaluation is done (see the scoring sketch below)
    o Define the goal, e.g. reach a certain improvement (F1)
    o Motivate people to participate, e.g. promise a lot of money ;-)
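    As an illustration of the kind of scoring script an organizer might fix in advance for a tag-recommendation task like the BibSonomy example, here is a minimal sketch of F1@k evaluation. The function names, data layout, and toy data are assumptions for illustration, not the actual challenge code.

```python
# Minimal sketch of an organizer-side scoring script for a tag-recommendation
# challenge. Data layout and names are illustrative assumptions.

def f1_at_k(recommended, relevant, k=5):
    """Precision, recall and F1 for the top-k recommended tags of one post."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

def evaluate_submission(submission, ground_truth, k=5):
    """Average F1@k over all posts in the hidden test set."""
    scores = [f1_at_k(submission.get(post_id, []), tags, k)[2]
              for post_id, tags in ground_truth.items()]
    return sum(scores) / len(scores)

# Example: two test posts, one submission
truth = {"post1": ["python", "ml"], "post2": ["recsys"]}
subm = {"post1": ["ml", "web", "python"], "post2": ["ir", "recsys"]}
print(evaluate_submission(subm, truth, k=3))  # 0.65 on this toy data
```

    Fixing such a script (and publishing it) before the challenge starts makes the target measure unambiguous for participants.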
  • 17. Typical contest settings
    o Offline
      - everyone gets access to the dataset
      - in principle it is a prediction task; the user can't be influenced
      - privacy of the users within the data is a big issue
      - results from offline experimentation have limited predictive power for online user behavior
    o Online
      - after a first learning phase, the recommender is plugged into a real system
      - the user can be influenced, but only by the selected system
      - comparison of different systems is not completely fair
    o Further ways
      - user studies
  • 18. Example online setting (BibSonomy)
    Balby Marinho, L.; Hotho, A.; Jäschke, R.; Nanopoulos, A.; Rendle, S.; Schmidt-Thieme, L.; Stumme, G.; Symeonidis, P.: Recommender Systems for Social Tagging Systems. Springer, 2012 (SpringerBriefs in Electrical and Computer Engineering). ISBN 978-1-4614-1893-1
  • 19. Which evaluation measures?
    o Root Mean Squared Error (RMSE)
    o Mean Absolute Error (MAE)
    o Typical IR measures
      - precision @ n items
      - recall @ n items
      - false positive rate
      - F1 @ n items
      - Area Under the ROC Curve (AUC)
    o Non-quality measures
      - server answer time
      - understandability of the results
    (A sketch of some of these measures follows below.)
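    For concreteness, here is a hedged sketch of the error and ranking-quality measures named above (RMSE, MAE, AUC); the variable names and toy data are assumptions made for the example.

```python
import math

# Illustrative implementations of measures listed on the slide; toy data only.

def rmse(predictions, ratings):
    """Root Mean Squared Error over aligned prediction/rating pairs."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predictions, ratings))
                     / len(ratings))

def mae(predictions, ratings):
    """Mean Absolute Error over aligned prediction/rating pairs."""
    return sum(abs(p - r) for p, r in zip(predictions, ratings)) / len(ratings)

def auc(scores, labels):
    """Area Under the ROC Curve: probability that a random positive item
    is scored higher than a random negative one (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    pairs = [1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg]
    return sum(pairs) / len(pairs)

print(rmse([3.5, 4.0, 2.0], [4, 4, 1]))         # ~0.645
print(mae([3.5, 4.0, 2.0], [4, 4, 1]))          # 0.5
print(auc([0.9, 0.3, 0.7, 0.1], [1, 0, 1, 0]))  # 1.0
```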
  • 20. Discussion of measures: RMSE vs. precision
    o RMSE is not necessarily the king of metrics, as RMSE is easy to optimize for
    o What about top-n measures?
    o On the other hand, RMSE is not influenced by item popularity the way top-n measures are
    o What about user-centric aspects?
    o A ranking-based measure was used in KDD Cup 2011, Track 2 (see the MAP@k sketch below)
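    Ranking-oriented measures such as the MAP@500 mentioned earlier look roughly like the following sketch; function names, the value of k, and the toy data are illustrative assumptions rather than the exact competition code.

```python
# Minimal sketch of Mean Average Precision at k (e.g. MAP@500-style scoring).

def average_precision_at_k(ranked_items, relevant, k):
    """Average of precision@i taken at every rank i holding a relevant item."""
    relevant = set(relevant)
    hits, precisions = 0, []
    for i, item in enumerate(ranked_items[:k], start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / i)
    denom = min(len(relevant), k)
    return sum(precisions) / denom if denom else 0.0

def map_at_k(rankings, ground_truth, k=500):
    """Mean of per-user average precision over all users with ground truth."""
    aps = [average_precision_at_k(rankings.get(u, []), rel, k)
           for u, rel in ground_truth.items()]
    return sum(aps) / len(aps)

truth = {"u1": ["a", "c"], "u2": ["x"]}
ranks = {"u1": ["a", "b", "c"], "u2": ["y", "x"]}
print(map_at_k(ranks, truth, k=3))  # ((1 + 2/3)/2 + 1/2) / 2 ≈ 0.667
```

    Unlike RMSE, such a measure only cares about where the relevant items land in the ranked list, which is closer to what a user actually sees.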
  • 21. Results are influenced by ...
    o the target of the recommendation (users, resources, etc.)
    o the evaluation methodology (leave-one-out, time-based split, random sample, cross-validation) - see the split sketch below
    o the evaluation measure
    o the design of the application (online setting)
    o the selected part of the data and its preprocessing (e.g. p-core vs. long tail)
    o scalability vs. quality of the model
    o the features and content accessible and usable for the recommendation
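    Because the split strategy alone can change the outcome, here is a small sketch contrasting a random holdout with a time-based split on a toy interaction log; the field layout and numbers are assumptions for illustration.

```python
import random

# Sketch contrasting two split strategies on a toy log of
# (user, item, timestamp) tuples; field layout is an assumption.

def random_split(events, test_ratio=0.2, seed=42):
    """Random holdout: ignores time, so the model may 'see the future'."""
    events = list(events)
    random.Random(seed).shuffle(events)
    cut = int(len(events) * (1 - test_ratio))
    return events[:cut], events[cut:]

def time_based_split(events, test_ratio=0.2):
    """Train on the earliest interactions, test on the most recent ones -
    closer to how an online system is actually used."""
    events = sorted(events, key=lambda e: e[2])
    cut = int(len(events) * (1 - test_ratio))
    return events[:cut], events[cut:]

log = [("u1", "a", 1), ("u1", "b", 5), ("u2", "a", 2),
       ("u2", "c", 6), ("u3", "b", 3), ("u3", "c", 7)]
print(time_based_split(log))  # most recent events held out
print(random_split(log))      # time-agnostic holdout for comparison
```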
  • 22. Don't forget ...
    o the effort to organize a challenge is very big
    o preparing data takes time
    o answering questions takes even more time
    o participants are creative - be prepared to react
    o time to compute the evaluation and check the results
    o prepare proceedings with the outcome
    o ...
  • 23. What have we learnt? Conclusion Part 3
  • 24. Challenges are good since they ...
    o ... are focused on solving a single problem
    o ... have many participants
    o ... create common evaluation criteria
    o ... have comparable results
    o ... bring real-world problems to research
    o ... make it easy to crown a winner
    o ... are cheap (even with a $1M prize)
  • 25. Is that the complete truth? No!
  • 26. Is that the complete truth? Why not?
    Because using standard information retrieval metrics we cannot evaluate recommender system concepts like:
    o user interaction
    o perception
    o satisfaction
    o usefulness
    o any metric not based on accuracy/rating prediction and negative predictions
    o scalability
    o engineering
  • 27. We can't catch everything offline: scalability, presentation, interaction
  • 28. The difference between IR and RS
    Information retrieval systems answer a stated need: a query.
    Recommender systems have to identify the user's needs.
  • 29. Should we organize more challenges?
    o Yes - but before we do that, think of:
      - What is the utility of Yet Another Dataset - aren't there enough already?
      - How do we create a real-world-like challenge?
      - How do we get real user feedback?
  • 30. Take-home message
    o The real needs of users and content providers are better reflected in online evaluation
    o Consider technical limitations as well
    o Challenges advance the field a lot
      - matrix factorization & ensemble methods in the Netflix Prize (see the sketch below)
      - evaluation measure and objective in KDD Cup 2011
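    For concreteness, a toy sketch of the matrix-factorization idea popularized by the Netflix Prize: learn user and item latent vectors by stochastic gradient descent on observed ratings. The hyperparameters and the tiny rating list are made up for illustration and are not from any actual prize-winning system.

```python
import random

# Toy matrix factorization trained with SGD on explicit ratings, in the
# spirit of the Netflix Prize approaches. All data and settings are made up.

def train_mf(ratings, n_factors=8, lr=0.05, reg=0.02, epochs=200, seed=0):
    rng = random.Random(seed)
    users = {u for u, _, _ in ratings}
    items = {i for _, i, _ in ratings}
    P = {u: [rng.gauss(0, 0.1) for _ in range(n_factors)] for u in users}
    Q = {i: [rng.gauss(0, 0.1) for _ in range(n_factors)] for i in items}
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = sum(pu * qi for pu, qi in zip(P[u], Q[i]))
            err = r - pred
            for f in range(n_factors):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (err * qi - reg * pu)  # gradient step, user factor
                Q[i][f] += lr * (err * pu - reg * qi)  # gradient step, item factor
    return P, Q

def predict(P, Q, user, item):
    return sum(pu * qi for pu, qi in zip(P[user], Q[item]))

ratings = [("u1", "m1", 5), ("u1", "m2", 3), ("u2", "m1", 4), ("u2", "m3", 2)]
P, Q = train_mf(ratings)
print(predict(P, Q, "u1", "m1"))  # close to the observed rating of 5
```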
  • 31. Related events at RecSys
    o Workshops
      - Recommender Utility Evaluation
      - RecSys Data Challenge
    o Paper sessions
      - Multi-Objective Recommendation and Human Factors - Mon. 14:30
      - Implicit Feedback and User Preference - Tue. 11:00
      - Top-N Recommendation - Wed. 14:30
    o More challenges: www.recsyswiki.com/wiki/Category:Competition
  • 32. Panel - Part 4
  • 33. Panel
    o Torben Brodt
      - Plista
      - organizing the Plista Contest
    o Yehuda Koren
      - Google
      - member of the winning team of the Netflix Prize
    o Darren Vengroff
      - RichRelevance
      - organizer of the RecLab Prize
  • 34. Questions
    o How does recommendation influence the user and the system?
    o How can we quantify the effects of the UI?
    o How should we translate what we've presented into an actual challenge?
    o Should we focus on the long tail or the short head?
    o Evaluation measures: click rate, wtf@k
    o How do we evaluate conversion rate?