Best Practices in Recommender System Challenges

Recommender system challenges such as the Netflix Prize, the KDD Cup, etc. have contributed vastly to the development and adoption of recommender systems. Each year a number of challenges or contests are organized covering different aspects of recommendation. In this tutorial and panel, we present some of the factors involved in successfully organizing a challenge, whether for purely research-related reasons, for industrial purposes, or to widen the scope of recommender system applications.

Transcript of "Best Practices in Recommender System Challenges"

  1. Recommender Systems Challenges: Best Practices - Tutorial & Panel. ACM RecSys 2012, Dublin, September 10, 2012
  2. About us
     • Alan Said - PhD Student @ TU-Berlin
       o Topics: RecSys Evaluation
       o @alansaid
       o URL: www.alansaid.com
     • Domonkos Tikk - CEO @ Gravity R&D
       o Topics: Machine Learning methods for RecSys
       o @domonkostikk
       o http://www.tmit.bme.hu/tikk.domonkos
     • Andreas Hotho - Prof. @ Uni. Würzburg
       o Topics: Data Mining, Information Retrieval, Web Science
       o http://www.is.informatik.uni-wuerzburg.de/staff/hotho
  3. General Motivation
     "RecSys is nobody's home conference. We come from CHI, IUI, SIGIR, etc." - Joe Konstan, RecSys 2010
     RecSys is our home conference - we should evaluate accordingly!
  4. Outline
     • Tutorial
       o Introduction to concepts in challenges
       o Execution of a challenge
       o Conclusion
     • Panel: experiences of participating in and organizing challenges
       o Yehuda Koren
       o Darren Vengroff
       o Torben Brodt
  5. Part 1: What is the motivation for RecSys Challenges?
  6. Setup: information overload [diagram: users, content of the service provider, recommender]
  7. Motivation of stakeholders [diagram]
     • User: find relevant content, easy navigation, serendipity, discovery
     • Service provider: increase revenue, target users with the right content, engage users
     • Recommender: facilitate the goals of the stakeholders, get recognized
  8. Evaluation in terms of the business [diagram]
     • Business reporting
     • Online evaluation (A/B test)
     • Casting into a research problem
  9. Context of the contest
     • Selection of metrics
     • Domain dependent
     • Offline vs. online evaluation
     • IR-centric evaluation
       o RMSE
       o MAP
       o F1
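MAP is the ranking measure most often referenced in these IR-centric setups. Below is a minimal sketch of MAP@k, assuming each user has a ranked recommendation list and a held-out set of relevant items; the function and variable names are illustrative, not taken from the tutorial.

    def average_precision_at_k(recommended, relevant, k):
        """Average precision at cutoff k for a single user.

        recommended: ranked list of item ids, best first
        relevant:    set of held-out item ids the user actually liked
        """
        hits, score = 0, 0.0
        for rank, item in enumerate(recommended[:k], start=1):
            if item in relevant:
                hits += 1
                score += hits / rank          # precision at this rank
        return score / min(len(relevant), k) if relevant else 0.0

    def map_at_k(rankings, ground_truth, k=500):
        """Mean average precision over all users, e.g. MAP@500."""
        users = list(ground_truth)
        return sum(average_precision_at_k(rankings[u], ground_truth[u], k)
                   for u in users) / len(users)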
  10. Latent user needs
  11. RecSys Competition Highlights [diagram comparing past competitions; attributes listed include: large scale, organization, RMSE, 3-stage setup, prize, selection by review, runtime limits, real traffic, revenue increase, offline, MAP@500, metadata available, larger in dimensions, no ratings]
  12. Recurring Competitions
      • ACM KDD Cup (2007, 2011, 2012)
      • ECML/PKDD Discovery Challenge (2008 onwards)
        o 2008 and 2009: tag recommendation in social bookmarking (incl. online evaluation task)
        o 2011: video lectures
      • CAMRa (2010, 2011, 2012)
  13. Does size matter?
      • Yes! – real-world users
      • In research – to some extent
  14. Research & Industry
      Important for both:
      • Industry has the data and research needs data
      • Industry needs better approaches, but this costs
      • Research has ideas but has no systems and/or data to do the evaluation
      Don't exploit participants. Don't be too greedy.
  15. Part 2: Running a Challenge
  16. Standard Challenge Setting
      • Organizer defines the recommender setting, e.g. tag recommendation in BibSonomy
      • Provide data
        o with features, or
        o raw data, or
        o construct your own data
      • Fix the way to do the evaluation
      • Define the goal, e.g. reach a certain improvement (F1)
      • Motivate people to participate, e.g. promise a lot of money ;-)
  17. Typical contest settings
      • Offline
        o Everyone gets access to the dataset
        o In principle it is a prediction task; the user can't be influenced
        o Privacy of the users within the data is a big issue
        o Results from offline experimentation have limited predictive power for online user behavior
      • Online
        o After a first learning phase the recommender is plugged into a real system
        o Users can be influenced, but only by the selected system
        o Comparison of different systems is not completely fair
      • Further ways
        o User study
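In the offline setting the organizer withholds part of the data and scores submitted predictions against it. The following is a minimal scoring sketch for a rating-prediction task, assuming a simple user,item,rating CSV layout for both the hidden test set and the submission; the file format, the missing-prediction policy and the function names are assumptions for illustration only.

    import csv

    def load_ratings(path):
        """Read user,item,rating rows into a dict keyed by (user, item).
        The CSV layout is an assumption made for this sketch."""
        ratings = {}
        with open(path, newline="") as f:
            for user, item, value in csv.reader(f):
                ratings[(user, item)] = float(value)
        return ratings

    def score_submission(submission_path, hidden_test_path):
        """Score a submission against the hidden test set with RMSE.
        Missing predictions are rejected here; a real contest would
        define its own policy."""
        truth = load_ratings(hidden_test_path)
        preds = load_ratings(submission_path)
        missing = set(truth) - set(preds)
        if missing:
            raise ValueError(f"{len(missing)} test pairs have no prediction")
        squared_error = sum((preds[key] - truth[key]) ** 2 for key in truth)
        return (squared_error / len(truth)) ** 0.5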
  18. Example online setting (BibSonomy)
      Balby Marinho, L.; Hotho, A.; Jäschke, R.; Nanopoulos, A.; Rendle, S.; Schmidt-Thieme, L.; Stumme, G.; Symeonidis, P.: Recommender Systems for Social Tagging Systems. Springer, 2012 (SpringerBriefs in Electrical and Computer Engineering). ISBN 978-1-4614-1893-1
  19. Which evaluation measures?
      • Root Mean Squared Error (RMSE)
      • Mean Absolute Error (MAE)
      • Typical IR measures
        o Precision @ n items
        o Recall @ n items
        o False positive rate
        o F1 @ n items
        o Area under the ROC curve (AUC)
      • Non-quality measures
        o Server answer time
        o Understandability of the results
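To make the @n-items measures concrete, here is a minimal sketch of precision, recall and F1 at a cutoff n for a single user; averaging these values over all users gives the score usually reported in a challenge. The names are illustrative and not taken from the slides.

    def precision_recall_f1_at_n(recommended, relevant, n):
        """Precision@n, recall@n and F1@n for a single user.

        recommended: ranked list of item ids, best first
        relevant:    set of held-out item ids the user interacted with
        """
        top_n = recommended[:n]
        hits = sum(1 for item in top_n if item in relevant)
        precision = hits / n
        recall = hits / len(relevant) if relevant else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall > 0 else 0.0)
        return precision, recall, f1

    # Example: two of the top three recommendations are relevant
    print(precision_recall_f1_at_n(["a", "b", "c"], {"a", "c", "d"}, n=3))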
  20. Discussion of measures: RMSE vs. precision
      • RMSE is not necessarily the king of metrics, as RMSE is easy to optimize on
      • What about top-n?
      • But RMSE is not influenced by popularity the way top-n measures are
      • What about user-centric aspects?
      • Ranking-based measure in KDD Cup 2011, Track 2
  21. Results are influenced by...
      • Target of the recommendation (users, resources, etc.)
      • Evaluation methodology (leave-one-out, time-based split, random sample, cross-validation)
      • Evaluation measure
      • Design of the application (online setting)
      • The selected part of the data and its preprocessing (e.g. p-core vs. long tail)
      • Scalability vs. quality of the model
      • Features and content accessible and usable for the recommendation
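Because the evaluation methodology alone can change the outcome, two of the splitting schemes named above are sketched here, assuming interaction logs stored as (user, item, timestamp) tuples; the data layout and function names are assumptions, not part of the tutorial.

    import random
    from collections import defaultdict

    def time_based_split(events, test_ratio=0.2):
        """Chronological split: the most recent events form the test set.
        events is a list of (user, item, timestamp) tuples."""
        ordered = sorted(events, key=lambda e: e[2])
        cut = int(len(ordered) * (1 - test_ratio))
        return ordered[:cut], ordered[cut:]

    def leave_one_out_split(events, seed=42):
        """Leave-one-out: withhold one random event per user for testing."""
        rng = random.Random(seed)
        by_user = defaultdict(list)
        for event in events:
            by_user[event[0]].append(event)
        train, test = [], []
        for user_events in by_user.values():
            held_out = rng.choice(user_events)
            test.append(held_out)
            train.extend(e for e in user_events if e is not held_out)
        return train, test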
  22. Don't forget...
      • The effort to organize a challenge is very big
      • Preparing data takes time
      • Answering questions takes even more time
      • Participants are creative, which calls for reactions
      • Time to compute the evaluation and check the results
      • Prepare proceedings with the outcome
      • ...
  23. Part 3: Conclusion - What have we learnt?
  24. Challenges are good since they...
      • ... are focused on solving a single problem
      • ... have many participants
      • ... create common evaluation criteria
      • ... have comparable results
      • ... bring real-world problems to research
      • ... make it easy to crown a winner
      • ... are cheap (even with a 1M$ prize)
  25. Is that the complete truth? No!
  26. Is that the complete truth?
      • Why? Because using standard information retrieval metrics we cannot evaluate recommender system concepts like:
        o user interaction
        o perception
        o satisfaction
        o usefulness
        o any metric not based on accuracy/rating prediction and negative predictions
        o scalability
        o engineering
  27. We can't catch everything offline: scalability, presentation, interaction
  28. The difference between IR and RS
      • Information retrieval systems answer a need: a query
      • Recommender systems identify the user's needs
  29. Should we organize more challenges?
      • Yes - but before we do that, think of:
        o What is the utility of Yet Another Dataset - aren't there enough already?
        o How do we create a real-world-like challenge?
        o How do we get real user feedback?
  30. Take home message
      • Real needs of users and content providers are better reflected in online evaluation
      • Consider technical limitations as well
      • Challenges advance the field a lot
        o Matrix factorization & ensemble methods in the Netflix Prize
        o Evaluation measure and objective in the KDD Cup 2011
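The slide credits the Netflix Prize with popularizing matrix factorization. As a hedged illustration only, and not the prize-winning system itself, a bare-bones SGD matrix factorization for rating prediction could look like this; all parameter choices are arbitrary defaults.

    import numpy as np

    def factorize(ratings, n_users, n_items, n_factors=20,
                  lr=0.01, reg=0.05, n_epochs=20, seed=0):
        """Plain SGD matrix factorization on (user, item, rating) triples.
        Minimizes squared error with L2 regularization; biases are left
        out to keep the sketch short."""
        rng = np.random.default_rng(seed)
        P = 0.1 * rng.standard_normal((n_users, n_factors))   # user factors
        Q = 0.1 * rng.standard_normal((n_items, n_factors))   # item factors
        for _ in range(n_epochs):
            for u, i, r in ratings:
                p_u = P[u].copy()
                err = r - p_u @ Q[i]
                P[u] += lr * (err * Q[i] - reg * p_u)
                Q[i] += lr * (err * p_u - reg * Q[i])
        return P, Q

    # Predicted rating for user u and item i: P[u] @ Q[i]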
  31. Related events at RecSys
      • Workshops
        o Recommender Utility Evaluation
        o RecSys Data Challenge
      • Paper sessions
        o Multi-Objective Recommendation and Human Factors - Mon. 14:30
        o Implicit Feedback and User Preference - Tue. 11:00
        o Top-N Recommendation - Wed. 14:30
      • More challenges:
        o www.recsyswiki.com/wiki/Category:Competition
  32. Part 4: Panel
  33. Panel
      • Torben Brodt
        o Plista
        o Organizing the Plista Contest
      • Yehuda Koren
        o Google
        o Member of the winning team of the Netflix Prize
      • Darren Vengroff
        o RichRelevance
        o Organizer of the RecLab Prize
  34. Questions
      • How does recommendation influence the user and the system?
      • How can we quantify the effects of the UI?
      • How should we translate what we've presented into an actual challenge?
      • Should we focus on the long tail or the short head?
      • Evaluation measures: click rate, wtf@k
      • How to evaluate conversion rate?