Revisiting the Multi-Criteria Recommender System of a Learning Portal

Presentation of paper for Recommender Systems in Technology Enhanced Learning (RecSysTEL) workshop, ECTEL'12, Saarbruecken, Germany

  1. Revisiting the Multi-Criteria Recommender System of a Learning Portal. Nikos Manouselis (1), Giorgos Kyrgiazos (2), Giannis Stoitsis (1); (1) Agro-Know Technologies, (2) CTI. @RecSysTEL'12, Saarbruecken, 19/9/12
  2. our nice portal
  3. our nice portal
  4. collected data (Organic.Edunet social data schema): user (id, Name*, Email*), item (id, URL), tags (Value, Date), reviews (Value, Date), ratings (Value, Dimension, Date)
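A minimal sketch of how these collected records could be represented, assuming the field names of the Organic.Edunet social data schema listed on the slide; the class and attribute names are illustrative, not the portal's actual code.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tag:
    user_id: str      # the annotating user (name/e-mail are optional profile fields)
    item_url: str     # the learning resource being annotated
    value: str        # the tag text
    created: date

@dataclass
class Review:
    user_id: str
    item_url: str
    value: str        # free-text review
    created: date

@dataclass
class Rating:
    user_id: str
    item_url: str
    value: int        # rating value on the portal's scale
    dimension: str    # rated criterion, e.g. one of the three criteria of slide 5
    created: date
```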
  5. current service
     • recommendation of potentially interesting learning resources to users
       – not very "loud"
     • one recommendation algorithm based on collaborative filtering
       – rating history
       – neighborhood-based
       – multi-attribute over 3 criteria [Subject Relevance, Educational Usefulness, Metadata]
       – parameters defined & hard-coded
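A compact sketch of the kind of neighborhood-based, deviation-from-mean prediction that such a service applies separately to each of the three criteria. The similarity measure, normalization and neighborhood size are exactly the hard-coded parameters the later slides question; all names below are illustrative, not the deployed code.

```python
import math
from typing import Dict, Optional

def cosine_sim(ra: Dict[str, float], rb: Dict[str, float]) -> float:
    """Cosine similarity between two users' rating vectors on ONE criterion."""
    common = set(ra) & set(rb)
    if not common:
        return 0.0
    dot = sum(ra[i] * rb[i] for i in common)
    na = math.sqrt(sum(v * v for v in ra.values()))
    nb = math.sqrt(sum(v * v for v in rb.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_one_criterion(user: str, item: str,
                          ratings: Dict[str, Dict[str, float]],
                          k: int = 20) -> Optional[float]:
    """Deviation-from-mean prediction on a single criterion.

    ratings: {user_id: {item_id: value}} for one criterion only.
    Returns None when no neighbour has rated the item (counts against coverage).
    """
    mean_u = sum(ratings[user].values()) / len(ratings[user])
    neighbours = sorted(
        ((cosine_sim(ratings[user], ratings[v]), v)
         for v in ratings if v != user and item in ratings[v]),
        reverse=True)[:k]
    num = sum(sim * (ratings[v][item] - sum(ratings[v].values()) / len(ratings[v]))
              for sim, v in neighbours)
    den = sum(abs(sim) for sim, _ in neighbours)
    return mean_u + num / den if den else None
```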
  6. issues
     • lots of parameters could be different
       – selected recommendation methods
       – neighborhood size
       – similarity measures
     • parameterization took place using a similar dataset [but not the same]
       – EUN's Learning Resource Exchange (MELT) multi-attribute ratings dump
     • Organic.Edunet's user/content base continuously evolves
  7. in the year 2007…
  8. in the year 2007…
  9. problem outline
     • How do we know that the selected algorithm is still(?) good for the given portal?
       – specific rating dimensions (criteria)
       – selected parameterization
       – alternative algorithms
       – specific dataset & its expected evolution
  10. experiment
  11. approach
     • carry out the same experiment: simulation of how multi-attribute collaborative filtering algorithms perform
       – real data from Organic.Edunet users
       – simulated/synthetic data from an expected future scenario (when more ratings will be provided)
       – base algorithms from 2007 vs. additional/alternative algorithms
  12. real data from Organic.Edunet
     • 477 ratings
       – 99 users (only 0.02% of registered ones)
       – 345 items (only 0.03% of indexed resources)
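A back-of-the-envelope calculation (not a figure from the slides) showing how sparse this real matrix is, which is what motivates the synthetic dataset on the next slide:

```python
ratings, users, items = 477, 99, 345
print(f"density = {ratings / (users * items):.2%}")   # ~1.40% of the user-item matrix is rated
```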
  13. simulated/synthetic data
     • used a Monte Carlo simulator to generate more ratings of the same users
       – 1,280 ratings
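The slide does not spell out the simulator's internals. A minimal Monte Carlo sketch, under the assumption that each user's new multi-criteria ratings are drawn around that user's observed per-criterion mean and spread; the names and the sampling model are assumptions for illustration.

```python
import random

CRITERIA = ["Subject Relevance", "Educational Usefulness", "Metadata"]

def synthesize(real, unrated_items, n_new, scale=(1, 5)):
    """Monte Carlo generation of extra ratings for the SAME users.

    real: {user: {criterion: [observed values]}}
    unrated_items: {user: [items the user has not rated yet]}
    Returns a list of (user, item, criterion, value) tuples.
    """
    lo, hi = scale
    synthetic = []
    for _ in range(n_new):
        user = random.choice(list(real))
        if not unrated_items[user]:
            continue
        item = random.choice(unrated_items[user])
        for crit in CRITERIA:
            obs = real[user][crit]
            mu = sum(obs) / len(obs)
            sigma = (sum((x - mu) ** 2 for x in obs) / len(obs)) ** 0.5 or 1.0
            value = round(min(hi, max(lo, random.gauss(mu, sigma))))
            synthetic.append((user, item, crit, value))
    return synthetic
```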
  14. 2007 base algorithms
     • Manouselis & Costopoulou (2006; 2007)
     • classic neighborhood-based collaborative filtering
       – extended for multi-criteria ratings
       – prediction per criterion (PG)
       – many parameters open for tweaking/experimentation
         • different algorithm variations
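A sketch of the per-criterion prediction idea (PG): predict each criterion independently with a single-criterion predictor (as sketched after slide 5), then combine the partial predictions into an overall score. The equal-weight combination below is an assumption; the actual weighting is one of the open parameters.

```python
from typing import Optional

def predict_pg(user, item, per_criterion_ratings, predict, weights=None) -> Optional[float]:
    """per_criterion_ratings: {criterion: {user: {item: value}}}
    predict: a single-criterion neighborhood predictor, e.g. predict_one_criterion."""
    criteria = list(per_criterion_ratings)
    weights = weights or {c: 1 / len(criteria) for c in criteria}
    partial = {c: predict(user, item, per_criterion_ratings[c]) for c in criteria}
    if any(p is None for p in partial.values()):
        return None                      # no prediction -> counts against coverage
    return sum(weights[c] * partial[c] for c in criteria)
```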
  15. additional/alternative algorithms
     • Adomavicius & Kwon (2007)
     • similar approach: neighborhood-based collaborative filtering extended for multi-criteria ratings
       – weights the prediction with the average (AS) or minimum/worst-case (WS) similarity across criteria
       – same parameters open for tweaking/experimentation
         • different algorithm variations
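These variants differ in how the per-criterion similarities between two users are collapsed into a single similarity before the usual neighborhood prediction: averaging them (AS) or taking the worst case, i.e. the minimum (WS). A sketch, with names chosen here for illustration:

```python
from typing import List

def aggregate_similarity(per_criterion_sims: List[float], mode: str = "AS") -> float:
    """Collapse per-criterion user-user similarities into a single similarity.

    AS: average similarity across criteria.
    WS: worst-case (minimum) similarity across criteria.
    """
    if mode == "AS":
        return sum(per_criterion_sims) / len(per_criterion_sims)
    if mode == "WS":
        return min(per_criterion_sims)
    raise ValueError(f"unknown aggregation mode: {mode}")
```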
  16. overall experiment setting
     • 18 variations of each examined algorithm (PG, AS, WS)
       – plus some base non-personalised ones
     • various values for the parameters defining the neighborhood size
     -> over 1,080 algorithmic variations executed and compared over each dataset
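The "over 1,080 variations" come from crossing the design choices. A sketch of how such a grid could be enumerated; the concrete option lists below are assumptions that merely illustrate one way 18 variations per algorithm and 1,080 runs could decompose, not the exact grid of the experiment.

```python
from itertools import product

# Illustrative (assumed) option lists:
algorithms     = ["PG", "AS", "WS"]
similarities   = ["Pearson", "Cosine", "Euclidian"]
normalizations = ["Simple Mean", "Deviation-from-Mean", "Z-score"]
schemes        = ["MNN", "CWT"]            # neighborhood by max size vs. by weight threshold
sizes          = list(range(5, 105, 5))    # 20 values for the neighborhood parameter

grid = list(product(algorithms, similarities, normalizations, schemes, sizes))
print(len(grid))   # 1080; each combination is executed and scored on each dataset
```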
  17. results: real dataset
  18. results: synthetic dataset
  19. best over both
      Algorithm  Similarity  Normalization method  AVG Coverage  AVG MAE
      MNN variations:
        PG       Cosine      Deviation-from-Mean   61.33%        0.8855
        PG       Euclidian   Simple Mean           61.33%        0.8626
      CWT variations:
        PG       Cosine      Deviation-from-Mean   57.91%        0.8908
        PG       Cosine      Simple Mean           57.91%        0.8673
      2007:
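The two reported metrics are straightforward. A sketch of how coverage and MAE could be computed over held-out ratings, assuming a predictor that returns None when it cannot make a prediction (as in the earlier sketches):

```python
def evaluate(predict_fn, test_set):
    """test_set: list of (user, item, true_overall_rating) tuples.
    Coverage: share of test pairs for which a prediction could be made.
    MAE: mean absolute error over the covered pairs."""
    errors, covered = [], 0
    for user, item, true_value in test_set:
        pred = predict_fn(user, item)
        if pred is None:
            continue
        covered += 1
        errors.append(abs(pred - true_value))
    coverage = covered / len(test_set) if test_set else 0.0
    mae = sum(errors) / len(errors) if errors else float("nan")
    return coverage, mae
```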
  20. implementation implications
     • based on the existing dataset and the foreseen future scenario
       – keep same algorithm (PG) for the recommendation service
       – adapt selection of options and their parameterization
       – "actual" performance (vs. 2007) is probably worse
  21. conclusions
  22. lessons learnt
     • after 2 years of service operation
       – tried to repeat an offline experimental simulation
       – candidate multi-criteria recommendation algorithms
       – data from real usage vs. synthetic data
     • feeling better about the algorithm choice
       – some insight into expected performance
       – no real impact on the actual service
  23. to explore
     • would be interesting to experiment with more future scenarios
       – make various estimations/projections about dataset size and sparseness
       – execute the algorithms over synthetic datasets simulating these projections
     • would be interesting to make a service that is really used
       – get more ratings, on more items
       – provide visible recommendations
       – measure the impact on search/discovery behaviour
  24. 24. up & beyond
  25. experiments beyond a single dataset
     • combining data from various sources to boost the way recommenders work
     • design algorithms that could provide cross-border recommendations
     • provide many parallel/cascading/competing options for recommendation algorithms
     • not really care about data size & storage
  26. a social data infrastructure for learning …portals…
      [architecture diagram: each portal exposes metadata and social data through an API; metadata, social and usage data are aggregated into federated recommendation and resolution services; social data and metadata are kept per URI and anonymised]
      www.opendiscoveryspace.eu
  27. challenges
     • define common metadata schema(s)
     • aggregate (e.g. harvest/crawl) social data
     • transform each social data schema
     • URI resolution
     • scalability
     • anonymised approach
     • …
  28. thank you! nikosm@ieee.org | http://wiki.agroknow.gr | http://www.organic-edunet.eu
