Recommender Systems in TEL


Nikos Manouselis



  1. Recommender Systems in TEL
     Nikos Manouselis, Greek Research & Technology Network (GRNET)
  2. about me
     - Computer Engineer
     - MSc in Operational Research
     - PhD from the Informatics Lab of an Agricultural University
     - working on services for agricultural & rural communities
       - learning repositories
       - social information retrieval
       - Organic.Edunet, eContentplus
  3. (promised) aim of this lecture
     - introduce recommender systems
     - discuss how they relate to TEL
     - identify open research issues
  4. (actual) aim of this lecture
     - share some concerns about TEL and recommender systems
  5. structure
     - tale of 3 friends
     - tasks
     - modeling & techniques
     - evaluation
     - wrap up
  6. intro: tale of 3 friends
  7. which movie?
  8. let's ask some friends
     - "Guys, heard about the latest Batman movie... should I watch it?"
     - "You will definitely like it"
     - "Maybe not, the plot is too weak"
  9. let's ask some friends
     - "Wait – did you like the previous one?"
  10. ... so, which movie?
     - taking advantage of knowledge or experience from people in the social circle or network
       - e.g. colleagues, friends, peers
     - need to answer several questions
       - how to identify like-minded people?
       - on which dimensions?
       - for which types of items?
       - does context matter?
       - ...
  11. recommender systems
  12. definition (1/2)
     - using the opinions of a community of users
       - to help individuals in that community more effectively identify content of interest
       - from a potentially overwhelming set of choices
     - Resnick P. & Varian H.R., "Recommender Systems", Communications of the ACM, 40(3), 1997
  13. definition (2/2)
     - any system that
       - produces individualized recommendations as output
       - or has the effect of guiding the user in a personalized way to interesting or useful objects in a large space of possible options
     - Burke R., "Hybrid Recommender Systems: Survey and Experiments", User Modeling & User-Adapted Interaction, 12, 331-370, 2002
  14. why do we need them?
     - a trip to a local supermarket [F. Ricci]:
       - 85 different varieties and brands of crackers
       - 285 varieties of cookies
       - 165 varieties of "juice drinks"
       - 75 iced teas
       - 275 varieties of cereal
       - 120 different pasta sauces
       - 80 different pain relievers
       - 40 options for toothpaste
       - 95 varieties of snacks (chips, pretzels, etc.)
       - 61 varieties of sun tan oil and sunblock
       - 360 types of shampoo, conditioner, gel, and mousse
       - 90 different cold remedies and decongestants
       - 230 soups, including 29 different chicken soups
       - 175 different salad dressings
  15. wait a second
     - is TEL like a supermarket??
  16. large number of options
  17. tasks for recommender systems
  18. tasks usually supported
     - annotation in context
     - find good items
     - find all good items
     - receive sequence of items
     - (+ some less important ones)
     - Herlocker et al., "Evaluating Collaborative Filtering Recommender Systems", ACM Transactions on Information Systems, 22(1), 5-53, 2004
  19. 1. annotation in context
     - integrated into the existing working environment to provide additional support or information, e.g.
       - predicted usefulness of an item that the user is currently viewing
       - links within a Web page that the user is recommended to follow
  20. annotation in context
     - [screenshot/example]
  21. 2. find good items
     - suggesting specific item(s) to a user
       - characterized as the core recommendation task, since it occurs in most systems
       - e.g. presenting a ranked list of recommended items
  22. find good items
     - [screenshot/example]
  23. 3. find all good items
     - the user wants to identify all items that might be interesting
       - when it's important not to overlook any potentially relevant case
       - e.g. medical or legal cases
  24. find all good items
  25. 4. sequence of items
     - sequence of related items is recommended to the user
       - e.g. entertainment applications such as TV or radio programs
  26. sequence of items
  27. and what about TEL?
     - informal reminder:
       - technology enhanced learning generally deals with the ways ICT can be used to support learning, teaching, and competence development
  28. break2think
     - put yourself in one typical learning situation that occurs very often for YOU
  29. break2think
     - imagine that some magic TEL system is there to support you
       - it could make some great suggestions about something to you
     - name one learning task where a recommender system would be useful
  30. modeling & techniques
  31. typical classification
     - content-based: the information needs of the user and the characteristics of items are represented in some (usually textual) form
     - collaborative filtering: the user is recommended items that people with similar tastes and preferences liked
     - hybrid: methods that combine content-based and collaborative methods
     - ... other categorizations also exist (Burke, 2002)
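To make the collaborative filtering idea on this slide concrete, here is a minimal sketch (not from the original slides; the users, items, and ratings are invented for illustration): users are compared through their past ratings, and a rating for an unseen item is predicted from the ratings of similar, "like-minded" users.

```python
# Minimal user-based collaborative filtering sketch (illustrative toy data).
import math

# user -> {item: rating on a 1-5 scale}
ratings = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 4, "item2": 2, "item4": 5},
    "carol": {"item2": 5, "item3": 2, "item4": 3},
}

def cosine_sim(u, v):
    """Cosine similarity between two users over the items both have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in common)
    norm_u = math.sqrt(sum(ratings[u][i] ** 2 for i in common))
    norm_v = math.sqrt(sum(ratings[v][i] ** 2 for i in common))
    return dot / (norm_u * norm_v)

def predict(user, item):
    """Predict a rating as a similarity-weighted average of the ratings
    given to the item by the other users."""
    num = den = 0.0
    for other, other_ratings in ratings.items():
        if other == user or item not in other_ratings:
            continue
        sim = cosine_sim(user, other)
        num += sim * other_ratings[item]
        den += abs(sim)
    return num / den if den else None

print(predict("alice", "item4"))  # roughly 4.1 with this toy data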
  32. example: content-based
  33. example: collaborative filtering
  34. generally speaking: some user
     - has a profile with some user characteristics, e.g.
       - past ratings [collaborative filtering]
       - keywords describing past selections [content-based recommendation]
  35. generally speaking: some items
     - are represented using some dimensions, e.g.
       - satisfaction over one (or more) criteria [collaborative filtering]
       - item attributes/features [content-based recommendation]
  36. generally speaking: a mechanism
     - takes advantage of the user profile and the item representations
       - to provide personalised recommendations of items to users
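As a complement to the collaborative filtering sketch above, here is a minimal content-based version of the same generic mechanism (again hypothetical, not from the slides): the user profile is a set of keywords from past selections, items are described by keyword sets, and items are ranked by their overlap with the profile.

```python
# Minimal content-based recommendation sketch (illustrative toy data).

def jaccard(a, b):
    """Keyword-set overlap: 0 = nothing in common, 1 = identical sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# user profile: keywords describing past selections (hypothetical)
user_profile = {"python", "recommender", "evaluation"}

# items represented by attribute/keyword sets (hypothetical)
items = {
    "intro-to-collaborative-filtering": {"recommender", "collaborative", "filtering"},
    "python-tutorial": {"python", "programming", "beginner"},
    "statistics-primer": {"statistics", "probability"},
}

# rank items by similarity to the profile; the most similar come first
ranked = sorted(items, key=lambda name: jaccard(user_profile, items[name]), reverse=True)
print(ranked)
```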
  37. does this ring a bell?
     - for TEL, this sounds so...
     - adaptive educational hypermedia systems (AEHS)
  38. a generic architecture [Karampiperis & Sampson, 2005]
  39. an example [Karampiperis & Sampson, 2005]
  40. classification/analysis
     - enhanced version of [Hanani et al., "Information Filtering: Overview of Issues, Research and Systems", User Modeling and User-Adapted Interaction, 11, 2001]
  41. recommend in TEL based on what?
     - on learner models/profiles
       - e.g. learning styles, competence gaps
       - ... other ideas?
     - on item characteristics
       - e.g. interactivity, granularity, accessibility
       - ... other ideas?
  42. evaluation
  43. evaluating recommendation
     - currently based on performance
     - "how good are your algorithms?", e.g.
       - how accurate are they in predictions?
       - for how many unknown items can they produce a prediction?
       - ... mainly information retrieval evaluation approaches
     - [Herlocker et al., "Evaluating Collaborative Filtering Recommender Systems", ACM Transactions on Information Systems, 22(1), 5-53, 2004]
  44. typical results
     - this means that a prediction could be 4.6 stars instead of 4 or 5
     - ... does this really matter in TEL?
  45. other issues
     - live experiments vs. offline analyses
     - synthesized vs. natural data sets
       - properties of data sets
       - existing data sets
  46. metrics (popular)
     - accuracy
       - predictive accuracy (MAE)
       - classification accuracy
     - precision and recall
       - probability that a selected item is relevant
       - probability that a relevant item will be selected
     - ad hoc
       - rank accuracy metrics
       - prediction-rating correlation
     - coverage
       - percentage of items for which a prediction is possible
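A quick, hypothetical illustration (not from the slides) of how the popular metrics above are computed, using invented actual vs. predicted ratings and a "rating of 4 or more counts as relevant" cut-off:

```python
# Illustrative computation of MAE, precision, recall, and coverage (toy data).

actual    = {"i1": 4, "i2": 5, "i3": 2, "i4": 3}   # true ratings (hypothetical)
predicted = {"i1": 4.6, "i2": 3.8, "i3": 2.5}      # no prediction possible for i4

# predictive accuracy: mean absolute error over the items we could predict
mae = sum(abs(predicted[i] - actual[i]) for i in predicted) / len(predicted)

# classification view: items rated >= 4 are "relevant",
# items predicted >= 4 are the ones we would recommend
relevant    = {i for i, r in actual.items() if r >= 4}
recommended = {i for i, r in predicted.items() if r >= 4}
precision = len(recommended & relevant) / len(recommended)  # selected items that are relevant
recall    = len(recommended & relevant) / len(relevant)     # relevant items that get selected

# coverage: share of items for which a prediction is possible at all
coverage = len(predicted) / len(actual)

print(f"MAE={mae:.2f}  precision={precision:.2f}  recall={recall:.2f}  coverage={coverage:.2f}")
```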
  47. metrics (not popular)
     - novelty
     - serendipity
     - confidence
     - user evaluation
       - explicit (ask) vs. implicit (observe)
       - laboratory studies vs. field studies
       - outcome vs. process
       - short-term vs. long-term
  48. evaluation in TEL recommenders
     - few systems actually evaluated
       - even fewer actually tried with users
     - recent analysis of 15 TEL recommender systems:
       - half of the systems (8/15) still at design or prototyping stage
       - only 5 systems evaluated through trials with human users
       - [N. Manouselis, H. Drachsler, R. Vuorikari, H. Hummel, R. Koper, "Recommender Systems in Technology Enhanced Learning", Handbook of Recommender Systems (under review)]
  49. example: Altered Vista
     - evaluate the effectiveness and usefulness
       - system usability and performance
       - predictive accuracy of the recommender engine
       - extent to which reviewing Web resources within a community of users supports and promotes collaborative and community-building activities
       - extent to which critical review of Web resources leads to improvements in users' information literacy skills
       - [Walker et al., "Collaborative Information Filtering: a review and an educational application", International Journal of Artificial Intelligence in Education, 14, 2004]
  50. another look at it
     - e.g. using Kirkpatrick's model for evaluating training programs
       - reaction of the student: what they thought and felt about the training
       - learning: the resulting increase in knowledge or capability
       - behaviour: the extent of behaviour and capability improvement and implementation/application
       - results: the effects on the business or environment resulting from the trainee's performance
  51. what else could be evaluated?
     - when deploying a recommender system in a TEL setting
     - ... what could we evaluate and how to measure it?
  52. wrap up & directions
  53. basic conclusion
     - assuming an information overload problem in TEL
       - recommender systems are good
       - need to think out of the box
       - connect with existing research
       - focus on TEL particularities
       - explore alternative uses
       - integrate with existing theories
  54. interesting (?) issues
     - recommendation of peers
     - criteria for expressing learner satisfaction (no more 5-stars)
     - study actual usage/acceptance
     - assess performance/learning improvement
     - ... implement, deploy, pilot!
  55. but do they exist??
  56. interested in more?
     - Journal of Digital Information (JoDI)
       - Special Issue on Social Information Retrieval for Technology-Enhanced Learning, 10(2), 2009
     - Workshop on Social Information Retrieval for Technology Enhanced Learning (SIRTEL)
       - SIRTEL 2007
       - SIRTEL 2008
       - SIRTEL 2009
         - co-located with ICWL'09, Aachen, Germany, August 21st - deadline: 12/6
  57. thank you! questions? ideas?