RecSysTEL lecture at advanced SIKS course, NL

Updated version of the RecSysTEL lecture I already gave as an invited talk in the UK, NL, and DE. The conclusion part is entirely new and aligned with the new book on RecSys for Learning, to appear at Springer in 2012.


  1. Recommender Systems for Learning. 12.04.2012, Advanced SIKS course on Technology-Enhanced Learning, Landgoed Huize Bergen, Vught, Nederland. Hendrik Drachsler, Centre for Learning Sciences and Technology (CELSTEC), Open University of the Netherlands.
  2. Goals of the lecture: 1. Crash course Recommender Systems (RecSys); 2. Overview of RecSys in TEL; 3. Conclusions and open research issues for RecSys in TEL.
  3. Introduction into Recommender Systems: Introduction, Objectives, Technologies, Evaluation.
  4. Introduction::Application areas. Application areas: e-commerce websites (Amazon); video and music websites (Netflix, Last.fm); content websites (CNN, Google News); other information systems (Zite app). Major claims: this is a highly application-oriented research area, where every domain and task needs a specific RecSys; recommender systems are always built around content or products and never exist on their own.
  5.–6. Introduction::Definition. "Using the opinions of a community of users to help individuals in that community to identify more effectively content of interest from a potentially overwhelming set of choices." Resnick & Varian (1997). Recommender Systems. Communications of the ACM, 40(3). "Any system that produces personalized recommendations as output or has the effect of guiding the user in a personalized way to interesting or useful objects in a large space of possible options." Burke, R. (2002). Hybrid Recommender Systems: Survey and Experiments. User Modeling and User-Adapted Interaction, 12, pp. 331-370.
  7.–14. Introduction::Example (image build across eight slides).
  15. Introduction::Example. What did we learn from the small exercise? There are different kinds of recommendations: (a) people who bought X also bought Y; (b) there are options to receive even more personalized recommendations. When registering, we have to tell the RecSys what we like (and what not). Thus, it requires information to offer suitable recommendations, and it learns our preferences.
  16.–17. Introduction::The Long Tail. "We are leaving the age of information and entering the age of recommendation." Anderson, C. (2004). The Long Tail. Wired Magazine.
  18.–22. Introduction::Emergence (image build across five slides). Johnson, S. (2001). Emergence. New York: Scribner.
  23.–24. Introduction::Age of RecSys? ...10 minutes on Google.
  25. Introduction::Age of RecSys? ...another 10 minutes; research on RecSys is becoming very popular. Some examples: the ACM RecSys conference; ICWSM: Weblogs and Social Media; WebKDD: Web Knowledge Discovery and Data Mining; WWW: the original WWW conference; SIGIR: Information Retrieval; ACM KDD: Knowledge Discovery and Data Mining; LAK: Learning Analytics and Knowledge; the Educational Data Mining conference; ICML: Machine Learning; ... and various workshops, books, and journals.
  26. Objectives of RecSys.
  27. Objectives::RecSys Aims. Converting browsers into buyers; increasing cross-sales; building loyalty. Photo by markhillary. Schafer, Konstan & Riedl (1999). Recommender Systems in E-Commerce. Proceedings of the 1st ACM Conference on Electronic Commerce, Denver, Colorado, pp. 158-169.
  28.–30. Objectives::RecSys Tasks. Find good items: presenting a ranked list of recommendations. Find all good items: the user wants to identify all items that might be interesting, e.g. medical or legal cases. Receive a sequence of items: a sequence of related items is recommended to the user, e.g. a music recommender. Annotation in context: predicted usefulness of an item that the user is currently viewing, e.g. links within a website. There are more tasks available... Herlocker, Konstan, Terveen & Riedl (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), pp. 5-53.
  31.–32. RecSys Technologies. 1. Predict how much a user may like a certain product. 2. Create a list of the Top-N best items. 3. Adjust the prediction based on feedback from the target user and like-minded users. (Just some examples; there are more technologies available.) Hanani et al. (2001). Information Filtering: Overview of Issues, Research and Systems. User Modeling and User-Adapted Interaction, 11.
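The three steps above translate almost directly into code. Below is a minimal sketch of steps 1 and 2 (predict a score, then rank the Top-N unseen items), assuming ratings are stored as a dict of per-user rating dicts; the names (predict_rating, top_n, the similarity argument) are illustrative, not from the lecture.

```python
# Hypothetical layout: ratings is {user: {item: rating}}; similarity is any
# function comparing two users' rating dicts (e.g. the cosine sketch below).

def predict_rating(user, item, ratings, similarity):
    """Step 1: similarity-weighted average of other users' ratings for item."""
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        sim = similarity(ratings[user], theirs)
        num += sim * theirs[item]
        den += abs(sim)
    return num / den if den else 0.0

def top_n(user, ratings, similarity, n=10):
    """Step 2: rank all items the user has not rated yet, return the N best."""
    candidates = {i for r in ratings.values() for i in r} - set(ratings[user])
    ranked = sorted(candidates,
                    key=lambda i: predict_rating(user, i, ratings, similarity),
                    reverse=True)
    return ranked[:n]
```

Step 3 (adjusting to feedback) is simply re-running this loop as new ratings arrive.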
  33.–34. Technologies::Collaborative filtering. User-based filtering (GroupLens, 1994): take about 20-50 people who share a similar taste with you, then predict how much you might like an item depending on how much the others liked it. You may like it because your "friends" liked it. Item-based filtering (Amazon, 2001): pick from your previous list 20-50 items that share similar people with "the target item"; how much you will like the target item depends on how much the others liked those earlier items. You tend to like that item because you have liked those items.
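A hedged sketch of the user-based variant just described: cosine similarity over co-rated items, then the "20-50 people who share similar taste". The rating-dict layout matches the sketch above; k is an assumed, tunable neighbourhood size.

```python
import math

def cosine_sim(r1, r2):
    """Cosine similarity computed over the items both users rated."""
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in common)
    n1 = math.sqrt(sum(r1[i] ** 2 for i in common))
    n2 = math.sqrt(sum(r2[i] ** 2 for i in common))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def nearest_neighbours(user, ratings, k=20):
    """The user's k most similar 'taste neighbours' from the slide."""
    scores = [(cosine_sim(ratings[user], ratings[other]), other)
              for other in ratings if other != user]
    return sorted(scores, reverse=True)[:k]
```

Item-based filtering is the mirror image: compare columns (items) instead of rows (users).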
  35. Technologies::Content-based filtering. Information needs of the user and characteristics of the items are represented as keywords, attributes, or tags that describe past selections, e.g. via TF-IDF.
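TF-IDF, as named on the slide, can be tried in a few lines with scikit-learn; the item descriptions and the user-profile text below are invented purely for illustration.

```python
# Minimal content-based sketch: represent items and the user's past
# selections as TF-IDF vectors, then rank items by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

items = {  # hypothetical learning resources
    "i1": "introductory statistics course with exercises",
    "i2": "advanced machine learning and matrix factorization",
    "i3": "statistics refresher for social scientists",
}
user_profile = "statistics exercises for beginners"  # built from past selections

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(items.values()) + [user_profile])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank items by similarity to the user's profile
for item_id, score in sorted(zip(items, scores), key=lambda p: -p[1]):
    print(item_id, round(score, 3))
```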
  36.–37. Technologies::Hybrid RecSys. Combination of techniques to overcome the disadvantages and exploit the advantages of single techniques. Advantages: no content analysis; quality improves; no cold-start problem; no new user/item problem. Disadvantages: cold-start problem; over-fitting; new user/item problem; sparsity. (Just some examples; there are more (dis)advantages available.) A worked example: a probabilistic combination of an item-based method, a user-based method, matrix factorization, and (maybe) a content-based method. The idea is to pick from my previous list 20-50 movies that share a similar audience with "Taken"; how much I will like it depends on how much I liked those earlier movies. In short: I tend to watch this movie because I have watched those movies, or: people who have watched those movies also liked this movie.
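The simplest form of such a combination is a weighted blend with a content-based fallback for cold-start cases; a hedged sketch, where the weights and the two predictor functions are assumptions rather than anything prescribed by the lecture:

```python
def hybrid_score(user, item, cf_predict, cb_predict, w_cf=0.7, w_cb=0.3):
    """Weighted hybrid: blend a collaborative and a content-based predictor.

    cf_predict is assumed to return None when it has no data (new user or
    new item); we then fall back to the content-based score alone, which is
    how a hybrid sidesteps the cold-start problem listed above.
    """
    cf = cf_predict(user, item)
    if cf is None:
        return cb_predict(user, item)
    return w_cf * cf + w_cb * cb_predict(user, item)
```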
  38. Technologies::Overview. Hanani et al. (2001). Information Filtering: Overview of Issues, Research and Systems. User Modeling and User-Adapted Interaction, 11.
  39. Evaluation of RecSys.
  40. Evaluation::General idea. Most of the time based on performance measures ("How good are your recommendations?"). For example: What rating will a user give an item? Will the user select an item? What is the order of usefulness of items to a user? Herlocker, Konstan, Terveen & Riedl (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), pp. 5-53.
  41. Evaluation::Reference datasets ... and various commercial datasets.
  42. Evaluation::Approaches. Measures: user preference, prediction accuracy, coverage, confidence, trust, novelty, serendipity, diversity, utility, risk, robustness, privacy, adaptivity, scalability. Approaches: 1. offline study; 2. user study; and combinations of the two.
  43.–46. Evaluation::Metrics. Precision: the portion of recommendations that were successful (selected by the algorithm and by the user). Recall: the portion of relevant items selected by the algorithm compared to the total number of relevant items available. F1-measure: balances precision and recall in a single measurement. (Just some examples; more metrics are available, such as MAE and RMSE.) Gunawardana, A., Shani, G. (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks. Journal of Machine Learning Research, 10(Dec):2935-2962.
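These three definitions fit in a few lines of code. A minimal sketch for a single user's top-N list; the example lists are invented for illustration.

```python
def precision_recall_f1(recommended, relevant):
    """Precision, recall, and F1 for one user's top-N recommendation list."""
    recommended, relevant = set(recommended), set(relevant)
    hits = len(recommended & relevant)  # recommended AND selected by the user
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: 2 of 5 recommended items were actually selected by the user
print(precision_recall_f1(["a", "b", "c", "d", "e"], ["b", "e", "f"]))
# -> (0.4, 0.666..., 0.5)
```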
  47.–48. Evaluation::Metrics. [Bar chart: RMSE of Pearson vs. Cosine on Netflix and BookCrossing.] Conclusion: Pearson is better than Cosine, because of fewer errors in predicting TOP-N items. [Precision-recall curve on News Story Clicks, recall 5%-40%.] Conclusion: Cosine is better than Pearson, because of higher precision and recall values on TOP-N items. Gunawardana, A., Shani, G. (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks. Journal of Machine Learning Research, 10(Dec):2935-2962.
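Since these charts hinge on the difference between the two similarity measures: Pearson correlation is, in effect, cosine similarity computed after mean-centring each user's ratings. A hedged sketch over the same per-user rating-dict layout assumed earlier:

```python
import math

def pearson_sim(r1, r2):
    """Pearson correlation over co-rated items (mean-centred cosine)."""
    common = set(r1) & set(r2)
    if len(common) < 2:
        return 0.0
    m1 = sum(r1[i] for i in common) / len(common)
    m2 = sum(r2[i] for i in common) / len(common)
    num = sum((r1[i] - m1) * (r2[i] - m2) for i in common)
    d1 = math.sqrt(sum((r1[i] - m1) ** 2 for i in common))
    d2 = math.sqrt(sum((r2[i] - m2) ** 2 for i in common))
    return num / (d1 * d2) if d1 and d2 else 0.0
```

The charts' point stands either way: which measure wins depends on the dataset and the task, so both must be evaluated.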
  49. RecSys::TimeToThink. What do you expect that a RecSys for Learning should do with respect to: objectives, tasks, technology, evaluation? Image: Blackmore's custom-built LSD Drive, http://www.flickr.com/photos/rootoftwo/
  50. Goals of the lecture: 1. Crash course Recommender Systems (RecSys); 2. Overview of RecSys in TEL; 3. Conclusions and open research issues for RecSys in TEL.
  51. Recommender Systems for TEL: Introduction, Objectives, Technologies, Evaluation.
  52. TEL RecSys::Definition. Using the experiences of a community of learners to help individual learners in that community to identify more effectively learning content or peer students from a potentially overwhelming set of choices. Extended definition based on Resnick & Varian (1997). Recommender Systems. Communications of the ACM, 40(3).
  53. TEL RecSys::Learning spectrum. Cross, J. (2006). Informal Learning. Pfeiffer.
  54.–56. The Long Tail of Learning: Formal vs. Informal. Graphic: Wilkins, D. (2009).
  57. TEL RecSys::Technologies.
  58.–60. TEL RecSys::Technologies (ReMashed). RecSys task: Find good items. Hybrid RecSys: content-based filtering on interests, plus collaborative filtering. Drachsler, H., Pecceu, D., Arts, T., Hutten, E., Rutledge, L., Van Rosmalen, P., Hummel, H. G. K., & Koper, R. (2009). ReMashed - Recommendations for Mash-Up Personal Learning Environments. In U. Cress, V. Dimitrova & M. Specht (Eds.), Learning in the Synergy of Multiple Disciplines, Proceedings of EC-TEL 2009 (pp. 788-793). September 29 - October 2, 2009, Nice, France. Springer LNCS Vol. 5794.
  61.–63. TEL RecSys::Tasks. Find good items: e.g. relevant items for a learning task or a learning goal. Receive a sequence of items: e.g. recommend a learning path to achieve a certain competence. Annotation in context: e.g. take into account location, time, noise level, prior knowledge, and peers around. Drachsler, H., Hummel, H., Koper, R. (2009). Identifying the goal, user model and conditions of recommender systems for formal and informal learning. Journal of Digital Information, 10(2).
  64. Evaluation of TEL RecSys.
  65.–68. TEL RecSys::Review study. Conclusions: half of the systems (11/20) are still at the design or prototyping stage; only 9 systems were evaluated through trials with human users. Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H. G. K., & Koper, R. (2011). Recommender Systems in Technology Enhanced Learning. In P. B. Kantor, F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender Systems Handbook (pp. 387-415). Berlin: Springer.
  69.–70. The TEL recommender research is a bit like this... We need to design for each domain an appropriate recommender system that fits the goals, tasks, and particular constraints.
  71.–72. But... "The performance results of different research efforts in recommender systems are hardly comparable." (Manouselis et al., 2010). TEL recommender experiments lack result transparency and standardization. They need to be repeatable to test validity, to verify findings, and to compare results. Image: Kaptain Kobold, http://www.flickr.com/photos/kaptainkobold/3203311346/
  73. Data-driven Research and Learning Analytics. EATEL - Hendrik Drachsler (a), Katrien Verbert (b). (a) CELSTEC, Open University of the Netherlands; (b) Dept. Computer Science, K.U.Leuven, Belgium.
  74.–75. TEL RecSys::Evaluation/datasets. Drachsler, H., Bogers, T., Vuorikari, R., Verbert, K., Duval, E., Manouselis, N., Beham, G., Lindstaedt, S., Stern, H., Friedrich, M., & Wolpers, M. (2010). Issues and Considerations regarding Sharable Data Sets for Recommender Systems in Technology Enhanced Learning. Presentation at the 1st Workshop on Recommender Systems in Technology Enhanced Learning (RecSysTEL), in conjunction with the 5th European Conference on Technology Enhanced Learning (EC-TEL 2010): Sustaining TEL: From Innovation to Learning and Practice. September 28, 2010, Barcelona, Spain.
  77.–79. Evaluation::Metrics. MAE - Mean Absolute Error: deviation of recommendations from the user-specified ratings. The lower the MAE, the more accurately the RecSys predicts user ratings. Outcomes: Tanimoto similarity + item-based CF was the most accurate. A user-based CF algorithm that predicts the top 10 most relevant items for a user has an F1 score of almost 30%. Implicit ratings like download rates and bookmarks can successfully be used in TEL. Verbert, K., Drachsler, H., Manouselis, N., Wolpers, M., Vuorikari, R., Beham, G., Duval, E. (2011). Dataset-driven Research for Improving Recommender Systems for Learning. Learning Analytics & Knowledge, February 27 - March 1, 2011, Banff, Alberta, Canada.
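A minimal sketch of the two ingredients named on this slide: MAE for rating accuracy, and Tanimoto (Jaccard) similarity, which compares users by set overlap and therefore works on implicit, unary data such as bookmarks. The example values are invented for illustration.

```python
def mae(predictions, actuals):
    """Mean Absolute Error: average deviation from user-specified ratings."""
    return sum(abs(p - a) for p, a in zip(predictions, actuals)) / len(actuals)

def tanimoto_sim(items_a, items_b):
    """Overlap of two users' item sets; no rating values needed."""
    a, b = set(items_a), set(items_b)
    return len(a & b) / len(a | b) if a | b else 0.0

print(mae([4.5, 3.0, 2.0], [5, 3, 1]))           # 0.5
print(tanimoto_sim({"i1", "i2"}, {"i2", "i3"}))  # 0.333...
```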
  80. Goals of the lecture: 1. Crash course Recommender Systems (RecSys); 2. Overview of RecSys in TEL; 3. Conclusions and open research issues for RecSys in TEL.
  81.–82. 10 years of TEL RecSys research in one book: Recommender Systems for Learning. Chapter 1: Background; Chapter 2: TEL context; Chapter 3: Extended survey of 42 RecSys; Chapter 4: Challenges and Outlook. Manouselis, N., Drachsler, H., Verbert, K., Duval, E. (2012). Recommender Systems for Learning. Berlin: Springer.
  83. A framework for TEL RecSys.
  84.–87. Analysis according to the framework: supported tasks, domain model, user model, personalization approach.
  88. Available TEL datasets.
  89. TEL RecSys::Open issues: 1. Evaluation; 2. Datasets; 3. Context; 4. Visualization; 5. Virtualization; 6. Privacy.
  90. TEL RecSys::Ideal research design: 1. A selection of datasets for your RecSys task. 2. An offline study of different algorithms on the datasets. 3. A comprehensive controlled user study to test psychological, pedagogical, and technical aspects. 4. Rollout of the RecSys in real-life scenarios.
  91. Thank you for attending this lecture! This slide deck is available at: http://www.slideshare.com/Drachsler. Email: hendrik.drachsler@ou.nl. Skype: celstec-hendrik.drachsler. Blogging at: http://www.drachsler.de. Twittering at: http://twitter.com/HDrachsler.
  92. TEL RecSys::TimeToThink. Consider the Recommender System framework and imagine a TEL RecSys that could support you in your stakeholder role. Alternatively, name a learning task for which a TEL RecSys would be useful.
