Recent Research and Developments on Recommender Systems in TEL

Presentation given at the Learning Network seminar series at CELSTEC. Special guest was Wolfgang Reinhardt, who gave his view on data science in relation to improving the awareness of knowledge workers.

  1. Recent R&D on Recommender Systems in TEL. Learning Network Seminar on Recommender Systems in TEL, 18.01.2011. Hendrik Drachsler, Centre for Learning Sciences and Technology @ Open University of the Netherlands.
  2. What is dataTEL? dataTEL is a Theme Team funded by the STELLAR Network of Excellence. It addresses two STELLAR Grand Challenges: 1. Connecting Learners, 2. Contextualisation.
  3. dataTEL::Objectives: standardize research on recommender systems in TEL. Five core questions:
     1. How can data sets be shared according to privacy and legal protection rights?
     2. How to develop a policy to use and share data sets?
     3. How to pre-process data sets to make them suitable for other researchers?
     4. How to define common evaluation criteria for TEL recommender systems?
     5. How to develop overview methods to monitor the performance of TEL recommender systems on data sets?
  4. Recommenders in TEL
  5. The TEL recommenders are a bit like this... We need to select for each application an appropriate RecSys that fits its needs.
  6. But... "The performance results of different research efforts in recommender systems are hardly comparable." (Manouselis et al., 2010). TEL recommender experiments lack transparency; they need to be repeatable to test validity and verification, and to compare results. (Photo: Kaptain Kobold, http://www.flickr.com/photos/kaptainkobold/3203311346/)
  7. Survey on TEL Recommenders: "The continuation of small-scale experiments with a limited amount of learners that rate the relevance of suggested resources only adds little contribution to an evidence-driven knowledge base on recommender systems in TEL."
     Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H. G. K., & Koper, R. (2011). Recommender Systems in Technology Enhanced Learning. In P. B. Kantor, F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender Systems Handbook (pp. 387-415). Berlin: Springer.
  8. How others compare their recommenders
  9. What kind of data sets can we expect in TEL? (Graphic by J. Cross, Informal Learning, Pfeifer, 2006)
  10. The Long Tail of Learning: Formal vs. Informal. (Graphic: Wilkins, D., 2009; Long Tail concept: Anderson, C., 2004)
  11. dataTEL::Collection
      Drachsler, H., Bogers, T., Vuorikari, R., Verbert, K., Duval, E., Manouselis, N., Beham, G., Lindstaedt, S., Stern, H., Friedrich, M., & Wolpers, M. (2010). Issues and Considerations regarding Sharable Data Sets for Recommender Systems in Technology Enhanced Learning. Presentation at the 1st Workshop on Recommender Systems in Technology Enhanced Learning (RecSysTEL), in conjunction with the 5th European Conference on Technology Enhanced Learning (EC-TEL 2010): Sustaining TEL: From Innovation to Learning and Practice. September 28, 2010, Barcelona, Spain.
  12. Collaborative Filtering. The task of a CF algorithm is to find item likeliness of two forms: (1) Prediction, a numerical value expressing the predicted likeliness of an item the user has not yet expressed an opinion about; (2) Recommendation, a list of N items the active user will like the most (Top-N recommendations).
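      A minimal sketch of these two tasks (illustrative code, not from the presentation): a plain user-based collaborative filter over a tiny, invented ratings dictionary. All user and item names are made up for the example.

      from math import sqrt

      # user -> {item: rating}; a toy stand-in for a real TEL data set
      ratings = {
          "u1": {"i1": 5, "i2": 3, "i3": 4},
          "u2": {"i1": 4, "i2": 2, "i4": 5},
          "u3": {"i2": 5, "i3": 2, "i4": 1},
      }

      def cosine_sim(a, b):
          """Cosine similarity over the items two users rated in common."""
          common = set(a) & set(b)
          if not common:
              return 0.0
          dot = sum(a[i] * b[i] for i in common)
          na = sqrt(sum(a[i] ** 2 for i in common))
          nb = sqrt(sum(b[i] ** 2 for i in common))
          return dot / (na * nb) if na and nb else 0.0

      def predict(user, item):
          """Prediction: a numerical value for an item the user has not rated yet."""
          num = den = 0.0
          for other, theirs in ratings.items():
              if other == user or item not in theirs:
                  continue
              sim = cosine_sim(ratings[user], theirs)
              num += sim * theirs[item]
              den += abs(sim)
          return num / den if den else None

      def recommend(user, n=3):
          """Recommendation: the Top-N unseen items with the highest predicted ratings."""
          seen = set(ratings[user])
          candidates = {i for r in ratings.values() for i in r} - seen
          scored = [(i, predict(user, i)) for i in candidates]
          return sorted([s for s in scored if s[1] is not None], key=lambda s: s[1], reverse=True)[:n]

      print(predict("u1", "i4"))   # predicted rating for an item u1 has not rated
      print(recommend("u1"))       # Top-N recommendation list for u1

      Cosine similarity is only one common choice here; Pearson correlation or an item-based neighbourhood would fit the same two-task framing.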
  13. Evaluation criteria. MAE (Mean Absolute Error): the deviation of recommendations from their true user-specified values in the data; the lower the MAE, the more accurately the recommendation engine predicts user ratings. F1 measure: balances Precision and Recall into a single measurement; Recall is defined as the ratio of relevant items returned by the recommender to the total number of relevant items available.
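      For concreteness, a short sketch of how these metrics are typically computed; the numbers are invented and do not come from any study cited here.

      def mae(predicted, actual):
          """Mean Absolute Error: average deviation of predictions from the true ratings."""
          return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

      def precision_recall_f1(recommended, relevant):
          """Precision, Recall and their harmonic mean (F1) for one recommendation list."""
          hits = len(set(recommended) & set(relevant))
          precision = hits / len(recommended) if recommended else 0.0
          recall = hits / len(relevant) if relevant else 0.0
          f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
          return precision, recall, f1

      print(mae([3.5, 4.0, 2.0], [4, 5, 2]))                        # 0.5
      print(precision_recall_f1(["i1", "i2", "i3"], ["i2", "i4"]))  # approx. (0.33, 0.5, 0.4)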
  14. dataTEL::Evaluation
      Verbert, K., Duval, E., Drachsler, H., Manouselis, N., Wolpers, M., Vuorikari, R., & Beham, G. (2011). Dataset-driven Research for Improving Recommender Systems for Learning. Learning Analytics & Knowledge, February 27 - March 1, 2011, Banff, Alberta, Canada.
  15. dataTEL::Critics. The main research question remains: how can generic algorithms be modified to support learners or teachers? To give an example: from a pure learning perspective, the most valuable resources contain different opinions or facts that challenge the learners to disagree, agree, and redefine their mental model.
  16. Target: Knowledge Environment
  17. Challenges to deal with for the Knowledge Environment
  18. Challenge 1: Protection Rights ("OVERSHARING"). Were the founders of PleaseRobMe.com actually allowed to grab the data from the web and present it in that way? Are we allowed to use data from social handles and reuse it for research purposes? And beyond that, there are company protection rights to consider.
  19. Challenge 2: Policies to use Data
  20. Challenge 3: Pre-Process Data Sets. For informal data sets: 1. Collect data, 2. Process data, 3. Share data. For formal data sets from an LMS: 1. Data storing scripts, 2. Anonymisation scripts (see the sketch below).
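      As an illustration of what an anonymisation script could look like, the sketch below pseudonymises user identifiers in a hypothetical CSV export; the file name, column name and salting scheme are assumptions, not part of the dataTEL proposal.

      import csv
      import hashlib

      SALT = "replace-with-a-secret-salt"  # keep the salt separate from the shared data set

      def pseudonymise(user_id: str) -> str:
          """Replace a user identifier with a salted hash, so the same user stays linkable without being named."""
          return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()[:12]

      # "lms_log.csv" and the "user_id" column are hypothetical; adapt to the actual LMS export.
      with open("lms_log.csv", newline="") as src, open("lms_log_anon.csv", "w", newline="") as dst:
          reader = csv.DictReader(src)
          writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
          writer.writeheader()
          for row in reader:
              row["user_id"] = pseudonymise(row["user_id"])  # other personal fields should be dropped or hashed too
              writer.writerow(row)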
  21. Challenge 4: Evaluation Criteria. Combine the approach by Drachsler et al. (2008) with the Kirkpatrick model (Manouselis et al., 2010):
      1. Accuracy, 2. Coverage, 3. Precision
      1. Reaction of learner, 2. Learning improved, 3. Behaviour, 4. Results
      1. Effectiveness of learning, 2. Efficiency of learning, 3. Drop-out rate, 4. Satisfaction
  22. 3 take-away messages:
      1. In order to create evidence-driven knowledge about the effect of recommender systems on learners and personalized learning, more standardized experiments are needed.
      2. The continuation of additional small-scale experiments with a limited amount of learners that rate the relevance of suggested resources only adds little contribution to an evidence-driven knowledge base on recommender systems in TEL.
      3. The key question remains how generic algorithms need to be modified in order to support learners or teachers.
  23. Join us for a coffee... http://www.teleurope.eu/pg/groups/9405/datatel/
  24. Upcoming event: Workshop "dataTEL - Data Sets for Technology Enhanced Learning". Date: March 30th to March 31st, 2011. Location: ski resort La Clusaz in the French Alps, Massif des Aravis. Funding: food and lodging for 3 nights for 10 selected participants. Submissions: to be considered for funding, please send extended abstracts to http://www.easychair.org/conferences/?conf=datatel2011. Deadline for submissions: October 25th, 2010. CfP: http://www.teleurope.eu/pg/pages/view/46082/
  25. Many thanks for your interest. These slides are available at http://www.slideshare.com/Drachsler. Email: hendrik.drachsler@ou.nl. Skype: celstec-hendrik.drachsler. Blog: http://www.drachsler.de. Twitter: http://twitter.com/HDrachsler
