Metrics For Learning Object Metadata



ECTEL2006 Doctoral Consortium presentation about my research in Metrics for Learning Object Metadata. More information:

Published in: Technology, Education


  1. Xavier Ochoa, ESPOL; Erik Duval, KULeuven
  2. Context of the Research
  3. Learnometrics
     - Study empirical regularities in the data
     - Develop mathematical models
     - To understand the influence/impact of Learning Objects
     - Produce useful metrics
  4. Example of Learnometrics: the number of downloads does not depend on the number of objects published
  5. Example of Learnometrics 2: the downloads of objects follow a power-law distribution
  6. More than Learning Object Metadata
     - All information about Learning Objects:
       - The object itself
       - LOM / DC / MPEG-7
       - Contextual Attention Metadata (CAM)
       - Sequencing information (SCORM / LAMS)
  7. Uses of Learning Object Metadata Metrics
     - To improve Learning Object tools:
       - Indexing material: LOM quality metrics
       - Searching/finding: ranking metrics, recommendation metrics
       - Reuse: adaptation metrics
  8. Learning Object Metadata Quality
     - The production, management and consumption of Learning Object Metadata vastly surpasses the human capacity to review or process these metadata.
  9. LOM Quality Metrics
  10. Evaluation of LOM Quality Metrics: textual information content correlates highly with the human-assigned quality score
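The slide does not spell out how textual information content is computed. A minimal sketch of one plausible variant scores a metadata description by summing the self-information, -log2 p(word), of its words against a background corpus; the corpus and descriptions below are purely illustrative, not the authors' data or their exact metric:

```python
import math
from collections import Counter

# Toy background corpus of metadata descriptions (illustrative only).
corpus = [
    "introduction to java programming",
    "java exercises",
    "test",
    "slides about recursion in java programming",
]
word_counts = Counter(w for doc in corpus for w in doc.split())
total = sum(word_counts.values())

def information_content(description):
    """Sum of -log2 p(word) over the description's words:
    rarer, more specific words carry more information, so richer
    descriptions score higher than vague one-word ones."""
    return sum(-math.log2(word_counts[w] / total)
               for w in description.split() if w in word_counts)

print(information_content("test"))                                       # low
print(information_content("slides about recursion in java programming")) # higher
```

Under this reading, a record described only as "test" scores far lower than a record with a specific, multi-word description, which is the kind of signal that could plausibly track human quality judgments.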
  12. LOM Quality Visualization
  13. Ranking Metrics
     - Network-Analysis Rank (popularity): most users prefer these objects…
     - Similarity Recommendation (clustering): if you like this LO, you will also like…
     - Personalized Rank (profiling): based on your history, you will like these objects…
     - Contextual Recommendation Rank: this object seems right for the lesson you are creating right now…
  14. Network-Analysis Metrics
     - CAM as a k-partite graph
     - (Figure: graph with four partitions: objects O1–O3, courses C1–C2, users U1–U2, authors A1–A2.)
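One way to read the k-partite picture: CAM events become edges from user, course and author nodes into the object partition, and a simple popularity rank falls out of the in-degree of each object. The edge list and the in-degree metric below are an illustrative sketch, not the authors' actual algorithm:

```python
from collections import defaultdict

# Sketch: CAM events as edges in a k-partite graph.
# Node labels mirror the figure's partitions (users U, courses C,
# authors A, objects O); the edges themselves are made up.
edges = [
    ("U1", "O1"), ("U1", "O2"), ("U2", "O2"), ("U2", "O3"),  # user -> object
    ("C1", "O1"), ("C2", "O2"),                              # course -> object
    ("A1", "O1"), ("A1", "O2"), ("A2", "O3"),                # author -> object
]

# A basic popularity metric: in-degree of each object node,
# i.e. how many users, courses and authors link to it.
in_degree = defaultdict(int)
for _, obj in edges:
    in_degree[obj] += 1

ranking = sorted(in_degree.items(), key=lambda kv: -kv[1])
print(ranking)  # O2 has the most incoming links here
```

More sophisticated network-analysis ranks (e.g. PageRank-style propagation across partitions) would build on the same graph representation.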
  15. Application
  16. Similarity Metric
  17. ARIADNE Communities
  18. Application
  19. Personalized Rank
     - We can create a profile of the user based on their CAM
     - The same LOM record structure can be used to store this profile
     - Instead of a crisp preference for a single value, the user has a fuzzy set with different degrees of "preference" over all the possible values.
  20. Personalized Rank
     - Topic importance = 0.9
     - Language importance = 0.6
     - U1 = {(0.8/ComputerScience + 0.2/Physics), (0.6/English + 0.2/Spanish + 0.2/French)}
     - O1 = {(1.0/ComputerScience), (1.0/Spanish)}
     - O2 = {(1.0/Physics), (1.0/English)}
     - Rank(O1) = 0.9*0.8 + 0.6*0.2 = 0.84
     - Rank(O2) = 0.9*0.2 + 0.6*0.6 = 0.54
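The worked example above can be reproduced in a few lines. The dictionaries and the `rank` function are my own rendering of the slide's fuzzy-set notation (field names like "topic" and "language" are assumptions), not the authors' implementation:

```python
# Fuzzy-preference rank as sketched on the slide: each LOM field has an
# importance weight, and the user profile holds fuzzy membership degrees
# per field value.
weights = {"topic": 0.9, "language": 0.6}

u1 = {
    "topic": {"ComputerScience": 0.8, "Physics": 0.2},
    "language": {"English": 0.6, "Spanish": 0.2, "French": 0.2},
}
o1 = {"topic": {"ComputerScience": 1.0}, "language": {"Spanish": 1.0}}
o2 = {"topic": {"Physics": 1.0}, "language": {"English": 1.0}}

def rank(user, obj):
    # For each field, weight the user's degree of preference for the
    # object's values by the field importance, then sum over fields.
    score = 0.0
    for field, importance in weights.items():
        for value, degree in obj[field].items():
            score += importance * user[field].get(value, 0.0) * degree
    return score

print(rank(u1, o1))  # 0.9*0.8 + 0.6*0.2 = 0.84
print(rank(u1, o2))  # 0.9*0.2 + 0.6*0.6 = 0.54
```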
  21. Contextual Recommending
     - CAM can be treated not only as a source of historical data, but also as a continuous stream of contextualized attention information.
     - LMSs could provide much more contextual information.
     - Use techniques that exploit this contextual information; the simplest is term extraction.
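The slide names term extraction as the simplest technique; a minimal frequency-based sketch (stopword list, lesson text and ranking scheme are all illustrative, not the authors' method) could look like:

```python
import re
from collections import Counter

# Tiny illustrative stopword list.
STOPWORDS = {"the", "a", "of", "for", "and", "to", "in", "is", "with"}

def extract_terms(text, k=3):
    """Simplest form of term extraction: the k most frequent
    non-stopword words in the current lesson context."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(k)]

# Hypothetical snippet of a lesson a teacher is editing in an LMS:
lesson = ("Introduction to sorting algorithms. Sorting a list with "
          "quicksort and merge sort. Complexity of sorting algorithms.")
print(extract_terms(lesson))  # 'sorting' and 'algorithms' rank highest
```

The extracted terms could then be fed as a query to the repository to recommend objects matching the lesson being authored right now.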
  22. Evaluation
     - Experimentation: ranking vs. no ranking; different ranking strategies and combinations
     - User feedback: machine learning / optimization
     - Transfer: other reusable components
  23. Research Questions (Summary)
     - How can the information available about Learning Objects (the object itself, LOM, CAM, SCORM) be used to create relevance/quality metrics to rank and recommend Learning Objects?
     - Are the resulting metrics feasible to calculate, easy to integrate into existing applications, and meaningful/useful for end users?
     - Can these metrics also be applied to other reusable components?
  24. Thank you! Comments, suggestions and criticism are welcome. More information: [email_address]