Learnometrics: Metrics for Learning Objects


Ph.D. defense presentation, K.U.Leuven.
How to measure the characteristics of the different processes involved in the Learning Object lifecycle.



  1. Learnometrics: Metrics for Learning Objects. Xavier Ochoa
  2. Learning Object: any digital resource that can be reused to support learning (Wiley, 2004)
  3. Share and Reuse
  4. Sharing
  5. Sharing
  6. Repository
  7. Metadata: book metadata
  8. Learning Object Metadata (LOM). General: Title: Landing on the Moon. Technical: File format: QuickTime movie, Duration: 2 minutes. Educational: Interactivity level: low, End-user: learner. Relation: is-part-of, Resource: History course
  9. Learning Object Repository: object repository and/or metadata repository
  10. Learning Object Economy: market makers, producers, market, consumers, policy makers
  11. How does it work? How can it be improved?
  12. Purpose: generate empirical knowledge about the LOE; test existing techniques to improve LO tools
  13. Quantitative Analysis
  14. Metrics Proposal and Evaluation
  15. Quantitative Analysis of the Publication of LO • What is the size of repositories? • How do repositories grow? • How many objects per contributor? • Can it be modeled?
  16. Size is very unequal
  17. Size Comparison: repository, referatory, OCW, LMS, IR
  18. Growth is linear or bi-phase linear: ln(a·exp(b·x) + c)
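The bi-phase linear model on slide 18, ln(a·exp(b·x) + c), is roughly flat while the constant c dominates and approaches a straight line with slope b once the exponential term takes over. A minimal sketch of fitting it with SciPy on a synthetic series (the data, parameter values, and starting guesses are illustrative assumptions, not the thesis data):

```python
# Sketch: fitting the bi-phase linear growth model ln(a*exp(b*x) + c).
# The series here is synthetic; only the model form comes from the slide.
import numpy as np
from scipy.optimize import curve_fit

def biphase_linear(x, a, b, c):
    # For small x the constant c dominates (flat phase);
    # for large x the result approaches b*x + ln(a) (linear phase).
    return np.log(a * np.exp(b * x) + c)

rng = np.random.default_rng(0)
months = np.arange(1, 61, dtype=float)               # hypothetical 5-year series
size = biphase_linear(months, 2.0, 0.15, 40.0) + rng.normal(0, 0.05, months.size)

(a, b, c), _ = curve_fit(biphase_linear, months, size, p0=(1.0, 0.1, 10.0))
print(f"a={a:.2f}, b={b:.2f} (long-run slope), c={c:.2f}")
```

Comparing the residuals of this fit against a plain linear fit is one way to decide which of the two growth regimes a given repository is in.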
  19. Objects per Contributor: heavy-tailed distributions (no bell curve). LORP and LORF: Lotka with cut-off ("fat tail")
  20. Objects per Contributor: heavy-tailed distributions (no bell curve). OCW and LMS: Weibull ("fat belly")
  21. Objects per Contributor: heavy-tailed distributions (no bell curve). IR: Lotka with high alpha ("light tail")
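One way to see which of these heavy-tailed shapes a repository follows is to build the objects-per-contributor counts and compare candidate fits. The sketch below uses a hypothetical publication log, a crude log-log rank-size slope as a power-law (Lotka-style) indicator, and SciPy's maximum-likelihood Weibull fit; it does not reproduce the estimation procedure or data used in the thesis:

```python
# Sketch: objects-per-contributor distribution checks on hypothetical data.
from collections import Counter
import numpy as np
from scipy import stats

# Hypothetical publication log: (contributor_id, object_id) pairs.
records = [("u1", "o1"), ("u1", "o2"), ("u2", "o3"), ("u3", "o4"),
           ("u1", "o5"), ("u4", "o6"), ("u2", "o7"), ("u5", "o8")]

objects_per_contributor = np.array(
    sorted(Counter(u for u, _ in records).values(), reverse=True), dtype=float)

# Rough Lotka/power-law check: slope of the log-log rank-size plot.
ranks = np.arange(1, objects_per_contributor.size + 1)
slope, intercept, r, *_ = stats.linregress(np.log(ranks),
                                           np.log(objects_per_contributor))
print(f"rank-size slope ~ {slope:.2f} (r={r:.2f})")

# Weibull alternative ("fat belly"), fitted by maximum likelihood.
shape, loc, scale = stats.weibull_min.fit(objects_per_contributor, floc=0)
print(f"Weibull shape={shape:.2f}, scale={scale:.2f}")
```

In practice, dedicated maximum-likelihood estimators for power-law exponents are preferable to the rank-size slope, which is only a quick visual-style check.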
  22. Engagement
  23. Model
  24. Analysis Conclusions – Few big repositories concentrate most of the material – Repositories are not growing as they should – There is no such thing as an average contributor – Differences between repositories are based on the engagement of the contributors – Results point to a possible lack of "value proposition"
  25. Quantitative Analysis of the Reuse of Learning Objects • What percentage of learning objects is reused? • Does granularity affect reuse? • How many times is a learning object reused?
  26. Reuse Paradox
  27. Measuring Reuse
  28. Measuring Reuse
  29. Measuring Reuse: ~20%
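As a rough illustration of where a figure like ~20% can come from, the sketch below computes the share of objects that appear in more than one course; both the usage records and this operationalisation of "reuse" are assumptions made for the example, not the measurement actually used in the thesis:

```python
# Sketch: percentage of objects reused, i.e. appearing in more than one course.
from collections import defaultdict

# Hypothetical usage log: (object_id, course_id) pairs.
usage = [("o1", "c1"), ("o1", "c2"), ("o2", "c1"),
         ("o3", "c3"), ("o3", "c4"), ("o3", "c5"), ("o4", "c2")]

courses_per_object = defaultdict(set)
for obj, course in usage:
    courses_per_object[obj].add(course)

reused = sum(1 for courses in courses_per_object.values() if len(courses) > 1)
print(f"reuse rate: {reused / len(courses_per_object):.0%}")  # 2 of 4 -> 50%
```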
  30. Distribution of Reuse
  31. Analysis Conclusions – Learning Objects are being reused with or without the help of Learning Object technologies – The reuse paradox needs to be re-evaluated – Reuse seems to be the result of a chain of successful events.
  32. Quality of Metadata
  33. Quality of Metadata: Title: "The Time Machine" Author: "Wells, H. G." Publisher: "L&M Publishers, UK" Year: "1965" Location: ----
  34. Metrics for Metadata Quality – How can the quality of metadata be measured? (metrics) – Do the metrics work? • Do the metrics correlate with human evaluation? • Do the metrics separate good-quality from bad-quality metadata? • Can the metrics be used to filter low-quality records?
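As one concrete illustration of what such a metric can look like, the sketch below scores a LOM-like record on field completeness and on the amount of textual information in its free-text fields; the field list and scoring functions are illustrative assumptions, not the exact metrics evaluated in the thesis:

```python
# Sketch: two simple metadata quality scores for a LOM-like record.
import math
from collections import Counter

FIELDS = ["title", "description", "author", "format", "duration", "interactivity"]

def completeness(record):
    """Fraction of expected fields that are present and non-empty."""
    return sum(1 for f in FIELDS if record.get(f)) / len(FIELDS)

def text_information(record, text_fields=("title", "description")):
    """Crude textual-information score: word-level entropy of free-text fields."""
    words = " ".join(record.get(f, "") for f in text_fields).lower().split()
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

record = {"title": "Landing on the Moon",
          "description": "Short film about the Apollo 11 landing on the Moon",
          "format": "video/quicktime"}
print(completeness(record), round(text_information(record), 2))
```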
  35. Textual information correlates with human evaluation
  36. Some metrics could filter low-quality records
  37. Study Conclusions – Humans and machines have different needs for metadata – Metrics can be used to easily establish some characteristics of the metadata – The metrics can be used to automatically filter or flag low-quality metadata
  38. Abundance of Choice
  39. Relevance Ranking Metrics – What does relevance mean in the context of Learning Objects? – How can existing ranking techniques be used to produce metrics to rank learning objects? – How can those metrics be combined to produce a single ranking value? – Can the proposed metrics outperform simple text-based ranking?
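On the question of combining metrics into a single ranking value, the simplest option is a weighted sum of normalised metric scores per object; learned rankers such as RankNet (slide 41) effectively replace the hand-picked weights with weights trained from preference data. A minimal sketch of the weighted-sum variant, with hypothetical metric names, values, and weights rather than the combination actually evaluated in the thesis:

```python
# Sketch: combining several relevance metrics into one ranking score
# via a weighted sum of min-max normalised values.
def normalise(values):
    lo, hi = min(values), max(values)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in values]

def combined_rank(objects, metrics, weights):
    # metrics: {name: [raw value per object]}, weights: {name: float}
    norm = {name: normalise(vals) for name, vals in metrics.items()}
    scores = [sum(weights[name] * norm[name][i] for name in metrics)
              for i in range(len(objects))]
    return sorted(zip(objects, scores), key=lambda pair: pair[1], reverse=True)

objects = ["lo1", "lo2", "lo3"]
metrics = {"text_similarity": [0.2, 0.9, 0.5],   # hypothetical metric values
           "popularity":      [120,  10,  60],
           "topical_overlap": [0.1, 0.4, 0.8]}
weights = {"text_similarity": 0.5, "popularity": 0.2, "topical_overlap": 0.3}
print(combined_rank(objects, metrics, weights))
```

The weights are exactly what a learning-to-rank method would fit from data instead of leaving them to manual tuning.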
  40. Metrics improve over Base Rank
  41. RankNet outperforms Base Ranking by 50%
  42. Relevance Ranking Metrics • Implications – Even basic techniques can improve the ranking of learning objects – Metrics are scalable and easy to implement • Warning: – Preliminary results: not based on real-world observation
  43. Applications - MQM
  44. Applications - RRM
  45. Applications - RRM
  46. General Conclusions • Publication and reuse are dominated by heavy-tailed distributions • LMSs have the potential to bootstrap the LOE • The proposed models/metrics set a baseline against which new models/metrics can be compared and improvements measured • More questions are raised than answered
  47. Publications • Chapter 2 – Quantitative Analysis of User-Generated Content on the Web. Proceedings of the First International Workshop on Understanding Web Evolution (WebEvolve2008) at WWW2008, 2008, 19-26 – Quantitative Analysis of Learning Object Repositories. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications ED-Media 2008, 2008, 6031-6040 • Chapter 3 – Measuring the Reuse of Learning Objects. Third European Conference on Technology Enhanced Learning (ECTEL 2008), 2008, Accepted.
  48. Publications • Chapter 4 – Towards Automatic Evaluation of Learning Object Metadata Quality. LNCS: Advances in Conceptual Modeling - Theory and Practice, Springer, 2006, 4231, 372-381 – SAmgI: Automatic Metadata Generation v2.0. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications ED-Media 2007, AACE, 2007, 1195-1204 – Quality Metrics for Learning Object Metadata. World Conference on Educational Multimedia, Hypermedia and Telecommunications 2006, AACE, 2006, 1004-1011
  49. Publications • Chapter 5 – Relevance Ranking Metrics for Learning Objects. IEEE Transactions on Learning Technologies, 2008, 1(1), 14 – Relevance Ranking Metrics for Learning Objects. LNCS: Creating New Learning Experiences on a Global Scale, Springer, 2007, 4753, 262-276 – Use of contextualized attention metadata for ranking and recommending learning objects. CAMA '06: Proceedings of the 1st international workshop on Contextualized attention metadata at CIKM 2006, ACM Press, 2006, 9-16
  50. My Research Metrics (PoP) • Papers: 14 • Citations: 55 • Years: 6 • Cites/year: 9.17 • Cites/paper: 4.23 • Cites/author: 21.02 • Papers/author: 6.07 • Authors/paper: 2.77 • h-index: 5 • g-index: 7 • hc-index: 5 • hI-index: 1.56 • hI-norm: 3 • AWCR: 13.67 • AW-index: 3.70 • AWCRpA: 5.62
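The h-index and g-index reported on this slide can be computed directly from a per-paper citation list. The sketch below uses a made-up citation vector chosen only to be consistent with the slide's totals (14 papers, 55 citations), not the author's actual per-paper counts:

```python
# Sketch: h-index and g-index from a per-paper citation count list.
def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    cites = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(cites, start=1) if c >= i)

def g_index(citations):
    # Largest g such that the top g papers together have at least g^2 citations.
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(cites, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

papers = [15, 9, 8, 7, 6, 3, 2, 2, 1, 1, 1, 0, 0, 0]   # hypothetical citations
print(h_index(papers), g_index(papers))                # -> 5 7
```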
  51. Thank you for your attention. Questions?
