
Towards Automatic Evaluation of Learning Object Metadata Quality

A presentation at ER2006 (QoIS 2006 workshop) on how to automatically calculate the quality of learning object metadata.



  1. Towards Automatic Evaluation of Learning Object Metadata Quality. Xavier Ochoa, ESPOL, Ecuador; Erik Duval, KULeuven, Belgium. QoIS 2006
  2. Learning Objects are…
     - "Any entity, digital or non-digital, that can be used, re-used or referenced during technology-supported learning." (IEEE LOM Standard)
  3. Learning Object Metadata (the Learning Object Metadata Standard)
  4. Initial growth has been slow (ARIADNE)
  5. Standardization, interoperability of repositories and automatic generation of metadata have solved the scarcity problem… but have created new, "good" problems.
  6. The production, management and consumption of Learning Object Metadata are vastly surpassing the human capacity to review or process these metadata.
  7. Currently there is NO scalable quality evaluation of Learning Object Metadata.
  8. Quality of Metadata
     - "High quality metadata supports the functional requirements of the system it is designed to support." (Guy et al., 2004)
  9. Quality of Metadata (an example record)
     - Title: "The Time Machine"
     - Author: "Wells, H. G."
     - Publisher: "L&M Publishers, UK"
     - Year: "1965"
     - Location: ----
  10. Quality of Metadata
  11. Quality of Metadata
  12. Why Measure Quality?
     - The quality of the metadata record that describes a learning object directly affects the chances that the object will be found, reviewed or reused.
     - An object titled "Lesson 1 – Course 201" with no description cannot be found by an "Introduction to Java" query, even if it is about that subject.
  13. How to Measure Metadata Quality?
     - Manually check a statistical sample of records to evaluate their quality.
       - Use graphical tools to support the task.
     - Use simple statistics from the repository.
     - Usability studies.
  14. Metrics
     - A good system needs both characteristics:
       - Be mostly automated.
       - Predict, with a certain degree of precision, the fitness of the metadata instance for its task.
     - Other fields have tackled similar problems through the use of metrics:
       - Software Engineering
       - Bibliographical Studies (Scientometrics)
       - Search engines (e.g., PageRank)
  15. We cannot measure the quality manually anymore…
  16. … but it is a good idea to follow the same quality characteristics.
  17. Quality Characteristics
     - Framework proposed by Bruce and Hillmann:
       - Completeness
       - Accuracy
       - Provenance
       - Conformance to expectations
       - Consistency & logical coherence
       - Timeliness
       - Accessibility
  18. Our Proposal: Use Metrics
     - A metric is a small calculation performed over the values of the different fields of the metadata record in order to gain insight into a quality characteristic.
     - For example, we can count the number of fields that have been filled in (metric) to assess the completeness of the metadata record (quality characteristic).
  19. Quality Metrics: Completeness
     - Simple Completeness: what percentage of the fields has been filled in.
     - Weighted Completeness: not all fields are equally important, so use a weighted sum instead. (A sketch of both metrics follows this slide.)
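A minimal Python sketch of the two completeness metrics, for concreteness. The record fields, the example values and the weights are illustrative assumptions, not values from the paper; a real implementation would run over the full IEEE LOM element set.

```python
def simple_completeness(record):
    """Fraction of fields that contain a non-empty value."""
    filled = sum(1 for value in record.values() if value)
    return filled / len(record)

def weighted_completeness(record, weights):
    """Weighted sum over the filled fields, normalized by the total weight."""
    filled = sum(w for field, w in weights.items() if record.get(field))
    return filled / sum(weights.values())

record = {"title": "The Time Machine", "description": "", "author": "Wells, H. G."}
weights = {"title": 4.0, "description": 2.0, "author": 1.0}  # assumed importance

print(simple_completeness(record))             # 2/3: two of three fields filled
print(weighted_completeness(record, weights))  # (4+1)/7: missing description costs more
```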
  20. Quality Metrics: Conformance to Expectations
     - Nominal Information Content: how different the value of a field in the metadata record is from the values in the rest of the repository (entropy).
     - Textual Information Content: how relevant the words contained in free-text fields are (TF-IDF). (See the sketch after this slide.)
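A sketch of both conformance metrics under assumed formulations: nominal information content as the self-information -log2 p(v) of a field value's frequency in the repository, and textual information content as an average smoothed-IDF relevance of the words in a free-text field. The exact formulas in the paper may differ, and the repository values and corpus below are placeholder data.

```python
import math
from collections import Counter

def nominal_information_content(value, repository_values):
    """Self-information -log2 p(v) of a nominal value: rare values score higher."""
    counts = Counter(repository_values)
    p = counts[value] / len(repository_values)
    return -math.log2(p)

def textual_information_content(text, corpus):
    """Average IDF weight of the words in a text: a TF-IDF-style
    relevance score against the repository corpus (repeated words
    contribute once per occurrence, so term frequency is implicit)."""
    words = text.lower().split()
    score = 0.0
    for word in words:
        df = sum(1 for doc in corpus if word in doc.lower().split())
        idf = math.log((1 + len(corpus)) / (1 + df)) + 1  # smoothed IDF
        score += idf
    return score / len(words)

languages = ["en", "en", "en", "nl", "en"]
print(nominal_information_content("nl", languages))  # rare value -> high score

corpus = ["introduction to java programming",
          "java exercises for beginners",
          "lesson 1 course 201"]
print(textual_information_content("introduction to java programming", corpus))
```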
  21. Quality Metrics: Accessibility
     - Readability: how easy it is to read the text of free-text fields. (A sketch follows this slide.)
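The slide does not fix a readability formula, so this sketch assumes the well-known Flesch Reading Ease score, with a crude vowel-group counter standing in for proper syllabification; the example description is hypothetical.

```python
import re

def count_syllables(word):
    """Crude approximation: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier-to-read text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

description = "This course introduces the basic concepts of Java programming."
print(flesch_reading_ease(description))
```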
  22. Quality Metrics
  23. Evaluation of the Metrics
     - Online experiment: http://ariadne.cti.espol.edu.ec/Metrics
     - 22 human reviewers
     - 20 learning object metadata records (10 manual, 10 automated)
     - 7 characteristics used for evaluation
     - 5 quality metrics
  24. Evaluation Results: Textual Information Content correlates highly (0.842) with the human-assigned quality score. (A sketch of how such a correlation is computed follows this slide.)
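For context, a minimal sketch of how a correlation like the reported 0.842 could be computed. The slide does not say which correlation coefficient was used; Pearson's r is assumed here, and the metric values and reviewer scores are placeholder data, not the experiment's data.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

metric_scores = [0.2, 0.5, 0.7, 0.9]  # placeholder metric values
human_scores = [1.0, 3.0, 3.5, 4.5]   # placeholder reviewer ratings
print(pearson(metric_scores, human_scores))
```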
  25. Analysis of Results
     - The quality of the title and description is perceived as the quality of the whole record.
     - One of the metrics captured a complex human evaluation.
     - This artificial measurement of quality is not an effective evaluation of the metrics.
  26. Applications: Repository Evaluation
  27. Applications: Quality Visualization
  28. Automated Evaluation of Quality
  29. Further Work
     - Evaluate the metrics as predictors of "real" quality.
     - Quality as fitness to fulfill a given purpose:
       - Quality for Retrieval
       - Quality for Evaluation
       - Accessibility Quality
       - Re-use Quality
  30. Further Work
     - But more importantly… measure the quality of the Learning Object itself.
     - LearnRank:
       - Analysis of the object itself
       - Analysis of Contextual Attention Metadata
       - Social networking
     - Learnometrics:
       - Measuring the impact of Learning Objects on the learning/teaching community
  31. Thank you! ¡Gracias! Comments, suggestions and criticism are welcome. More information: http://ariadne.cti.espol.edu.ec/M4M
