Towards Automatic Evaluation of Learning Object Metadata Quality
Xavier Ochoa, ESPOL, Ecuador
Erik Duval, KULeuven, Belgium
QoIS 2006
Learning Objects are…
"Any entity, digital or non-digital, that can be used, re-used or referenced during technology-supported learning." (IEEE LOM Standard)
Learning Object Metadata [Figure: the Learning Object Metadata Standard]
Initial growth has been slow [Figure: growth of the ARIADNE repository]
Standardization, interoperability of repositories and automatic generation of metadata have solved the scarcity problem… …but they have created new, "good" problems.
The production, management and consumption of Learning Object Metadata are vastly surpassing the human capacity to review or process these metadata.
Currently there is NO scalable Quality Evaluation of Learning Object Metadata.
Quality of Metadata
"High quality metadata supports the functional requirements of the system it is designed to support." (Guy et al., 2004)
Quality of Metadata
Title: "The Time Machine"
Author: "Wells, H. G."
Publisher: "L&M Publishers, UK"
Year: "1965"
Location: ----
Quality of Metadata [Figures: further example records]
Why Measure Quality?
The quality of the metadata record that describes a learning object directly affects the chances of the object being found, reviewed or reused. An object with the title "Lesson 1 – Course 201" and no description could not be found by an "Introduction to Java" query, even if it is about that subject.
How to Measure Metadata Quality?
- Manually check a statistical sample of records to evaluate their quality
- Use graphical tools to support the task
- Use simple statistics from the repository
- Usability studies
Metrics
A good system needs both characteristics:
- Be mostly automated
- Predict, with a certain amount of precision, the fitness of the metadata instance for its task
Other fields have attacked similar problems through the use of metrics:
- Software Engineering
- Bibliographical studies (Scientometrics)
- Search engines (e.g., PageRank)
We cannot measure the quality manually anymore…
…but it is a good idea to follow the same quality characteristics.
Quality Characteristics
Framework proposed by Bruce and Hillmann:
- Completeness
- Accuracy
- Provenance
- Conformance to expectations
- Consistency & logical coherence
- Timeliness
- Accessibility
Our Proposal: Use Metrics
A metric is a small calculation performed over the values of the different fields of a metadata record in order to gain insight into a quality characteristic. For example, we can count the number of fields that have been filled with information (metric) to assess the completeness of the metadata record (quality characteristic).
Quality Metrics: Completeness
- Simple Completeness: what percentage of the fields have been filled
- Weighted Completeness: not all fields are equally important; use a weighted sum (see the sketch below)
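A minimal sketch of the two completeness metrics; the field names and weights below are illustrative assumptions, not the values used in the paper:

```python
# Completeness metrics over a metadata record represented as a dict.
# Field names and weights are illustrative assumptions.

def simple_completeness(record, fields):
    """Percentage of the expected fields that carry a value."""
    filled = sum(1 for f in fields if record.get(f) not in (None, "", []))
    return filled / len(fields)

def weighted_completeness(record, weights):
    """Weighted sum: important fields (e.g. title, description) count more."""
    filled = sum(w for f, w in weights.items()
                 if record.get(f) not in (None, "", []))
    return filled / sum(weights.values())

record = {"title": "The Time Machine", "description": "", "language": "en"}
weights = {"title": 4.0, "description": 3.0, "language": 1.0, "keyword": 1.0}
print(simple_completeness(record, list(weights)))  # 2/4 = 0.50
print(weighted_completeness(record, weights))      # 5/9 ≈ 0.56
```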
Quality Metrics: Conformance to Expectations
- Nominal Information Content: how different is the value of a field in the metadata record from the values in the rest of the repository (entropy)
- Textual Information Content: what is the relevance of the words contained in free-text fields (TF-IDF); see the sketch below
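A minimal sketch of both information-content metrics, assuming the repository is simply a list of record dicts; the exact scoring details are assumptions, not the paper's formulation:

```python
import math
from collections import Counter

def nominal_information_content(value, repository, field):
    """Self-information -log2 P(value): rare values carry more information."""
    counts = Counter(r[field] for r in repository if field in r)
    p = counts[value] / sum(counts.values())
    return -math.log2(p)

def textual_information_content(text, corpus):
    """Mean TF-IDF weight of the words of a free-text field over a corpus."""
    docs = [set(doc.lower().split()) for doc in corpus]
    words = text.lower().split()
    tf = Counter(words)
    n = len(docs)
    score = 0.0
    for w, f in tf.items():
        df = sum(1 for d in docs if w in d)          # document frequency
        score += (f / len(words)) * math.log((n + 1) / (df + 1))
    return score / len(tf)
```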
Quality Metrics: Accessibility
- Readability: how easy it is to read the text of free-text fields (see the sketch below)
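A sketch using the Flesch Reading Ease score as one possible readability measure; the paper does not commit to this particular index, and the syllable counter is a crude heuristic:

```python
import re

def count_syllables(word):
    """Approximate syllables as runs of vowels (at least 1 per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """Higher scores mean easier text (90-100 ~ very easy, <30 ~ very hard)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835 - 1.015 * (len(words) / sentences)
                    - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("This applet shows how sorting works."))
```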
Quality Metrics [Figure: summary of the proposed metrics]
Evaluation of the Metrics
Online experiment: http://ariadne.cti.espol.edu.ec/Metrics
- 22 human reviewers
- 20 learning object metadata records (10 manual, 10 automated)
- 7 characteristics used for evaluation
- 5 quality metrics
Evaluation Results
Textual Information Content correlates highly (0.842) with the human-assigned quality score.
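For context, this evaluation step amounts to correlating each metric's scores with the reviewers' scores; a toy sketch with made-up numbers, not the study's data:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

metric_scores = [0.9, 0.2, 0.7, 0.4, 0.8]   # toy metric values per record
human_scores  = [4.5, 1.5, 3.0, 2.0, 4.0]   # toy reviewer scores per record
print(pearson(metric_scores, human_scores))
```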
Analysis of Results
- The quality of the title and description is perceived as the quality of the whole record.
- One of the metrics captured a complex human evaluation.
- However, this artificial measurement of quality is not an effective way to evaluate the metrics.
Applications: Repository Evaluation [Figure]
Applications: Quality Visualization [Figure]
Automated Evaluation of Quality [Figure]
Further Work
Evaluate the metrics as predictors of "real" quality, where quality is fitness to fulfill a given purpose:
- Quality for Retrieval
- Quality for Evaluation
- Accessibility Quality
- Re-use Quality
Further Work
But more importantly… measure the quality of the Learning Object itself:
- LearnRank
- Analysis of the object itself
- Analysis of Contextual Attention Metadata
- Social networking
Learnometrics: measuring the impact of Learning Objects on the learning/teaching community
Thank you, Gracias
Comments, suggestions, critiques… are welcome!
More information: http://ariadne.cti.espol.edu.ec/M4M
