16. Quantitative Analysis of the Publication of Learning Objects
• What is the size of repositories?
• How do repositories grow?
• How many objects per contributor?
• Can it be modeled? (see the sketch below)
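The kind of model the last question points at is typically a heavy-tailed distribution. A minimal sketch, assuming invented objects-per-contributor counts and a lognormal as one candidate heavy-tailed model (scipy offers other families as well):

```python
# A minimal sketch: fit a lognormal to hypothetical objects-per-contributor
# counts and compare its tail with the empirical tail. All data here is
# invented for illustration only.
import numpy as np
from scipy import stats

# Hypothetical counts: how many objects each contributor published.
counts = np.array([1, 1, 1, 2, 2, 3, 3, 5, 8, 13, 40, 250])

# Fit a lognormal; floc=0 anchors the support at zero.
shape, loc, scale = stats.lognorm.fit(counts, floc=0)
print(f"lognormal fit: sigma={shape:.2f}, median={scale:.1f}")

# Compare empirical and fitted tail probabilities at a few thresholds.
for x in (1, 5, 50):
    empirical = (counts > x).mean()
    model = stats.lognorm.sf(x, shape, loc, scale)
    print(f"P(objects > {x}): empirical={empirical:.2f}, model={model:.2f}")
```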
25. Analysis Conclusions
– Few big repositories concentrate most of the material
– Repositories are not growing as they should
– There is no such thing as an average contributor
– Differences between repositories are based on the engagement of their contributors
– Results point to a possible lack of “value proposition”
26. Quantitative Analysis of the Reuse of Learning Objects
• What percentage of learning objects is reused?
• Does granularity affect reuse?
• How many times is a learning object reused? (see the sketch below)
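A minimal sketch of how these questions could be operationalized from a usage log; the event format, object IDs, and granularity labels are invented for illustration:

```python
# Count how often each object appears in a hypothetical usage log; the first
# occurrence is the original use, later occurrences count as reuse.
from collections import Counter

events = [  # (object_id, granularity) pairs, invented for this example
    ("img-1", "asset"), ("img-1", "asset"), ("img-1", "asset"),
    ("quiz-7", "module"), ("course-3", "course"),
    ("img-2", "asset"), ("img-2", "asset"), ("quiz-8", "module"),
]

uses = Counter(obj for obj, _ in events)
granularity = dict(events)
reused = [obj for obj, n in uses.items() if n > 1]

print(f"reuse rate: {len(reused) / len(uses):.0%}")
print("extra uses per reused object:", {obj: uses[obj] - 1 for obj in reused})

# Reuse rate per granularity level (does granularity affect reuse?)
for level in ("asset", "module", "course"):
    objs = [o for o, g in granularity.items() if g == level]
    rate = sum(uses[o] > 1 for o in objs) / len(objs)
    print(f"{level}: {rate:.0%} reused")
```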
32. Analysis Conclusions
– Learning Objects are being reused with or without the help of Learning Object technologies
– The reuse paradox needs to be re-evaluated
– Reuse seems to be the result of a chain of successful events
34. Quality of Metadata
Title: “The Time Machine”
Author: “Wells, H. G.”
Publisher: “L&M Publishers, UK”
Year: “1965”
Location: ----
35. Metrics for Metadata Quality
– How can the quality of the metadata be measured? (metrics)
– Do the metrics work?
• Do the metrics correlate with human evaluation?
• Do the metrics separate good-quality from bad-quality metadata?
• Can the metrics be used to filter low-quality records? (see the sketch below)
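One metric in this family is completeness: the fraction of expected fields that are actually filled in. A minimal sketch using the “Time Machine” record above; the field list and the 0.6 flagging threshold are assumptions for illustration:

```python
# Completeness: fraction of expected metadata fields with a non-empty value.
FIELDS = ["title", "author", "publisher", "year", "location"]

record = {
    "title": "The Time Machine",
    "author": "Wells, H. G.",
    "publisher": "L&M Publishers, UK",
    "year": "1965",
    "location": None,  # missing, as in the example record above
}

def completeness(rec: dict) -> float:
    """Fraction of expected fields that are filled in."""
    filled = sum(1 for f in FIELDS if rec.get(f))
    return filled / len(FIELDS)

score = completeness(record)
print(f"completeness = {score:.2f}")  # 0.80: 4 of 5 fields filled

# Flag (rather than silently drop) records below a quality threshold.
if score < 0.6:
    print("record flagged as low quality")
```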
38. Study Conclusions
– Humans and machines have different needs for metadata
– Metrics can be used to easily establish some characteristics of the metadata
– The metrics can be used to automatically filter or flag low-quality metadata
40. Relevance Ranking Metrics
– What does relevance mean in the context of Learning Objects?
– How can existing ranking techniques be used to produce metrics to rank learning objects?
– How can those metrics be combined to produce a single ranking value? (see the sketch after this list)
– Can the proposed metrics outperform simple text-based ranking?
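A minimal sketch of one common way to combine several metrics into a single ranking value: min-max normalize each metric to [0, 1], then take a weighted sum. The metric names, scores, and weights are invented for illustration:

```python
# Combine three hypothetical relevance metrics into one ranking value
# per learning object via a weighted linear combination.
def normalize(scores):
    """Min-max normalize a list of raw metric scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [0.5] * len(scores) if hi == lo else [(s - lo) / (hi - lo) for s in scores]

# Per-object scores from three metrics for four learning objects (invented).
metrics = {
    "text_similarity": [0.2, 0.9, 0.4, 0.7],
    "popularity":      [120, 5, 60, 30],
    "topic_match":     [0.8, 0.3, 0.9, 0.1],
}
weights = {"text_similarity": 0.5, "popularity": 0.2, "topic_match": 0.3}

normalized = {name: normalize(vals) for name, vals in metrics.items()}
combined = [
    sum(weights[name] * normalized[name][i] for name in metrics)
    for i in range(4)
]
ranking = sorted(range(4), key=lambda i: combined[i], reverse=True)
print("objects ranked best-first:", ranking)
```

A weighted linear combination is only one option; rank aggregation or learned weights would be natural alternatives.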
43. Relevance Ranking Metrics
• Implications
– Even basic techniques can improve the ranking of learning objects
– Metrics are scalable and easy to implement
• Warning:
– Preliminary results: not based on real-world observation
47. General Conclusions
• Publication and reuse are dominated by heavy-tailed distributions
• LMSs have the potential to bootstrap the LOE
• The models/metrics set a baseline against which new models/metrics can be compared and improvements measured
• More questions are raised than answered
48. Publications
• Chapter 2
– Quantitative Analysis of User-Generated Content on the Web. Proceedings of the First International Workshop on Understanding Web Evolution (WebEvolve2008) at WWW2008, 2008, 19-26
– Quantitative Analysis of Learning Object Repositories. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-Media 2008), 2008, 6031-6040
• Chapter 3
– Measuring the Reuse of Learning Objects. Third European Conference on Technology Enhanced Learning (ECTEL 2008), 2008, Accepted
49. Publications
• Chapter 4
– Towards Automatic Evaluation of Learning Object Metadata Quality. LNCS: Advances in Conceptual Modeling - Theory and Practice, Springer, 2006, 4231, 372-381
– SAmgI: Automatic Metadata Generation v2.0. Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications (ED-Media 2007), AACE, 2007, 1195-1204
– Quality Metrics for Learning Object Metadata. World Conference on Educational Multimedia, Hypermedia and Telecommunications 2006, AACE, 2006, 1004-1011
50. Publications
• Chapter 5
– Relevance Ranking Metrics for Learning Objects. IEEE Transactions on Learning Technologies, 2008, 1(1), 14
– Relevance Ranking Metrics for Learning Objects. LNCS: Creating New Learning Experiences on a Global Scale, Springer, 2007, 4753, 262-276
– Use of Contextualized Attention Metadata for Ranking and Recommending Learning Objects. CAMA '06: Proceedings of the 1st International Workshop on Contextualized Attention Metadata at CIKM 2006, ACM Press, 2006, 9-16