
Towards Integration of Web Data into a coherent Educational Data Graph


Paper presented at LILE Workshop at WWW 2013 (Rio de Janeiro, Brasil).


  1. Motivation: Data on the Web
     Towards Integration of Web Data into a coherent Educational Data Graph
     LILE 2013: 3rd International Workshop on Learning and Education with the Web of Data
     14 May 2013, Rio de Janeiro, Brazil
     Davide Taibi (CNR – ITD, IT), Besnik Fetahu, Stefan Dietze (L3S Research Center, DE)
  2. Outline
     • Linked Open Data serving data-intensive applications
     • Heterogeneity of datasets and schemas
     • Is it really that easy to use Linked Open Data, and what is it all about?
       – Interlinking of datasets only at a superficial level
       – Different schemas for similar resource classes across datasets
       – Non-structured resource descriptions
       – Best-case scenario: very abstract topic definitions
       – Difficult to query for a subset of resources and datasets on a specific topic
     • Our approach
       – Schema-level integration
       – Enhanced dataset & resource descriptions
       – Instance-level integration
       – Scalable annotation extraction
       – Clustering and correlation of datasets
  3. Introduction
     • Large amounts of publicly available Linked Open Data of educational relevance
     • Difficulties in providing large-scale integration
     • Dataset and resource description annotation
     • Clustering and dataset interlinking
  4. Steps towards a Linked Education Data Graph
  5. Schema Level Integration
  6. Schema Level Integration
  7. Schema Level Integration
     • VoID-based schema:
       – Dataset cataloging and classification
       – Mappings (types, properties)
     • Datasets: LinkedUniversities, mEducator, Europeana, DBLP-L3S, BBC Programmes, ACM publications
     • Imported resources for clustering experiments:
       – 6 million distinct resources
       – 97 million RDF triples
       – 21.6 GB of data
     • SPARQL endpoint:
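Since the catalog is VoID-based and exposed through a SPARQL endpoint, the integrated datasets can be enumerated with a single query. A minimal Python sketch, assuming a hypothetical endpoint URL (the slide's actual endpoint address is not reproduced in this transcript) and the standard VoID and Dublin Core vocabularies:

```python
from urllib import parse, request

# Hypothetical endpoint address -- the slide mentions a SPARQL endpoint,
# but its URL is not reproduced in this transcript.
ENDPOINT = "http://example.org/sparql"

# VoID types each catalogued dataset as void:Dataset, so the catalog can
# be listed (with titles and optional triple counts) in one SELECT query.
QUERY = """
PREFIX void: <http://rdfs.org/ns/void#>
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?dataset ?title ?triples WHERE {
  ?dataset a void:Dataset ;
           dcterms:title ?title .
  OPTIONAL { ?dataset void:triples ?triples }
}
"""

def sparql_request(endpoint, query):
    """Build a GET request for a SPARQL SELECT, asking for JSON results."""
    url = endpoint + "?" + parse.urlencode({"query": query})
    return request.Request(url, headers={"Accept": "application/sparql-results+json"})
```

Executing the request (e.g. with `request.urlopen`) returns bindings in the standard SPARQL JSON results format.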
  8. Instance-Level Integration
     • DBpedia Spotlight as NER & NED tool
     • Annotation of unstructured content
     • Selective & scalable annotation
     • Annotation of tokens of different sizes
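The annotation step can be sketched against the public DBpedia Spotlight REST service; the endpoint URL and confidence threshold below are illustrative assumptions, not values taken from the slides:

```python
import json
from urllib import parse, request

# Public Spotlight instance (availability not guaranteed); the deployment
# actually used for the paper is not specified in the slides.
SPOTLIGHT_URL = "https://api.dbpedia-spotlight.org/en/annotate"

def extract_entities(json_str):
    """Pull (surface form, DBpedia URI) pairs out of a Spotlight JSON response."""
    body = json.loads(json_str)
    return [(r["@surfaceForm"], r["@URI"]) for r in body.get("Resources", [])]

def annotate(text, confidence=0.5):
    """Send unstructured text to Spotlight for NER + disambiguation (NED)."""
    data = parse.urlencode({"text": text, "confidence": confidence}).encode()
    req = request.Request(SPOTLIGHT_URL, data=data,
                          headers={"Accept": "application/json"})
    with request.urlopen(req) as resp:
        return extract_entities(resp.read().decode())
```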
  9. Instance-Level Integration: Characteristics of Enrichments
     • Disambiguation
     • Acronym detection (e.g. “dns”, “gmt”)
     • Synonym detection (e.g. “globe”, “earth”)
     • Context detection (e.g. “apple” the fruit vs. “apple” the computer company)
  10. Correlation and Clustering
     • Annotations are used to construct a network of resources, with edges based on common resource annotations (e.g. entities such as “Gravitation”, “Equations”, “Earth”).
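The co-annotation network described above can be sketched as follows; the resource identifiers and entity labels are illustrative, and edge weights simply count shared annotations:

```python
from itertools import combinations

def build_network(annotations):
    """annotations: {resource_id: set of entity URIs/labels}.
    Returns {(r1, r2): weight} with one edge per pair of resources that
    share at least one annotation; the weight is the number of shared
    annotations."""
    edges = {}
    for r1, r2 in combinations(sorted(annotations), 2):
        shared = annotations[r1] & annotations[r2]
        if shared:
            edges[(r1, r2)] = len(shared)
    return edges
```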
  11. Correlation and Clustering
     • Methods used for clustering:
       – Naïve, based on shared enrichments
       – Based on the EF-IRF (Enrichment Frequency – Inverse Resource Frequency) index: Jaccard and Cosine
     • Different thresholds have been used to generate clusters
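A sketch of the similarity measures, with EF-IRF written by analogy to TF-IDF (enrichment frequency times the log of the inverse resource frequency); the paper's exact formulation may differ:

```python
import math

def ef_irf(resources):
    """resources: {rid: {enrichment: count}}. Returns {rid: {enrichment: weight}}.
    TF-IDF analogue: weight = enrichment frequency * log(N / resource frequency).
    (Sketch by analogy with TF-IDF; the paper's exact definition may differ.)"""
    n = len(resources)
    rf = {}  # in how many resources each enrichment occurs
    for counts in resources.values():
        for e in counts:
            rf[e] = rf.get(e, 0) + 1
    return {rid: {e: c * math.log(n / rf[e]) for e, c in counts.items()}
            for rid, counts in resources.items()}

def jaccard(a, b):
    """Jaccard similarity of two enrichment sets (the naive shared-enrichment view)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    """Cosine similarity of two sparse EF-IRF weight vectors."""
    dot = sum(u[e] * v.get(e, 0.0) for e in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0
```

Pairs whose similarity exceeds a chosen threshold would then fall into the same cluster, matching the thresholding step on the slide.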
  12. Evaluation
     Three evaluation stages:
     • Quantitative & qualitative
       – Assess annotation accuracy for the exhaustive and scalable approaches
       – Measure standard precision/recall metrics
       – 250 resources from each dataset used for assessment
     • Performance
       – Gains in terms of scalability
  13. Quantitative Evaluation

     Context              #Resources  #Annotations  #Entity Types
     ACM                  249         200           239
     mEducator            250         495           355
     BBC                  250         1364          769
     LinkedUniversities   243         166           283
     DBLP                 250         295           161
     Europeana            249         938           672
     Total                1491        3458          937

     • The number of extracted entities is related to the length of a resource's textual description
     • For long texts, up to 87 distinct entities and more than 200 entity-type associations
  14. Qualitative Evaluation
     • Human evaluators measured annotation accuracy
     • 2000 annotations were assessed for each of the two approaches (exhaustive and scalable)
     • The exhaustive approach had 32 evaluators, with an average of 63 tasks per user; the scalable approach had 23 users, with an average of 87 completed tasks

                 Precision  Recall
     Exhaustive  0.82       0.429
     Scalable    0.77       0.687
     ∆[S-E]      -0.05      +0.26
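The table reports standard precision and recall. As a sketch, the definitions below (with hypothetical TP/FP/FN counts, which the slides do not give) also allow an F1 comparison, where the scalable approach's recall gain outweighs its small precision loss:

```python
def precision_recall(tp, fp, fn):
    """Standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Applied to the table's values, f1(0.77, 0.687) for the scalable approach exceeds f1(0.82, 0.429) for the exhaustive one.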
  15. Performance Evaluation

     Size-k  No Filtering  Filtered: resource level  Filtered: dataset level
     1       53089         24850                     7464
     2       51346         17919                     13281
     3       49603         11800                     9607
     4       47871         7793                      6432
     5       46153         5184                      4289
     6       44480         3529                      2922

     • Reduction of the textual content to be analyzed in the annotation phase:
       – Keeping only terms with POS tags {NN, NNP, NNPS} reduces the amount of text by almost 40%
       – For various token sizes, the reduction goes up to 86%
     • Reduced complexity of the NER task for DBpedia Spotlight:
       – Fewer HTTP requests
       – No annotation of similar chunks of text
     • Significant gains in execution time: 3.5 hrs vs. 20 mins
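The filtering step can be sketched over pre-tagged tokens; in practice the POS tags would come from a tagger (the slides do not name one), while the kept tag set {NN, NNP, NNPS} is the one reported above:

```python
# Keep only the noun tags the slide reports as worth annotating.
KEEP = {"NN", "NNP", "NNPS"}

def filter_nouns(tagged_tokens):
    """tagged_tokens: list of (token, POS tag) pairs.
    Keep only noun tokens worth sending to the NER service."""
    return [tok for tok, tag in tagged_tokens if tag in KEEP]

def reduction(tagged_tokens):
    """Fraction of tokens removed by the filter."""
    if not tagged_tokens:
        return 0.0
    return 1 - len(filter_nouns(tagged_tokens)) / len(tagged_tokens)
```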
  16. Conclusion
     • Large-scale educational data graph
     • Well-interlinked datasets at schema and instance level
     • Enhanced dataset and resource descriptions
     • Scalable annotation procedure
     • EF-IRF clustering approach
     • Clusters and correlated datasets
  17. Thank you! Questions?