Towards Integration of Web Data into a coherent Educational Data Graph
Towards Integration of Web Data into a coherent Educational Data Graph


Paper presented at the LILE Workshop at WWW 2013 (Rio de Janeiro, Brazil).

Full paper URL: http://www2013.org/companion/p419.pdf

  • Speaker note on previous work (LinkedEducation 0.5+):
    – VoID-based schema (dataset description and classification, alignments of types and properties)
    – Datasets: a subset of the current Linked Education datasets, plus imported resources for clustering experiments
    – Size: 6 million triples
    – SPARQL endpoint with initial clustering results: http://okkam.l3s.uni-hannover.de:8880/openrdf-workbench/repositories/linked-learning-rdf/summary

Presentation Transcript

  • Towards Integration of Web Data into a coherent Educational Data Graph
    LILE 2013: 3rd International Workshop on Learning and Education with the Web of Data
    14 May 2013, Rio de Janeiro, Brazil
    Davide Taibi (CNR – ITD, IT), Besnik Fetahu, Stefan Dietze (L3S Research Center, DE)
    Slide footer: 18/06/13, Lile 2013 – Rio de Janeiro
    (Speaker note: motivation, data on the Web; an eye-catching opener illustrating the growth and/or diversity of web data)
  • Outline
    – Linked Open Data serving data-intensive applications
    – Heterogeneity of datasets and schemas
    – Is it all that easy to use Linked Open Data, and what is it all about?
      – Interlinking of datasets only at a superficial level
      – Different schemas for similar resource classes across datasets
      – Non-structured resource descriptions
      – Best-case scenario: very abstract topic definitions
      – Difficult to query for a subset of resources and datasets on a specific topic
    – Our approach
      – Schema-level integration
      – Enhanced dataset & resource descriptions
      – Instance-level integration
      – Scalable annotation extraction
      – Clustering and correlation of datasets
  • Introduction
    – Large amounts of publicly available Linked Open Data of educational relevance
    – Difficulties in providing large-scale integration
    – Dataset and resource description annotation
    – Clustering and dataset interlinking
    (Slide figure label: Educational Data)
  • Steps towards a Linked Education Data Graph
  • Schema Level Integration
    http://data.linkededucation.org/ns/linked-education.rdf
    (Slide figure label: LinkedUniversities Dataset)
  • Schema Level Integration
    – VoID-based schema:
      – http://data.linkededucation.org/ns/linked-education.rdf
      – Dataset cataloging and classification
      – Mappings (types, properties)
    – Datasets: LinkedUniversities Dataset, mEducator, Europeana
    – Imported resources for clustering experiments:
      – 6 million distinct resources
      – 97 million RDF triples
      – 21.6 GB of data
    – SPARQL endpoint:
      – http://okkam.l3s.uni-hannover.de:8880/openrdf-workbench/repositories/linked-learning-rdf
    (Slide figure labels: DBLP-L3S, BBC programmes, ACM publications)
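A minimal sketch of how the endpoint above could be queried over the standard SPARQL protocol (query submitted as a `query` GET parameter). Whether this exact path still answers SPARQL requests is an assumption; the sketch only builds the request URL and makes no network call.

```python
from urllib.parse import urlencode

# Endpoint path as listed on the slide; SPARQL-protocol support at this
# exact path is an assumption.
ENDPOINT = ("http://okkam.l3s.uni-hannover.de:8880/openrdf-workbench/"
            "repositories/linked-learning-rdf")

def sparql_request_url(endpoint: str, query: str) -> str:
    """Return the GET URL that would submit `query` to `endpoint`."""
    return endpoint + "?" + urlencode({"query": query})

# A simple query counting all triples in the repository.
query = "SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }"
url = sparql_request_url(ENDPOINT, query)
```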
  • Instance-level Integration
    – DBpedia Spotlight as NER & NED tool
    – Annotation of unstructured content
    – Selective & scalable annotation
    – Annotation of tokens of different sizes
    (Slide example annotations: <http://dbpedia.org/page/Gravitation>, <http://dbpedia.org/page/Strong>, <http://dbpedia.org/page/Dense>)
  • Instance-level Integration: Characteristics of Enrichments
    – Disambiguation
    – Acronym detection (e.g. "dns", "gmt")
    – Synonym detection (e.g. "globe", "earth")
    – Context detection (e.g. "apple" the fruit vs. "apple" the computer)
    (Slide example annotation: <http://dbpedia.org/page/Gravitation>)
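A sketch of how unstructured content could be sent to DBpedia Spotlight's REST interface. The public `api.dbpedia-spotlight.org` endpoint and its `text`/`confidence` parameters refer to the current public service, not necessarily the 2013 deployment used in the paper; the sketch only constructs the request URL.

```python
from urllib.parse import urlencode

# Public Spotlight annotate endpoint (assumed; the paper used its own setup).
SPOTLIGHT = "https://api.dbpedia-spotlight.org/en/annotate"

def annotate_request(text: str, confidence: float = 0.5) -> str:
    """Return the GET URL asking Spotlight to annotate `text`.

    Send the request with an `Accept: application/json` header to get
    JSON back instead of HTML.
    """
    return SPOTLIGHT + "?" + urlencode({"text": text,
                                        "confidence": confidence})

url = annotate_request("Gravitation keeps dense bodies on strong orbits")
```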
  • Correlation and Clustering
    – Annotations are used to construct a network of resources, with edges based on common resource annotations
    (Slide example nodes: Gravitation, Equations, Earth)
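A toy sketch (not the paper's implementation) of that network construction: resources become nodes, and an edge links two resources whenever they share at least one annotation. The resource names and annotation sets below are made up for illustration.

```python
from itertools import combinations

# Hypothetical mapping: resource -> set of DBpedia annotations.
annotations = {
    "res1": {"Gravitation", "Earth"},
    "res2": {"Gravitation", "Equations"},
    "res3": {"Equations"},
}

# An undirected edge exists where two resources share an annotation.
edges = {
    frozenset((a, b))
    for a, b in combinations(annotations, 2)
    if annotations[a] & annotations[b]  # non-empty intersection
}
```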
  • Correlation and Clustering
    – Methods used for clustering, all based on the shared enrichments:
      – Naïve
      – EF-IRF (Enrichment Frequency – Inverse Resource Frequency) index
      – Jaccard
      – Cosine
    – Different thresholds have been used to generate clusters
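Two of the similarity measures named on the slide, Jaccard and cosine, can be sketched over binary annotation sets; the final comparison mirrors the slide's thresholding step. The sets and the threshold value are illustrative, not the paper's data.

```python
import math

def jaccard(a: set, b: set) -> float:
    """|a ∩ b| / |a ∪ b| over two annotation sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(a: set, b: set) -> float:
    """Cosine over binary annotation vectors: dot product is |a ∩ b|,
    the norms are sqrt(|a|) and sqrt(|b|)."""
    return len(a & b) / math.sqrt(len(a) * len(b)) if a and b else 0.0

x = {"Gravitation", "Earth", "Equations"}
y = {"Gravitation", "Earth"}

# Cluster membership decided against an (illustrative) threshold.
threshold = 0.5
same_cluster = jaccard(x, y) >= threshold
```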
  • Evaluation
    Three evaluation stages:
    – Quantitative & qualitative:
      – Assess annotation accuracy for the exhaustive and scalable approaches
      – Measure standard precision/recall metrics
      – 250 resources from each dataset used for assessment
    – Performance:
      – Gains in terms of scalability
  • Quantitative Evaluation

    Context             #Resources  #Annotations  #Entity Types
    ACM                        249           200            239
    mEducator                  250           495            355
    BBC                        250          1364            769
    LinkedUniversities         243           166            283
    DBLP                       250           295            161
    Europeana                  249           938            672
    Total                     1491          3458            937

    – The number of extracted entities is related to the length of a resource's textual description
    – For long texts: up to 87 distinct entities and more than 200 entity-type associations
  • Qualitative Evaluation
    – Human evaluators were used to measure annotation accuracy
    – 2000 annotations were assessed for each of the two approaches (exhaustive and scalable)
    – The first approach had 32 evaluators with an average of 63 tasks per user; the second had 23 users with an average of 87 completed tasks

                 Precision  Recall
    Exhaustive        0.82   0.429
    Scalable          0.77   0.687
    Δ[E-S]           -0.05   +0.26
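The precision/recall metrics and the Δ[E-S] row combine in the standard way; the sketch below uses the slide's reported values, while the true/false-positive counts are illustrative only (chosen to reproduce the scalable approach's precision).

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of produced annotations that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of correct annotations that were produced."""
    return tp / (tp + fn)

# Illustrative counts: 77 correct out of 100 produced -> P = 0.77.
p_scalable = precision(77, 23)

# Difference rows as on the slide: scalable minus exhaustive.
delta_precision = round(0.77 - 0.82, 2)  # -0.05
delta_recall = round(0.687 - 0.429, 2)   # +0.26
```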
  • Performance Evaluation

    Size-k  No Filtering  Filtered: resource level  Filtered: dataset level
    1              53089                     24850                     7464
    2              51346                     17919                    13281
    3              49603                     11800                     9607
    4              47871                      7793                     6432
    5              46153                      5184                     4289
    6              44480                      3529                     2922

    – Reduction of the textual content to be analyzed in the annotation phase:
      – Keeping only terms with POS tags {NN, NNP, NNPS} reduces the amount of text by almost 40%
      – Across the various token sizes, the reduction reaches up to 86%
    – Reduced NER task complexity for DBpedia Spotlight:
      – Fewer HTTP requests
      – Avoids annotating similar chunks of text
    – Significant gains in terms of execution time: 3.5 hrs vs. 20 mins
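The POS-based filtering step can be sketched as keeping only tokens whose tags fall in the slide's {NN, NNP, NNPS} set before annotation. The pre-tagged token list is hard-coded for illustration; in practice a POS tagger would produce it.

```python
# POS tags retained for annotation, as listed on the slide.
KEEP = {"NN", "NNP", "NNPS"}

# Illustrative (token, POS tag) pairs for a short sentence.
tagged = [
    ("Gravitation", "NNP"), ("is", "VBZ"), ("a", "DT"),
    ("force", "NN"), ("between", "IN"), ("masses", "NNS"),
]

# Only the retained tokens are sent on to the annotator.
filtered = [tok for tok, tag in tagged if tag in KEEP]
```

Note that with the tag set exactly as on the slide, plural common nouns (NNS, e.g. "masses") are dropped along with the non-noun tokens.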
  • Conclusion
    – Large-scale educational data graph
    – Well-interlinked datasets at schema and instance level
    – Enhanced dataset and resource descriptions
    – Scalable annotation procedure
    – EF-IRF clustering approach
    – Clusters and correlated datasets
  • Thank you! Questions?