This presentation discusses approaches to integrating ontologies within the EDM (Europeana Data Model) framework. It outlines four methods for reusing ontologies: direct adoption, reuse with new URIs, reuse by establishing sub-relations, and reuse through equivalence properties. The presentation then examines how these methods have been applied in the EDM and in a related project (DM2E), emphasizing that a mixture of approaches is often best to avoid contradictory descriptions while maximizing reuse.
Generating Lexical Information for Terminology in a Bioinformatics Ontology – Hammad Afzal
This document discusses generating lexical information for terms in a bioinformatics ontology. It proposes a model called LexInfo for associating linguistic information with ontologies. The authors lexicalize a bioinformatics ontology called myGrid by creating a LexInfo-based lexicon that captures morphological, syntactic and semantic properties of terms. They generate lexicons both semi-automatically using domain resources and automatically using LexInfo tools. The automatic lexicon has some errors due to POS tagging and tokenization issues that could be addressed using domain knowledge. The enriched ontology may help with automatic annotation of bioinformatics services.
The Distributed Ontology Language (DOL): Use Cases, Syntax, and Extensibility – Christoph Lange
The document discusses the Distributed Ontology Language (DOL) which aims to support semantic integration and interoperability across heterogeneous ontologies. DOL allows for logically heterogeneous ontologies, modular ontologies, and formal and informal links between ontologies. It has a formal semantics and can be serialized in XML, RDF, and text. Examples of applications that could benefit from DOL include an ontology repository engine and a multilingual map user interface driven by aligned ontologies.
A Mathematical Approach to Ontology Authoring and Documentation – Christoph Lange
This document proposes using OMDoc, a framework for representing formal knowledge, to improve ontology authoring and documentation. It describes how OMDoc can:
1) Provide better support for modularity, documentation at different granularities, and linking documentation to formal representations compared to languages like OWL.
2) Model existing ontologies and translate between OMDoc and OWL/RDF formats to leverage existing tools.
3) Allow comprehensive, integrated documentation of ontologies through features like literate programming. The approach is evaluated by reimplementing the FOAF ontology in OMDoc.
The document discusses using the Semantic Web as a knowledge base for artificial intelligence applications. It describes how the Semantic Web publishes data on the web in a standardized, linked format. This vast amount of distributed knowledge could be exploited by AI in various ways, such as mining linked data to find patterns, using reasoning to analyze and understand raw data, and assessing agreement between ontologies. The Semantic Web represents a large, collaborative base of formally represented knowledge that provides many opportunities for future AI research and applications.
Semantic Web, Linked Data and Education: A Perfect Fit? – Mathieu d'Aquin
This document discusses how semantic web technologies like linked data are a perfect fit for education. It provides examples of how the Open University has applied linked data to connect educational resources and data from across the university. Linked data allows for flexibility, accessibility, and the ability to combine and interpret different sources of knowledge. However, challenges remain around representing rich metadata about educational purpose and interpreting resources in an educational context.
The document summarizes a seminar on ontology mapping presented by Samhati Soor. The seminar covered the need for ontology mapping due to the proliferation of ontologies, and the purpose of mapping ontologies to achieve interoperability and sharing knowledge. It defined ontologies and ontology mapping and discussed categories of mapping including between global and local ontologies, between local ontologies, and for merging ontologies. Tools for ontology mapping discussed included GLUE and SAM. Evaluation criteria and challenges of ontology mapping were also summarized along with conclusions and references.
Ontologies have been applied in many applications in recent years, especially in the Semantic Web, Information Retrieval, Information Extraction, and Question Answering. The purpose of a domain-specific ontology is to eliminate conceptual and terminological confusion. It accomplishes this by specifying a set of generic concepts that characterize the domain, together with their definitions and interrelationships. This paper describes algorithms for identifying semantic relations and constructing an Information Technology ontology, extracting the concepts and objects from different sources. The ontology is constructed from three main resources: ACM, Wikipedia, and unstructured files from the ACM Digital Library. Our algorithms combine Natural Language Processing and Machine Learning. We use Natural Language Processing tools such as OpenNLP and the Stanford lexical dependency parser to analyze sentences, and then extract sentences matching English lexical patterns to build a training set. We use a random sample from 245 ACM categories to evaluate our results, and the results show that our system yields superior performance.
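As a rough illustration of the pattern-based relation extraction this abstract describes, the sketch below matches two Hearst-style lexico-syntactic patterns against individual sentences. The pattern set and function names are hypothetical, assumed for illustration; the paper itself relies on OpenNLP and the Stanford parser rather than plain regular expressions.

```python
import re

# Two illustrative lexico-syntactic ("Hearst-style") patterns for is-a
# relations.  This is a toy pattern inventory, not the paper's actual set.
PATTERNS = [
    re.compile(r"(?P<hypo>\w[\w ]*?) is a (?P<hyper>\w[\w ]*)"),
    re.compile(r"(?P<hyper>\w[\w ]*?) such as (?P<hypo>\w[\w ]*)"),
]

def extract_relations(sentence):
    """Return (hyponym, hypernym) pairs found in one sentence."""
    pairs = []
    for pattern in PATTERNS:
        for m in pattern.finditer(sentence):
            pairs.append((m.group("hypo").strip(), m.group("hyper").strip()))
    return pairs

print(extract_relations("Python is a programming language"))
# → [('Python', 'programming language')]
```

Extracted pairs like these would then serve as candidate training examples for the machine-learning stage described in the abstract.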
The document summarizes and compares schema matching and ontology mapping. It discusses how schema matching approaches can be applied to ontology mapping given the similarities between schemas and ontologies. The document outlines different categories of schema matching techniques (element-based, structure-based) and provides examples. It also summarizes several ontology mapping tools and approaches that utilize different matching strategies like string, structure, and semantic similarity.
Linked Open (Geo)Data and the Distributed Ontology Language – a perfect match – Christoph Lange
The Distributed Ontology Language is a meta-language for integrating ontologies written in different languages. Our notion of "distributed" comprises logical heterogeneity within ontologies, modularity and reuse, and links across ontologies in different places of the Web. Not only can ontologies be distributed across the Web, but DOL's supply of supported ontology languages can also be extended in a decentralized way. For this functionality, DOL builds on the Linked Open Data (LOD) principles. But DOL also contributes to LOD use cases. Many current LOD applications are limited by the weak expressivity of the RDF and RDFS languages commonly used to express data and vocabularies, yet completely switching to a more expressive language would impair scalability to big datasets. DOL addresses the scalability and expressivity requirements by allowing each aspect of a dataset to be represented in the most suitable language while keeping these different representations connected. This is particularly useful in geographic information systems, where big datasets (e.g. LinkedGeoData, the LOD version of OpenStreetMap) need to be integrated with formalisations of complex spatial notions (e.g. in the first-order language Common Logic).
Translating Ontologies in Real-World Settings – Mauro Dragoni
To enable knowledge access across languages, ontologies, which are often represented only in English, need to be translated into different languages. The main challenge in translating ontologies is finding the right term with respect to the domain modeled by the ontology itself. Machine translation services can help with this task; however, a crucial requirement is to have translations validated by experts before the ontologies are deployed. Real-world applications must provide a support system for this task to relieve experts of the work of validating all translations. In this paper, we present ESSOT, an Expert Supporting System for Ontology Translation. The peculiarity of this system is that it exploits semantic information from a concept's context to improve the quality of label translations. The system has been tested within the Organic.Lingua project by translating the modeled ontology into three languages, and on other multilingual ontologies in order to evaluate its effectiveness in other contexts. The results have been compared with the translations provided by the Microsoft Translator API, and the improvements demonstrate the viability of the proposed approach.
Melinda: Methods and tools for Web Data Interlinking – François Scharffe
This document presents a framework for interlinking web datasets. It discusses publishing principles for datasets on the web, including using URIs to identify resources and including links to other datasets. It introduces tools for interlinking datasets through explicit links, implicit ontology alignment, or by matching datasets that share a common ontology. Six specific interlinking tools are analyzed based on their degree of automation, matching techniques used, ability to handle ontologies, and output. The document concludes by providing an example of applying a link specification to interlink datasets between DBpedia and Geo.
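The link-specification idea summarized above can be sketched as a brute-force label comparison between two datasets that emits owl:sameAs triples. The dataset contents, threshold value, and function names below are illustrative assumptions, not Melinda's actual interface.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def interlink(source, target, threshold=0.85):
    """Emit (source_uri, 'owl:sameAs', target_uri) triples for every pair
    of resources whose labels are similar enough.  `source` and `target`
    map URIs to labels; the data below is hypothetical."""
    links = []
    for s_uri, s_label in source.items():
        for t_uri, t_label in target.items():
            if similarity(s_label, t_label) >= threshold:
                links.append((s_uri, "owl:sameAs", t_uri))
    return links

dbpedia = {"dbpedia:Berlin": "Berlin"}
geo = {"geo:2950159": "Berlin"}
print(interlink(dbpedia, geo))
# → [('dbpedia:Berlin', 'owl:sameAs', 'geo:2950159')]
```

Real interlinking tools replace the quadratic scan with blocking or indexing and combine several matchers, but the shape of a link specification (source, target, comparison, threshold, output link type) is the same.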
Linked Data at the Open University: From Technical Challenges to Organization... – Mathieu d'Aquin
The document discusses how the Knowledge Media Institute at the Open University in the UK has developed a linked data platform, called data.open.ac.uk, to provide open access to various types of data from across the university, including course information, research publications, podcasts, videos, and more. It describes some of the technical and organizational challenges in developing the platform, and highlights how it has enabled new uses of the university's data and inspired innovation both within the university and more broadly in open education.
Linking Universities - A broader look at the application of linked data and s... – Mathieu d'Aquin
This document discusses applying linked data and semantic web technologies at universities. It provides examples of how the Open University in the UK publishes various types of data as linked open data, including course information, research publications, podcasts and videos. This enables new applications for resource discovery, social networking and supporting research activities. The document also outlines challenges in linking educational data across institutions and supporting new research methodologies through linked data approaches.
Application of Ontology in Semantic Information Retrieval by Prof Shahrul Azm... – Khirulnizam Abd Rahman
Application of Ontology in Semantic Information Retrieval
by Prof Shahrul Azman from FSTM, UKM
Presentation for MyREN Seminar 2014
Berjaya Hotel, Kuala Lumpur
27 November 2014
1) This document discusses stemming algorithms that have been used for the Odia language. Stemming is the process of reducing inflected words to their root or stem for purposes like information retrieval.
2) It reviews different stemming algorithms that have been applied to Odia text, including suffix stripping, affix removal, and stochastic algorithms. It also discusses common errors in stemming like over-stemming and under-stemming.
3) Applications of stemming discussed include information retrieval, text summarization, machine translation, indexing, and question answering systems. The document concludes by surveying prior work on stemming algorithms for Odia.
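A minimal suffix-stripping stemmer of the kind surveyed above can be sketched as follows. The suffix list is a hypothetical English-like inventory, not actual Odia morphology, and the minimum-stem-length check illustrates one common guard against the over-stemming errors the survey mentions.

```python
# Illustrative suffix-stripping stemmer.  The suffix inventory is
# hypothetical (English-like, not Odia); longest suffix wins, and a
# minimum stem length guards against over-stemming.
SUFFIXES = sorted(["ing", "ed", "s", "er"], key=len, reverse=True)

def stem(word, min_stem=3):
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
            return word[: -len(suffix)]
    return word

print(stem("stemming"))  # → "stemm"
print(stem("sing"))      # → "sing" (guard prevents stripping to "s")
```

Stochastic stemmers, also covered in the survey, would instead learn suffix-stripping rules and their probabilities from an annotated corpus rather than using a hand-written list.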
Object-oriented analysis and design is an evolutionary development method built upon past proven concepts. The document discusses object-oriented systems development processes, including use-case-driven analysis, the Object Modeling Technique (OMT), class diagrams, relationships between classes, and object-oriented modeling. It provides examples of class diagrams showing classes, attributes, operations, and relationships. It also explains the views of OMT (the object model, the dynamic model, and the functional model) and how OMT separates modeling concerns.
This document discusses ontology mapping. It begins with an introduction to the semantic web and ontologies. Ontology mapping is important for allowing different ontologies to be aligned and related. There are different types of ontology mapping including alignment, merging, and mapping. The document then surveys some popular ontology mapping techniques including GLUE, PROMPT, and QOM. It evaluates these techniques and discusses their inputs, outputs, and approaches. The document concludes that semantic web research is important for advancing web technologies and realizing the goals of web 3.0. Future work could involve developing new ontology mapping techniques and publishing research on existing mapping methods.
Lect6 - An introduction to ontologies and ontology development – Antonio Moreno
The document provides an overview of ontologies and ontology development:
1. It defines ontologies as explicit specifications of conceptualizations in a domain that define concepts, properties, attributes, and relationships to enable knowledge sharing.
2. Ontology components include concepts, properties, restrictions, and individuals. Ontologies can range from single large ontologies to several specialized smaller ones.
3. OWL is introduced as the standard language for representing ontologies, with features like classes, properties, restrictions, and logical operators.
4. A general methodology for ontology development is outlined, including determining scope, reusing existing ontologies, enumerating terms, and defining classes, properties, and other components in an iterative process.
Yang Yu is proposing research on improving machine learning based ontology mapping by automatically obtaining training samples from the web. The proposed system would parse two input ontologies to generate queries to search engines and collect documents to use as samples for each ontology class. These samples would then be used to train text classifiers, which would produce probabilistic mappings between classes in the two ontologies. The results would be evaluated by comparing to mappings from human experts. Current work involves exploring alternative text classification tools and ways to utilize the probabilistic mapping values generated by the classifiers.
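The proposed pipeline (training text classifiers on web-collected samples per ontology class, then reading off probabilistic mappings between classes) could be sketched with a small Naive Bayes text classifier. The sample documents, class names, and function names below are invented for illustration; the actual research explores different classification tools.

```python
import math
from collections import Counter

def train(samples):
    """samples: {class_name: [document strings]} -> word counts per class."""
    return {c: Counter(w for d in docs for w in d.lower().split())
            for c, docs in samples.items()}

def predict(counts, doc, alpha=1.0):
    """Return a probability distribution over classes for one document
    (multinomial Naive Bayes with add-alpha smoothing)."""
    vocab = {w for cnt in counts.values() for w in cnt}
    scores = {}
    for c, cnt in counts.items():
        total = sum(cnt.values())
        score = 0.0
        for w in doc.lower().split():
            score += math.log((cnt[w] + alpha) / (total + alpha * len(vocab)))
        scores[c] = score
    m = max(scores.values())
    exp = {c: math.exp(s - m) for c, s in scores.items()}
    z = sum(exp.values())
    return {c: v / z for c, v in exp.items()}

# Hypothetical web-collected sample documents for one ontology's classes
samples_a = {"Car": ["engine wheel drive road", "vehicle engine fuel"],
             "Fruit": ["apple banana sweet", "fruit tree orchard apple"]}
classifier = train(samples_a)
# A document collected for the other ontology's class "Automobile"
dist = predict(classifier, "engine road vehicle")
print(max(dist, key=dist.get))  # → "Car"
```

Averaging such distributions over all documents collected for a class in the second ontology yields the probabilistic class-to-class mapping the proposal describes.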
The document discusses ontology matching, which is the process of finding relationships between entities in different ontologies. It describes various techniques for ontology matching including basic techniques that operate at the element-level or structure-level, as well as classifications of matching techniques based on the type of input used and level of interpretation. The document also provides examples of commonly used methods for ontology matching like string-based, language-based, and structure-based techniques.
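A minimal element-level, string-based matcher of the kind described above can be sketched with the classic Levenshtein edit distance, normalized into a similarity score. The class labels used here are hypothetical examples.

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance (one-row variant)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # old dp[j] = deletion, new dp[j-1] = insertion, prev = substitution
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def name_similarity(a, b):
    """Normalized similarity in [0, 1] between two entity labels."""
    a, b = a.lower(), b.lower()
    return 1 - edit_distance(a, b) / max(len(a), len(b), 1)

def best_match(label, candidates):
    """Element-level matching: pick the candidate with the closest label."""
    return max(candidates, key=lambda c: name_similarity(label, c))

print(best_match("Author", ["Writer", "Article", "Authors"]))  # → "Authors"
```

Language-based techniques would normalize labels further (tokenization, lemmatization, synonym lookup) before comparison, and structure-based techniques would then propagate these element-level similarities along the ontology graphs.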
Map of the CETIS metadata and digital repository interoperability domain – Phil Barker
Slides used at various CETIS metadata and digital repository SIG meetings to describe the SIG's area of interest. Shows topics and specifications relevant to metadata and digital repository interoperability.
WP3 Further specification of Functionality and Interoperability - Gradmann / ... – Europeana
This document outlines the objectives and tasks of Work Package 3 (WP3) in further specifying functionality and interoperability for Europeana version 1. WP3 will have three main working groups focusing on the object model and metadata, semantic and multilingual aspects, and architecture/components and external interactions. The working groups will gather requirements, build consensus, and make recommendations. WP3 will also establish a "sandbox" called EuropeanaLabs to prototype and test new functionalities at scale before delivering specifications to WP4 for development.
This document summarizes a workshop on data integration using ontologies. It discusses how data integration is challenging due to differences in schemas, semantics, measurements, units and labels across data sources. It proposes that ontologies can help with data integration by providing definitions for schemas and entities referred to in the data. Core challenges discussed include dealing with multiple synonyms for entities and relationships between biological entities that depend on context. The document advocates for shared community ontologies that can be extended and integrated to facilitate flexible and responsive data integration across multiple sources.
Summary Models for Routing Keywords to Linked Data Sources – Thanh Tran
This document proposes using summary models to compactly represent the search space for routing keyword queries to linked data sources. It introduces the problems of identifying valid combinations of data sources that could produce answers to keyword queries from large, interconnected linked data. The authors define the novel problem of keyword query routing and present multi-level relationship graphs to model the search space. They then propose different types of summary models at the element, schema, and source levels to reduce complexity while maintaining result validity. The paper contributes by introducing this new problem and investigating tradeoffs between result quality and efficiency using the proposed summary models.
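The source-level summary idea can be sketched as follows: keywords map to the sources that mention them, and a combination of sources is a valid routing target only if it covers all keywords and forms a connected subgraph under the inter-source links. The sources, keywords, and links below are invented for illustration; the paper's actual models also work at the element and schema levels.

```python
from itertools import combinations

# Hypothetical source-level summary: which sources mention which keywords,
# and which pairs of sources are interlinked.
keyword_sources = {"berlin": {"dbpedia", "geonames"},
                   "protein": {"uniprot"}}
links = {("dbpedia", "geonames"), ("dbpedia", "uniprot")}

def connected(srcs):
    """True if the given sources form a connected subgraph of `links`."""
    srcs = set(srcs)
    if len(srcs) <= 1:
        return True
    seen, todo = set(), [next(iter(srcs))]
    while todo:
        s = todo.pop()
        seen.add(s)
        for a, b in links:
            for u, v in ((a, b), (b, a)):
                if u == s and v in srcs and v not in seen:
                    todo.append(v)
    return seen == srcs

def route(keywords):
    """Return connected source combinations covering all keywords."""
    per_kw = [keyword_sources[k] for k in keywords]
    candidates = set().union(*per_kw)
    results = []
    for r in range(1, len(candidates) + 1):
        for combo in combinations(sorted(candidates), r):
            covers = all(kw_srcs & set(combo) for kw_srcs in per_kw)
            if covers and connected(combo):
                results.append(set(combo))
    return results

print(route(["berlin", "protein"]))
```

Note how {"geonames", "uniprot"} covers both keywords but is rejected because no link connects the two sources, which is exactly the kind of invalid combination the paper's summary models are designed to prune.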
This document summarizes the crystal structures of four sulfapyridine solvates determined at 173 K: sulfapyridine polymorph III, sulfapyridine dioxane solvate, sulfapyridine tetrahydrofuran solvate, and sulfapyridine piperidine solvate. The structures were analyzed and compared to examine conformational polymorphism. Sulfapyridine and the dioxane and tetrahydrofuran solvates crystallized in the imide form, while the piperidine solvate crystallized as the piperidinium salt. Hydrogen bonding networks existed between sulfone oxygen and aniline nitrogen atoms in all structures. Solvent molecules in the dioxane and tetrahydrof
Zipcar was established in 2000 in Cambridge, Massachusetts by Robin Chase and Antje Danielson. It has since expanded across North America and Europe. Key events included acquiring competitors Flexcar in 2007 and Streetcar in the UK in 2011. Zipcar hired Scott Griffith as CEO in 2003 to focus on growth in major cities. The company focuses on innovative technologies like RFID to improve the customer experience and went public in 2011.
Summary Models for Routing Keywords to Linked Data SourcesThanh Tran
This document proposes using summary models to compactly represent the search space for routing keyword queries to linked data sources. It introduces the problems of identifying valid combinations of data sources that could produce answers to keyword queries from large, interconnected linked data. The authors define the novel problem of keyword query routing and present multi-level relationship graphs to model the search space. They then propose different types of summary models at the element, schema, and source levels to reduce complexity while maintaining result validity. The paper contributes by introducing this new problem and investigating tradeoffs between result quality and efficiency using the proposed summary models.
This document summarizes the crystal structures of four sulfapyridine solvates determined at 173 K: sulfapyridine polymorph III, sulfapyridine dioxane solvate, sulfapyridine tetrahydrofuran solvate, and sulfapyridine piperidine solvate. The structures were analyzed and compared to examine conformational polymorphism. Sulfapyridine and the dioxane and tetrahydrofuran solvates crystallized in the imide form, while the piperidine solvate crystallized as the piperidinium salt. Hydrogen bonding networks existed between sulfone oxygen and aniline nitrogen atoms in all structures. Solvent molecules in the dioxane and tetrahydrof
Zipcar was established in 2000 in Cambridge, Massachusetts by Robin Chase and Antje Danielson. It has since expanded across North America and Europe. Key events included acquiring competitors Flexcar in 2007 and Streetcar in the UK in 2011. Zipcar hired Scott Griffith as CEO in 2003 to focus on growth in major cities. The company focuses on innovative technologies like RFID to improve the customer experience and went public in 2011.
This document presents a court ruling resolving a claim of contractual nullity and recovery of payment brought by a shareholder against Bankia S.A. The plaintiff alleges that he acquired Bankia shares in 2011 on the basis of a prospectus that concealed the entity's true financial situation. The ruling analyses whether the shareholder's consent was vitiated and concludes that Bankia did not present a faithful picture of its accounts, breaching its duty of disclosure.
The document is a student's evaluation of a music magazine they created. The student discusses how their magazine differs from others by not having a dark tone. They aimed to attract a wide audience, including those interested in rock and R&B. The main audience is seen as teenagers and young adults. The student learned new skills in Photoshop and enjoyed the project, though they reflect on changes they could make, such as different cover colors or more photos on pages. Overall, the project was a positive learning experience.
This document contains the evaluation of a media music magazine created by Nigel Russell. It discusses several aspects of the magazine, including how it uses or challenges conventions of real media products. It represents the rock music genre and aims to appeal to teenagers and older generations. The target audience was attracted through coverage of rock festivals and bands from different genres on the cover. Bauer Media is proposed as a potential publisher due to their experience with similar magazines. Photoshop and Publisher skills were developed in creating the magazine.
The document presents three initial ideas for promotion packages:
1) A music video for the song "The Past Six Years" showing how a town has evolved over time.
2) A teaser trailer for the horror film "The Cult" about a girl who is possessed by the spirit of an elderly woman through a pair of earrings.
3) A promotion package for an energy bar product aimed at fitness enthusiasts, including TV and radio advertisements.
Peer feedback indicated that idea two for the film trailer was the most developed and interesting concept, while the energy bar idea had been overdone but had good branding elements.
The document describes the new MTV programme "MTV Sports Apresenta: Deco e Lucas na Rota Explosiva", in which the presenters travel across Brazil practising extreme sports. It provides details on the route, broadcast dates, digital promotion channels and a potential reach of over 40 million people.
This document summarises Balthasar Gracián's teachings on the art of prudence in 25 points. Key points include: keeping your plans mysterious to generate expectation; surrounding yourself with wise people who can advise you; varying your actions so as not to be predictable; and knowing other people's passions in order to influence them. The overall goal is to achieve success through sagacity and the strategic use of intelligence and character.
This document discusses different systems of fit for shafts and holes, types of gauges, and gauge tolerances. It describes two systems - the hole basis system where the hole size is fixed and the shaft is varied, and the shaft basis system where the shaft is fixed and the hole is varied. It also outlines different types of gauges like standard, limit, workshop, and inspection gauges. Limit gauges in particular have two ends, one for maximum and one for minimum limits. The document concludes by noting that gauges have tolerances to account for manufacturing imperfections, and that unilateral tolerances are preferred.
The document provides an update on a marketing automation project for North America. It outlines the roles and responsibilities of various teams involved in the project including executive sponsors, marketing leads, digital marketing leads, and sales operations leads. It also describes the goals of implementing a closed-loop process to convert more leads into customers. Finally, it lists various CRM solutions, tools, and resources that will be used to support the marketing automation environment and closed-loop process.
This document presents the objectives and contents of a course on functions of a real variable. The objectives include introducing concepts such as functions, domain and range, inverse functions and the composition of functions. It also covers common types of functions and their graphical representation, explains course policies on mobile phone use, conversations and attendance, and describes the GeoGebra software that will be used.
Booz Allen Hamilton offers an integrated suite of cloud capabilities, deep subject matter expertise, and unparalleled hands-on experience with a broad range of cloud technology products.
This document appears to be from the website of a company that provides power backup solutions and systems, including inverters, uninterruptible power supply (UPS) systems, and batteries. The company offers the latest technology in inverters, energy efficient series, deep cycle batteries for frequent power cuts, and highly reliable UPS systems. As an innovative leader in design, sales, and distribution, the company provides digital inverters, online and line interactive UPS systems, and lead acid or VRLA inverter batteries.
Semantic Similarity and Selection of Resources Published According to Linked ...Riccardo Albertoni
The position paper aims at discussing the potential of exploiting linked data best practice to provide metadata documenting domain specific resources created through verbose acquisition-processing pipelines. It argues that resource selection, namely the process engaged to choose a set of resources suitable for a given analysis/design purpose, must be supported by a deep comparison of their metadata. The semantic similarity proposed in our previous works is discussed for this purpose and the main issues to make it scale up to the web of data are introduced. Discussed issues contribute beyond the re-engineering of our similarity since they largely apply to every tool which is going to exploit information made available as linked data. A research plan and an exploratory phase facing the presented issues are described remarking the lessons we have learnt so far.
Tutorial: Building and using ontologies - E.Simperl - ESWC SS 2014eswcsummerschool
This document discusses building and using ontologies. It defines an ontology as defining a domain of interest in terms of things, attributes, and relationships. Ontologies are used to share a common understanding of a domain among people and machines. The document then discusses ontology engineering processes, examples of ontologies like DBpedia, and semantic technologies for creating intelligent applications.
This document discusses building and using ontologies. It defines an ontology as defining a domain of interest in terms of things, attributes, and relationships. Ontologies are used to share a common understanding of a domain among people and machines. The document then discusses ontology engineering processes, examples of ontologies like DBpedia, and semantic technologies used to create intelligent applications.
OODBMS Concepts - National University of Singapore.pdfssuserd5e338
This document discusses object-oriented database management systems (OODBMS). It covers basic OO concepts like objects, classes, attributes, methods, encapsulation, inheritance and polymorphism. It describes the two approaches to OODBMS - object-oriented databases and object-relational databases. It discusses some issues with the OO data model like inheritance conflicts. It also provides examples of queries in an object-relational query language and compares the OO data model to other data models.
Resource description, discovery, and metadata for Open Educational ResourcesR. John Robertson
This document discusses tensions in describing open educational resources for discovery and sharing. It describes the key stakeholders involved, including academics, institutions, and aggregators. Different projects in the UKOER program have taken different approaches to metadata standards and packaging, with no clear consensus yet. Going forward, usage statistics and tracking work will provide insights into the most effective approaches to resource description.
This document discusses converting metadata to linked open data. It provides an overview of the process of mapping metadata fields and their values to URIs and standardized vocabularies. This involves selecting existing terms where possible, cleaning up field values, and manually mapping values that don't match existing terms. It also discusses tools for working with linked data and principles for publishing open data online.
Learning Resource Metadata Initiative: Vocabulary Development Best PracticesMike Linksvayer
This document discusses best practices for developing learning resource metadata vocabularies based on guidelines from the Dublin Core Metadata Initiative. It recommends defining clear use cases, selecting an appropriate domain model, reviewing existing vocabularies to reuse terms, designing detailed metadata records, providing usage guidelines, and engaging relevant communities to ensure long-term stewardship of the vocabulary. The Learning Resource Metadata Initiative (LRMI) could benefit from following these best practices in its development.
Presentation on the DM2E data model, a specialisation of the EDM for the domain of (handwritten) manuscripts. Held at the EDM-Tutorial (22.09.) at the TPDL 2013 on Malta.
This presentation discusses the following topics:
Object Oriented Databases
Object Oriented Data Model(OODM)
Characteristics of Object oriented database
Object, Attributes and Identity
Object oriented methodologies
Benefit of object orientation in programming language
Object oriented model vs Entity Relationship model
Advantages of OODB over RDBMS
Object-Oriented Database Model For Effective Mining Of Advanced Engineering M...cscpconf
Materials have become a very important aspect of our daily life, and the search for better and new kinds of engineered materials has created opportunities for the information science and technology community to investigate the world of materials. This combination of materials science and information science is nowadays known as Materials Informatics. An object-oriented database model has been proposed for organizing advanced engineering materials datasets.
In tech application-of_data_mining_technology_on_e_learning_material_recommen...Enhmandah Hemeelee
The document describes a recommendation system that applies data mining techniques to recommend e-learning materials. It proposes using LDAP for fast searching of materials across systems, JAXB for parsing content, and association rule mining and collaborative filtering to generate recommendations. The system collects user activity data, analyzes it using Apriori algorithm to find related search terms and content, and stores results in an LDAP database to provide recommendations to users.
In tech application-of_data_mining_technology_on_e_learning_material_recommen...Enhmandah Hemeelee
The document describes a recommendation system that applies data mining techniques to recommend e-learning materials. It proposes using LDAP for fast searching of materials across systems, JAXB for parsing content, and association rule mining and collaborative filtering for recommendations. A web spider collects content indexes from learning management systems and stores data in an LDAP directory. Users can search for related materials, and the system mines log data to associate frequently searched terms and recommend additional resources.
EMPLOYING THE CATEGORIES OF WIKIPEDIA IN THE TASK OF AUTOMATIC DOCUMENTS CLUS...IJCI JOURNAL
In this paper we describe a new unsupervised algorithm for automatic documents clustering with the aid of Wikipedia. Contrary to other related algorithms in the field, our algorithm utilizes only two aspects of Wikipedia, namely its categories network and articles titles. We do not utilize the inner content of the articles in Wikipedia or their inner or inter links. The implemented algorithm was evaluated in an experiment for documents clustering. The findings we obtained indicate that the utilized features from Wikipedia in our framework can give competing results, especially when compared against other models in the literature which employ the inner content of Wikipedia articles.
Ontology-based Semantic Approach for Learning Object RecommendationIDES Editor
The main focus of this paper is to apply an ontology-based approach for semantic learning object recommendation in personalized e-learning systems. Ontologies for the learner model, learning objects and semantic mapping rules are proposed. The recommender can provide individual learning objects by taking into account learner preferences and styles, which are used to adjust or fine-tune the recommendation process. In the proposed framework, we demonstrate how the ontologies enable machines to interpret and process learning resources in a recommendation system. The recommendation consists of four steps: semantic mapping between learner and learning objects, preference score calculation, learning object ranking, and recommending the learning object. As a result, a personalized and most suitable learning object is recommended to the learner.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
This document discusses using personalized ontologies to improve web information gathering by representing user profiles. It proposes a model that constructs personalized ontologies by adopting user feedback from a world knowledge base. The model also uses users' local instance repositories to discover background knowledge and populate the ontologies. The proposed ontology model is evaluated against benchmark models through experiments using a large standard dataset.
II-SDV 2013 Large scale Application of Text Mining and Visualization in the E...Dr. Haxel Consult
The document discusses text analytics capabilities for the EU Fusepool project. It introduces Treparel, a software provider partnering on the project. The Fusepool project aims to develop an adaptive system for pooling and linking data to help small businesses. Key capabilities discussed include user profiling, data sourcing and linking, text analysis, and machine learning algorithms to improve matching and recommendations.
Text Analytics in the EU Fusepool project (at II-SDV 2013 conference)Treparel
Fusepool refines and enriches raw data using common standards and provides tools for analyzing and visualizing data so that end users and other software receive timely, context-aware and relevant information.
To ensure that data and results are of high quality, Fusepool combines the well-defined but error-prone (semantic) Web 3.0 with controlled supervision and the collaborative but often messy (social) Web 2.0.
The document discusses interoperability of metadata across government departments and proposes aggregation, rationalization, and harmonization (ARH) as a process to improve it. Key points from the ARH process include identifying differences in element names, values, and granularity across departments' metadata and working to standardize these through choosing appropriate formats and definitions. The results could include a single metadata application profile and technologies like registries and repositories to preserve local control while enabling discovery across departments.
Open Archives Initiative Object Reuse and Exchangelagoze
This document discusses infrastructure to support new models of scholarly publication by enabling interoperability across repositories through common data modeling and services. It proposes building blocks like repositories, digital objects, a common data model, serialization formats, and core services. This would allow components like publications and data to move across repositories and workflows, facilitating reuse and new value-added services that expose the scholarly communication process.
Similar to Towards Integrating Ontologies An EDM-Based Approach (20)
Reasoning with Reasoning, Semantic technologies for research in the humanities and social sciences (STRiX) Göteborg, 24 November 2014 Kristin Dill, Austrian National Library (ONB) Gerold Tschumpel, Steffen Hennicke, Christian Morbidoni, Klaus Thoden, Alois Pichler
The document summarizes the tasks and results of Work Package 1 (WP1) of the DM2E project. Key points include:
- WP1 involved collecting metadata formats and requirements, testing interfaces for mapping and linking content, and setting up test scenarios for the prototype platform.
- Final content integration took longer than expected due to complex data modeling, issues mapping content, and Europeana's policy changes. Not all promised content was delivered.
- User testing found that interfaces were useful for basic tasks but complex work was done "under the hood". Guidelines were created to represent metadata and define annotatable content.
- While not all content goals were met, over 19 million pages were delivered, with
DM2E Community building (Lieke Ploeger – Open Knowledge) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Open Humanities Awards DM2E track: finderapp WITTfind (Maximilian Hadersbeck – LMU University of Munich) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Humanists and Linked Data (Steffen Hennicke – Humboldt Universität) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Open Humanities Awards Open track: SEA CHANGE (Rainer Simon – AIT Austrian Institute of Technology) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
DM2E Linked Data for Digital Scholars (with talks by Christian Morbidoni – Università Politecnica delle Marche / Net7, Steffen Hennicke – Humboldt Universität and Alessio Piccioli – Net7)
DM2E Interoperability infrastructure (Kai Eckert – University of Mannheim) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Open Humanities Awards Open track: Early Modern European Peace Treaties Online (Michael Piotrowski – IEG Leibniz Institute of European History) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
DM2E Content (Doron Goldfarb – ONB Austrian National Library) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Europeana and the relevance of the DM2E results (Antoine Isaac – Europeana) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Keynote : Beyond DM2E: towards sustainable digital services for humanities research communities in Europe? (Sally Chambers – DARIAH-EU, Göttingen Centre for Digital Humanities) at Enabling humanities research in the Linked Open Web – DM2E final event (11 December 2014, Navacchio, Italy)
Welcome and short introduction to DM2E (Violeta Trkulja – Humboldt University) - Enabling humanities research in the Linked Open Web – DM2E final event
Susanne Müller, EUROCORR project: Burckhardtsource - Presentation given at DM2E event 'Putting Linked Library Data to Work: the DM2E Showcase' (18 Nov 2014, ONB, Vienna)
1. The DM2E project aggregates metadata and content about digitized manuscripts from several European libraries and archives.
2. It develops an interoperability infrastructure using the Europeana Data Model and a DM2E extension to integrate heterogeneous metadata into a linked open data cloud.
3. The project also builds digital humanities applications like Pundit to showcase the usefulness of linked open data for research.
Marko Knepper, University Library Frankfurt am Main: From Library Data to Linked Open Data - Presentation given at DM2E event 'Putting Linked Library Data to Work: the DM2E Showcase' (18 Nov 2014, ONB, Vienna)
The document discusses a project called DM2E that is researching scholarly practices in the humanities and building digital humanities tools. It focuses on the Scholarly Domain Model (SDM) that DM2E is using to model the entities and relationships of the digital scholarship domain. The SDM identifies areas, primitives, activities, and operations of scholarly work. It also describes the Pundit suite of tools for annotating, linking, comparing, and visualizing scholarly sources that were developed based on the SDM.
Towards Integrating Ontologies An EDM-Based Approach
1. Towards Integrating Ontologies
An EDM-Based Approach
Evelyn Dröge, Julia Iwanowa, Violeta Trkulja, Steffen Hennicke, Stefan Gradmann
Berlin School for Library and Information Science, Humboldt-Universität zu Berlin
Presentation on the 13th International Symposium of Information Science
Potsdam, 21.03.2013
co-funded by the European Union
2. DM2E project
Digitised Manuscripts to Europeana (DM2E)
• EU-funded Europeana satellite project
• Duration: Three years (2012 – 2015)
• Partners from Germany, Austria, Norway, Greece, UK and Italy
• Primary aims: To enable as many content providers as possible to get their data into Europeana, and to stimulate the creation of new tools and services for reuse of Europeana data in the Digital Humanities
IBI at the Humboldt-Universität zu Berlin
• Coordinates the project
• Is further involved in modeling and in evolving the technical infrastructure
21.03.2013 Towards Integrating Ontologies: An EDM-Based Approach 2
3. DM2E: Interoperability approach
• Base: Semantic Web and Linked Data
– Enable and facilitate data interoperability
– Share and reuse ontologies
• Build on common data models
– EDM, DC and DCTerms, OAI-ORE, CIDOC-CRM, FOAF, SKOS
– BIBO, VOID, FABIO
• Uses W3C standards
– RDF(S), OWL
Enable data interoperability
4. Why should we reuse ontologies?
• Nature of Linked Data
• Reduce the number of duplicate resources describing the same thing
• Better visibility of the ontology
• Better quality of the ontology
• Better integration into the Linked Open Data Cloud
• Easy access for more applications
• Make it easier for others to reuse the vocabulary
5. Reuse practice
• Three-step workflow
1. Ontology retrieval
2. Integration-oriented ontology evaluation
Identification of suitable classes and properties
Analysis of missing elements
3. Ontology integration
Different methods
Simperl, 2010
7. Step 1: Ontology retrieval
8. Step 2: Integration-oriented ontology evaluation
[Diagram: candidate vocabularies DC, DCTerms, rdaGr2, EDM, SKOS, FOAF, ORE]
• Which elements are missing or can be replaced?
• What are suitable classes or properties in other standards or vocabularies?
9. Step 3: Ontology integration
Four different methods can be used for reusing ontologies:
1. Direct adoption of external resources
• External classes or properties are directly used in the ontology buildup.
• Can be used if the external class or property exactly matches the own resource.
• Definitions or labels should not be adjusted!
2. Reuse of external resources without original URIs
• The name of a foreign property or class is used without its original namespace.
3. Reuse of external resources with integration into the ontology hierarchy
• Properties or classes are created in the own namespace.
• Subproperty or subclass relations are built between them and the external resource.
4. Referring to external resources via equivalence properties
• owl:equivalentClass or owl:equivalentProperty are used to refer to equivalent external resources.
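The four methods above can be sketched with a toy triple representation. The following is a minimal sketch, assuming plain Python tuples rather than a real RDF store; the item `dm2edata:exampleItem` is taken from a later slide, while the helper `reuse_method` and the term `dm2e:orphan` are invented for illustration:

```python
# Toy triples (subject, predicate, object) illustrating the four reuse methods.
TRIPLES = {
    # Method 1: the external property bibo:isbn is used directly.
    ("dm2edata:exampleItem", "bibo:isbn", "978-3-86680-192-9"),
    # Method 3: a term in the own namespace, linked by a sub-relation.
    ("dm2e:Agent", "rdfs:subClassOf", "foaf:Agent"),
    # Method 4: a term linked to an external one by an equivalence property.
    ("edm:Event", "owl:equivalentClass", "crm:E4.Period"),
}

def reuse_method(term: str, own_prefix: str, triples) -> int:
    """Classify which of the four reuse methods `term` illustrates."""
    if not term.startswith(own_prefix + ":"):
        return 1  # external URI adopted directly
    for s, p, o in triples:
        if s == term and p in ("rdfs:subClassOf", "rdfs:subPropertyOf"):
            return 3  # integrated into the hierarchy via a sub-relation
        if term in (s, o) and p in ("owl:equivalentClass", "owl:equivalentProperty"):
            return 4  # linked via an equivalence property
    return 2  # own URI with no link back: the discouraged method

print(reuse_method("bibo:isbn", "dm2e", TRIPLES))   # 1: direct adoption
print(reuse_method("dm2e:Agent", "dm2e", TRIPLES))  # 3: sub-relation
```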
10. Integrating ontologies: Method comparison
[Diagram: the four methods compared along two axes, direct vs. indirect integration and original vs. new URI. Method 1 is direct integration with the original URI, method 2 direct integration with a new URI, method 3 indirect integration with a new URI, and method 4 indirect integration via the original URI.]
12. Method 1: Direct adoption of external resources
• Example: dm2edata:exampleItem
      edm:type text ;
      bibo:isbn "978-3-86680-192-9" .
Integrating bibo properties for describing text objects
• When should the method be used?
– If the meaning and definition of the external resource is identical to the meaning of the own resource.
• Advantages and disadvantages
+ Pro: Reduces the amount of resources that describe the same thing in other words.
− Con: The method is also used when the meaning is not exactly the same, which can lead to conflicting descriptions.
13. Method 1: Direct adoption of external resources
• Example: foaf:Document
[Diagram: the label "Document" (also "Dokument" and "Document"@en) surrounded by open questions: a physical thing? an abstract class? electronic documents?]
14. Method 1: Direct adoption of external resources
• Example: Some properties of foaf:Document, retrieved from the Linked Open Vocabularies SPARQL endpoint (http://lov.okfn.org/endpoint/lov):
– rdfs:label: Document; Document@en; Documentation@en; document@en
– dc:title: Document@en
– dcterms:identifier: foaf:Documentation@en-gb
– rdfs:comment: "A document." / "An abstract class defining any kinds of publishing work."@en / "The foaf:Document class fully represents the ADMS concept of documentation."@en / "Similar to the Agent concept, we have again decided to include a concept from the popular FOAF ontology. The FOAF Vocabulary Specification currently defines Document in a very loose way: The foaf:Document class represents those things which are, broadly conceived, 'documents'. ... We do not (currently) distinguish between physical and electronic documents, or between copies of a work and the abstraction those copies embody." […]
– vann:usageNote: "Used in ADMS specifically for the class of documents that further describe a Semantic Asset or give guidelines for its use. ADMS expects all documents to have a title (use dcterms:title)."
15. Method 2: Reuse with new URIs
• Example: Different URIs for the class thing
      owl:Thing
      gold:Thing
Are they identical or equivalent?
• Advantages and disadvantages
+ Pro: Can be used as a first step in the construction process if other vocabularies are not known.
− Con: It is not clear how the different classes relate to each other, and it makes the vocabulary hard to query or reuse.
• This method should not be used!
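The query problem with method 2 can be made concrete with a small sketch (the ex: individuals below are invented): if the same notion of "thing" lives under two unrelated URIs, a query for one URI silently misses instances typed with the other.

```python
# Toy data: two individuals typed with two unrelated "thing" classes.
triples = [
    ("ex:a", "rdf:type", "owl:Thing"),
    ("ex:b", "rdf:type", "gold:Thing"),  # same intended meaning, new URI
]

def instances_of(cls, triples):
    """All subjects typed exactly with `cls` - no links are followed."""
    return {s for s, p, o in triples if p == "rdf:type" and o == cls}

print(instances_of("owl:Thing", triples))  # only ex:a - ex:b is lost
```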
16. Method 3: Reuse with subrelations
• Example: dm2e:Agent rdfs:subClassOf foaf:Agent .
      dm2e:publishedAt rdfs:subPropertyOf dcterms:spatial .
New dm2e property and class as subelements
• When should this method be used?
– If an existing class or property is found which is defined more broadly than the resource that should be created.
• Advantages and disadvantages
+ Pro: The new property or class can have its own description.
+ Pro: Broader properties or classes are easy to find in upper ontologies.
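The payoff of method 3 is that generic queries still find the specialised data. A minimal RDFS-style inference sketch, using the sub-property link from the slide (the manuscript and place below are invented):

```python
# Schema link from the slide; data triple is invented for illustration.
SCHEMA = {("dm2e:publishedAt", "rdfs:subPropertyOf", "dcterms:spatial")}
DATA = [("ex:manuscript1", "dm2e:publishedAt", "ex:Berlin")]

def super_properties(prop, schema):
    """All properties reachable from `prop` via rdfs:subPropertyOf, incl. itself."""
    found, todo = {prop}, [prop]
    while todo:
        p = todo.pop()
        for s, rel, o in schema:
            if s == p and rel == "rdfs:subPropertyOf" and o not in found:
                found.add(o)
                todo.append(o)
    return found

def query(prop, data, schema):
    """(subject, object) pairs stated with `prop` or any of its sub-properties."""
    return [(s, o) for s, p, o in data if prop in super_properties(p, schema)]

# A query for the broad dcterms:spatial also returns the dm2e:publishedAt triple.
print(query("dcterms:spatial", DATA, SCHEMA))
```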
17. Method 4: Reuse with equivalence properties
• Example: edm:Event owl:equivalentClass crm:E4.Period .
      prism:issn owl:equivalentProperty bibo:issn .
Reference to equivalent resources
• When should this method be used?
– If exactly the same individuals can be part of all equivalent classes or can be connected with all equivalent properties.
• Advantages and disadvantages
+ Pro: The elements can have different descriptions.
− Con: Not every tool can interpret the equivalence properties.
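A tool that does interpret equivalence treats owl:equivalentClass as symmetric and transitive, so members of edm:Event and crm:E4.Period merge into one set. A sketch with the class link from the slide (the ex: individuals are invented):

```python
# Equivalence link from the slide; typed individuals are invented.
EQUIV = [("edm:Event", "owl:equivalentClass", "crm:E4.Period")]
DATA = [
    ("ex:congressOfVienna", "rdf:type", "edm:Event"),
    ("ex:bronzeAge", "rdf:type", "crm:E4.Period"),
]

def equivalence_class(cls, equiv):
    """All classes linked to `cls` by owl:equivalentClass, in either direction."""
    found, todo = {cls}, [cls]
    while todo:
        c = todo.pop()
        for s, _, o in equiv:
            if s == c and o not in found:
                found.add(o)
                todo.append(o)
            if o == c and s not in found:
                found.add(s)
                todo.append(s)
    return found

def members(cls, data, equiv):
    """Individuals typed with `cls` or any class equivalent to it."""
    classes = equivalence_class(cls, equiv)
    return {s for s, p, o in data if p == "rdf:type" and o in classes}

print(members("edm:Event", DATA, EQUIV))  # both individuals
```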
19. Background: The EDM and its specialisations
The EDM (Europeana Data Model)
… forms the backbone of Europeana
… unites several standards and vocabularies
… covers the representation of cultural heritage objects
from libraries, archives and museums
… is as generic as possible
… can be specialised for different domains
20. Reusing practice in EDM and DM2E
• Reuse in EDM
– Method 1: Direct integration (primarily used)
• Problem: Definitions may differ
– Method 4: Use of equivalence properties (used for additional classes)
• Reuse in DM2E
– Method 1: Direct integration
• Here: additional DM2E scope notes document how the integrated
element is reused in DM2E
– Method 3: Adding resources as subrelations
• If the new resource has a narrower definition
– Method 4: Use of equivalence properties
• Analogue to the EDM for additional classes or properties
21. Reusing practice in EDM and DM2E II
• Example: Methods 1 and 3
DC:   dc:contributor
      Range: not restricted.
      Definition: "An entity that is responsible for making contributions
      to the resource."
EDM:  dc:contributor (directly integrated, method 1)
      Range: person, organisation or service.
      Definition: "An agent that is responsible for making contributions
      to the resource."
DM2E: dm2e:contributor rdfs:subPropertyOf dc:contributor (method 3)
      Range: person, as a URI of type edm:Agent.
      Definition: "A person that is responsible for making contributions
      to the resource."
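The DM2E side of this example can be sketched as a Turtle fragment (the dm2e: namespace URI and the skos:scopeNote text are illustrative assumptions):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix dc:   <http://purl.org/dc/elements/1.1/> .
@prefix edm:  <http://www.europeana.eu/schemas/edm/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
# Namespace assumed for illustration:
@prefix dm2e: <http://onto.dm2e.eu/schemas/dm2e/> .

# Methods 1 and 3 combined: dc:contributor is reused directly,
# while the narrower DM2E property points back to it.
dm2e:contributor a owl:ObjectProperty ;
    rdfs:subPropertyOf dc:contributor ;
    rdfs:range edm:Agent ;
    skos:scopeNote "In DM2E restricted to persons."@en .   # illustrative text
```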
22. Conclusion
• Reuse in DM2E: mixture of all methods
(excluding method 2),
making use of the method that suits each case best
• There is no single "best method" for all reuse cases…
… but contradictions in descriptions can be
avoided even when resources are reused!
• What we did not (yet) cover: "When owl:sameAs is not the
same…"
Problems that may occur when there are too many
different understandings of resource meanings
23. Thank you for your attention!
Evelyn Dröge
Julia Iwanowa
Berlin School for Library and
Information Science
Humboldt-Universität zu Berlin
www.ibi.hu-berlin.de
Digitised Manuscripts to Europeana
www.dm2e.eu
evelyn.droege@ibi.hu-berlin.de
julia.iwanowa@ibi.hu-berlin.de
24. References
Literature
• Berners-Lee, T. (2006). Linked Data - Design Issues. W3C Website.
http://www.w3.org/DesignIssues/LinkedData. [03.03.2013]
• Halpin, H., & Hayes, P. J. (2010). When owl:sameAs isn't the Same: An Analysis of Identity
Links on the Semantic Web. Proceedings of the WWW2010 Workshop on Linked Data on
the Web (LDOW 2010), Raleigh, USA, April 27, 2010. CEUR Workshop Proceedings (Vol.
628).
• Heath, T., & Bizer, C. (2011). Linked Data: Evolving the Web into a Global Data Space.
Synthesis Lectures on the Semantic Web: Theory and Technology (Vol. 1). Morgan &
Claypool.
• Simperl, E. (2010). Guidelines for Reusing Ontologies on the Semantic Web. International
Journal of Semantic Computing, 04(02), 239–283.
Images
• Footstep (Slide 6): http://openclipart.org/detail/left-footprint-by-anonymous
• Magnifier (Slide 11): http://openclipart.org/detail/159469/web-search-grayscale-by-sibskull
• Document (Slide 13): http://info.docuvantage.com/Portals/61671/images/stack%20of%20files%20photo_istock.jpg
• Ontology (Slide 19): http://openclipart.org/detail/133363/ontology-by-imad
• IBI (Slide 23): http://commons.wikimedia.org/wiki/File:Berlin,_Mitte,_Dorotheenstrasse,_Handelskammer_Berlin_02.jpg