Here are the steps I would suggest for aligning the ontologies:
1. Representatives present their ontology and explain key concepts and relationships.
2. Editor records all concepts and relationships on a whiteboard in a concept map format without evaluation.
3. Representatives discuss each concept and relationship to reach agreement on meaning and resolve any conflicts or ambiguities.
4. Editor incorporates agreed-upon concepts and relationships into a single ontology, resolving any structural issues.
5. Representatives review the aligned ontology and provide feedback.
6. Editor incorporates final changes to produce the aligned ontology for use by all groups.
The goal is to understand each perspective, identify areas of overlap and conflict, and work together iteratively until the groups converge on a shared ontology.
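Steps 2-4 above can be sketched in code: each group's ontology is recorded as a set of (concept, relation, concept) triples, overlaps are merged automatically, and conflicting relations are flagged for the representatives to discuss. This is a minimal, hypothetical sketch; all names (group_a, group_b, merge_concept_maps) are illustrative, not part of any real tool.

```python
# Hypothetical sketch of the recording/merging steps: triples are unioned,
# and subject/object pairs that appear with different relation names are
# flagged as conflicts for discussion.

def merge_concept_maps(*maps):
    """Union the triples from each group's concept map.

    Returns the merged set, the triples shared by every group (agreement),
    and the subject/object pairs recorded with more than one relation
    (potential conflicts to resolve in discussion).
    """
    merged = set().union(*maps)
    agreed = set.intersection(*map(set, maps))
    relations = {}
    for subj, rel, obj in merged:
        relations.setdefault((subj, obj), set()).add(rel)
    conflicts = {pair: rels for pair, rels in relations.items() if len(rels) > 1}
    return merged, agreed, conflicts

group_a = {("Sensor", "is-a", "Device"), ("Device", "has", "Location")}
group_b = {("Sensor", "is-a", "Device"), ("Device", "located-in", "Location")}

merged, agreed, conflicts = merge_concept_maps(group_a, group_b)
# agreed keeps the shared "Sensor is-a Device" triple; conflicts flags the
# Device/Location pair, which carries two different relation names.
```

In practice the "editor" role corresponds to curating the conflicts dictionary by hand rather than resolving it automatically.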
Introduction to Ontology Engineering with Fluent Editor 2014 (Cognitum)
An introductory course on Ontology Engineering using Controlled Natural Language. Fluent Editor (FE) is an ontology editor, a tool for editing and manipulating ontologies. Its main feature is that it uses controlled natural language (CNL) to communicate with the user; for human users, CNL is a more suitable alternative to XML-based OWL editors.
Dr Alessandro Seganti from Cognitum presented the basics of Semantic Technologies, OntorionCNL, the Ontorion Semantic Framework and Fluent Editor during the International Conference on Computer Science -- Research and Applications IBIZA 2014, UMCS Lublin.
To learn more visit: http://www.cognitum.eu/semantics/
The document discusses scaling up information extraction to large collections by focusing on efficiency. It describes approaches such as using simple rules to process the majority of documents, filtering irrelevant documents without full processing, sharing annotations across tasks, and exploiting keyword indexes and specialized indexes to retrieve only relevant documents in an efficient manner. The goal is to apply information extraction techniques to massive web-scale data.
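The keyword-index idea above can be illustrated with a short sketch: consult an inverted index first, and run the expensive extractor only on documents that can possibly match. The index layout and the toy documents are assumptions for illustration, not part of the system the summary describes.

```python
# A minimal sketch of index-based filtering for information extraction:
# cheap index lookups prune the collection before any full processing.

def build_keyword_index(docs):
    """Map each keyword to the set of document ids containing it."""
    index = {}
    for doc_id, text in docs.items():
        for word in set(text.lower().split()):
            index.setdefault(word, set()).add(doc_id)
    return index

def candidates(index, keywords):
    """Documents containing every query keyword; others are never touched."""
    sets = [index.get(k.lower(), set()) for k in keywords]
    return set.intersection(*sets) if sets else set()

docs = {
    1: "Acme Corp appointed a new CEO today",
    2: "Rain expected over the weekend",
    3: "the CEO of Acme Corp resigned",
}
index = build_keyword_index(docs)
hits = candidates(index, ["Acme", "CEO"])   # only these documents proceed
```

At web scale the index would live in a search engine rather than a dictionary, but the pruning logic is the same.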
The document provides an overview of the state of natural language processing (NLP) and Amazon's NLP offering Amazon Comprehend. It discusses the evolution of NLP from rule-based systems to modern neural models like BERT and Transformer and the increasing complexity of NLP tasks. The document also describes Amazon Comprehend's capabilities in areas like sentiment analysis, named entity recognition, keyphrase extraction, and language detection.
Lect6-An introduction to ontologies and ontology development (Antonio Moreno)
The document provides an overview of ontologies and ontology development:
1. It defines ontologies as explicit specifications of conceptualizations in a domain that define concepts, properties, attributes, and relationships to enable knowledge sharing.
2. Ontology components include concepts, properties, restrictions, and individuals. Ontologies can range from single large ontologies to several specialized smaller ones.
3. OWL is introduced as the standard language for representing ontologies, with features like classes, properties, restrictions, and logical operators.
4. A general methodology for ontology development is outlined, including determining scope, reusing existing ontologies, enumerating terms, and defining classes, properties, and other components in an iterative process.
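Two of the OWL building blocks listed above, named classes and rdfs:subClassOf, can be shown with a toy subsumption check, the simplest inference an OWL reasoner performs. This is plain Python under assumed toy data; a real ontology would use an OWL toolkit such as rdflib or owlready2 instead.

```python
# Toy class hierarchy: child -> parent, standing in for rdfs:subClassOf
# links in an OWL ontology.  Names are illustrative only.
SUBCLASS_OF = {
    "Dog": "Mammal",
    "Cat": "Mammal",
    "Mammal": "Animal",
}

def is_subclass(cls, ancestor, hierarchy=SUBCLASS_OF):
    """True if cls equals ancestor or reaches it via subClassOf links.

    This transitive walk is what lets a reasoner conclude Dog is an
    Animal even though only Dog->Mammal and Mammal->Animal are stated.
    """
    while cls is not None:
        if cls == ancestor:
            return True
        cls = hierarchy.get(cls)
    return False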
Categorization of Semantic Roles for Dictionary Definitions (Andre Freitas)
Understanding the semantic relationships between terms is a fundamental task in natural language processing applications. While structured resources that can express those relationships in a formal way, such as ontologies, are still scarce, a large number of linguistic resources gathering dictionary definitions is becoming available; understanding the semantic structure of natural language definitions is fundamental to make them useful in semantic interpretation tasks. Based on an analysis of a subset of WordNet's glosses, we propose a set of semantic roles that compose the semantic structure of a dictionary definition, and show how they are related to the definition's syntactic configuration, identifying patterns that can be used in the development of information extraction frameworks and semantic models.
Presented January 18, 2010 to the ALCTS Committee on Cataloging: Description and Access (CC:DA) as an introduction to RDF data, and application profiles. Presenters were Jon Phipps, Karen Coyle and Diane Hillmann.
Formal and Computational Representations
The Semantics of First-Order Logic
Event Representations
Description Logics & the Web Ontology Language
Compositionality
Lambda calculus
Corpus-based approaches:
Latent Semantic Analysis
Topic models
Distributional Semantics
The document describes Lydia, a system for named entity recognition and text analysis that was adapted for question answering at TREC 2005. It summarizes Lydia's pipeline for entity recognition and relationship analysis. It then describes the question answering system, which takes questions as input, extracts targets, collects candidate answers from Lydia's database, scores and ranks candidates, and produces a single answer or list of answers. The system handles factoid, list, and other questions by analyzing the question type and scoring candidates based on features like target juxtaposition and question term matching.
In this presentation we discuss several concepts that include Word Representation using SVD as well as neural networks based techniques. In addition we also cover core concepts such as cosine similarity, atomic and distributed representations.
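The SVD-based word representations and cosine similarity mentioned above fit in a few lines: rows of a word-context co-occurrence matrix are compressed with a truncated SVD, and similarity is the cosine of the reduced vectors. The tiny matrix below is invented for illustration.

```python
# Sketch of SVD word vectors + cosine similarity on a made-up
# word-by-context count matrix (rows: words, columns: context features).
import numpy as np

words = ["cat", "dog", "car"]
cooc = np.array([
    [4.0, 3.0, 0.0],   # cat
    [3.0, 4.0, 0.0],   # dog
    [0.0, 1.0, 5.0],   # car
])

# Truncated SVD: keep the k strongest latent dimensions as the embedding.
U, S, Vt = np.linalg.svd(cooc, full_matrices=False)
k = 2
vectors = U[:, :k] * S[:k]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_cat_dog = cosine(vectors[0], vectors[1])
sim_cat_car = cosine(vectors[0], vectors[2])
# cat and dog share contexts, so sim_cat_dog should exceed sim_cat_car
```

This is the "distributed representation" contrast to atomic (one-hot) representations: similar words end up with nearby vectors.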
Representing and Reasoning with Modular Ontologies (Jie Bao)
The document discusses representing and reasoning with modular ontologies. It introduces the need for modularity in large ontologies to enable reuse and selective knowledge hiding. It presents package-based description logics (P-DL) as a formalism for representing and reasoning with modular ontologies through package extension and importing. P-DL defines local interpretations and model projection to provide unambiguous semantics for modular ontologies while supporting both inter-module subsumption and role relations. Scope limitation modifiers and concealable reasoning are discussed to enable selective knowledge hiding across module boundaries without compromising soundness.
Constantin Orasan (UoW): Natural Language Processing for Translation (RIILP)
This document discusses how natural language processing (NLP) techniques can help improve machine translation (MT). It describes some of the linguistic challenges in MT, such as ambiguity at the lexical, syntactic, semantic and pragmatic levels. It then discusses how various NLP tasks, such as tokenization, word sense disambiguation, and handling of named entities could enhance MT systems. Several studies that have successfully integrated NLP techniques like word sense disambiguation into statistical machine translation systems are also summarized.
In this talk I intend to review some basic and high-level concepts like formal languages, grammars and ontologies: languages to transmit knowledge from a sender to a receiver; grammars to formally specify languages; ontologies as formal specifications of specific knowledge domains. After this introductory revision, highlighting the role of each of those elements in the context of computer-based problem solving (programming), I will talk about a project aimed at automatically inferring and generating a grammar for a Domain Specific Language (DSL) from a given ontology that describes the specific domain. The transformation rules will be presented and the system, Onto2Gra, which fully implements this "Ontological approach for DSL development", will be introduced.
The document proposes adapting OWL as a more modular ontology language by addressing weaknesses in its current modularity. Specifically, OWL lacks:
1) Semantic modularity as it only supports global semantics between imported ontologies.
2) Syntactic modularity as imports can lead to tangled definitions between modules.
The paper suggests approaches to enhance OWL's modularity while maintaining backwards compatibility, such as giving imports a localized semantics or defining explicit syntactic rules to avoid nested definitions across modules.
Improvement in Quality of Speech associated with Braille codes - A Review (inscit2006)
J. Anurag, P. Nupur and Agrawal, S.S.
School of Information Technology, Guru Gobind Singh Indraprastha University, Delhi, India
Centre for Development of Advanced Computing, Noida, India
Divide and Conquer Semantic Web with Modular (Jie Bao)
This document provides a brief review of modular ontology language formalisms. It discusses the need for modular ontologies to address issues with large, monolithic ontologies. Several approaches to modular ontologies are summarized, including Distributed Description Logics (DDL), E-Connections, and Package-based Description Logics (P-DL). Key challenges with modular ontologies are also outlined, such as reasoning across modules and ensuring interoperability while preserving local semantics.
An Intuitive Natural Language Understanding System (inscit2006)
The document describes the development of a natural language understanding system with 6 modules for morphological analysis, synonym matching, syntax analysis, semantic analysis, and knowledge base interaction to understand commands in English sentences and execute the corresponding shell command. It discusses the methodology used in building the modules and evaluates the system's performance on 50 test sentences, achieving a 94% precision in generating the correct responses.
This document summarizes a final year presentation for an Arabic grammar mobile application called M-Nahu. The application aims to help users learn Arabic grammar easily on their mobile phones by allowing them to search words, answer quizzes, and request new words. It also allows administrators to add new words, topics, and questions to the application database. The presentation covers the application background, objectives, scope, process model, design diagrams, and discusses the complexity of modeling Arabic verb and noun grammar rules.
Prolog is a logic programming language based on mathematical logic. It was invented in 1971 and allows programmers to model human logic and decision making. Prolog uses Horn clauses to express statements and performs backward chaining to prove goals by working backwards from what it is trying to prove to the facts. It is commonly used for intelligent data retrieval, expert systems, and other artificial intelligence applications that require symbolic reasoning.
This document is a lecture on tokenization and word counts in natural language processing. It discusses concepts like types and tokens, and Zipf's law and Heaps' law, which relate the number of word types to the number of tokens in a text. The document also covers challenges in tokenization like sentence segmentation and provides examples of rule-based and machine learning approaches to tokenization. It introduces word normalization techniques like lemmatization and stemming and provides exercises for students to practice word counting, lemmatization, stemming and removing stop words from texts.
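The type/token distinction the lecture's exercises revolve around can be shown with a naive tokenizer (real tokenizers handle many more edge cases than this regular expression does; the example sentence is made up).

```python
# Types vs. tokens with a deliberately naive tokenizer: tokens are every
# running word, types are the distinct word forms.
import re
from collections import Counter

def tokenize(text):
    # lowercase, then keep maximal runs of letters; punctuation is dropped
    return re.findall(r"[a-z]+", text.lower())

text = "The cat sat on the mat. The mat was flat."
tokens = tokenize(text)       # 10 tokens
types = set(tokens)           # 7 types: "the" and "mat" repeat
counts = Counter(tokens)      # per-type frequencies, Zipf-style
```

Heaps' law predicts that as the token count grows, the type count grows sublinearly, which is why vocabulary curves flatten on large corpora.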
ESR10 Joachim Daiber - EXPERT Summer School - Malaga 2015 (RIILP)
The document discusses using syntactic preordering models to delimit the morphosyntactic search space for machine translation of morphologically rich languages. It explores preordering dependency trees of the source language to reduce word order variations and predicting morphological attributes on the source side to inform target language word selection. Experimental results show that non-local features and jointly learning which attributes to predict can improve translation performance over baselines. The work aims to combine preordering and morphology prediction to better exploit interactions between syntactic structure and inflectional properties.
This document provides an introduction to natural language processing (NLP). It discusses the brief history of NLP, major NLP tasks such as machine translation and text classification, common NLP techniques like part-of-speech tagging and parsing, main problems in NLP including ambiguity, and an overview of the topics to be covered in the course such as tokenization, parsing, and topic modeling. The course aims to use Python and R to complete various NLP tasks.
Artificial intelligence and first order logic (Parsa Rafiq)
The document discusses knowledge representation and first order logic. It defines knowledge representation as how knowledge is encoded in artificial systems. It discusses representing objects, events, performance, meta-knowledge and facts. It also discusses types of knowledge like meta knowledge, heuristic knowledge, procedural knowledge and declarative knowledge. The document then discusses first order logic syntax including logical symbols, terms, formulas, quantifiers and predicates. It also discusses semantics and the uses and history of first order logic.
Prolog is a logic programming language based on first-order predicate logic. Some key points:
- Prolog programs consist of facts and rules defined with predicates and clauses. Predicates take arguments that can be variables, constants, or terms.
- Prolog is declarative rather than procedural - programs specify relations between objects rather than algorithms or steps. Prolog uses backtracking to try different substitutions for variables to determine if queries are true.
- Prolog was invented in the 1970s and was inspired by logic programming research. It is well-suited for symbolic processing, knowledge representation, and natural language processing.
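The backward-chaining behavior described above can be mimicked with a toy prover: facts and Horn rules, and a goal proved by working backwards from the query to the facts. Real Prolog also handles variables and unification; this propositional sketch (with invented facts) is a deliberate simplification.

```python
# Toy backward chaining over ground (variable-free) facts and Horn rules,
# in the spirit of Prolog's proof search.

FACTS = {"parent(tom, bob)", "parent(bob, ann)"}

# Each rule: head is provable if every goal in the body is provable.
RULES = [
    ("grandparent(tom, ann)", ["parent(tom, bob)", "parent(bob, ann)"]),
]

def prove(goal):
    """Work backwards from the goal: match a fact, or a rule head whose
    body goals all prove recursively; fail once every rule is exhausted."""
    if goal in FACTS:
        return True
    for head, body in RULES:
        if head == goal and all(prove(g) for g in body):
            return True
    return False
```

Prolog generalizes this loop with unification, so one rule like grandparent(X, Z) :- parent(X, Y), parent(Y, Z) covers every binding of X, Y, Z.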
The document discusses creating and using ontologies. It defines an ontology as a representation of things in a domain, their characteristics and relationships. Ontologies are used to share a common understanding of a domain among people and machines. They make domain assumptions and knowledge explicit and separate domain knowledge from operational knowledge. The document provides an overview of the ontology development process including requirements analysis, conceptualization, and implementation. It discusses finding existing ontologies and provides examples of competency questions for requirements analysis.
The document discusses the basics of ontologies, including their origin in philosophy, definitions, types, benefits and application areas. Some key points are:
- An ontology is a formal specification of a conceptualization used to help humans and programs share knowledge. It establishes a shared vocabulary for exchanging information.
- Ontologies describe domain knowledge and provide an agreed-upon understanding of a domain through concepts and relations. They help solve problems of ambiguity and enable knowledge sharing.
- Ontologies benefit applications like information retrieval, digital libraries, knowledge engineering and natural language processing by facilitating semantic search and integration of data.
Keystone Summer School 2015: Mauro Dragoni, Ontologies For Information Retrieval
The presentation provides an overview of what an ontology is and how it can be used for representing information and retrieving data, with a particular focus on the linguistic resources available for supporting this kind of task. It surveys semantic-based retrieval approaches, highlighting the pros and cons of semantic approaches with respect to classic ones. Use cases are presented and discussed.
This document discusses the state-of-the-art of Internet of Things (IoT) ontologies. It begins by defining ontology and describing important design criteria for ontologies including clarity, coherence, extendibility, and minimal encoding bias. It then discusses the challenges of IoT, including large scale networks, deep heterogeneity, and unknown topology. Several existing IoT ontologies are described, including SWAMO, MMI Device Ontology, and SSN. The document concludes that while no single global IoT ontology currently exists, ontologies are needed to address the semantic interoperability challenges of heterogeneous IoT devices and domains.
Ontology is the study of, or concern about, what kinds of things exist: what entities there are in the universe. The word derives from the Greek onto (being) and logia (written or spoken). Ontology is a branch of metaphysics, the study of first principles or the root of things.
Keynote presentation for the International Semantic Web Conference in Athens Greece, on November 9, 2023. The talk addresses the generative AI explosion and its potential impacts on the Semantic Web and Knowledge Graph communities and, in fact, may spark a research Renaissance.
Abstract:
We are living in an age of rapidly advancing technology. History may view this period as one in which generative artificial intelligence is seen as reshaping the landscape and narrative of many technology-based fields of research and application. Times of disruption often present both opportunities and challenges. We will discuss some areas that may be ripe for consideration in the field of Semantic Web research and semantically-enabled applications. Semantic Web research has historically focused on representation and reasoning and enabling interoperability of data and vocabularies. At the core are ontologies along with ontology-enabled (or ontology-compatible) knowledge stores such as knowledge graphs. Ontologies are often manually constructed using a process that (1) identifies existing best-practice ontologies (and vocabularies) and (2) generates a plan for how to leverage these ontologies by aligning and augmenting them as needed to address requirements. While semi-automated techniques may help, there is typically a significant portion of the work that is often best done by humans with domain and ontology expertise. This is an opportune time to rethink how the field generates, evolves, maintains, and evaluates ontologies. We consider how hybrid approaches, i.e., those that leverage generative AI components along with more traditional knowledge representation and reasoning approaches, can create improved processes. The effort to build a robust ontology that meets a use case can be large. Ontologies are not static, however; they need to evolve along with knowledge evolution and expanded usage. There is potential for hybrid approaches to help identify gaps in ontologies and/or refine content. Further, ontologies need to be documented with term definitions and their provenance. Opportunities exist to consider semi-automated techniques for some types of documentation, provenance, and decision rationale capture for annotating ontologies.
The area of human-AI collaboration for population and verification presents a wide range of opportunities for research collaboration and impact. Ontologies need to be populated with class and relationship content. Knowledge graphs and other knowledge stores need to be populated with instance data in order to be used for question answering and reasoning. Population of large knowledge graphs can be time consuming. Generative AI holds the promise of creating candidate knowledge graphs that are compatible with the ontology schema. The knowledge graph should contain provenance information identifying how the content was populated and its source, and its correctness and currency should be checked. A human-AI assistant approach is presented.
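The provenance-and-verification workflow sketched in the abstract can be approximated in a few lines of plain Python. This is a minimal sketch under assumptions of my own: the class and method names (`Triple`, `ProvenancedGraph`, `add_candidate`) are invented for illustration and are not from the talk.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str

@dataclass
class ProvenancedGraph:
    """A knowledge graph that records how each triple was produced."""
    triples: dict = field(default_factory=dict)  # Triple -> provenance record

    def add_candidate(self, triple, source, method, verified=False):
        # Candidate triples (e.g. drafted by a generative model) start unverified.
        self.triples[triple] = {"source": source, "method": method,
                                "verified": verified}

    def verify(self, triple):
        # A human reviewer marks the triple as checked for correctness/currency.
        self.triples[triple]["verified"] = True

    def unverified(self):
        return [t for t, p in self.triples.items() if not p["verified"]]

kg = ProvenancedGraph()
t = Triple("Bordeaux", "isA", "RedWine")
kg.add_candidate(t, source="llm-draft", method="generative")
assert kg.unverified() == [t]   # flagged for human review
kg.verify(t)
assert kg.unverified() == []
```

The point of the sketch is only that every triple carries a provenance record and an explicit verification state, so the human-AI division of labour is visible in the data itself.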
The document discusses stepwise methodologies for building ontologies. It outlines common steps such as identifying the purpose and scope, capturing concepts and relationships, coding the ontology formally, integrating existing ontologies, evaluation, and documentation. It emphasizes starting with a middle-out approach to capture definitions and discusses reaching consensus among those involved in building the ontology. Modularization of ontologies into reusable components is also presented as an important aspect of the methodology.
The document introduces ontology and describes what it is from both philosophical and computer science perspectives. An ontology in computers consists of a vocabulary to describe a domain, specifications of the meaning of terms, and constraints capturing additional knowledge about the domain. It then provides an example ontology and discusses applications of ontologies such as for the semantic web. It also discusses important considerations for building ontologies such as collaboration, versioning, and ease of use.
Ontology engineering involves constructing ontologies through various methods. It begins with defining the scope and evaluating existing ontologies for reuse. Terms are enumerated and organized in a taxonomy with defined properties, facets, and instances. The ontology is checked for anomalies and refined iteratively. Popular tools for ontology development include Protégé and WebOnto. Methods like METHONTOLOGY and the On-To-Knowledge methodology provide processes for building ontologies from scratch or reusing existing ones. Ontology sharing requires mapping between ontologies to allow interoperability, and libraries exist for storing and accessing ontologies.
The document discusses methods for evaluating ontologies. It proposes developing objective metrics to evaluate ontologies based on three criteria: correctness, completeness, and utility. Correctness evaluates how well an ontology expresses its design objectives. Completeness evaluates how fully an ontology captures required semantic components. Utility combines correctness and completeness and evaluates an ontology's usefulness for its intended use case. Examples are provided to illustrate evaluating ontologies based on the proposed metrics. The goal is to develop standardized evaluation methods to facilitate ontology development and reuse across different domains.
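The three criteria above can be made concrete as simple set-overlap metrics. This is a hedged sketch: the paper's actual formulas are not reproduced here, so the definitions below (overlap ratios, harmonic-mean utility) are illustrative only.

```python
def completeness(ontology_terms, required_terms):
    """Share of the required semantic components the ontology captures."""
    return len(ontology_terms & required_terms) / len(required_terms)

def correctness(ontology_terms, valid_terms):
    """Share of the ontology's terms consistent with its design objectives."""
    return len(ontology_terms & valid_terms) / len(ontology_terms)

def utility(ontology_terms, required_terms, valid_terms):
    # Harmonic mean is one simple way to combine the two scores;
    # the paper's actual combination may differ.
    c = correctness(ontology_terms, valid_terms)
    k = completeness(ontology_terms, required_terms)
    return 2 * c * k / (c + k) if c + k else 0.0

onto = {"Wine", "RedWine", "Grape", "Color"}
required = {"Wine", "RedWine", "Vintage"}  # what the use case needs
valid = {"Wine", "RedWine", "Grape"}       # terms judged correct for the design
assert round(completeness(onto, required), 3) == 0.667
assert correctness(onto, valid) == 0.75
```

Treating the metrics as plain functions of term sets makes them easy to compare across candidate ontologies for the same use case.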
The document discusses the use of ontologies in ubiquitous computing. It defines what an ontology is and describes ontology languages. It then presents a taxonomy for classifying ontologies used in ubiquitous computing into two main categories: ontologies of the ubiquitous computing domain and ontologies as software artifacts. Examples are given for each category including generic and specific domain ontologies as well as ontology-driven, ontology-aware, and ontology use at development time applications. The conclusion states that many works propose using ontologies in ubiquitous computing and the presented taxonomy can help organize these works.
A Comparative Study Ontology Building Tools for Semantic Web Applications (IJwest)
This document provides a comparative study of four popular ontology building tools: Protégé 3.4, IsaViz, Apollo, and SWOOP. It discusses the features and functionalities of each tool, including their capabilities for ontology editing, browsing, documentation, import/export of formats, and visualization. The document aims to identify existing ontology tools that are freely available and can be used to develop ontologies for various application domains such as transport, tourism, health, and natural language. It evaluates the tools based on criteria like interoperability, openness, ease of updating/maintaining ontologies, and market penetration.
Ontologies have recently gained popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge. The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) ease of updating and maintenance, d) market status and penetration. The results of the review are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology building/management tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most users use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper also concerns the detection of commonalities and differences between the examined ontologies, both within the same domain (application area) and among different domains.
Proposal of an Ontology Applied to Technical Debt on PL/SQL Development (Jorge Barreto)
The document proposes an ontology for technical debt in PL/SQL development. It discusses relevant concepts like ontologies, technical debt, and PL/SQL. An initial model is developed in Protégé with five types of technical debt - documentation, requirements, tests, design, and code. The model could be expanded to include more debt types and relationships. The ontology provides a standardized vocabulary to describe technical debt for PL/SQL developers.
Representation of ontology by Classified Interrelated object model (Mihika Shah)
1. The document discusses representing ontology using the Classified Interrelated Object Model (CIOM) data modeling technique. CIOM represents ontology components like classes, subclasses, attributes, and relationships between classes.
2. Key components of an ontology like classes, subclasses, attributes, and inter-class relationships are described and examples are given of how each would be represented using CIOM notation.
3. CIOM provides a general purpose methodology for representing ontologies using existing database technologies and overcomes limitations of specialized ontology languages and tools.
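The idea of representing ontology components with existing database technology can be illustrated with Python's built-in sqlite3. The table layout below is invented for illustration and is not CIOM's actual notation.

```python
import sqlite3

# In-memory relational schema for classes, is-a links, and attributes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE class (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE subclass_of (child INTEGER REFERENCES class(id),
                          parent INTEGER REFERENCES class(id));
CREATE TABLE attribute (class_id INTEGER REFERENCES class(id), name TEXT);
""")

for name in ("Vehicle", "Car", "Bicycle"):
    conn.execute("INSERT INTO class(name) VALUES (?)", (name,))

# Car is-a Vehicle; Vehicle has an attribute.
conn.execute("""INSERT INTO subclass_of
    SELECT c.id, p.id FROM class c, class p
    WHERE c.name = 'Car' AND p.name = 'Vehicle'""")
conn.execute("""INSERT INTO attribute
    SELECT id, 'numberOfWheels' FROM class WHERE name = 'Vehicle'""")

# Retrieve the subclasses of Vehicle with an ordinary join.
rows = conn.execute("""SELECT c.name FROM class c
    JOIN subclass_of s ON s.child = c.id
    JOIN class p ON p.id = s.parent
    WHERE p.name = 'Vehicle'""").fetchall()
assert rows == [("Car",)]
```

The appeal of such a mapping is exactly the point made above: standard joins, constraints, and indexing come for free from the database engine.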
The objective of this webinar is to provide a brief overview of the Knowledge Organization Systems (KOS) and the tools used for managing them. The presentation will focus on the management of the multilingual Organic.Edunet ontology as a case study. In this context it will present aspects such as the collaborative work, multilinguality needs and update of the concepts using an online KOS management tool (MoKi).
1) The document presents a new ontology-based question answering method using query templates for the dining domain.
2) A dining ontology is developed to represent concepts like cuisine, facilities, meals, and their relationships.
3) Query templates are generated from the dining ontology and stored to enable faster retrieval of answers from the ontology compared to using SPARQL queries. This improves reusability.
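The template idea can be sketched as a lookup table of parameterized queries: answering a question means filling slots rather than composing a query from scratch. The template names, slots, and query syntax below are invented for illustration and do not come from the paper.

```python
# Pre-generated query templates keyed by question pattern.
# Filling a slot is cheaper than building a full SPARQL query each time.
TEMPLATES = {
    "cuisine_of": "SELECT restaurant WHERE servesCuisine = {cuisine}",
    "meals_with_facility": "SELECT meal WHERE availableAt.facility = {facility}",
}

def answer(template_name, **slots):
    """Instantiate a stored template with the user's slot values."""
    return TEMPLATES[template_name].format(**slots)

q = answer("cuisine_of", cuisine="Thai")
assert q == "SELECT restaurant WHERE servesCuisine = Thai"
```

Because the templates are derived from the ontology once and then stored, the same table can be reused across questions, which is the reusability claim made above.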
Presentation made in the context of the FAO AIMS Webinar titled “Knowledge Organization Systems (KOS): Management of Classification Systems in the case of Organic.Edunet” (http://aims.fao.org/community/blogs/new-webinaraims-knowledge-organization-systems-kos-management-classification-systems)
21/2/2014
Similar to ESWC SS 2012 - Tuesday Tutorial Elena Simperl: Creating and Using Ontologies (20)
The document describes a semantic recommendation system for helping customers select fish for an aquarium. The system takes into account various criteria like temperature, predator/prey relationships between fish, food requirements, ecosystem needs, size, and color preferences. It integrates data from multiple sources and uses semantic technologies like ontologies and linked data to make personalized recommendations based on a user's needs and preferences. The system aims to connect people interested in fish keeping through a social network application.
SyrtAPI is a new entertainment platform that combines music and book data from multiple sources like MusicBrainz and NYTimes reviews. It uses Linked Data and SPARQL queries to extract lyrics and reviews and recommend songs that match the content of the books. The team learned about using Linked Data vocabularies and linking datasets while building the prototype, which currently retrieves 25 lyrics and 10 book reviews through its pipeline. Future work includes adding more data sources, developing a mobile app, and using natural language processing to better analyze texts.
Keep fit (a bit) - ESWC SSchool 14 - Student project (eswcsummerschool)
The document presents a web-based project called "Keep Fit(a Bit) in Kalamaki" which aims to make Kalamaki, Greece a smart city by developing a personalized health planner. The planner integrates data on restaurants, dishes, energy/calorie content, prices, and walking distances to provide personalized recommendations to help users like Fred stay fit on holidays in Kalamaki. The project team collected data from various sources and implemented a prototype interface that allows users to view personalized recommendations. Future steps include publishing more Kalamaki data, adding social features, and integrating additional health and weather data.
This document discusses the creation of an Arabic sentiment lexicon and finding related entities from Arabic text. It involves processing Arabic financial text data by tagging parts of speech, removing stop words, translating verbs and adjectives to English using Google Translate, stemming the words, and using an existing English sentiment lexicon like SentiWordNet to assign positive and negative sentiment scores. Related entities are extracted using a window-based approach to find nouns occurring near sentiment words. The process aims to create an Arabic sentiment lexicon and identify related entities to help with sentiment analysis on Arabic text.
FIT-8BIT An activity music assistant - ESWC SSchool 14 - Student project (eswcsummerschool)
The document discusses the advantages of music in sports. It outlines five key ways music can influence preparation and performance: 1) dissociation to lower effort perception, 2) arousal regulation as a stimulant or sedative, 3) synchronization with exercise for increased output, 4) positive impact on acquiring motor skills, and 5) attainment of flow state. It also discusses links between sport and music, defining tempo rhythms, and providing scenarios for a music application interface and workflow.
Personal Tours at the British Museum - ESWC SSchool 14 - Student project (eswcsummerschool)
This document discusses creating personal tours at the British Museum using a mobile app. It would allow visitors to choose a starting point and then be suggested next artifacts to view based on their interests, time constraints, and what they liked or disliked. Challenges included data issues and changing requirements, but enriching descriptions, collecting visitor analytics, and adding game elements could improve the experience.
Empowering fishing business using Linked Data - ESWC SSchool 14 - Student pro... (eswcsummerschool)
This document describes a project to create a visualization of current fish population and fishing legislation around the world. The project, called PYTHEIA, will provide information to fishing businesses to help them choose suitable new locations by linking data on fish populations, laws, and management from various sources. It outlines the user scenario, workflow, ontology developed to represent the data, and plans for the user interface and enhancing the system in the future.
Tutorial: Social Semantic Web and Crowdsourcing - E. Simperl - ESWC SS 2014 (eswcsummerschool)
This document discusses combining the social web and semantic web through crowdsourcing. It defines key concepts like the social web, crowdsourcing, and semantic technologies. It then provides examples of how semantic tasks can be crowdsourced, such as annotating research papers, mapping topics to ontologies, and curating linked data. Challenges with crowdsourcing semantic tasks are also explored, such as how to optimally structure tasks and validate crowd responses.
Keynote: Global Media Monitoring - M. Grobelnik - ESWC SS 2014 (eswcsummerschool)
This document discusses tools and techniques for monitoring global media data and events. It introduces several systems developed at the Jozef Stefan Institute for collecting news articles from around the world, enriching documents with semantic annotations, linking information across languages, and analyzing news reporting bias. It also addresses representing events with structured and semantic descriptions and tracking how topics evolve over time through an event registry system. The overall goal is to establish an integrated real-time pipeline for processing multilingual media, identifying events, and providing visualization of global event dynamics.
Hands On: Amazon Mechanical Turk - M. Acosta - ESWC SS 2014 (eswcsummerschool)
This document provides an overview of Amazon Mechanical Turk (MTurk) and how it can be used for crowdsourcing projects. It discusses key MTurk concepts like requesters, workers, HITs, assignments, and qualifications. It then walks through the steps to create an MTurk project, including defining the HIT properties, previewing templates, creating batches, publishing HITs, and reviewing results. Finally, it discusses best practices like testing HITs in the sandbox environment and monitoring worker forums.
Tutorial: Querying a Marine Data Warehouse Using SPARQL - I. Fundulaki - ESWC... (eswcsummerschool)
This document describes querying a marine data warehouse using SPARQL. It discusses the MarineTLO ontology used to integrate data about marine species from multiple sources. Examples are provided of SPARQL queries against the MarineTLO warehouse to retrieve information about species, their distributions, relationships and more. A series of 21 example queries are also listed that demonstrate different ways of interrogating the semantic data in the warehouse.
This document discusses different data formats for representing cultural data on the web and their pros and cons, including CSV, RDBMS, XML/SOAP, and JSON/REST. It advocates for using URIs, HTTP, and semantic web standards like RDF and SPARQL to represent cultural data in a way that is distributed, extensible, and links related resources on the web.
The document outlines the schedule and activities for a summer school on semantic web technologies. The summer school will include tutorials on topics such as linked data, ontologies, and data publishing/preservation. Students will work in groups on mini-projects with guidance from tutors. There will be keynote speakers each day and social events planned. The goal is for students to learn practical skills through hands-on experience while interacting with peers and experts in the field.
This document discusses querying cultural heritage data stored as graphs using SPARQL. It provides examples of retrieving single and sets of triples from the graph and explains how a SPARQL server can perform additional reasoning. Exercises demonstrate querying for object owners and their names, exporting query results to CSV, and counting objects made of different materials.
This document outlines the goals and instructions for a hands-on session to publish a dataset as linked data. The session will divide participants into three groups to work on creating, interlinking, and publishing the RDF dataset. Each group will have 40 minutes to select vocabularies, design URIs, transform tabular data into RDF, select target datasets to link to, create metadata using VoID, and select a license. Then each group will present their work in 1 minute without slides. The overall goal is to accomplish the tasks of creating, interlinking, and publishing the RDF dataset.
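The "transform tabular data into RDF" step described above might look like the following sketch, which turns CSV rows into N-Triples. The base URI and vocabulary namespace are invented for illustration.

```python
import csv
import io

BASE = "http://example.org/resource/"    # invented base URI for subjects
VOCAB = "http://example.org/vocab/"      # invented vocabulary namespace

def csv_to_ntriples(text, key):
    """Emit one N-Triples line per non-key cell; `key` names the ID column."""
    out = []
    for row in csv.DictReader(io.StringIO(text)):
        subject = f"<{BASE}{row[key]}>"
        for col, val in row.items():
            if col != key and val:
                out.append(f'{subject} <{VOCAB}{col}> "{val}" .')
    return out

table = "id,name,material\n42,Amphora,clay\n"
triples = csv_to_ntriples(table, "id")
assert triples == [
    '<http://example.org/resource/42> <http://example.org/vocab/name> "Amphora" .',
    '<http://example.org/resource/42> <http://example.org/vocab/material> "clay" .',
]
```

A real pipeline would additionally map column names to terms from selected vocabularies and type the literals, as the session's task list suggests; this sketch covers only the mechanical row-to-triple step.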
The document discusses processing linked data at high speeds using the Signal/Collect graph algorithm framework. It provides examples of how Signal/Collect can be used to perform tasks like RDFS subclass inference and PageRank calculation on semantic graphs. It also summarizes performance results showing that TripleRush, an implementation of Signal/Collect, outperforms other graph processing systems on benchmark datasets. Finally, it discusses ongoing work on graph partitioning with TripleRush.
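The RDFS subclass inference mentioned above can be illustrated with a plain sequential fixpoint computation. Signal/Collect performs the same entailments vertex-centrically and in parallel; this sketch makes no attempt at that, and the predicate names are shortened for readability.

```python
def rdfs_closure(triples):
    """Fixpoint of subClassOf transitivity and type propagation."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = set()
        for (s, p, o) in inferred:
            for (s2, p2, o2) in inferred:
                if p2 == "subClassOf" and o == s2:
                    if p == "subClassOf":          # A sub B, B sub C => A sub C
                        new.add((s, "subClassOf", o2))
                    elif p == "type":              # x type A, A sub B => x type B
                        new.add((s, "type", o2))
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

data = {("Cabernet", "type", "RedWine"),
        ("RedWine", "subClassOf", "Wine"),
        ("Wine", "subClassOf", "Drink")}
closed = rdfs_closure(data)
assert ("RedWine", "subClassOf", "Drink") in closed
assert ("Cabernet", "type", "Drink") in closed
```

The quadratic inner loop is fine for a toy graph; the whole point of frameworks like Signal/Collect is to make this kind of iterate-until-stable computation scale.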
This document discusses knowledge engineering and the use of knowledge on the web. It covers web data representation using standards like RDF, HTML5 and SKOS. It discusses categorizing knowledge from different sources and aligning categories. It also discusses using knowledge through techniques like visualization, graph-based search across linked data, and improving search through vocabulary alignment and location-based queries.
This document provides an overview of querying linked data using SPARQL. It begins with an introduction and motivation for querying linked data. It then covers the basics of SPARQL including its components like prefixes, query forms, and solution modifiers. Several examples are provided demonstrating how to construct ASK, SELECT, and other types of SPARQL queries. The document also discusses SPARQL algebra and updating linked data with SPARQL 1.1.
This document provides an overview of SPARQL, the query language for retrieving and manipulating data stored in RDF format. It describes the basic components of SPARQL including triple patterns, basic graph patterns, group graph patterns, filters, and how these patterns are matched against RDF data to retrieve variable bindings. It also gives a brief introduction to SPARQL 1.1 features for querying and updating RDF stores.
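Triple-pattern matching, the core of SPARQL evaluation described above, can be shown with a toy evaluator for basic graph patterns. Variables are `?`-prefixed strings and a solution is a consistent binding across all patterns; this is a sketch of the idea, not real SPARQL.

```python
def match(pattern, triple, binding):
    """Try to extend `binding` so that `pattern` matches `triple`."""
    new = dict(binding)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if new.get(p, t) != t:   # variable already bound to something else
                return None
            new[p] = t
        elif p != t:                 # constant term must match exactly
            return None
    return new

def bgp(patterns, data, binding=None):
    """Evaluate a basic graph pattern: join the patterns over shared variables."""
    binding = binding or {}
    if not patterns:
        return [binding]
    results = []
    for triple in data:
        b = match(patterns[0], triple, binding)
        if b is not None:
            results.extend(bgp(patterns[1:], data, b))
    return results

data = [("Bordeaux", "color", "red"),
        ("Riesling", "color", "white"),
        ("Bordeaux", "region", "France")]
sols = bgp([("?w", "color", "red"), ("?w", "region", "?r")], data)
assert sols == [{"?w": "Bordeaux", "?r": "France"}]
```

The shared variable `?w` is what joins the two patterns: only bindings consistent across both survive, which is exactly how SPARQL basic graph patterns produce variable bindings.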
ESWC SS 2012 - Tuesday Tutorial Elena Simperl: Creating and Using Ontologies
1. Creating and using ontologies
Elena Simperl AIFB, Karlsruhe Institute of Technology, Germany
With contributions from “Linked Data: Survey of Adoption”, Tutorial at the 3rd Asian Semantic Web School ASWS 2011, Incheon, South Korea, July 2011, by Aidan Hogan, DERI, IE
Session 2, Day 2, 14:30 – 17:00
3. Ontologies in Computer Science
An ontology defines a domain of interest in terms of the things you talk about in the domain, their features and characteristics, as well as the relationships between them.
4. Ontologies in Computer Science (2)
Ontologies are used to:
- Share a common understanding of a domain among people and/or machines
- Enable reuse of domain knowledge
This is achieved by:
- Agreeing on the meaning and representation of domain knowledge
- Making domain assumptions explicit
- Separating domain knowledge from operational knowledge
They are used (under different names) in various areas:
- Data management and integration
- Digital libraries
- Multimedia analysis
- Software engineering
- Machine learning
- Natural language processing
- …
5. Are Semantic Web ontologies just UML?
- Ontologies vs ER schemas: Semantic Web ontologies are represented in Web-compatible languages using Web technologies, and they represent a shared view over a domain
- Ontologies vs UML diagrams: ontology languages have defined formal semantics, and languages with feasible computational complexity are available
- Ontologies vs thesauri: formal semantics, domain-specific relationships
- Ontologies vs taxonomies: richer property types, formal semantics of the is-a relationship
- Ontologies vs Linked Data vocabularies: well…
6. HOW TO BUILD AN ONTOLOGY
Natalya F. Noy and Deborah L. McGuinness. “Ontology Development 101: A Guide to Creating Your First Ontology”. Stanford Knowledge Systems Laboratory Technical Report KSL-01-05 and Stanford Medical Informatics Technical Report SMI-2001-0880, March 2001.
7. Process overview
- Requirements analysis: motivating scenarios, use cases, existing solutions, effort estimation, competency questions, application requirements
- Conceptualization: conceptualization of the model, integration and extension of existing solutions
- Implementation: implementation of the formal model in a representation language
- Knowledge acquisition
- Test (Evaluation)
- Documentation
8. Requirements analysis (1): Domain and scope
- What is the ontology going to be used for?
- Who will use the ontology?
- How will it be maintained, and by whom?
- What kind of data will refer to it? And how will these references be created and maintained?
- Are there any information sources available that could be reused?
- What questions should the ontology be able to answer?
To answer these questions, talk to domain experts, users, and software designers:
- Domain experts don't need to be technical; they need to know about the domain and help you understand its subtleties
- Users teach you about the terminology that is actually used and the information needs they have
- Software designers tell you about the types of use cases you need to handle, including the data to be described via the ontology
9. Semantic technologies at BestBuy
Goal: “to provide more visibility to products, services and locations to humans and machines”
- Search engines identify the data more easily and put it into context (30% increase in search traffic)
- Improved consumer experience
Due to “Increasing product and service visibility through front-end semantic web” by Jan Myers, SemTech 2010
10. Semantic technologies at BestBuy (2)
Data is marked up using RDFa and refers to concepts from a pre-defined eCommerce ontology. Markup is entered by BestBuy staff via online forms that produce RDFa.
Due to “Increasing product and service visibility through front-end semantic web” by Jan Myers, SemTech 2010
11. Requirements analysis (2): Domain vs task-oriented ontologies
Domain-oriented:
- The ontology models the types of entities in the domain of the application (example: content and features of movies, points of interest in a city, different types of digital cameras)
- Covers the terminology of the application domain (example: classifications, taxonomies, folksonomies, text corpora)
- Used for annotation and retrieval
Task-oriented:
- The ontology serves a purpose in the context of an application (example: finding movies with certain features, recommending sightseeing tours matching my interests, finding and comparing products matching user preferences)
- Defines the structure of a knowledge base that can be used to answer competency questions
- Used for automated reasoning and querying
Content due to Valentina Presutti and Eva Blomqvist
12. Requirements analysis (3): Competency questions
- A set of queries which place demands on the underlying ontology
- The ontology must be able to represent the questions using its terminology, and the answers based on its axioms
- Ideally staged, where subsequent questions require the input from the preceding ones
- A rationale for each competency question should be given
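One lightweight way competency questions "place demands on the underlying ontology" is a vocabulary coverage check: every key term in a question should map to a term the ontology can represent. The term list and exact-keyword matching below are illustrative assumptions, not a method from the tutorial.

```python
# Invented vocabulary for a small wine ontology.
VOCAB = {"wine", "color", "body", "flavor", "vintage", "dish", "pairsWith"}

def missing_terms(question_terms, vocab=VOCAB):
    """Return the key terms of a competency question the ontology cannot express."""
    lowered = {v.lower() for v in vocab}
    return {t for t in question_terms if t.lower() not in lowered}

# "Does the flavor or body of a wine change with vintage year?" -> covered.
assert missing_terms({"wine", "flavor", "body", "vintage"}) == set()
# "What is the price of this wine?" -> reveals a gap in the vocabulary.
assert missing_terms({"wine", "price"}) == {"price"}
```

Running such a check over the full question set before implementation surfaces vocabulary gaps early, which is the practical purpose of competency questions.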
13. Requirements analysis (4): Examples
Competency Questions:
- Which wine characteristics should I consider when choosing a wine?
- Is Bordeaux a red or white wine?
- Does Cabernet Sauvignon go well with seafood?
- What is the best choice of wine for grilled meat?
- Which characteristics of a wine affect its appropriateness for a dish?
- Does the flavor or body of a specific wine change with vintage year?
Other requirements:
- Concepts in the ontology should be bilingual
- The ontology should not have more than 10 inheritance levels
- The ontology should be extended and maintained by non-experts
- The ontology should be used to build an online restaurant guide
- The ontology should be usable on an available collection of restaurant descriptions written in German
Issues:
- An ontology reflects an abstracted view of a domain of interest; you should not model all possible views upon the domain, or attempt to capture all knowledge potentially available about it
- Even after the scope of the ontology has been defined, the number of competency questions can grow very quickly: modularization, prioritization
- Requirements are often contradictory: prioritization
14. 14
Requirements analysis (5): Finding existing ontologies
Where to find ontologies
Swoogle: over 10 000 documents, across domains
http://swoogle.umbc.edu/
Protégé Ontologies: several hundreds of ontologies, across domains
http://protegewiki.stanford.edu/index.php/Protege_Ontology_Library#OWL_ontologies
Open Ontology Repository: work in progress, life sciences, but also other domains
http://ontolog.cim3.net/cgi-bin/wiki.pl?OpenOntologyRepository
Tones: 218 ontologies, life sciences and core ontologies.
http://owl.cs.manchester.ac.uk/repository/browser
Watson: several tens of thousands of documents, across domains
http://watson.kmi.open.ac.uk/Overview.html
Talis repository
http://schemacache.test.talis.com/Schemas/
Ontology Yellow Pages: around 100 ontologies, across domains
http://wg.sti2.org/semtech-onto/index.php/The_Ontology_Yellow_Pages
OBO Foundation Ontologies
http://www.obofoundry.org/
AIM@SHAPE
http://dsw.aimatshape.net/tutorials/ont-intro.jsp
VoCamps
http://vocamp.org/wiki/Main_Page
28. 28
Requirements analysis (5): Selecting relevant ontologies
What will the ontology be used for?
Does it need a natural language interface, and if yes, in which language?
Do you have any knowledge representation constraints (language, reasoning)?
What level of expressivity is required?
What level of granularity is required?
What will you reuse from it?
Vocabulary++
How will you reuse it?
Imports: transitive dependency between ontologies
Changes in imported ontologies can result in inconsistencies and changes of meanings and interpretations, as well as computational aspects
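Because owl:imports is transitive, an ontology commits to every axiom in the transitive closure of its import graph, which is why a change deep in the chain can ripple upward. A minimal sketch of computing that closure, over a made-up import graph (all ontology names are assumptions):

```python
# Made-up import graph: ontology -> list of directly imported ontologies.
imports = {
    "app-onto": ["movies", "geo"],
    "movies": ["core"],
    "geo": ["core"],
    "core": [],
}

def import_closure(onto, seen=None):
    """Return every ontology transitively imported by `onto`."""
    seen = set() if seen is None else seen
    for dep in imports.get(onto, []):
        if dep not in seen:
            seen.add(dep)
            import_closure(dep, seen)
    return seen

print(sorted(import_closure("app-onto")))  # ['core', 'geo', 'movies']
```

Any change to "core" here affects every ontology in the graph, even though only "movies" and "geo" import it directly.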
30. 30
Conceptualization (1): Vocabulary
What are the terms we would like to talk about?
What properties do those terms have?
What would we like to say about those terms?
Competency questions provide a useful starting point
Going out too far vs. going down too far
Investigate homonyms and synonyms
31. 31
Conceptualization (2): Classes
Select the terms that describe objects having independent existence, rather than terms that describe these objects
These terms will be classes in the ontology
Classes represent concepts in the domain, not the words that denote these concepts
Synonyms for the same concept do not represent different classes
Typically nouns and nominal phrases, but not restricted to them
Verbs can be modeled as classes if the emphasis is on the process as a whole rather than its actual execution
Visit as an event rather than an action performed by an actor
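The "visit as an event" point can be illustrated by reifying the verb: once a visit is a class of its own, each visit gains an identity and properties that can be referred to and queried. A minimal sketch, with all names invented:

```python
from dataclasses import dataclass

@dataclass
class Visit:
    # The process as a whole, reified into a class: each instance
    # is an event with its own properties, not just an action of an actor.
    visitor: str
    place: str
    date: str

v = Visit("Alice", "Lublin", "2014-09-01")
print(v.place)  # the event itself can be dated, located, and queried
```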
32. 32
Conceptualization (3): Class hierarchy
A subclass of a class represents a concept that is a “kind of” the concept that the superclass represents
It has additional properties, restrictions different from those of the superclass, or participates in different relationships than the superclass
All the siblings in the hierarchy (except for the ones at the root) must be at the same level of generality
If a class has only one direct subclass, there may be a modeling problem or the ontology is not complete
If there are more than a dozen subclasses for a given class, additional intermediate categories may be necessary
Functional inclusion: a chair is a piece of furniture; a hammer is a tool
State inclusion: polio is a disease; hate is an emotion
Activity inclusion: tennis is a sport; murder is a crime
Action inclusion: lecturing is a form of talking; frying is a form of cooking
Perceptual inclusion: a cat is a mammal; an apple is a fruit
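The two structural heuristics above (a single direct subclass, more than a dozen subclasses) are mechanical enough to check automatically. A sketch over a toy hierarchy given as a parent-to-children mapping (all class names and the thresholds' exact values are taken from the slide, the rest is invented):

```python
# Toy hierarchy; names are illustrative only.
hierarchy = {
    "Thing": ["Furniture", "Wine"],
    "Furniture": ["Chair"],                    # only one direct subclass
    "Wine": [f"WineKind{i}" for i in range(15)],  # more than a dozen subclasses
}

def check_hierarchy(h):
    """Flag classes that violate the two slide heuristics."""
    warnings = []
    for cls, subs in h.items():
        if len(subs) == 1:
            warnings.append(f"{cls}: single direct subclass, possible modeling gap")
        if len(subs) > 12:
            warnings.append(f"{cls}: {len(subs)} subclasses, consider intermediate categories")
    return warnings

for w in check_hierarchy(hierarchy):
    print(w)
```

Such checks are only hints: a flagged class may still be correct, but it deserves a second look.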
33. 33
Conceptualization (4): Properties
We selected classes from the list of terms in a previous step
Most of the remaining terms are likely to be properties of these classes
For each property in the list, we must determine which class it describes
Properties are inherited and should be attached to the most general class in the hierarchy
Two types of principal characteristics
Measurable properties: attributes
Inter-class connections: relationships
Use relationships to capture something with an identity
Arrest details as an attribute of the suspect vs. arrest as a relationship: do we measure degrees of arrestedness, or do we want to be able to distinguish between arrests?
Color of an image as attribute vs. class
A „pointing finger“ rather than a „ruler“ indicates identity
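The arrest example can be made concrete by contrasting the two modeling choices in code (a sketch only; all names and fields are invented). As an attribute, the arrest is just a value hanging off the suspect; as a relationship with its own class, each arrest has an identity and can be distinguished and described further:

```python
from dataclasses import dataclass, field

@dataclass
class SuspectAttr:
    # Attribute style: the arrest has no identity of its own,
    # so a second arrest would overwrite the first.
    name: str
    arrest_date: str = ""

@dataclass
class Arrest:
    # Relationship style: the arrest is a first-class thing.
    date: str
    location: str

@dataclass
class SuspectRel:
    name: str
    arrests: list = field(default_factory=list)

s = SuspectRel("J. Doe")
s.arrests.append(Arrest("2014-05-01", "Lublin"))
s.arrests.append(Arrest("2014-08-12", "Berlin"))
print(len(s.arrests))  # 2: individual arrests are distinguishable
```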
34. 34
Conceptualization (5): Domains and ranges
Refine the semantics of the properties
Cardinality
Domain and range
When defining a domain or a range for a slot, find the most general class or classes that can respectively be the domain or the range for the slot
Do not define a domain and range that is overly general
General patterns for domain and range
A class and a superclass: replace with the superclass
All subclasses of a class: replace with the superclass
Most subclasses of a class: consider replacing with the superclass
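The "replace with the superclass" patterns can be sketched as a small generalization step: when a declared range lists all (or, with a lower threshold, most) subclasses of some class, substitute the superclass. The hierarchy and threshold below are assumptions for illustration:

```python
# Assumed hierarchy fragment: superclass -> set of its direct subclasses.
subclasses = {"Wine": {"RedWine", "WhiteWine", "RoseWine"}}

def generalize_range(declared, threshold=1.0):
    """If `declared` covers at least `threshold` of some class's
    subclasses, replace those subclasses with the superclass."""
    for parent, subs in subclasses.items():
        covered = declared & subs
        if subs and len(covered) / len(subs) >= threshold:
            return (declared - subs) | {parent}
    return declared

print(generalize_range({"RedWine", "WhiteWine", "RoseWine"}))  # {'Wine'}
```

With threshold=1.0 this implements the "all subclasses" pattern; lowering the threshold turns it into the softer "most subclasses" heuristic, which a modeler should review rather than apply blindly.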
38. 38
Ontologies and Linked Data
Model pre-defined through the (semi-)structure of the data to be published
Emphasis on alignment, especially at the instance level
Stronger commitment to reuse instead of development from scratch
Human vs. machine-oriented consumption (using specific technologies)
Trade-off between acceptance/ease-of-use and expressivity/usefulness
Publication according to Linked Data principles
Content due to Chris Bizer
39. 39
Example: BBC
Various micro-sites built and maintained manually. No integration across sites in terms of content and metadata.
Use cases: find and explore content on specific (and related) topics; maintain and re-organize sites; leverage external resources.
Ontology: one page per thing, reusing DBpedia and MusicBrainz IDs, different labels…
„Design for a world where Google is your homepage, Wikipedia is your CMS, and humans, software developers and machines are your users“
http://www.slideshare.net/reduxd/beyond-the-polar-bear
41. 41
Ontology engineering today
Various domains and application scenarios: life sciences, eCommerce, Linked Open Data
Engineering by reuse for most domains, based on existing data and vocabularies
Alignment of data sets
Data curation
Human-aided computation (e.g., games, crowdsourcing)
Most of them much simpler and easier to understand than the often-cited examples from the 90s
However, still difficult to use (e.g., for mark-up)
42. 42
Open topics
Meanwhile we have a better understanding of the scenarios which benefit from the usage of semantics, and of the technologies they typically deploy
Guidelines and how-to's
Design principles and patterns
Schema-level alignment (data-driven)
Vocabulary evolution
Assessment and evaluation
Large-scale approaches to knowledge elicitation based on combinations of human and computational intelligence
43. KIT – University of the State of Baden-Württemberg and
National Large-scale Research Center of the Helmholtz Association
Institut AIFB – Angewandte Informatik und Formale Beschreibungsverfahren
www.kit.edu
44. 44
Assignment: Modeling
The current configuration of the “Red Hot Chili Peppers” is: Anthony Kiedis (vocals), Flea (bass, trumpet, keyboards, and vocals), John Frusciante (guitar), and Chad Smith (drums). The line-up has changed a few times over the years: Frusciante replaced Hillel Slovak in 1988, and when Jack Irons left the band he was briefly replaced by D.H. Peligro until the band found Chad Smith. In addition to playing guitar for the Red Hot Chili Peppers, Frusciante also contributed to the band “The Mars Volta” as a vocalist for some time.
From September 2004, the Red Hot Chili Peppers started recording the album “Stadium Arcadium”. The album contains 28 tracks and was released on May 5, 2006. It includes a track of the song “Hump de Bump”, which was composed on January 26, 2004. The critic Brian Hiatt described the album as "the most ambitious work in his twenty-three-year career". On August 11, 2006, the band gave a live performance in Portland, Oregon (US), featuring songs from Stadium Arcadium and other albums.
45. 45
Assignment: Alignment
The aim is to reach a ‚shared conceptualization‘ among all participants at the ESWC2011 summer school on the ontology developed in the previous assignment.
Assumption: every group is committed to their conceptualization.
Procedure: each group selects a representative; the representatives agree on an editor, and on the actual steps to be followed.