The storage and processing of OWL instances are important subjects in database modeling. Many research efforts have focused on managing OWL instances efficiently. Some systems store and manage OWL instances using relational models to ensure their persistence. Nevertheless, several approaches keep only RDF triples as instances in relational tables, and the way instances are structured as a graph, with links between concepts, is not taken into account. In this paper, we propose an architecture that lets relational tables behave as an OWL model by adapting them to OWL instances and to the OWL hierarchy structure. Two kinds of tables are therefore used: instance tables, which hold the instances, and an OWLtable, which holds a specification of how the concepts are structured. Instance tables must conform to the OWLtable to be valid. A mechanism for constructing the OWLtable and the instance tables is defined in order to enable and enhance inference and semantic querying of OWL in a relational context.
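A minimal sketch of how such a layout might look, using Python's sqlite3; all table and column names here are illustrative assumptions, not the paper's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# OWLtable: one row per concept, recording its superclass so the
# OWL hierarchy survives inside the relational model.
cur.execute("CREATE TABLE owl_table (concept TEXT PRIMARY KEY, parent TEXT)")
cur.executemany("INSERT INTO owl_table VALUES (?, ?)",
                [("Person", None), ("Student", "Person")])

# Instance table: holds the individuals of one concept.
cur.execute("CREATE TABLE student (id TEXT PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO student VALUES ('s1', 'Alice')")

def is_valid_instance_table(table_name):
    """An instance table is valid only if its concept is declared in the OWLtable."""
    cur.execute("SELECT 1 FROM owl_table WHERE lower(concept) = ?", (table_name,))
    return cur.fetchone() is not None

def instances_of(concept):
    """Hierarchy-aware query: collect instances of a concept and its subconcepts."""
    cur.execute("SELECT concept FROM owl_table WHERE parent = ? OR concept = ?",
                (concept, concept))
    rows = []
    for (c,) in cur.fetchall():
        try:
            cur.execute(f"SELECT id FROM {c.lower()}")  # table named after the concept
            rows += [r[0] for r in cur.fetchall()]
        except sqlite3.OperationalError:
            pass  # no instance table materialized for this concept
    return rows
```

Querying `instances_of("Person")` returns the student `s1` even though no `person` table exists, which is the kind of hierarchy-aware behavior the architecture aims at.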
TRANSFORMATION RULES FOR BUILDING OWL ONTOLOGIES FROM RELATIONAL DATABASES (csandit)
Relational databases (RDBs) are used as the backend database by most information systems. RDBs encapsulate the conceptual model and the metadata needed for ontology construction. Schema mapping is the technique used by all existing approaches for building ontologies from RDBs. However, most of those methods use poor transformation rules that prevent advanced database mining from producing rich ontologies. In this paper, we propose transformation rules for building OWL ontologies from RDBs that allow all possible cases in an RDB to be transformed into ontological constructs. The proposed rules are enriched by analyzing the stored data to detect disjointness and totalness constraints in hierarchies and by calculating the participation level of tables in n-ary relations. In addition, our technique is generic, so it can be applied to any RDB. The proposed rules were evaluated using a normalized, open RDB. The obtained ontology is richer in terms of non-taxonomic relationships.
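To illustrate the flavour of such transformation rules, here is a simplified sketch that maps hypothetical table metadata to OWL-style axioms: a table becomes a class, a plain column a datatype property, and a foreign key an object property. The metadata, naming convention, and rule set are illustrative assumptions, not the paper's full rules:

```python
# Illustrative table metadata (in practice this is read from the RDB catalog).
tables = {
    "employee": {"columns": {"id": "INTEGER", "name": "TEXT", "dept_id": "INTEGER"},
                 "foreign_keys": {"dept_id": "department"}},
    "department": {"columns": {"id": "INTEGER", "label": "TEXT"},
                   "foreign_keys": {}},
}

XSD = {"INTEGER": "xsd:integer", "TEXT": "xsd:string"}  # SQL type -> XSD datatype

def rdb_to_owl(tables):
    """Emit Turtle-like OWL axioms from table metadata."""
    axioms = []
    for table, meta in tables.items():
        cls = table.capitalize()
        axioms.append(f":{cls} rdf:type owl:Class .")
        for col, sql_type in meta["columns"].items():
            if col in meta["foreign_keys"]:
                # Foreign key -> object property linking the two classes.
                target = meta["foreign_keys"][col].capitalize()
                axioms.append(f":{col} rdf:type owl:ObjectProperty ; "
                              f"rdfs:domain :{cls} ; rdfs:range :{target} .")
            else:
                # Plain column -> datatype property.
                axioms.append(f":{col} rdf:type owl:DatatypeProperty ; "
                              f"rdfs:domain :{cls} ; rdfs:range {XSD[sql_type]} .")
    return axioms

axioms = rdb_to_owl(tables)
```

The paper's contribution goes further (disjointness and totalness detection, n-ary participation analysis), which requires inspecting the stored data, not just the schema.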
Towards a Query Rewriting Algorithm Over Proteomics XML Resources (CSCJournals)
Querying and sharing Web proteomics data is not an easy task. Given that several data sources can be used to answer the same sub-goals of a global query, many candidate rewritings are possible. The user query is formulated using concepts and properties related to proteomics research (a domain ontology), and semantic mappings describe the contents of the underlying sources. In this paper, we characterize the query rewriting problem by representing the semantic mappings as an associated hypergraph. The generation of candidate rewritings can then be formulated as the discovery of the minimal transversals of that hypergraph. We exploit and adapt algorithms from hypergraph theory to find all candidate rewritings for a query answering problem. In future work, relevance criteria could help determine optimal, high-quality rewritings according to user needs and source performance.
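The minimal-transversal formulation can be made concrete with a small brute-force enumerator: each hyperedge is the set of sources able to answer one sub-goal, and a candidate rewriting is a minimal set of sources hitting every edge. This illustrates the concept only; the paper adapts dedicated hypergraph algorithms rather than brute force:

```python
from itertools import combinations

def minimal_transversals(edges):
    """Enumerate the minimal hitting sets of a hypergraph given as a list
    of edges (sets of vertices). Brute force, fine for small inputs."""
    vertices = sorted(set().union(*edges))
    hits = lambda s: all(s & e for e in edges)  # s intersects every edge
    found = []
    for r in range(1, len(vertices) + 1):
        for combo in combinations(vertices, r):
            s = set(combo)
            # Keep s only if it hits all edges and no smaller hitting set is inside it.
            if hits(s) and not any(t <= s for t in found):
                found.append(s)
    return found
```

For example, with sub-goal 1 answerable by sources A or B and sub-goal 2 by B or C, the candidate rewritings are {B} and {A, C}.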
Concept hierarchies are the backbone of ontologies, and concept hierarchy acquisition has been a hot topic in the field of ontology learning. This paper proposes a hyponymy extraction method for domain ontology concepts based on cascaded conditional random fields (CCRFs) and hierarchical clustering. It takes free text as the extraction object and uses CCRFs to identify the domain concepts. First, the lower layer of the CCRFs identifies simple domain concepts; the results are then passed to the higher layer, in which nested concepts are recognized. Next, hierarchical clustering identifies the hyponymy relations between domain ontology concepts. The experimental results demonstrate that the proposed method is efficient.
Semantic data integration is the process of using a conceptual representation of the data and of their relationships to eliminate possible heterogeneities.
The world is witnessing an unprecedented information revolution and rapid growth of databases in all aspects of life. Databases are interconnected through their content and schemas but use different elements and structures to express the same concepts and relations, which may cause semantic and structural conflicts. This paper proposes a new technique, named XDEHD, for integrating heterogeneous eXtensible Markup Language (XML) schemas. The returned mediated schema contains all concepts and relations of the sources without duplication. The technique divides into three steps. First, it extracts all subschemas from the sources by decomposing the source schemas; each subschema contains three levels: ancestor, root, and leaf. Second, it matches and compares the subschemas and returns the related candidate subschemas; a semantic closeness function measures how similarly the concepts of the subschemas are modelled in the sources. Finally, it creates the mediated schema by integrating the candidate subschemas and obtains a minimal and complete unified schema; an association strength function computes how closely the pairs in a candidate subschema are related across all data sources, and an element repetition function counts how many times each element is repeated between the candidate subschemas.
This document summarizes a workshop on data integration using ontologies. It discusses how data integration is challenging due to differences in schemas, semantics, measurements, units and labels across data sources. It proposes that ontologies can help with data integration by providing definitions for schemas and entities referred to in the data. Core challenges discussed include dealing with multiple synonyms for entities and relationships between biological entities that depend on context. The document advocates for shared community ontologies that can be extended and integrated to facilitate flexible and responsive data integration across multiple sources.
Many applications require the integration of data from different sources, for example data mining and data/information fusion. The problem facing any such project is that the data are structured in different ways and that terms and their meanings differ from source to source. In this paper we discuss the most important of these problems and how to solve them using an ontology.
Ontology languages are used to model the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standard provides a number of modelling languages that differ in their level of expressivity and are organized in a Semantic Web Stack in such a way that each language level builds on the expressivity of the one below. Several problems arise when one attempts to use independently developed ontologies, and adapting existing ontologies for new purposes requires that certain operations be performed on them. These operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontologies as a step towards the formalization of ontological operations using category theory.
Information residing in relational databases and delimited file systems is inadequate for reuse and sharing over the web. These file systems do not adhere to commonly agreed principles for maintaining data harmony. For these reasons, web resources suffer from a lack of uniformity, from heterogeneity, and from redundancy. Ontologies have been widely used to solve such problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files serve as individual concepts and are grouped into a particular domain, called the domain ontology. This domain ontology is then used to capture the CSV data and represent it in RDF format, retaining the links among files and concepts. Datatype and object properties are automatically detected from the header fields, which reduces the user involvement needed to generate mapping files. A detailed analysis has been performed on baseball tabular data, and the result shows a rich set of semantic information.
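The CSV-to-RDF step can be sketched roughly as follows: each row becomes an individual of the file's concept, and each header field a property. The naming convention and the reduction of every header to a datatype property are simplifying assumptions for illustration; the article additionally distinguishes object properties from header fields:

```python
import csv, io

def csv_to_triples(csv_text, concept):
    """Turn each CSV row into RDF triples: the row becomes an individual
    of `concept`, and each header field becomes a property of it."""
    reader = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for i, row in enumerate(reader):
        subj = f":{concept.lower()}_{i}"          # illustrative IRI scheme
        triples.append((subj, "rdf:type", f":{concept}"))
        for field, value in row.items():
            triples.append((subj, f":{field}", value))
    return triples

data = "player,team\nRuth,Yankees\n"
triples = csv_to_triples(data, "Player")
```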
Systems Biology Model Semantics and Integration (Allyson Lister)
A short description of my research experiences using OWL to perform semantic data integration and, ultimately, the addition of annotation for systems biology models.
The document discusses two approaches to building information systems: the single-level methodology and the two-level model methodology. The two-level model separates domain knowledge from metadata about the structural representation of that knowledge. This allows knowledge to change without requiring software redesign. The openEHR two-level architecture separates information semantics from knowledge concepts. It follows the RM/ODP reference model approach and allows knowledge to be instantiated at runtime rather than hardcoded. The document argues this makes systems more adaptable, interoperable, and able to keep up with changing healthcare knowledge.
SEARCH OF INFORMATION BASED CONTENT IN SEMI-STRUCTURED DOCUMENTS USING INTERF... (ijcsitcejournal)
This paper proposes a semi-structured information retrieval model based on a new method for calculating similarity. We have developed the CASISS (Calculation of Similarity of Semi-Structured documents) method to quantify how similar two given texts are. This new method identifies the elements of semi-structured documents using element descriptors. Each semi-structured document is pre-processed before a set of descriptors, characterizing the contents of the elements, is extracted for each element. The method can be used to increase the accuracy of the information retrieval process by taking into account not only the presence of the query terms in a given document but also the topology (positional continuity) of those terms.
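The idea of combining term presence with positional topology can be illustrated with a toy scoring function. This is a rough sketch of the intuition only, with made-up weights; it is not the CASISS formula:

```python
def similarity(query_terms, doc_terms):
    """Score a document against a query by term overlap, boosted when the
    matched terms appear contiguously in the document (position continuity)."""
    positions = [i for i, t in enumerate(doc_terms) if t in query_terms]
    if not positions:
        return 0.0
    # Content component: fraction of distinct query terms present.
    overlap = len(set(query_terms) & set(doc_terms)) / len(set(query_terms))
    # Topology component: fraction of matched positions that are adjacent.
    adjacent = sum(1 for a, b in zip(positions, positions[1:]) if b - a == 1)
    continuity = adjacent / len(positions)
    return 0.7 * overlap + 0.3 * continuity  # illustrative weighting
```

A document where the query terms occur next to each other scores higher than one where the same terms are scattered, even though both contain all of them.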
Effective Data Retrieval in XML using TreeMatch Algorithm (IRJET Journal)
This document summarizes research on effective data retrieval from XML documents using the TreeMatch algorithm. It begins with an abstract that introduces the TreeMatch algorithm and its ability to provide fast data retrieval from XML documents by matching tree-shaped patterns. It then reviews related work on XML tree matching algorithms and their issues, such as suboptimality. The document proposes using the TreeMatch algorithm to overcome issues with wildcards, negation, and siblings when querying XML documents with XPath or XQuery. It provides details on the TreeMatch algorithm and its ability to process different types of XML tree pattern queries efficiently while avoiding intermediate results. In conclusion, it states that the TreeMatch algorithm can efficiently handle three types of XML tree pattern queries and overcome the problem of suboptimality.
The logic-based, machine-understandable framework of the Semantic Web often challenges naive users when they try to query ontology-based knowledge bases. Existing research efforts have approached this problem by introducing Natural Language (NL) interfaces to ontologies, which can construct SPARQL queries from NL user queries. However, most efforts were restricted to queries expressed in English, benefiting from the maturity of English NLP tools, and little research has been done to support querying Arabic content on the Semantic Web with NL queries. This paper presents a domain-independent approach to translating Arabic NL queries to SPARQL through linguistic analysis. With special consideration for noun phrases (NPs), our approach uses a language parser to extract NPs and their relations from Arabic parse trees and match them to the underlying ontology. It then uses knowledge in the ontology to group NPs into triple-based representations. A SPARQL query is finally generated by extracting the targets and modifiers and interpreting them into SPARQL. The interpretation of advanced semantic features, including negation and conjunctive and disjunctive modifiers, is also supported. The approach was evaluated using two datasets consisting of OWL test data and queries, and the obtained results confirm its feasibility for translating Arabic NL queries to SPARQL.
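The final interpretation stage, turning triple-based representations into SPARQL, might look roughly like the sketch below. The variable and property names are illustrative; the paper's actual generator, which works from Arabic parse trees, is far more elaborate:

```python
def build_sparql(target_var, triples, negated=()):
    """Assemble a SPARQL SELECT from triple patterns extracted from a
    parsed question. Negated patterns become FILTER NOT EXISTS blocks."""
    body = " .\n  ".join(f"{s} {p} {o}" for s, p, o in triples)
    for s, p, o in negated:
        body += f" .\n  FILTER NOT EXISTS {{ {s} {p} {o} }}"
    return f"SELECT {target_var} WHERE {{\n  {body} .\n}}"

# Hypothetical triples for a question like "Which cities are located in Egypt?"
query = build_sparql("?city", [("?city", "rdf:type", ":City"),
                               ("?city", ":locatedIn", ":Egypt")])
```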
A SEMANTIC BASED APPROACH FOR KNOWLEDGE DISCOVERY AND ACQUISITION FROM MULTIP... (IJwest)
This document describes a semantic-based approach for knowledge discovery and information extraction from multiple web pages using ontologies. It presents a model for storing web content in an organized, structured RDF format. Information extraction techniques and developed ontologies can then discover new knowledge with minimal time compared to manual efforts. The paper details two experiments applying this approach. Experiment 1 extracts staff profiles from web pages into RDF, discovering related research colleagues. Experiment 2 extracts student data from HTML tables into XML/RDF, enabling faster querying and analysis versus manual parsing. The approach effectively organizes unstructured web data for knowledge inference and acquisition.
Improve Information Retrieval and E-learning Using Mobile Agent Based on Semantic Web Technology (IJwest)
Web-based education and e-learning have become a very important branch of new educational technology. E-learning and Web-based courses offer advantages for learners by making access to resources and learning objects fast, just-in-time, and relevant, at any time and place. Web-based learning management systems should focus on satisfying e-learners' needs and may advise a learner of the most suitable resources and learning objects. Because of the many limitations of Web 2.0 for creating e-learning management systems, we nowadays use Web 3.0, known as the Semantic Web, a platform for e-learning management systems that overcomes the limitations of Web 2.0. In this paper we present "improve information retrieval and e-learning using mobile agent based on semantic web technology". The paper focuses on the design and implementation of knowledge-based, reusable, interactive, web-based training activities for the sea ports and logistics sector, using an e-learning system and the Semantic Web to deliver learning objects to learners in an interactive, adaptive, and flexible manner. We use the Semantic Web and mobile agents to improve library and course search. The architecture presented in this paper is an adaptation model that converts syntactic search into semantic search. We apply the training at Damietta port in Egypt as a real-world case study and present one possible application of mobile agent technology based on the Semantic Web to the management of Web services; this model improves information retrieval and the e-learning system.
A PROPOSED MULTI-DOMAIN APPROACH FOR AUTOMATIC CLASSIFICATION OF TEXT DOCUMENTS (ijsc)
Classification is an important technique used in information retrieval. Supervised classification suffers from certain limitations concerning the collection and labeling of the training dataset; when facing multi-domain classification, multiple training datasets and classifiers are needed, which is relatively difficult. In this paper an unsupervised classification system is proposed that can also manage the multi-domain classification problem. It is a multi-domain system in which each domain is represented by an ontology. A document is mapped onto each ontology based on the weights of the mutual tokens between them, with the help of fuzzy sets, resulting in a mapping degree of the document to each domain. An experiment was carried out that shows satisfying classification results, with an improvement in the evaluation results of the proposed system compared to Apache Lucene.
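The fuzzy mapping of a document onto domain ontologies can be sketched as follows. The membership function, term weights, and domain names are illustrative assumptions, not the paper's exact formulation:

```python
def mapping_degree(doc_tokens, ontology):
    """Fuzzy mapping degree of a document onto one domain ontology: the
    total weight of the tokens shared with the ontology, normalised by
    the ontology's total weight."""
    shared = sum(w for term, w in ontology.items() if term in doc_tokens)
    total = sum(ontology.values())
    return shared / total if total else 0.0

# Hypothetical domain ontologies as weighted term sets.
domains = {
    "medicine": {"patient": 3, "dose": 2, "trial": 1},
    "finance":  {"bond": 3, "yield": 2, "trial": 1},
}

def classify(doc_tokens, domains):
    """Map the document onto every domain and pick the highest degree."""
    degrees = {d: mapping_degree(doc_tokens, o) for d, o in domains.items()}
    return max(degrees, key=degrees.get), degrees

label, degrees = classify({"patient", "dose"}, domains)
```

Because every domain yields a degree rather than a hard yes/no, a document can legitimately belong to several domains at once, which is the point of the fuzzy-set formulation.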
This document discusses querying XML schemas using XSPath, a path language for XML schemas. It begins with an abstract stating that XML schemas are used to constrain the structure and content of XML documents, but previous methods of navigating schemas were not easy. XSPath allows navigating and selecting nodes in XML schemas. The document then provides background on XML schemas and XPath, and describes related work on schema exploration and querying. It explains how schemas can be represented and provides an abbreviated syntax and examples of XSPath.
Gellish: A Standard Data And Knowledge Representation Language And Ontology (Andries_vanRenssen)
Database structures should allow for the expression of any fact that can be expressed in natural languages. This means that they should allow for any expression of facts in a universal formal language. Constraints should be specified in a separate layer. This article describes the basic concepts of such a universal semantic database structure and associated formal subset of a natural language.
For the efficient and innovative use of big data, it is important to integrate multiple databases across domains. For example, various public databases have been developed in the life sciences, and finding novel scientific results by combining them is an essential technique. In social and business areas, open data strategies in many countries promote the diversity of public data, and combining big data with open data is a major challenge. In short, the diversity of datasets is a problem that must be solved for big data.
Ontologies provide systematized knowledge for integrating multiple datasets across domains together with their semantics, and Linked Data provides techniques for interlinking datasets based on Semantic Web technologies. We consider that combinations of ontologies and Linked Data, grounded in ontological engineering, can contribute to solving the diversity problem in big data.
In this talk, I discuss how ontological engineering could be applied to big data, with some trial examples.
The document discusses space efficient data structures for storing JSON documents in memory. It introduces JSON and compares it to XML. Succinct data structures are discussed as a way to represent trees in a space efficient manner close to theoretical minimum while still supporting efficient operations. The paper aims to apply succinct data structures to represent the tree structure of JSON documents to allow processing large documents in limited memory.
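The core idea, representing a JSON document's tree shape in about two bits per node, can be sketched with a balanced-parentheses encoding (a minimal illustration of the succinct-tree principle, not the full structure with rank/select navigation):

```python
import json

def bp_encode(value, bits=None):
    """Encode the tree shape of a JSON value as balanced parentheses:
    '1' opens a node, '0' closes it -- 2 bits per node, close to the
    information-theoretic minimum for ordered trees."""
    if bits is None:
        bits = []
    bits.append("1")                      # open this node
    if isinstance(value, dict):
        for child in value.values():
            bp_encode(child, bits)
    elif isinstance(value, list):
        for child in value:
            bp_encode(child, bits)
    bits.append("0")                      # close this node
    return "".join(bits)

doc = json.loads('{"a": 1, "b": [2, 3]}')
bits = bp_encode(doc)   # 5 nodes (root, a, b, 2, 3) -> 10 bits
```

A real succinct representation stores this bit string compactly and answers parent/child queries via rank/select operations, which is what lets large documents be processed in limited memory.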
This article advocates that information storage requirements should not be expressed in the form of data models or conceptual schemas, but database structures should allow for any expression in a general purpose language, whereas implementation constraints should be expressed as constraints on the use of the general purpose language.
This document discusses relation extraction from biological text. It describes relation extraction as detecting and classifying relationships between entities in text. Various machine learning approaches are used, including kernel-based algorithms, regression, and neural networks. Features include sequences, parse trees, dependency graphs, and shallow parsing. Two approaches are described in detail: a string kernel using shortest dependency graph paths, and a global alignment kernel comparing semantic similarity. Both approaches improved performance when using syntactic and semantic information from linguistic annotation. Future work focuses on distant supervision to generate more training data without full manual annotation.
The document discusses different data models including relational, XML, RDF, and OWL models. It provides background on each model, explaining their logical structure and how semantics can be expressed. While the models are not directly inter-convertible, the NIF system is designed to work with multiple models by translating queries to the native format of each data source, augmenting models with semantic information, and developing a common integrated ontological graph.
Converting UML Class Diagrams into Temporal Object Relational Database (IJECEIAES)
A number of active researchers and experts are engaged in developing and implementing new mechanisms and features in time-varying database management systems (TVDBMS) in response to the demands of the modern business environment. Time-varying data management has received much attention, with either attribute or tuple timestamping schemas. Our main approach is to offer a better solution to the mentioned limitations of existing work: to provide nonprocedural data definitions and queries over temporal data through as complete a technical conversion as possible, allowing all conceptual details of the UML class specifications to be easily realized and shared from a conception and design point of view. This paper contributes a logical design schema represented by UML class diagrams, annotated with stereotypes, to express a temporal object-relational database with attribute timestamping.
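Attribute timestamping, where each attribute value carries its own validity interval, can be sketched relationally as follows (an illustrative schema in Python's sqlite3, not the paper's generated one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Attribute timestamping: the history of a single UML attribute (here,
# an employee's salary) is a set of rows, each with its validity period.
cur.execute("""CREATE TABLE employee_salary (
    emp_id INTEGER, salary REAL, valid_from TEXT, valid_to TEXT)""")
cur.executemany("INSERT INTO employee_salary VALUES (?, ?, ?, ?)", [
    (1, 40000, "2020-01-01", "2021-12-31"),
    (1, 45000, "2022-01-01", "9999-12-31"),   # open-ended current value
])

def salary_at(emp_id, date):
    """Temporal point query: the salary valid on the given ISO date."""
    cur.execute("""SELECT salary FROM employee_salary
                   WHERE emp_id = ? AND valid_from <= ? AND ? <= valid_to""",
                (emp_id, date, date))
    row = cur.fetchone()
    return row[0] if row else None
```

Because each attribute is timestamped independently, changing the salary does not duplicate the rest of the employee's data, which is the usual argument for attribute over tuple timestamping.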
WebSpa is a tool that allows the quick, intuitive (and even fun) interrogation of arbitrary SPARQL endpoints. WebSpa runs in the web browser and does not require the installation of any additional software. The tool manages a large variety of pre-defined SPARQL endpoints and allows the addition of new ones. A user account makes it possible to save both a query and its results on the local computer, as well as to further edit the queries. The application is written in Java and Flex; it uses the Jena and ARQ application programming interfaces to perform the queries, and the results are processed and displayed using Flex.
The document provides an overview of ontology and its various aspects. It discusses the origin of the term ontology, which derives from Greek words meaning "being" and "science," so ontology is the study of being. It distinguishes between scientific and philosophical ontologies. Social ontology examines social entities. Perspectives on ontology include philosophy, library and information science, artificial intelligence, linguistics, and the semantic web. The goal of ontology is to encode knowledge to make it understandable to both people and machines. It provides motivations for developing ontologies such as enabling information integration and knowledge management. The document also discusses ontology languages, uniqueness of ontologies, purposes of ontologies, and provides references.
Information residing in relational databases and delimited file systems are inadequate for reuse and sharing over the web. These file systems do not adhere to commonly set principles for maintaining data harmony. Due to these reasons, the resources have been suffering from lack of uniformity, heterogeneity as well as redundancy throughout the web. Ontologies have been widely used for solving such type of problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files are served as individual concepts and grouped into a particular domain, called the domain ontology. Furthermore, this domain ontology is used for capturing CSV data and represented in RDF format retaining links among files or concepts. Datatype and object properties are automatically detected from header fields. This reduces the task of user involvement in generating mapping files. The detail analysis has been performed on Baseball tabular data and the result shows a rich set of semantic information.
Systems Biology Model Semantics and IntegrationAllyson Lister
A short description of my research experiences using OWL to perform semantic data integration and, ultimately, the addition of annotation for systems biology models.
The document discusses two approaches to building information systems: the single-level methodology and the two-level model methodology. The two-level model separates domain knowledge from metadata about the structural representation of that knowledge. This allows knowledge to change without requiring software redesign. The openEHR two-level architecture separates information semantics from knowledge concepts. It follows the RM/ODP reference model approach and allows knowledge to be instantiated at runtime rather than hardcoded. The document argues this makes systems more adaptable, interoperable, and able to keep up with changing healthcare knowledge.
SEARCH OF INFORMATION BASED CONTENT IN SEMI-STRUCTURED DOCUMENTS USING INTERF...ijcsitcejournal
This paper proposes a semi-structured information retrieval model based on a new method for calculation
of similarity. We have developed CASISS (Calculation of Similarity of Semi-Structured documents)
method to quantify how two given texts are similar. This new method identifies elements of semi-structured
documents using elements descriptors. Each semi-structured document is pre-processed before the
extraction of a set of descriptors for each element, which characterize the contents of elements.It can be
used to increase the accuracy of the information retrieval process by taking into account not only the
presence of query terms in the given document but also the topology (position continuity) of these terms.
Effective Data Retrieval in XML using TreeMatch AlgorithmIRJET Journal
This document summarizes research on effective data retrieval from XML documents using the TreeMatch algorithm. It begins with an abstract that introduces the TreeMatch algorithm and its ability to provide fast data retrieval from XML documents by matching tree-shaped patterns. It then reviews related work on XML tree matching algorithms and their issues like suboptimality. The document proposes using the TreeMatch algorithm to overcome issues with wildcards, negation, and siblings when querying XML documents with XPath or XQuery. It provides details on the TreeMatch algorithm and its ability to process different types of XML tree pattern queries efficiently while avoiding intermediate results. In conclusion, it states that the TreeMatch algorithm can efficiently handle three types of XML tree pattern queries and overcome the problem of sub
The logic-based, machine-understandable framework of the Semantic Web often challenges naive users when they try to query ontology-based knowledge bases. Existing research efforts have approached this problem by introducing Natural Language (NL) interfaces to ontologies, which can construct SPARQL queries from NL user queries. However, most efforts were restricted to queries expressed in English, often benefiting from the maturity of English NLP tools; little research has been done to support querying Arabic content on the Semantic Web with NL queries. This paper presents a domain-independent approach to translating Arabic NL queries to SPARQL by leveraging linguistic analysis. With special consideration given to Noun Phrases (NPs), our approach uses a language parser to extract NPs and relations from Arabic parse trees and match them to the underlying ontology. It then utilizes knowledge in the ontology to group NPs into triple-based representations. A SPARQL query is finally generated by extracting targets and modifiers and interpreting them into SPARQL. The interpretation of advanced semantic features, including negation and conjunctive and disjunctive modifiers, is also supported. The approach was evaluated using two datasets consisting of OWL test data and queries, and the obtained results confirmed its feasibility for translating Arabic NL queries to SPARQL.
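The last stage of such a pipeline, assembling a SPARQL query once NPs and relations have been mapped to ontology terms, can be sketched as follows. The `univ:` namespace and the example terms are illustrative assumptions, not taken from the paper:

```python
def build_sparql(triples, target_var="?x"):
    """Assemble a SPARQL SELECT query from triple-based representations
    produced by NP extraction. Prefix and terms are illustrative."""
    patterns = "\n  ".join(f"{s} {p} {o} ." for s, p, o in triples)
    return (
        "PREFIX univ: <http://example.org/university#>\n"
        f"SELECT {target_var} WHERE {{\n  {patterns}\n}}"
    )

# NPs "student" and "course" with relation "enrolled in", already
# matched to (hypothetical) ontology terms:
q = build_sparql([("?x", "rdf:type", "univ:Student"),
                  ("?x", "univ:enrolledIn", "univ:AI101")])
print(q)
```

The hard part the paper addresses, mapping Arabic parse trees to such triples and handling negation and modifiers, happens before this assembly step.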
A SEMANTIC BASED APPROACH FOR KNOWLEDGE DISCOVERY AND ACQUISITION FROM MULTIP... (IJwest)
This document describes a semantic-based approach for knowledge discovery and information extraction from multiple web pages using ontologies. It presents a model for storing web content in an organized, structured RDF format. Information extraction techniques and developed ontologies can then discover new knowledge with minimal time compared to manual efforts. The paper details two experiments applying this approach. Experiment 1 extracts staff profiles from web pages into RDF, discovering related research colleagues. Experiment 2 extracts student data from HTML tables into XML/RDF, enabling faster querying and analysis versus manual parsing. The approach effectively organizes unstructured web data for knowledge inference and acquisition.
Improve information retrieval and e learning using (IJwest)
Web-based education and e-learning have become a very important branch of new educational technology. E-learning and Web-based courses benefit learners by making access to resources and learning objects fast, just-in-time, and relevant, at any time and place. Web-based learning management systems should focus on satisfying e-learners' needs and may advise a learner on the most suitable resources and learning objects. Because of the many limitations of Web 2.0 for building e-learning management systems, Web 3.0, known as the Semantic Web, is now used instead: it provides a platform for e-learning management systems that overcomes the limitations of Web 2.0. In this paper we present "improving information retrieval and e-learning using a mobile agent based on Semantic Web technology". The paper focuses on the design and implementation of knowledge-based, reusable, interactive, Web-based industrial training activities in the sea ports and logistics sector, using an e-learning system and the Semantic Web to deliver learning objects to learners in an interactive, adaptive, and flexible manner. We use the Semantic Web and a mobile agent to improve library and course search. The architecture presented in this paper is an adaptation model that converts syntactic search into semantic search. We apply the training at Damietta port in Egypt as a real-world case study, presenting one possible application of mobile agent technology based on the Semantic Web to the management of Web services; this model improves information retrieval and the e-learning system.
A PROPOSED MULTI-DOMAIN APPROACH FOR AUTOMATIC CLASSIFICATION OF TEXT DOCUMENTS (ijsc)
Classification is an important technique used in information retrieval. Supervised classification suffers from certain limitations concerning the collection and labeling of the training dataset. For multi-domain classification, multiple training datasets and classifiers are needed, which is relatively difficult to arrange. In this paper an unsupervised classification system is proposed that can also manage the multi-domain classification problem. It is a multi-domain system in which each domain is represented by an ontology. A document is mapped onto each ontology based on the weights of the tokens it shares with that ontology, with the help of fuzzy sets, yielding a mapping degree between the document and each domain. An experiment shows satisfying classification results, with an improvement over Apache Lucene in the evaluation metrics.
This document discusses querying XML schemas using XSPath, a path language for XML schemas. It begins with an abstract stating that XML schemas are used to constrain the structure and content of XML documents, but previous methods of navigating schemas were not easy. XSPath allows navigating and selecting nodes in XML schemas. The document then provides background on XML schemas and XPath, and describes related work on schema exploration and querying. It explains how schemas can be represented and provides an abbreviated syntax and examples of XSPath.
Gellish A Standard Data And Knowledge Representation Language And Ontology (Andries_vanRenssen)
Database structures should allow for the expression of any fact that can be expressed in natural languages. This means that they should allow for any expression of facts in a universal formal language. Constraints should be specified in a separate layer. This article describes the basic concepts of such a universal semantic database structure and associated formal subset of a natural language.
For efficient and innovative use of big data, it is important to integrate multiple databases across domains. For example, various public databases have been developed in the life sciences, and finding novel scientific results by combining them is an essential technique. In social and business areas, open data strategies in many countries promote the diversity of public data, and how to combine big data with open data is a major challenge. In short, dataset diversity is a problem that big data must solve.
Ontology provides systematized knowledge for integrating multiple datasets across domains together with their semantics. Linked Data likewise provides techniques for interlinking datasets based on Semantic Web technologies. We consider that combinations of ontology and Linked Data, grounded in ontological engineering, can contribute to solving the diversity problem in big data.
In this talk, I discuss how ontological engineering could be applied to big data with some trial examples.
The document discusses space efficient data structures for storing JSON documents in memory. It introduces JSON and compares it to XML. Succinct data structures are discussed as a way to represent trees in a space efficient manner close to theoretical minimum while still supporting efficient operations. The paper aims to apply succinct data structures to represent the tree structure of JSON documents to allow processing large documents in limited memory.
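One classical succinct encoding of tree shape is the balanced-parentheses representation: two bits per node, close to the information-theoretic minimum. As a minimal sketch (real succinct layouts also store labels and values in separate arrays and support rank/select operations, which this omits):

```python
import json

def to_balanced_parens(value):
    """Encode the tree shape of a JSON value as a balanced-parentheses
    string: '(' on entering a node, ')' on leaving it. Two characters
    per node; objects and arrays contribute their children in order."""
    out = ["("]
    if isinstance(value, dict):
        for v in value.values():
            out.append(to_balanced_parens(v))
    elif isinstance(value, list):
        for v in value:
            out.append(to_balanced_parens(v))
    out.append(")")
    return "".join(out)

doc = json.loads('{"name": "Ann", "tags": ["a", "b"]}')
print(to_balanced_parens(doc))  # (()(()()))
```

Here the 5-node tree (root object, `name`, `tags`, and two array entries) is captured in 10 characters, i.e. 10 bits if packed, while the original text of the document can stay on disk.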
This article advocates that information storage requirements should not be expressed in the form of data models or conceptual schemas, but database structures should allow for any expression in a general purpose language, whereas implementation constraints should be expressed as constraints on the use of the general purpose language.
This document discusses relation extraction from biological text. It describes relation extraction as detecting and classifying relationships between entities in text. Various machine learning approaches are used, including kernel-based algorithms, regression, and neural networks. Features include sequences, parse trees, dependency graphs, and shallow parsing. Two approaches are described in detail: a string kernel using shortest dependency graph paths, and a global alignment kernel comparing semantic similarity. Both approaches improved performance when using syntactic and semantic information from linguistic annotation. Future work focuses on distant supervision to generate more training data without full manual annotation.
The document discusses different data models including relational, XML, RDF, and OWL models. It provides background on each model, explaining their logical structure and how semantics can be expressed. While the models are not directly inter-convertible, the NIF system is designed to work with multiple models by translating queries to the native format of each data source, augmenting models with semantic information, and developing a common integrated ontological graph.
Converting UML Class Diagrams into Temporal Object Relational DataBase (IJECEIAES)
A number of active researchers and experts are engaged in developing and implementing new mechanisms and features in time-varying database management systems (TVDBMS) to respond to the requirements of the modern business environment. Time-varying data management has received much attention, with either attribute or tuple timestamping schemas. Our main approach here is to offer a better solution to the stated limitations of existing work: to provide nonprocedural data definitions and queries of temporal data through as complete a technical conversion as possible, allowing all conceptual details of the UML class specifications to be easily realized and shared from a conception and design point of view. This paper contributes a logical design schema represented by UML class diagrams, handled with stereotypes to express a temporal object-relational database with attribute timestamping.
WebSpa is a tool that allows the quick, intuitive (and even fun) querying of arbitrary SPARQL endpoints. WebSpa runs in the web browser and does not require the installation of any additional software. The tool manages a large variety of pre-defined SPARQL endpoints and allows the addition of new ones. A user account makes it possible to save both a query and its results on the local computer, as well as to edit queries later. The application is written in both Java and Flex. It uses the Jena and ARQ application programming interfaces to perform the queries, and the results are processed and displayed using Flex.
The document provides an overview of ontology and its various aspects. It discusses the origin of the term ontology, which derives from Greek words meaning "being" and "science," so ontology is the study of being. It distinguishes between scientific and philosophical ontologies. Social ontology examines social entities. Perspectives on ontology include philosophy, library and information science, artificial intelligence, linguistics, and the semantic web. The goal of ontology is to encode knowledge to make it understandable to both people and machines. It provides motivations for developing ontologies such as enabling information integration and knowledge management. The document also discusses ontology languages, uniqueness of ontologies, purposes of ontologies, and provides references.
This document discusses feature selection algorithms, specifically branch and bound and beam search algorithms. It provides an overview of feature selection, discusses the fundamentals and objectives of feature selection. It then goes into more detail about how branch and bound works, including pseudocode, a flowchart, and an example. It also discusses beam search and compares branch and bound to other algorithms. In summary, it thoroughly explains branch and bound and beam search algorithms for performing feature selection on datasets.
TRANSFORMATION RULES FOR BUILDING OWL ONTOLOGIES FROM RELATIONAL DATABASES (cscpconf)
This document proposes transformation rules for building OWL ontologies from relational databases. It begins by classifying database tables into six categories based on their attributes and relationships. Transformation rules are then applied to each category to map the database schema into ontological components. The rules cover various database modeling constructs such as one-to-many relationships, simple and multiple inheritance, many-to-many relationships with and without attributes, and n-ary relationships. Additionally, the proposed approach analyzes stored data to detect disjointness and totalness constraints in class hierarchies and calculate participation levels in n-ary relations. The rules aim to generate richer ontologies than existing methods by handling more complex database cases and incorporating additional semantic information from data analysis.
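The flavour of such transformation rules can be sketched for the simplest category: a table becomes an `owl:Class`, plain columns become datatype properties, and foreign-key columns become object properties whose range is the referenced table's class. The `ex:` namespace and the emitted Turtle are illustrative; the paper's full rule set covers inheritance, m:n, and n-ary cases that this sketch does not:

```python
def table_to_owl(table, columns, foreign_keys):
    """One basic RDB-to-OWL rule as Turtle text. `foreign_keys` maps a
    column name to the referenced table. Namespace `ex:` is made up."""
    lines = [f"ex:{table} a owl:Class ."]
    for col in columns:
        if col in foreign_keys:                  # FK column -> object property
            lines += [f"ex:{col} a owl:ObjectProperty ;",
                      f"    rdfs:domain ex:{table} ;",
                      f"    rdfs:range ex:{foreign_keys[col]} ."]
        else:                                    # plain column -> datatype property
            lines += [f"ex:{col} a owl:DatatypeProperty ;",
                      f"    rdfs:domain ex:{table} ."]
    return "\n".join(lines)

print(table_to_owl("Employee", ["name", "dept_id"], {"dept_id": "Department"}))
```

The data-driven part of the approach (detecting disjointness and totalness, or participation levels in n-ary relations) would inspect the stored rows, not just the schema, before emitting additional axioms.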
An approach for transforming of relational databases to owl ontology (IJwest)
The rapid growth of documents, web pages, and other types of text content is a huge challenge for modern content management systems. One of the problems in information storage and retrieval is the lack of semantic data. Ontologies can present knowledge in a sharable and reusable manner and provide an effective way to reduce data volume overhead by encoding the structure of a particular domain. Metadata in relational databases can be used to extract a domain ontology from the database. To address the problem of sharing and reusing data, approaches based on transforming relational databases into ontologies have been proposed. In this paper we propose a method for automatic ontology construction from a relational database. Mining further components from the relational database yields knowledge with higher semantic power and expressiveness. Triggers are one database component that can be transformed into the ontology model, increasing the power and expressiveness of the knowledge by representing part of it dynamically.
Robust Module based data management system (Rahul Roi)
This document presents a robust module-based database management system. It summarizes an abstract, definitions of ontology, existing systems, the proposed system, and the goals of developing an ontology. The proposed system introduces novel robustness properties that allow a module-based DMS to evolve safely with respect to the schema and data of a reference DMS. It focuses on using SQLite to efficiently manage large datasets. The goal of developing an ontology is to share a common understanding of information among people or software agents.
Towards Ontology Development Based on Relational Database (ijbuiiir1)
Ontology is defined as a formal explicit specification of a shared conceptualization. It has been widely used in almost all fields, especially artificial intelligence, data mining, and the semantic web. It is constructed from various sets of resources, and improving the efficiency of ontology construction has become an important task. To improve efficiency, an automated method of building ontologies from database resources is needed. Since manual construction has proven error-prone and below expectations, automatic construction of ontologies from databases was devised. Construction rules for ontology building from relational data sources are then put forward. Finally, an ontology for "automated building of ontology from relational data sources" has been implemented.
Expression of Query in XML object-oriented database (Editor IJCATR)
This document discusses expressing queries in an XML object-oriented database. It proposes a method for querying an object-oriented database where the user can assign weights to restrictions in conjunctive or disjunctive queries based on importance. The queries and resulting objects are represented using XML labels to simplify them and make them closer to user needs. It also presents a case study of a book information registration system to evaluate the proposed query method on an XML object-oriented database.
Expression of Query in XML object-oriented database (Editor IJCATR)
With the advent of object-oriented databases, the concept of behavior in databases was introduced. Previously, relational databases provided only a logical model of the data and paid no attention to the operations applied to it. In this paper, a method is presented for querying object-oriented databases. The method yields appropriate results when the user expresses restrictions combinationally (disjunctively and conjunctively) and assigns each restriction a weight based on its importance. The results obtained are then sorted by their rate of belonging to the response set. Queries are subsequently expressed using XML labels; the aim is to simplify queries and the objects they return so that they closely match the user's needs and expectations.
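The idea of a weighted belonging rate can be sketched as follows. The aggregation rule (normalised weight sum for conjunctive queries, best-met restriction for disjunctive ones) is an illustrative choice, not necessarily the paper's exact formula:

```python
def belonging_rate(obj, restrictions, mode="conjunctive"):
    """Rate of membership of an object in a weighted query's response
    set. Each restriction is (predicate, weight); weights express
    importance. Aggregation rules here are illustrative."""
    total = sum(w for _, w in restrictions)
    satisfied = [w for pred, w in restrictions if pred(obj)]
    if mode == "conjunctive":            # every restriction contributes
        return sum(satisfied) / total
    return max(satisfied, default=0.0)   # disjunctive: best-met restriction

book = {"year": 2005, "pages": 320, "topic": "databases"}
restrictions = [(lambda b: b["year"] > 2000, 0.6),
                (lambda b: b["pages"] < 300, 0.4)]
print(belonging_rate(book, restrictions))  # 0.6
```

Sorting candidate objects by this rate gives the ranked response set the abstract describes, with partially matching objects still appearing, just lower down.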
An Incremental Method For Meaning Elicitation Of A Domain Ontology (Audrey Britton)
This document describes MELIS (Meaning Elicitation and Lexical Integration System), a tool developed to support the annotation of data sources with conceptual and lexical information during the ontology generation process in MOMIS (Mediator envirOnment for Multiple Information Systems). MELIS improves upon MOMIS by enabling more automated annotation of data sources and by extracting relationships between lexical elements using lexical and domain knowledge to provide a richer semantics. The document outlines the MOMIS architecture integrated with MELIS and explains how MELIS supports key steps in the ontology creation process, including source annotation, common thesaurus generation, and relationship extraction.
Mapping of extensible markup language-to-ontology representation for effectiv... (IAESIJAI)
Extensible markup language (XML) is well known as the standard for data exchange over the internet. It is flexible and highly expressive in representing relationships between stored data, yet its structural complexity and semantic relationships are not well expressed. Ontology, on the other hand, models structural, semantic, and domain knowledge effectively; combined with visualization, it gives a closer view tailored to user requirements. In this paper, we propose several mapping rules for transforming XML into an ontology representation. We then show how the ontology is constructed from the proposed rules using the sample domain ontology of the University of Wisconsin-Milwaukee (UWM) and the Mondial datasets. We also examine the schemas, query workload, and evaluation to derive extended knowledge from the existing ontology. The correctness of the ontology representation is demonstrated by its support for various types of complex queries in the SPARQL (simple protocol and resource description framework query language) language.
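A single mapping rule of this kind can be sketched quickly: an XML element that has child elements becomes a class, and a leaf element becomes a datatype property of its parent's class. This is an assumed simplification for illustration, not the paper's full rule set:

```python
import xml.etree.ElementTree as ET

def xml_to_ontology(elem, classes=None, properties=None):
    """Collect (class names, (domain class, property) pairs) from an XML
    tree under one illustrative rule: inner element -> class,
    leaf element -> datatype property of the enclosing element's class."""
    classes = classes if classes is not None else set()
    properties = properties if properties is not None else set()
    for child in elem:
        if len(child):                       # has sub-elements -> class
            classes.add(child.tag)
            xml_to_ontology(child, classes, properties)
        else:                                # leaf -> datatype property
            properties.add((elem.tag, child.tag))
    classes.add(elem.tag)
    return classes, properties

doc = ET.fromstring(
    "<university><course><title>DB</title><credits>6</credits></course></university>")
print(xml_to_ontology(doc))
```

Real rule sets must also handle attributes, ID/IDREF links, and repeated elements, which is where the semantic relationships that plain XML under-expresses get recovered.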
This document discusses ontology-based data access. It begins by defining ontology as a representation of concepts and relationships that define a domain. It then provides examples of ontology elements like concepts, attributes, and relations. It describes how ontologies can be used to share understanding, enable knowledge reuse, and separate domain from operational knowledge. The document outlines the process for developing ontologies including scope, capture, encoding, integration, and evaluation. It discusses using ontologies to provide a user-oriented view of data and facilitate query access across data sources. The document concludes by discussing ongoing work on semantic query analysis and graphical ontology mapping tools.
INFERENCE BASED INTERPRETATION OF KEYWORD QUERIES FOR OWL ONTOLOGY (IJwest)
This paper presents a model for interpreting keyword queries over OWL ontologies that considers OWL axioms and restrictions to provide more precise answers to user queries. The model maps keywords from user queries to ontological elements and identifies phrases to select an appropriate SPARQL query template. The template is populated using inferred results from applying OWL restrictions to generate a formal SPARQL query for execution over the ontology. By addressing OWL features, the model aims to leverage the full capabilities of OWL knowledge bases to better understand users' information needs.
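The template-population step can be illustrated as follows, once keywords have already been mapped to ontological elements. The mapping, URIs, and template shape are assumptions for the sketch, not the paper's actual templates:

```python
def fill_template(keywords, mapping):
    """Populate a simple SPARQL template from keywords already mapped to
    ontological elements. `mapping` is keyword -> (kind, URI), where
    kind is 'class' or 'property'. All names here are hypothetical."""
    cls = next(uri for kw, (kind, uri) in mapping.items()
               if kind == "class" and kw in keywords)
    props = [uri for kw, (kind, uri) in mapping.items()
             if kind == "property" and kw in keywords]
    body = "".join(f"  ?x <{p}> ?v{i} .\n" for i, p in enumerate(props))
    return f"SELECT ?x WHERE {{\n  ?x a <{cls}> .\n{body}}}"

mapping = {"professor": ("class", "http://ex.org/Professor"),
           "teaches":   ("property", "http://ex.org/teaches")}
print(fill_template(["professor", "teaches"], mapping))
```

The paper's distinctive step, applying OWL restrictions and inference before filling the template, would enrich `mapping` with inferred elements rather than relying only on direct keyword matches.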
Semantic Web: Technologies and Applications for Real-World (Amit Sheth)
Amit Sheth and Susie Stephens, "Semantic Web: Technologies and Applications for Real-World," Tutorial at the 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
An Ontology-Based Information Extraction Approach For Résumés (Richard Hogue)
This document discusses developing an ontology-driven information extraction system called the Ontology-based Résumé Parser (ORP) to extract information like experiences, qualifications, education, and personal details from millions of résumés in English and Turkish. The system uses various domain ontologies within its Ontology Knowledgebase to semantically parse résumés and match concepts. It aims to assist with expert finding and skills aggregation by analyzing data semantically rather than just syntactically matching keywords. The Résumé Ontology is described in detail to represent information typically included in résumés through semantic annotations.
Implementation of multidimensional databases with document-oriented NoSQL
Implementation of NoSQL data warehouses in document-oriented NoSQL databases.
A Review on Evolution and Versioning of Ontology Based Information Systems (iosrjce)
This document provides a review of existing approaches to ontology evolution and versioning. It begins by defining ontologies and discussing why evolution and versioning are needed as ontologies are used in information systems. It then outlines some existing solutions for ontology evolution and version management, noting different languages used to conceptualize ontologies. Challenges of ontology versioning and evolution are discussed. Increased usage of ontologies in different domains is reviewed. Finally, some available tools for ontology change management are mentioned.
SWSN UNIT-3.pptx (gowthamnaidu0986)
Ontology engineering involves constructing ontologies through various methods. It begins with defining the scope and evaluating existing ontologies for reuse. Terms are enumerated and organized in a taxonomy with defined properties, facets, and instances. The ontology is checked for anomalies and refined iteratively. Popular tools for ontology development include Protégé and WebOnto. Methods like Methontology and the On-To-Knowledge methodology provide processes for building ontologies from scratch or reusing existing ones. Ontology sharing requires mapping between ontologies to allow interoperability, and libraries exist for storing and accessing ontologies.
ONTOLOGY SERVICE CENTER: A DATAHUB FOR ONTOLOGY APPLICATION (IJwest)
With the growth of data-oriented research in the humanities, a large number of research datasets have been created and published through web services. However, discovering, integrating, and reusing these distributed, heterogeneous research datasets is a challenging task. Ontology is the connective tissue between digital humanities resources, providing a good way for people to discover and understand these datasets. With the release of more and more linked open data and knowledge bases, a large number of ontologies have been produced at the same time. These ontologies have different publishing formats, consumption patterns, and interaction styles, which are not conducive to users' understanding of the datasets or to the reuse of the ontologies. The Ontology Service Center (OSC) platform consists of an Ontology Query Center and an Ontology Validation Center, mainly using linked data and ontology-based technologies. The Ontology Query Center realizes ontology publishing, querying, data interaction, and online browsing, while the Ontology Validation Center can verify the status of particular ontologies as used in linked datasets. The empirical part of the paper uses a Confucius portrait as an example of how OSC can be used in the semantic annotation of images. In short, the purpose of this paper is to construct an applied ecology of ontology that promotes the development of knowledge graphs and the spread of ontologies.
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework - MobSF is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers to identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. However, we have mentioned every productivity app included in Office 365. Additionally, we have suggested the migration situation related to Office 365 and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Computer Science & Information Technology (CS & IT)
expressiveness. The combination of those two fields provides solutions for managing knowledge semantically and efficiently, with great performance.
The Web Ontology Language (OWL) [1] offers developers more expressive power than the eXtensible Markup Language (XML) and the Resource Description Framework (RDF) [6]. With a wide formal vocabulary, OWL [5] is a formalism for building knowledge bases. However, when the quantity of knowledge is large and the OWL domain is voluminous, it becomes complex to handle it as a document in a simple file format. It is therefore beneficial to embed such ontologies in a database management system and to manage them as databases. The proposed approach stores metadata and ontology instances in a persistent and optimal way, while preserving semantics and constraints at an abstract level.
From an engineering point of view, it is important to implement a database-based ontology on top of an existing database system, and it is also interesting to integrate ontology features with an existing database model. Using an existing infrastructure does not prevent the ontology-based database from being native.
OWL comprises three sublanguages [1]: OWL Lite, characterized by a hierarchy and a simple constraint mechanism; OWL DL, based on description logic, which offers a high degree of expressiveness while ensuring computational completeness and decidability; and OWL Full, which offers maximal expressiveness without guaranteeing completeness or decidability. OWL DL has been chosen here to express the knowledge base. It contains all the constructs of OWL with some limitations, such as the separation of types (a class cannot simultaneously be an individual and a property); the important point is not to confuse data and metadata.
A semantic web application requires storing and manipulating enormous amounts of data. Storing an ontology and its instances, and processing them from a user application, requires mechanisms to cope, on the one hand, with a huge number of concepts and the relations between them and, on the other hand, with large amounts of instances. The relational database is a good repository to ensure the persistence of ontology instances, owing to its maturity, performance and features. For this reason, it has been chosen in this paper to embed OWL DL knowledge bases.
To benefit both from the structure expressed in OWL and from the database system that embeds the ontology instances, this paper provides a solution for making a graph-structured document such as OWL explicit in relational tables while preserving constraints and relationships. It aims at ensuring the persistence of a huge number of OWL concepts and a large amount of instances in a native way, without resorting to the classical approach of triples embedded in a three-column relational table [17]. The objective of this work is to present a mechanism for storing OWL concepts and instances in relational tables while preserving semantics and without losing data. The main advantage of this solution is to have a specification and conception in the OWL DL language, a logical model in the relational model, and a physical storage in the relational database. The conceptual model thus stays close to the logical model with regard to preserving data, links and constraints. For this reason, this work falls within the field of Ontology-Based Databases.
This paper is organized as follows: the next section presents related work on this subject, section (3) presents the principles of the proposed approach for storing an OWL ontology in relational tables, and section (4) details the mechanism for storing OWL ontology concepts and facts, followed by a conclusion and some perspectives.
2. RELATED WORK
In order to manage ontologies efficiently and to ensure a robust persistence, several knowledge
management systems have been proposed. Each work in this context uses its own strategies to
embed ontologies in a persistent data model.
Pierra et al. have proposed a database architecture model called "Ontology-Based Database" (OntoDB) [10]. The latter separates the implementation of the ontology from that of the data. The solution consists of two parts: representing the primitives of the ontology model in a meta-schema, and defining, once and for all, the physical storage of ontologies from RDF triples using the relational and object-relational models. ONTOMS [3], developed by Myung et al., is another OWL database management system architecture, in which the data is physically stored in classes modeled as relational tables. It also performs complex operations such as inverseOf, symmetric and transitive properties, and reasoning over instances. ONTOMS also evaluates queries expressed in OWL-QL [4]. M. Shoaib has developed ERMOS [11], a solution that provides an efficient transformation of OWL concepts into relational tables to store the ontology and to allow easy and fast knowledge retrieval through semantic queries. JENA [18] is a framework for building Semantic Web applications. It allows processing RDF and OWL ontologies by providing a container for collections of RDF triples. The implementation of SPARQL [8] is used in JENA to query RDF triples. Although JENA provides interaction with OWL ontologies, its OWL instances, like its RDF data, are based on RDF triples.
The two architectures ONTOMS and ERMOS adopt a representation of OWL that is richer in concepts and semantics compared with the other works presented above. The challenge in this kind of work is to store ontologies in a correct, coherent, scalable and efficient way, in order to retrieve knowledge by semantic inference. In the literature, nearly all works aiming at managing ontologies as databases use the relational model as the logical and physical model to shelter the ontology-based data.
All the preceding systems are founded on an external middleware layer added on top of a database management system engine. These systems keep users far from the ontology instances and do not provide the ability to interact directly with the stored ontology facts.
The triple-based approach has been used by several systems that aim to ensure the persistence of OWL ontologies. It solves, to some extent, the issue of integrating semantics into the database field, and improves the way of storing and dealing with large amounts of OWL instances. However, the triple-based approach presents a number of drawbacks, such as:
• Losing the expressive power of the OWL ontology.
• Storing all triples extracted from the OWL ontology in the same table, which makes self-joins complex and degrades performance when retrieving instances.
• Scanning the whole triple table to reach the needed triples.
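The self-join drawback can be illustrated with a minimal sketch (hypothetical data, using SQLite through Python's sqlite3 module): even retrieving the names of one individual's friends from a single triple table already requires joining the table with itself, once per extra predicate in the query.

```python
import sqlite3

# A single three-column table holds every triple of the ontology.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
db.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("john", "HasFriend", "mary"),
    ("john", "name", "John"),
    ("mary", "name", "Mary"),
])

# Fetching the names of John's friends needs a self-join: every additional
# predicate in a query adds another scan of the same table.
rows = db.execute("""
    SELECT t2.o
    FROM triples t1
    JOIN triples t2 ON t2.s = t1.o AND t2.p = 'name'
    WHERE t1.s = 'john' AND t1.p = 'HasFriend'
""").fetchall()
print(rows)  # [('Mary',)]
```

With native class tables, the same question would be a direct lookup in a `Human` table instead of two scans of one monolithic triple table.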
Several works based on NoSQL [15] models use the structure offered by the XML language to store and locate data [16]. XML represents a significant evolution of the database concept for storing large volumes of data or documents. An XML database defines a logical model in an XML document, and stores and retrieves documents according to that model. The graph structure of ontology languages, inspired by XML, is far from the tables of the relational model. For this purpose, some works deal with the mapping or correspondence of the ontology model with the XML tree [2]. This seems more relevant than its correspondence with relational tables, since XML is the first pillar on which assertional languages and ontologies are built. Nevertheless, XML databases are built on relational engines, and applying a mapping on top of another mapping can degrade performance and cause data loss.
Besides, other works go in the opposite direction and aim to extract an ontology from a relational database schema [12]. These works construct a local ontology from data that already exists in a relational database. Their goal is not to solve the problem of huge ontologies and instance sets, but to deal semantically with data stored in the relational model.
In summary, storing ontology instances in a file system makes querying those instances difficult and inefficient. To cope with huge ontologies, most works in the literature use mapping mechanisms from the ontology into relational or object databases, which does not perform well given the large ontology vocabulary and the complexity of the graph. The approach proposed in this paper allows storing OWL concepts and instances in relational tables without losing semantics or data, and enables users to interact with the data directly. Compared with existing methods [13] [14], the proposed approach faithfully preserves the class hierarchy and the relationships between concepts, and in addition provides the capability to interact directly with instances.
3. OVERVIEW OF THE ARCHITECTURE
The architecture for storing an OWL ontology in relational tables results from combining aspects of the database field with those of the semantic web and knowledge representation. This architecture is designed both to allow the efficient management of ontology concepts and instances, and to ensure semantic search and update regardless of the physical data structure.
The essential functions of the proposed system are (see Figure 1):
- Storing and manipulating the OWL ontology in the relational model. A meta-concept table named "OWLtable" represents the classes and the relationships between them, while the properties provide the predicates between classes and individuals. From this table, the OWL ontology can be reconstructed and represented as a graph or as an OWL expression. In fact, the only OWL ontology domain that exists is the one stored in the relational table "OWLtable". This approach brings the semantic conceptual model and the logical model closer together, and avoids the loss of concepts from which architectures based on transformation mechanisms and mapping solutions suffer.
- Storing and manipulating ontology instances and facts. A set of relational tables represents classes and some properties, and holds the individuals of the OWL ontology as tuples. The meaning of each tuple is preserved, and each instance is associated with an ontological concept stored in "OWLtable" and referenced by both table name and attribute. Instance tables and instances must conform to the structure of the OWL concepts and match their constraints. In other words, any operation on the set of instance tables should be checked and validated against the concept structure and constraints stored in "OWLtable".
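As a minimal sketch of these two kinds of tables (the column types and the use of SQLite are illustrative assumptions, not part of the proposal), the meta-concept table and one instance table could be declared as follows:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Meta-concept table: one record per OWL concept (class or property).
db.execute("""
    CREATE TABLE OWLtable (
        ID           TEXT PRIMARY KEY,  -- concept name
        Concept      TEXT NOT NULL,     -- 'Class', 'ObjectProperty' or 'DatatypeProperty'
        Relationship TEXT               -- super class, or {domain}{range}{cardinalities}
    )
""")
db.execute("INSERT INTO OWLtable VALUES ('Human', 'Class', 'Thing')")

# An instance table for the class 'Human'; any operation on it should be
# validated against the concept structure and constraints held in OWLtable.
db.execute("CREATE TABLE Human (CID INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO Human VALUES (1, 'John')")
print(db.execute("SELECT * FROM Human").fetchall())  # [(1, 'John')]
```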
Figure 1: Storing ontology architecture.
The proposed architecture consists of a native OWL database engine built on a relational database system and integrated with the relational engine (definition and manipulation language). This approach allows storing, accessing and searching the OWL ontology using relational tables, ensuring the persistence of data linked to OWL concepts and properties.
In fact, neither the OWL document nor the OWL instances exist as such; what exists is their equivalent in the relational database, namely "OWLtable" and the instance tables. The purpose of "OWLtable" is to define the concepts of the OWL ontology, so that the instance tables form an instance document describing OWL individuals and facts that conforms to "OWLtable".
"OWLtable" contains elements such as classes, object properties, datatype properties and constraints, which determine the content of the instance tables and the hierarchy of elements.
This approach allows manipulating an OWL ontology in relational tables. Furthermore, accessing, reasoning over and querying the OWL ontology are performed in the persistence layer.
The next section details how relational tables are used to store ontology classes, properties and individuals, making them ready for any later operation.
4. STORING OWL ONTOLOGY INTO RELATIONAL TABLES
The mechanism adopted to manage an OWL ontology in a relational engine comprises a number of actions, as shown in Figure 2. Those actions produce a meta-concept table named "OWLtable" that acts as the OWL ontology, and a set of relational tables derived from the ontological concepts, which embed the facts and instances of the OWL ontology. For this purpose, an ordered process of steps is designed here to create a set of relational tables that represent OWL concepts and facts without losing the concept hierarchy, links, constraints or facts. In other words, this ordered process ensures the correspondence between the semantic structure of the ontology, a graph with relationships, and the syntactic logical structure in the form of relational tables, to ensure the persistence of instances with the best possible performance.
[Figure 1 diagram: on top of the relational engine (security, confidentiality, privileges, indexation…), a relational database holds the meta-concept table "OWLtable" with an annotation table, and a set of fact tables (instance table1 … instance tablen); four processes, storing the OWL ontology, storing OWL facts, updating the OWL ontology and updating OWL facts, manage the OWL ontology and its facts.]
Figure 2: A mechanism for storing an OWL ontology into relational tables.
As shown in Figure (2), the first step is the creation of the meta-concept table "OWLtable"; it is then filled with the class hierarchy and the properties. The next step is to create the database, a set of instance tables obtained by parsing the classes, datatype properties and object properties. This set of instance tables holds the facts.
In the following subsections, a number of processes and rules detail how classes and properties are stored in a meta-concept table that keeps the hierarchy and the relationships between concepts, and how facts and instances are stored in the relational tables that hold the database of the OWL knowledge base. An example OWL ontology domain is used as illustration.
4.1. Storing OWL Class into Relational Table
The basic concept of an ontology is the class. The most important step, and the first rule, is therefore to build the corresponding relational table that embeds the ontology classes. It is around classes that every object property and datatype property is defined, so the first correspondence applies to classes in order to support the other OWL concepts later on.
[Figure 2 diagram: starting from the OWL ontology specification (an OWL graph or OWL expression), the step of storing OWL classes fills the meta-concept OWLtable with the class hierarchy; the step of storing OWL properties completes OWLtable with the class hierarchy and relationships; finally, creating the relational database produces the set of individual tables (OWLtable plus instance table1 … instance tablen), forming the OWL DB.]
Figure 3: The process of storing OWL classes into the relational table.
The process of storing OWL classes into the relational table named "OWLtable" locates every OWL class, regardless of its nature and position in the OWL graph, and saves it together with the information needed to keep the hierarchical structure and the type of every relationship with another class, as shown in Figure (3). This process is based on the following rule:
RULE 1: for each class in the OWL ontology, a record is inserted in "OWLtable". This record specifies that the concept is a class and identifies its super class. In this process, the super class of root classes is "Thing", the super class of all classes in the OWL specification [1].
EXAMPLE: the following classes, presented in OWL syntax:
<owl:Class rdf:ID="Human"> …</owl:Class>
<owl:Class rdf:ID="Man"><rdfs:subClassOf rdf:resource="#Human"/></owl:Class>
<owl:Class rdf:ID="Woman"><rdfs:subClassOf rdf:resource="#Human"/></owl:Class>
<owl:Class rdf:ID="Country"> …</owl:Class>
<owl:Class rdf:ID="Town">…</owl:Class>
are inserted in OWLtable as shown in Table (1).
[Figure 3 flowchart: locate the root classes and insert each pair (class = class name, subclass = "thing") into a working list; while the list is not empty, remove a tuple from the list and insert the class into OWLtable (ID = class name, Concept = 'class', Relationship = super class); if the class has subclasses, read its direct subclasses and insert the pairs (subclass, subclass = class) into the list; when the list is empty, the process ends.]
Table 1: OWLtable containing the class hierarchy.
ID Concept Relationship
Human Class Thing
Man Class Human
Woman Class Human
Country Class Thing
Town Class Thing
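The process of Figure 3 and RULE 1 can be sketched in Python (a minimal illustration; the in-memory `hierarchy` dictionary is a hypothetical stand-in for the parsed OWL class graph). It produces the records of Table 1, up to ordering:

```python
# Hypothetical in-memory view of the class hierarchy: class -> direct subclasses.
hierarchy = {
    "Human": ["Man", "Woman"],
    "Country": [],
    "Town": [],
}

def store_classes(hierarchy):
    """RULE 1: insert one (ID, Concept, Relationship) record per class,
    starting from the root classes whose super class is 'Thing'."""
    owl_table = []
    # Root classes are those that never appear as someone's subclass.
    subclasses = {c for subs in hierarchy.values() for c in subs}
    todo = [(c, "Thing") for c in hierarchy if c not in subclasses]
    while todo:                       # breadth-first walk of the hierarchy
        cls, super_cls = todo.pop(0)
        owl_table.append((cls, "Class", super_cls))
        for sub in hierarchy.get(cls, []):
            todo.append((sub, cls))
    return owl_table

print(store_classes(hierarchy))
```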
At the end of the process of storing OWL classes into the relational table, a hierarchy of classes is obtained from the records of "OWLtable", which is then considered the meta-concept table of the knowledge base. The next section is devoted to enriching the meta-concept "OWLtable" with another important ontological concept: the OWL property.
4.2. Storing OWL Property into Relational Table
An OWL property describes a kind of relationship from individuals to individuals, or from individuals to data values [1]. To complete the OWL graph in the relational domain, and thus enrich the class hierarchy already stored in the meta-concept "OWLtable", classes linked by predicates are treated by the process shown in Figure (4). The process deals with several ontological concepts: object properties, datatype properties and cardinality constraints.
Figure 4: The process of storing OWL properties into the relational table.
The process of storing OWL predicates into "OWLtable" covers the two types of relationships between individuals: ObjectProperty and DatatypeProperty. The restrictions that define a property, namely its domain, range and cardinalities, are preserved in "OWLtable". In the case of an ObjectProperty, the generalization is preserved through the super property in the meta-concept table. The property definition is formulated by the expression:
Relationship = {domain} {range} {mincardinality} {maxcardinality} [{supproperty}], as shown in Figure (4).
[Figure 4 flowchart: locate the properties (onProperty) of the classes and insert each property resource with its cardinality constraints as a triple (onProperty, mincardinality, maxcardinality) into a working list; while the list is not empty, remove a property triple and check whether it is an object property: if so, insert it into OWLtable with ID = property name, Concept = 'ObjectProperty' and Relationship = {domain} {range} {mincardinality} {maxcardinality} {supproperty}; otherwise insert it with Concept = 'DatatypeProperty' and Relationship = {domain} {type} {mincardinality} {maxcardinality}.]
This process is based on the following three rules:
RULE 1: for each onProperty in an OWL class, a record is inserted in "OWLtable". This record specifies that the concept is a property and details the predicate through a set of restrictions: domain, range and cardinalities.
RULE 2: the type of the property is obtained from the property statement by checking its ID. There are two types: ObjectProperty and DatatypeProperty.
RULE 3: the super property of an ObjectProperty is taken into account in this process and is appended at the end of the relationship expression:
Relationship = {domain} {range} {mincardinality} {maxcardinality} {supproperty}.
As an example, consider the following set of statements about properties presented in OWL
syntax:
<!-- classes definition -->
<owl:Class rdf:ID="Human">
 <rdfs:subClassOf><owl:Restriction>
  <owl:onProperty rdf:resource="#HasFriend"/>
  <owl:minCardinality rdf:datatype="&xsd;int">0</owl:minCardinality>
 </owl:Restriction></rdfs:subClassOf>
 <rdfs:subClassOf><owl:Restriction>
  <owl:onProperty rdf:resource="#HasFather"/>
  <owl:cardinality rdf:datatype="&xsd;int">1</owl:cardinality>
 </owl:Restriction></rdfs:subClassOf>
 <rdfs:subClassOf><owl:Restriction>
  <owl:onProperty rdf:resource="#HasSpouse"/>
  <owl:maxCardinality rdf:datatype="&xsd;int">1</owl:maxCardinality>
 </owl:Restriction></rdfs:subClassOf>
 <rdfs:subClassOf><owl:Restriction>
  <owl:onProperty rdf:resource="#name"/>
  <owl:cardinality rdf:datatype="&xsd;int">1</owl:cardinality>
 </owl:Restriction></rdfs:subClassOf>
 <rdfs:subClassOf><owl:Restriction>
  <owl:onProperty rdf:resource="#mail"/>
  <owl:minCardinality rdf:datatype="&xsd;int">0</owl:minCardinality>
 </owl:Restriction></rdfs:subClassOf>
</owl:Class> …
<!-- object properties definition -->
<owl:ObjectProperty rdf:ID="HasFriend">
 <rdfs:domain rdf:resource="#Human"/>
 <rdfs:range rdf:resource="#Human"/></owl:ObjectProperty>
<owl:ObjectProperty rdf:ID="HasFather">
 <rdfs:domain rdf:resource="#Human"/>
 <rdfs:range rdf:resource="#Man"/></owl:ObjectProperty>
<owl:ObjectProperty rdf:ID="HasSpouse">
 <rdfs:domain rdf:resource="#Human"/>
 <rdfs:range rdf:resource="#Human"/></owl:ObjectProperty> …
<!-- datatype properties definition -->
<owl:DatatypeProperty rdf:ID="name">
 <rdfs:domain rdf:resource="#Human"/>
 <rdfs:range rdf:resource="&xsd;string"/></owl:DatatypeProperty>
<owl:DatatypeProperty rdf:ID="mail">
 <rdfs:domain rdf:resource="#Human"/>
 <rdfs:range rdf:resource="&xsd;string"/></owl:DatatypeProperty> …
These properties are inserted in OWLtable as shown in Table (2).
Table 2: OWLtable containing the class hierarchy and the properties.
ID Concept Relationship
Human Class Thing
Man Class Human
Woman Class Human
Country Class Thing
Town Class Thing
HasFriend ObjectProperty {Human}{Human}{0}{unlimited}{does not exist}
HasFather ObjectProperty {Human}{Man}{1}{1}{does not exist}
HasSpouse ObjectProperty {Human}{Human}{0}{1}{does not exist}
name DatatypeProperty {Human}{String}{1}{1}
mail DatatypeProperty {Human}{String}{0}{unlimited}
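Rules 1-3 for properties can be sketched as follows (a minimal illustration; the `properties` list is a hypothetical stand-in for the parsed property statements). It builds the Relationship expressions of Table 2:

```python
# Hypothetical in-memory view of the property statements parsed from the ontology.
properties = [
    # (ID, kind, domain, range, mincard, maxcard, superproperty)
    ("HasFriend", "ObjectProperty",   "Human", "Human",  "0", "unlimited", "does not exist"),
    ("HasFather", "ObjectProperty",   "Human", "Man",    "1", "1",         "does not exist"),
    ("HasSpouse", "ObjectProperty",   "Human", "Human",  "0", "1",         "does not exist"),
    ("name",      "DatatypeProperty", "Human", "String", "1", "1",         None),
    ("mail",      "DatatypeProperty", "Human", "String", "0", "unlimited", None),
]

def property_record(pid, kind, domain, rng, mincard, maxcard, sup):
    """RULES 1-3: build the Relationship expression; the super property is
    appended only for object properties (RULE 3)."""
    rel = "{%s}{%s}{%s}{%s}" % (domain, rng, mincard, maxcard)
    if kind == "ObjectProperty":
        rel += "{%s}" % sup
    return (pid, kind, rel)

owl_table_rows = [property_record(*p) for p in properties]
print(owl_table_rows[0])
# ('HasFriend', 'ObjectProperty', '{Human}{Human}{0}{unlimited}{does not exist}')
```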
At the end of the process of storing OWL properties into the meta-concept "OWLtable", a complete OWL graph is embedded in the relational database, with its class hierarchy and all its predicates. The next section presents the way facts are stored and defines the instance relational tables that conform to "OWLtable".
4.3. Storing OWL Facts into Relational Tables
OWL datatype properties give "facts" that link individuals to data values [1]. Facts are typically statements indicating the class membership of individuals and the property values of individuals. In the relational model [19], a table is a set of tuples (individuals) that share the same attributes (table membership). A tuple usually represents an object and information about that object; all the data referenced by an attribute belong to the same domain and conform to the same constraints. By analogy, for each class that has individuals and individual property values, a relational table bearing the name of the class, with attributes that are none other than the OWL datatype properties, is the suitable container to ensure the persistence of OWL instances in the relational model, as shown in Figure (5). Besides, to keep the relationships and predicates between classes, a relationship table bearing the name of the property, with foreign key attributes, is necessary to link the relational tables obtained from parsing the OWL datatype properties, as shown in Figure (6).
Figure 5: Process of creating relational tables to embed datatype properties.
[Figure 5 flowchart: add all OWL datatype property statements to a working list; while the list is not empty, remove one statement, extract the domain class name, and if no relational table with that name exists yet, create one and set a CID primary key attribute in it; then extract the cardinality constraint from the onProperty restriction of the statement: if cardinality = 1 or maxcardinality = 1, set an attribute in the domain class table for the datatype property, typed according to its range; if maxcardinality > 1, create a relational table named after the datatype property ID, set a foreign key attribute in it linked to the primary key of the domain class table, and set an attribute typed according to the range of the datatype property.]
Figure 6: Process of creating relational tables to embed object properties.
After building the meta-concept table “OWLtable” that describes the ontology, the process of
storing facts in relational tables rests on how OWL datatype properties are grouped and then
embedded in relational tables as attributes (Figure 5), and on how OWL object properties are
embedded in relational tables as attributes with a foreign-key feature (Figure 6). The resulting
relational tables are the repository of each part of a fact, whose semantics is preserved in
“OWLtable” and in its corresponding attributes in the tables. The mechanism proposed here to
guarantee an appropriate environment for storing and preserving OWL instances and facts in the
relational domain is based on six rules:
RULE 1: For each datatype property statement, a relational table named after the domain class is
created, unless it has already been created for another datatype property.
RULE 2: OWL classes must be unambiguously identifiable when relational tables are used to store
their facts. A class identifier CID is added for each OWL class as the primary key of its
corresponding relational table.
The relational table obtained is shown in Table 3:
Table 3: Domain class table.

    Human (table name)
    Attribute    Data type
    CID          PK
    ...          ...
RULE 3: If a cardinality constraint restricts an instance of a class to have semantically at most
one value for a datatype property, an atomic elementary attribute is set in the relational table
corresponding to the domain class of this datatype property statement.
EXAMPLE:
<owl:Class rdf:ID="Human">
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="#name"/>
      <owl:cardinality rdf:datatype="&xsd;int">1</owl:cardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
</owl:Class> ...
<owl:DatatypeProperty rdf:ID="name">
  <rdfs:domain rdf:resource="#Human"/>
  <rdfs:range rdf:resource="&xsd;string"/>
</owl:DatatypeProperty> ...
The obtained relational table is shown in Table (4).
Table 4: Domain class table.

    Human (table name)
    Attribute    Data type
    CID          PK
    name         String
    ...          ...
RULE 4: If a cardinality constraint allows an instance of a class to have semantically more than
one value for a datatype property, a relational table named after the datatype property ID is
created. A foreign-key attribute linked to the primary key (CID) of the domain class table is
added in this new relational table.
EXAMPLE:
<owl:Class rdf:ID="Human">
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="#mail"/>
      <owl:minCardinality rdf:datatype="&xsd;int">0</owl:minCardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
</owl:Class> ...
<owl:DatatypeProperty rdf:ID="mail">
  <rdfs:domain rdf:resource="#Human"/>
  <rdfs:range rdf:resource="&xsd;string"/>
</owl:DatatypeProperty> ...
The relational tables obtained are shown in Figure (7).
Figure 7: Multi-valued datatype property. The multi-valued property "mail" is stored in its own
table, keyed back to the Human table (mincardinality = 0, maxcardinality = unbounded):

    Human: CID (PK), name (String), ...
    Mail:  CID (FK to Human.CID), mail (String)
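Assuming the table layout of Figure 7, rules 3 and 4 can be exercised directly against a live database. This sqlite3 sketch (table contents and data values are illustrative, not from the paper) shows one individual holding several mail values through the separate Mail table:

```python
import sqlite3

# Sketch of RULE 3 (single-valued "name" as an atomic column) and
# RULE 4 (multi-valued "mail" as a separate table keyed to Human.CID).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Human (CID INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Mail  (CID INTEGER REFERENCES Human(CID), mail TEXT);
""")
con.execute("INSERT INTO Human VALUES (1, 'Alice')")
# An unbounded maxcardinality allows several mail values per individual.
con.executemany("INSERT INTO Mail VALUES (1, ?)",
                [("alice@a.org",), ("alice@b.org",)])
mails = [m for (m,) in con.execute(
    "SELECT mail FROM Mail JOIN Human USING (CID) "
    "WHERE name = 'Alice' ORDER BY mail")]
print(mails)  # ['alice@a.org', 'alice@b.org']
```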
RULE 5: If a cardinality constraint allows individuals of a class to have semantically more than
one individual for an object property, a relationship table named after the object property ID is
created. Two foreign-key attributes are added in this relationship table to link individuals of the
domain class table to those of the range class table.
EXAMPLE:
<owl:Class rdf:ID="Human">
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="#HasFriend"/>
      <owl:minCardinality rdf:datatype="&xsd;int">0</owl:minCardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
</owl:Class> ...
<owl:ObjectProperty rdf:ID="HasFriend">
  <rdfs:domain rdf:resource="#Human"/>
  <rdfs:range rdf:resource="#Human"/>
</owl:ObjectProperty>
The obtained relational tables are shown in Figure (8).
Figure 8: Relationship table. The object property HasFriend links individuals of the domain class
Human to individuals of the range class Human through two CID foreign keys (mincardinality = 0,
maxcardinality = unbounded):

    Human:     CID (PK), name (String), ...
    HasFriend: CID (FK to domain Human.CID), CID (FK to range Human.CID)
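Assuming the layout of Figure 8, the following sqlite3 sketch illustrates RULE 5; the column names FK1 and FK2 follow the rule's wording in Figure 6 and are our own assumption, as are the data values:

```python
import sqlite3

# Sketch of RULE 5: the object property HasFriend (domain and range both
# Human) becomes a relationship table with two foreign keys into Human.CID.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Human (CID INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE HasFriend (FK1 INTEGER REFERENCES Human(CID),
                            FK2 INTEGER REFERENCES Human(CID));
""")
con.executemany("INSERT INTO Human VALUES (?, ?)",
                [(1, "Alice"), (2, "Bob"), (3, "Carol")])
con.executemany("INSERT INTO HasFriend VALUES (?, ?)", [(1, 2), (1, 3)])
# Join the relationship table twice against Human: once for the domain
# individual (d), once for the range individual (r).
friends = [n for (n,) in con.execute(
    """SELECT r.name FROM HasFriend
       JOIN Human d ON d.CID = FK1
       JOIN Human r ON r.CID = FK2
       WHERE d.name = 'Alice' ORDER BY r.name""")]
print(friends)  # ['Bob', 'Carol']
```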
RULE 6: If a cardinality constraint restricts the individuals of a class to have semantically at
most one individual for an object property, a foreign-key attribute corresponding to the object
property, linked to the primary-key attribute of the range class table, is added in the domain class
table.
EXAMPLE:
<owl:Class rdf:ID="Human">
  <rdfs:subClassOf>
    <owl:Restriction>
      <owl:onProperty rdf:resource="#live"/>
      <owl:cardinality rdf:datatype="&xsd;int">1</owl:cardinality>
    </owl:Restriction>
  </rdfs:subClassOf>
</owl:Class> ...
<owl:ObjectProperty rdf:ID="live">
  <rdfs:domain rdf:resource="#Human"/>
  <rdfs:range rdf:resource="#Town"/>
</owl:ObjectProperty>
The obtained relational tables are shown in Figure (9).
Figure 9: Object property foreign key. The functional object property "live" (mincardinality = 1,
maxcardinality = 1) becomes a foreign-key attribute in the Human table pointing at the Town table:

    Human: CID (PK), name (String), live (FK to Town.CID)
    Town:  CID (PK), Town_name (String)
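Assuming the layout of Figure 9, RULE 6 can be illustrated with a short sqlite3 sketch (the data values are our own):

```python
import sqlite3

# Sketch of RULE 6: a functional object property (cardinality = 1) such as
# "live" becomes a foreign-key column in the domain table Human, pointing
# at the primary key of the range table Town.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Town  (CID INTEGER PRIMARY KEY, Town_name TEXT);
    CREATE TABLE Human (CID INTEGER PRIMARY KEY, name TEXT,
                        live INTEGER REFERENCES Town(CID));
""")
con.execute("INSERT INTO Town VALUES (10, 'Constantine')")
con.execute("INSERT INTO Human VALUES (1, 'Alice', 10)")
# Resolve the object property by joining on the foreign key.
(town,) = con.execute(
    "SELECT Town_name FROM Human JOIN Town ON live = Town.CID "
    "WHERE name = 'Alice'").fetchone()
print(town)  # Constantine
```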
5. CONCLUSIONS
In this paper, we have presented an approach that aims at storing a huge OWL ontology and big
amounts of instances in the relational engine efficiently without losing concepts neither instances.
This approach allows the use of the expressiveness power of the knowledge representation
language “OWL” as well as the facilities, and the efficiency that offer relational model. We have
defined the meta-concept table “OWLtable” that embeds faithfully class hierarchy and OWL
concepts with their relationships and constraints. A set of relational tables are created according
to class and property features in order to store OWL instances. In fact, only “OWLtable” and a set
of instances table are effective to represent natively the whole ontology. That is why users are
able to access, manage and manipulate instances directly. More complex OWL concepts such as
subproperty, value constraints and other class and property description [1] could be added into
meta-concept table to enrich the OWL specification in relational model.
In order to enable querying of ontology instances stored in a relational database, our future work
will focus on refining the proposed architecture to integrate a mechanism for searching both
OWL instances and concepts with a mechanism for updating them. The principle of these
mechanisms is to parse the meta-concept table “OWLtable” before any query is run on the OWL
instance tables. This makes it possible, on the one hand, to check the well-formedness and
validity of the OWL instance tables against “OWLtable” and, on the other hand, to check
constraints and locate the properties that give a semantic dimension to the query.
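As a rough illustration of this principle, a query runner could consult the meta-concept table before touching any instance table. The OWLtable columns used below (concept, property, prop_range), the helper name and the data values are a hypothetical simplification, not the exact schema proposed in this paper:

```python
import sqlite3

# Hypothetical sketch: parse the meta-concept table "OWLtable" before
# running a query on an instance table, so that only properties declared
# for the queried class are accepted.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE OWLtable (concept TEXT, property TEXT, prop_range TEXT);
    CREATE TABLE Human (CID INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO OWLtable VALUES ('Human', 'name', 'string');
    INSERT INTO Human VALUES (1, 'Alice');
""")

def semantic_select(cls, prop, value):
    # Validate the query against OWLtable before querying instance tables.
    declared = con.execute(
        "SELECT 1 FROM OWLtable WHERE concept = ? AND property = ?",
        (cls, prop)).fetchone()
    if declared is None:
        raise ValueError(f"{prop} is not a declared property of {cls}")
    return con.execute(f"SELECT CID FROM {cls} WHERE {prop} = ?",
                       (value,)).fetchall()

print(semantic_select("Human", "name", "Alice"))  # [(1,)]
```

A real implementation would also use OWLtable to check cardinality constraints and to route multi-valued properties to their own tables, as defined by rules 3 to 6.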
REFERENCES
[1] S.Bechhofer, F.V.Harmelen, J.Hendler, I.Horrocks, D.L.McGuinness, P.F.Patel-Schneider &
L.A.Stein, (2004) “OWL Web Ontology Language Reference, W3C Recommendation”,
http://www.w3.org/TR/owl-ref.
[2] T.Bray, J.Paoli, C.M.Sperberg-McQueen, E.Maler & F.Yergeau, (2008) “Extensible Markup
Language (XML) Version 1.0. (fifth edition), W3C Recommendation”, http://www.w3.org/TR/REC-
xml.
[3] M.J.Park, J.H.Lee, C.H.Lee, J.Lin, O.Serres & C.W.Chung, (2007) “ONTOMS: An Efficient
and Scalable Ontology Management System”, Springer Advances in Databases: Concepts, Systems
and Applications, Vol. 4443/2007, pp 975-980.
[4] R.Fikes, P.Hayes & I.Horrocks, (2004) “OWL-QL-A Language for Deductive Query Answering on
the Semantic Web”, Journal of Web Semantics 2(1), pp 19–29.
[5] F.Gandon, C.F.Zucker & O.Corby, (2012) Le web sémantique: Comment lier les données et les
schémas sur le web?, Dunod, France, pp 83-131.
[6] G.Klyne, J.J.Carroll & B.McBride, (2004) “Resource Description Framework (RDF): Concepts and Abstract Syntax, W3C
Recommendation”, http://www.w3.org/TR/2004/REC-rdf-concepts-20040210/.
[7] D.Brickley & B.McBride, (2004) “RDF Vocabulary Description Language 1.0: RDF Schema, W3C
Recommendation”, http://www.w3.org/TR/2004/REC-rdf-schema-20040210/.
[8] E.Prud'hommeaux & A.Seaborne, (2008) “SPARQL Query Language for RDF, W3C
Recommendation”, http://www.w3.org/TR/rdf-sparql-query.
[9] T.Berners-Lee, J.Hendler & O.Lassila, (2001) “The Semantic Web”, Scientific American, 284 (5), pp
34-44.
[10] H.Dehainsala, G.Pierra & L.Bellatreche, (2007) “Ontodb: An ontology-based database for data
intensive applications”. InProceedings of the 12th International Conference on Database Systems for
Advanced Applications (DASFAA'07), Lecture Notes in Computer Science, Springer, pp 497-508.
[11] M.Shoaib & A.Basharat, (2010) “ERMOS: An Efficient Relational Mapping for Ontology Storage”.
2010 IEEE International Conference on Advanced Management Science (ICAMS 2010), Chengdu,
China, pp 399 – 403.
[12] H. A.Santoso, S.C.Haw and Z.T.Abdul-Mehdi, (2011) “Ontology extraction from relational database:
Concept hierarchy as background knowledge”. Knowledge-Based Systems 24, Elsevier, pp 457-464.
[13] E.Vysniauskas & L.Nemuraite, (2006) “Transforming Ontology Representation from OWL to
Relational Database”. Information Technology And Control, Kaunas, Technologija, Vol. 35A, No. 3,
pp 333 - 343.
[14] E.Vyšniauskas, L.Nemuraitė & A.Šukys, (2010) “A hybrid approach for relating OWL 2 ontologies
and relational databases”. In: P. Forbrig, H. Gunther (Eds.): Perspectives in Business Informatics
Research. Proceedings of the 9th international conference, BIR 2010, Rostock, Germany, September
29 - October 1. Berlin-Heidelberg-New York, Springer, 2010, pp 86–101.
[15] J.Han, E.Haihong, G.Le & J.Du, (2011) “Survey on NoSQL database”, in Pervasive Computing and
Applications (ICPCA), 2011 6th International Conference on. IEEE, pp 363–366.
[16] T.Bourbia & M.Boufaida, (2013) “Extension of Databases by semantics: XML Schema for
embedding OWL-DL Ontology”, KMIKS’ 13 International Conference on Knowledge Management,
Information and Knowledge Systems, 18-20 April 2013, Hammamet, Tunisia, pp 219-232.
[17] K.Wilkinson, C.Sayers, H.Kuno & D.Reynolds, (2003) “Efficient rdf storage and retrieval in jena2”.
HP Laboratories Technical Report HPL-2003-266, pp 131–150.
[18] J.J.Carroll, I.Dickinson, C.Dollin, D.Reynolds, A.Seaborne & K.Wilkinson, (2004) “Jena:
implementing the semantic web recommendations”. In WWW Alt. ’04: Proceedings of the 13th
International World Wide Web Conference on Alternate track papers & posters, NY, USA. ACM
Press, pp 74–83.
[19] E.F.Codd, (1970) "A Relational Model of Data for Large Shared Data Banks". Communications of
the ACM 13 (6), pp 377–387.
[20] J.F.Sowa, (2000) Knowledge Representation: Logical, Philosophical, and Computational
Foundations, Brooks Cole Publishing Co., Pacific Grove, CA, USA.
Authors
Tarek Bourbia is affiliated with the LIRE laboratory of the University Constantine 2 in Algeria.
He received his magister degree in 2009 in the field of databases and the semantic web. He is
preparing his PhD thesis in the field of ontological databases.
Mahmoud Boufaïda is a full professor in the Computer Science department of the
University Constantine 2 in Algeria. He heads the research group ‘Information Systems
and Knowledge Bases’. He has published several papers in international conferences
and journals. He has managed and initiated several national and international
projects, including the interoperability of information systems and the integration of
applications in organizations. He has been a program committee member of several
conferences. His research interests include cooperative information systems, web
databases and software engineering.