Ontology languages are used to model the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standard provides a number of modelling languages that differ in their level of expressivity and are organized in the Semantic Web Stack in such a way that each language level builds on the expressivity of the levels below it. Several problems arise when one attempts to use independently developed ontologies. Adapting existing ontologies for new purposes requires that certain operations be performed on them, and these operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontologies as a step towards the formalization of ontological operations using category theory.
Information residing in relational databases and delimited file systems is inadequate for reuse and sharing over the web, since these systems do not adhere to commonly agreed principles for maintaining data harmony. As a result, web resources suffer from a lack of uniformity, as well as from heterogeneity and redundancy. Ontologies have been widely used to solve such problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files serve as individual concepts and are grouped into a particular domain, forming the domain ontology. This domain ontology is then used to capture the CSV data and represent it in RDF format, retaining the links among files or concepts. Datatype and object properties are automatically detected from header fields, which reduces the user involvement needed to generate mapping files. A detailed analysis has been performed on Baseball tabular data, and the result shows a rich set of semantic information.
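As a sketch of the header-to-property mapping described above, the following Python fragment turns one CSV file into N-Triples, treating each row as an individual and each header field as a datatype property. The URIs, the sample data and the key-column convention are illustrative assumptions, not the paper's actual mapping rules.

```python
import csv
import io

def csv_to_ntriples(csv_text, base_uri, key_column):
    """Convert one CSV file into N-Triples: each row becomes an
    individual, each header field a datatype property (sketch)."""
    triples = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        # The key column supplies the subject URI (an assumed convention).
        subject = f"<{base_uri}{row[key_column]}>"
        for header, value in row.items():
            predicate = f"<{base_uri}prop/{header}>"
            # Header fields become datatype properties with literal values.
            triples.append(f'{subject} {predicate} "{value}" .')
    return triples

# Tiny Baseball-like sample (hypothetical data):
sample = "playerID,team,homeRuns\naaronha01,ATL,44\n"
for t in csv_to_ntriples(sample, "http://example.org/", "playerID"):
    print(t)
```

A real pipeline would additionally distinguish object properties (links to other files' concepts) from datatype properties, as the abstract describes.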
The world is witnessing an information revolution of unprecedented speed in the growth of databases of all kinds. Databases overlap in their content and schemas but use different elements and structures to express the same concepts and relations, which may cause semantic and structural conflicts. This paper proposes a new technique, named XDEHD, for integrating heterogeneous eXtensible Markup Language (XML) schemas. The returned mediated schema contains all concepts and relations of the sources without duplication. The technique divides into three steps. First, it extracts all subschemas from the sources by decomposing the source schemas; each subschema contains three levels: ancestor, root and leaf. Second, it matches and compares the subschemas and returns the related candidate subschemas; a semantic closeness function measures the degree to which the concepts of the subschemas are modelled similarly in the sources. Finally, it creates the mediated schema by integrating the candidate subschemas to obtain a minimal and complete unified schema; an association strength function computes how closely each pair in a candidate subschema is related across all data sources, and an element repetition function calculates how many times each element is repeated among the candidate subschemas.
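The decomposition step can be illustrated with a small sketch that enumerates three-level (ancestor, root, leaf) fragments from an XML tree. This is only an approximation of XDEHD's subschema extraction, applied to a plain XML instance rather than an XSD schema; the example document is invented.

```python
import xml.etree.ElementTree as ET

def subschemas(xml_text):
    """Enumerate three-level (ancestor, root, leaf) fragments of an
    XML tree, the unit of comparison in XDEHD-style matching (sketch)."""
    root = ET.fromstring(xml_text)
    # Map each element to its parent so we can look one level up.
    parent_of = {child: node for node in root.iter() for child in node}
    out = []
    for node in root.iter():
        for leaf in node:
            ancestor = parent_of.get(node)
            out.append((ancestor.tag if ancestor is not None else None,
                        node.tag, leaf.tag))
    return out

print(subschemas("<library><book><title/><author/></book></library>"))
```

Each fragment would then be compared against fragments from other sources by the semantic closeness function.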
Improve Information Retrieval and E-Learning Using Mobile Agent Based on Semantic Web Technology
Web-based education and e-learning have become a very important branch of new educational technology. E-learning and web-based courses offer advantages to learners by making access to resources and learning objects fast, just-in-time and relevant, at any time or place. Web-based learning management systems should focus on satisfying e-learners' needs, for example by advising a learner on the most suitable resources and learning objects. Because of the many limitations of Web 2.0 for creating e-learning management systems, Web 3.0, known as the Semantic Web, is now used as a platform for e-learning management systems that overcomes the limitations of Web 2.0. In this paper we present “improve information retrieval and e-learning using mobile agent based on semantic web technology”. The paper focuses on the design and implementation of knowledge-based, reusable, interactive, web-based industrial training activities for the sea ports and logistics sector, using an e-learning system and the Semantic Web to deliver learning objects to learners in an interactive, adaptive and flexible manner. We use the Semantic Web and mobile agents to improve library and course search. The architecture presented in this paper is an adaptation model that converts syntactic search into semantic search. We apply the training at Damietta port in Egypt as a real-world case study, and present one possible application of mobile agent technology based on the Semantic Web to the management of Web Services. This model improves the information retrieval and e-learning system.
The concept hierarchy is the backbone of an ontology, and concept hierarchy acquisition has been a hot topic in the field of ontology learning. This paper proposes a hyponymy extraction method for domain ontology concepts based on cascaded conditional random fields (CCRFs) and hierarchical clustering. It takes free text as the extraction object and adopts CCRFs to identify the domain concepts. First, the low layer of the CCRFs is used to identify simple domain concepts; the results are then sent to the high layer, in which nested concepts are recognized. Next, we adopt hierarchical clustering to identify the hyponymy relations between domain ontology concepts. The experimental results demonstrate that the proposed method is efficient.
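The clustering step above can be sketched as a greedy single-link agglomeration over candidate concepts. The similarity function used here (shared head word) is a stand-in for whatever measure the paper actually uses, and the concept list is invented.

```python
def agglomerate(items, sim, threshold):
    """Greedy single-link agglomerative clustering: repeatedly merge
    the most similar pair of clusters until no pair clears the
    threshold (sketch; the similarity function is supplied by the caller)."""
    clusters = [[x] for x in items]
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single link: similarity of the closest cross-cluster pair.
                s = max(sim(a, b) for a in clusters[i] for b in clusters[j])
                if s >= threshold and (best is None or s > best[0]):
                    best = (s, i, j)
        if best is None:
            return clusters
        _, i, j = best
        clusters[i] += clusters.pop(j)

concepts = ["operating system", "file system", "network protocol"]
head_match = lambda a, b: 1.0 if a.split()[-1] == b.split()[-1] else 0.0
print(agglomerate(concepts, head_match, 0.5))
```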
The logic-based, machine-understandable framework of the Semantic Web often challenges naive users when they try to query ontology-based knowledge bases. Existing research efforts have approached this problem by introducing Natural Language (NL) interfaces to ontologies, which can construct SPARQL queries from NL user queries. However, most efforts were restricted to queries expressed in English, and they often benefited from the advancement of English NLP tools; little research has been done to support querying Arabic content on the Semantic Web using NL queries. This paper presents a domain-independent approach to translating Arabic NL queries to SPARQL by leveraging linguistic analysis. With a special focus on Noun Phrases (NPs), our approach uses a language parser to extract NPs and their relations from Arabic parse trees and match them to the underlying ontology. It then utilizes knowledge in the ontology to group NPs into triple-based representations. A SPARQL query is finally generated by extracting targets and modifiers and interpreting them into SPARQL. The interpretation of advanced semantic features, including negation and conjunctive and disjunctive modifiers, is also supported. The approach was evaluated using two datasets consisting of OWL test data and queries, and the obtained results confirm its feasibility for translating Arabic NL queries to SPARQL.
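The final generation step, assembling triple-based representations into a SELECT query and interpreting negation, can be sketched as follows. The variable names, prefixes and the use of FILTER NOT EXISTS for negation are illustrative assumptions, not necessarily the paper's exact interpretation rules.

```python
def build_sparql(triples, target_var, negated=None):
    """Assemble a SPARQL SELECT query from triple-based
    representations (sketch)."""
    lines = [f"SELECT {target_var} WHERE {{"]
    for s, p, o in triples:
        lines.append(f"  {s} {p} {o} .")
    if negated:
        s, p, o = negated
        # One common way to interpret negation in SPARQL 1.1.
        lines.append(f"  FILTER NOT EXISTS {{ {s} {p} {o} . }}")
    lines.append("}")
    return "\n".join(lines)

# Hypothetical triples produced from an NL query about cities in Egypt:
q = build_sparql([("?city", "rdf:type", "ex:City"),
                  ("?city", "ex:locatedIn", "ex:Egypt")], "?city")
print(q)
```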
The Semantic Web is a vision of information that is understandable by computers. Although there is great exploitable potential, we are still in "Generation Zero'' of the Semantic Web, since there are few compelling real-world applications. Heterogeneity, the volume of data and the lack of standards are problems that could be addressed through nature-inspired methods. The paper presents the most important aspects of the Semantic Web, as well as its biggest issues; it then describes some methods inspired by nature - genetic algorithms, artificial neural networks, swarm intelligence - and the ways these techniques can be used to deal with Semantic Web problems.
Semantic Annotation: The Mainstay of the Semantic Web
Given that the realization of the Semantic Web depends on a critical mass of accessible metadata and on the representation of data with formal knowledge, the metadata generated needs to be specific, easy to understand and well-defined. Semantic annotation of web documents is the most promising way to make the Semantic Web vision a reality. This paper introduces the Semantic Web and its vision (stack layers), together with some concept definitions that help in understanding semantic annotation. Additionally, the paper surveys semantic annotation categories, tools, domains and models.
WebSpa is a tool that allows the quick, intuitive (and even fun) interrogation of arbitrary SPARQL endpoints. WebSpa runs in the web browser and does not require the installation of any additional software. The tool manages a large variety of pre-defined SPARQL endpoints and allows the addition of new ones. A user account makes it possible to save both the queries and their results on the local computer, as well as to edit the queries further. The application is written in both Java and Flex. It uses the Jena and ARQ application programming interfaces to perform the queries, and the results are processed and displayed using Flex.
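A browser-based client like WebSpa ultimately issues HTTP requests against an endpoint. A minimal sketch of building such a request URL (the endpoint, query and parameter names are illustrative of common SPARQL-over-HTTP usage, and no network call is made here):

```python
from urllib.parse import urlencode

def endpoint_request(endpoint, query,
                     fmt="application/sparql-results+json"):
    """Build the HTTP GET URL a client would issue against a SPARQL
    endpoint (sketch; parameter names vary between endpoints)."""
    return endpoint + "?" + urlencode({"query": query, "format": fmt})

url = endpoint_request("https://dbpedia.org/sparql",
                       "SELECT * WHERE { ?s ?p ?o } LIMIT 5")
print(url)
```

A real client would then fetch this URL and parse the JSON result bindings for display.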
Although the use of semantic web technologies in the learning development field is a new research area, some authors have already proposed ideas of how an effective application could operate. Specifically, from an analysis of the literature in the field, we have identified three different types of existing applications that employ these technologies to support learning. These applications aim at: enhancing the reusability of learning objects by linking them to an ontological description of the domain or, more generally, by describing relevant dimensions of the learning process in an ontology; providing a comprehensive authoring system to retrieve and organize web material into a learning course; and constructing advanced strategies to present annotated resources to the user, in the form of browsing facilities, narrative generation and final rendering of a course. In contrast with the approaches cited above, here we propose an approach that is modeled on narrative studies and on their transposition to the digital world. In the rest of the paper, we present the theoretical basis that inspires this approach and show some examples that are guiding our implementation and testing of these ideas within e-learning. Ontologies are recognized as the most important component in achieving semantic interoperability of e-learning resources, and the benefits of their use have already been recognized in the learning technology community. In order to better define the different aspects of ontology applications in e-learning, researchers have proposed several classifications of ontologies. We refer to a general one that differentiates between three dimensions ontologies can describe: content, context, and structure. Most of the present research has been dedicated to the first group of ontologies.
A well-known example of such an ontology is based on the ACM Computing Classification System (ACM CCS) and defined in Resource Description Framework Schema (RDFS). It is used in MOODLE to classify learning objects with the goal of improving search. This chapter will cover the terms of the Semantic Web, e-learning systems design and management in e-learning (MOODLE), some studies that depend on e-learning and the Semantic Web, and the tools that will be used in this paper; lastly, we shall discuss the expected contribution. Special attention will be paid to the above topics.
Data integration is a perennial challenge facing large-scale data scientists. Bio-ontologies are useful in this endeavour as sources of synonyms and also for rules-based fuzzy integration pipelines.
Semantic data integration is the process of using a conceptual representation of the data and of their relationships to eliminate possible heterogeneities.
The Web is a universal medium for information, data and knowledge exchange. The Semantic Web is an extension of the World Wide Web, ``in which information is given well-defined meaning, better enabling computers and people to work in cooperation''\cite{semweb:lee}. RDF, together with SPARQL, provides a powerful mechanism for describing and interchanging metadata on the web. This paper briefly presents the two concepts - RDF and SPARQL - and three of the most popular frameworks (written in Java) that offer support for RDF: Jena, Sesame and JRDF.
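The querying machinery these frameworks provide can be reduced, conceptually, to triple-pattern matching over a graph. A minimal in-memory sketch (the graph contents are invented, and real engines such as Jena's ARQ add parsing, joins and optimization on top of this core idea):

```python
def match(store, pattern):
    """Return the triples in `store` matching a pattern, where None
    plays the role of a SPARQL variable (sketch)."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

graph = [("ex:Jena", "ex:writtenIn", "ex:Java"),
         ("ex:Sesame", "ex:writtenIn", "ex:Java"),
         ("ex:Jena", "ex:supports", "ex:RDF")]
# All subjects written in Java (None acts as a variable):
print(match(graph, (None, "ex:writtenIn", "ex:Java")))
```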
This paper describes the importance of a performant presentation tier. It presents the easiest way of optimizing the client-side code, providing source code examples for good practices. It then shows the correct approach to using CSS and HTML and the impact it has on the website response time. The Ajax technology is briefly described, emphasizing the role of JavaScript and presenting methods for improving its performance. In the end, some popular tools for monitoring and testing web applications are introduced.
Swoogle: Showcasing the Significance of Semantic Search
The World Wide Web hosts vast repositories of information. Retrieving required information from the Internet is a great challenge, since computer applications understand only the structure and layout of web pages and do not have access to their intended meaning. The Semantic Web is an effort to enhance the Internet so that computers can process the information presented on the WWW, interpret it and communicate with it, to help humans find required essential knowledge. The application of ontologies is the predominant approach helping the evolution of the Semantic Web. The aim of our work is to illustrate how Swoogle, a semantic search engine, helps make computers and the WWW more interoperable and more intelligent. In this paper, we discuss issues related to traditional and semantic web searching, and outline how an understanding of the semantics of the search terms can be used to provide better results. The experimental results establish that semantic search provides more focused results than traditional search.
Association Rule Mining Based Extraction of Semantic Relations Using Markov Logic Network
An ontology is a conceptualization of a domain into a human-understandable but machine-readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. Using semantic relations, it would be possible to extract, for example, the whole family tree of a prominent personality from a resource like Wikipedia. In a way, relations describe the linguistic relationships among the entities involved, which is beneficial for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction; however, the existing ontology learning process only produces the concept hierarchy and does not produce the semantic relations between the concepts. Here, we construct predicates and first-order logic formulas, and find the inference and learning weights using a Markov Logic Network. To improve the relations of every input, and to improve the relations between the contents, we propose the concept of ARSRE. This method can find the frequent items between concepts and convert existing lightweight ontologies into formal ones. The experimental results show good extraction of semantic relations compared to the state-of-the-art method.
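The frequent-item step mentioned above can be sketched with a simple pair-counting pass in the spirit of Apriori. The transactions (concept co-occurrences per document) are invented, and ARSRE's actual mining is more elaborate than this.

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Association-rule-style mining of frequently co-occurring
    concept pairs (the frequent-itemset step, sketched)."""
    counts = Counter()
    for t in transactions:
        # Count every unordered concept pair within a transaction once.
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

# Hypothetical concept sets extracted from three documents:
docs = [{"person", "birthplace", "parent"},
        {"person", "parent"},
        {"person", "birthplace"}]
print(frequent_pairs(docs, min_support=2))
```

Frequent pairs would then become candidate semantic relations, to be turned into predicates and weighted by the Markov Logic Network.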
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING
In the last decade, ontologies have played a key technology role in information sharing and agent interoperability in different application domains. In the semantic web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the actual web to its full power and hence achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, ontology mapping is an unavoidable solution. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, of course after resolving the different forms of syntactic, semantic and lexical mismatches. In the contribution presented in this paper, we have integrated a semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most of the existing semi-automatic ontology mapping algorithms, such as Chimaera, Prompt, Onion and Glue. To further enhance the performance of our algorithm, the mapping discovery stage is based on the combination of two sub-modules: the former analyses the concepts' names and the latter analyses their properties. Each of these two sub-modules is itself based on a combination of lexical and semantic similarity measures.
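The two sub-modules can be sketched as a weighted combination of a name similarity and a property-overlap similarity. The specific measures (string ratio, Jaccard overlap) and the 0.5 weighting are illustrative assumptions, not the algorithm's actual choices, and a WordNet-based semantic measure is omitted here.

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Lexical similarity of two concept names (sketch)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def mapping_score(c1, c2, props1, props2, w_name=0.5):
    """Combine a name-based sub-module with a property-based
    sub-module into one mapping score (sketch; weight is assumed)."""
    name = name_similarity(c1, c2)
    union = props1 | props2
    # Jaccard overlap of the two concepts' property sets.
    overlap = len(props1 & props2) / len(union) if union else 0.0
    return w_name * name + (1 - w_name) * overlap

s = mapping_score("Author", "Writer",
                  {"name", "birthDate"}, {"name", "nationality"})
print(round(s, 2))
```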
Many applications require the integration of data from different sources, such as data mining and data/information fusion. The problem facing any such project is that the data are structured in different ways and the terms and their meanings differ from each other. In this paper we discuss the most important of these problems and how to solve them using an ontology.
Translating Ontologies in Real-World Settings (Mauro Dragoni)
To enable knowledge access across languages, ontologies, which are often represented only in English, need to be translated into different languages. The main challenge in translating ontologies is to find the right term with respect to the domain modeled by the ontology itself. Machine translation services may help in this task; however, a crucial requirement is to have the translations validated by experts before the ontologies are deployed. Real-world applications must implement a support system addressing this task in order to relieve the experts' work in validating all translations. In this paper, we present ESSOT, an Expert Supporting System for Ontology Translation. The peculiarity of this system is that it exploits the semantic information of a concept's context to improve the quality of label translations. The system has been tested both within the Organic.Lingua project, by translating the modeled ontology into three languages, and on other multilingual ontologies, in order to evaluate its effectiveness in other contexts. The results have been compared with the translations provided by the Microsoft Translator API, and the improvements demonstrate the viability of the proposed approach.
UNIT III MINING COMMUNITIES
Aggregating and reasoning with social network data, Advanced Representations - Extracting evolution of Web Community from a Series of Web Archive - Detecting Communities in Social Networks - Evaluating Communities - Core Methods for Community Detection & Mining - Applications of Community Mining Algorithms - Node Classification in Social Networks.
The increased potential of ontologies to reduce human interference has a wide range of applications. This paper identifies the requirements for an ontology development platform to support an artificially intelligent web. To facilitate this process, RDF and OWL have been developed as standard formats for sharing and integrating data and knowledge, where knowledge takes the form of rich conceptual schemas called ontologies. Based on this framework, an architectural paradigm is put forward for ontology engineering and the development of ontology applications, together with a development portal designed to support ontology engineering, content authoring and application development, aiming at maximal scalability in the size and complexity of semantic knowledge and flexible reuse of ontology models and ontology application processes in a distributed and collaborative engineering environment.
Although of the semantic web technologies utilization in the learning development field is a new research area, some authors have already proposed their idea of how an effective that operate. Specifically, from analysis of the literature in the field, we have identified three different types of existing applications that actually employ these technologies to support learning. These applications aim at: Enhancing the learning objects reusability by linking them to an ontological description of the domain, or, more generally, describe relevant dimension of the learning process in an ontology, then; providing a comprehensive authoring system to retrieve and organize web material into a learning course, and constructing advanced strategies to present annotated resources to the user, in the form of browsing facilities, narrative generation and final rendering of a course. On difference with the approaches cited above, here we propose an approach that is modeled on narrative studies and on their transposition in the digital world. In the rest of the paper, we present the theoretical basis that inspires this approach, and show some examples that are guiding our implementation and testing of these ideas within e-learning. By emerging the idea of the ontologies are recognized as the most important component in achieving semantic interoperability of e-learning resources. The benefits of their use have already been recognized in the learning technology community. In order to better define different aspects of ontology applications in e-learning, researchers have given several classifications of ontologies. We refer to a general one given in that differentiates between three dimensions ontologies can describe: content, context, and structure. Most of the present research has been dedicated to the first group of ontologies. 
A well-known example of such an ontology is based on the ACM Computer Classification System (ACM CCS) and defined by Resource Description Framework Schema (RDFS). It’s used in the MOODLE to classify learning objects with a goal to improve searching. The chapter will cover the terms of the semantic web and e-learning systems design and management in e-learning (MOODLE) and some of studies depend on e-learning and semantic web, thus the tools will be used in this paper, and lastly we shall discuss the expected contribution. The special attention will be putted on the above topics.
Data integration is a perennial challenge facing large-scale data scientists. Bio-ontologies are useful in this endeavour as sources of synonyms and also for rules-based fuzzy integration pipelines.
Semantic data integration is the process of using a conceptual representation of the data and of their relationships to eliminate possible heterogeneities.
The Web is a universal medium for information, data and knowledge exchange. The Semantic Web is an extension of the World Wide Web, ``in which information is given well-defined meaning, better enabling computers and people to work in cooperation''\cite{semweb:lee}. RDF, together with SparQL, provide a powerful mechanism for describing and interchanging metadata on the web. This paper presents briefly the two concepts - RDF, SparQL - and three of the most popular frameworks (written in Java) that offer support for RDF: Jena, Sesame and JRDF.
This paper describes the importance of a performant presentation tier. It presents the easiest way of optimizing the client-side code, providing source code examples for good practices. It then shows the correct approach to using CSS and HTML and the impact it has on the website response time. The Ajax technology is briefly described, emphasizing the role of JavaScript and presenting methods for improving its performance. In the end, some popular tools for monitoring and testing web applications are introduced.
Swoogle: Showcasing the Significance of Semantic Search
The World Wide Web hosts vast repositories of information. The retrieval of required information from the Internet is a great challenge, since computer applications understand only the structure and layout of web pages and do not have access to their intended meaning. The Semantic web is an effort to enhance the Internet so that computers can process the information presented on the WWW, interpret and communicate with it, and help humans find required essential knowledge. The application of ontology is the predominant approach helping the evolution of the Semantic web. The aim of our work is to illustrate how Swoogle, a semantic search engine, helps make computers and the WWW interoperable and more intelligent. In this paper, we discuss issues related to traditional and semantic web searching. We outline how an understanding of the semantics of the search terms can be used to provide better results. The experimental results establish that semantic search provides more focused results than traditional search.
Association Rule Mining Based Extraction of Semantic Relations Using Markov L...
An ontology is a conceptualization of a domain into a human-understandable but machine-readable format consisting of entities, attributes, relationships and axioms. Ontologies formalize the intensional aspects of a domain, whereas the extensional part is provided by a knowledge base that contains assertions about instances of concepts and relations. Using semantic relations, it would be possible to extract the whole family tree of a prominent personality employing a resource like Wikipedia. In a way, relations describe the semantic relationships among the entities involved, which is beneficial for a better understanding of human language. Relations can be identified from the result of concept hierarchy extraction. The existing ontology learning process only produces a concept hierarchy; it does not produce the semantic relations between the concepts. Here, we construct predicates and first-order logic formulas, and also find inference and learning weights using a Markov Logic Network. To improve the relations of every input, and also the relations between the contents, we propose the concept of ARSRE. This method can find frequent items between concepts and convert existing lightweight ontologies to formal ones. The experimental results show good extraction of semantic relations compared to the state-of-the-art method.
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING
In the last decade, ontologies have played a key technology role for information sharing and agent interoperability in different application domains. In the semantic web domain, ontologies are efficiently used to face the great challenge of representing the semantics of data, in order to bring the actual web to its full power and, hence, achieve its objective. However, using ontologies as common and shared vocabularies requires a certain degree of interoperability between them. To confront this requirement, mapping ontologies is a solution that is not to be avoided. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, of course after resolving the different forms of syntactic, semantic and lexical mismatches. In the contribution presented in this paper, we have integrated the semantic aspect, based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference of our contribution with regard to most of the existing semi-automatic algorithms for ontology mapping, such as Chimaera, Prompt, Onion, Glue, etc. To better enhance the performance of our algorithm, the mapping discovery stage is based on the combination of two sub-modules. The former analyses the concepts' names and the latter analyses their properties. Each of these two sub-modules is itself based on the combination of lexical and semantic similarity measures.
Many applications, such as data mining and data/information fusion, require the integration of data from different sources. The problem facing any such project is that the data are structured in different ways and the terms and their meanings differ from each other. In this paper we discuss the most important problems and how to solve them using ontology.
Translating Ontologies in Real-World Settings (Mauro Dragoni)
To enable knowledge access across languages, ontologies, which are often represented only in English, need to be translated into different languages. The main challenge in translating ontologies is to find the right term with respect to the domain modeled by the ontology itself. Machine translation services may help in this task; however, a crucial requirement is to have translations validated by experts before the ontologies are deployed. Real-world applications must implement a support system addressing this task to relieve the experts' work of validating all translations. In this paper, we present ESSOT, an Expert Supporting System for Ontology Translation. The peculiarity of this system is to exploit semantic information of the concept's context to improve the quality of label translations. The system has been tested both within the Organic.Lingua project, by translating the modeled ontology into three languages, and on other multilingual ontologies, in order to evaluate the effectiveness of the system in other contexts. The results have been compared with the translations provided by the Microsoft Translator API, and the improvements demonstrate the viability of the proposed approach.
UNIT III MINING COMMUNITIES
Aggregating and reasoning with social network data; Advanced Representations; Extracting evolution of Web Community from a Series of Web Archive; Detecting Communities in Social Networks; Evaluating Communities; Core Methods for Community Detection & Mining; Applications of Community Mining Algorithms; Node Classification in Social Networks.
The increased potential of ontologies to reduce human interference has a wide range of applications. This paper identifies requirements for an ontology development platform to innovate an artificially intelligent web. To facilitate this process, RDF and OWL have been developed as standard formats for the sharing and integration of data and knowledge, where knowledge takes the form of rich conceptual schemas called ontologies. Based on this framework, an architectural paradigm is put forward for ontology engineering and the development of ontology applications, together with a development portal designed to support ontology engineering, content authoring and application development, with a view to maximal scalability in the size and complexity of semantic knowledge and flexible reuse of ontology models and ontology application processes in a distributed and collaborative engineering environment.
Semantic Query Optimisation with Ontology Simulation
The Semantic Web is, without a doubt, gaining momentum in both industry and academia. The word “Semantic” refers to “meaning”: a semantic web is a web of meaning. In this fast-changing and result-oriented practical world, gone are the days where an individual had to struggle to find information on the Internet and knowledge management was the major issue. The semantic web has a vision of linking, integrating and analysing data from various data sources and forming a new information stream: a web of databases connected with each other, and machines interacting with other machines, to yield results which are user-oriented and accurate. With the emergence of the Semantic Web framework, the naïve approach of searching for information on the syntactic web is cliché. This paper proposes an optimised semantic search for keywords, exemplified by simulating an ontology of Indian universities, with a proposed algorithm that enables effective semantic retrieval of information which is easy to access and time saving.
Semantic Web: Technologies and Applications for Real-World (Amit Sheth)
Amit Sheth and Susie Stephens, "Semantic Web: Technologies and Applications for Real-World," Tutorial at 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
Semantic Annotation Framework For Intelligent Information Retrieval Using KIM...
Due to the explosion of information/knowledge on the web and the wide use of search engines for desired information, the role of knowledge management (KM) is becoming more significant in an organization. Knowledge management in an organization is used to create, capture, store, share, retrieve and manage information efficiently. The semantic web, an intelligent and meaningful web, tends to provide a promising platform for knowledge management systems and vice versa, since they have the potential to give each other the real substance for machine-understandable web resources, which in turn will lead to intelligent, meaningful and efficient information retrieval on the web. Today, the challenge for the web community is to integrate the distributed heterogeneous resources on the web with the objective of an intelligent web environment focusing on data semantics and user requirements. Semantic Annotation (SA), which is about assigning to the entities in the text links to their semantic descriptions, is being widely used. Various tools like KIM, Amaya, etc. may be used for semantic annotation.
Web of Data as a Solution for Interoperability. Case Studies (Sabin Buraga)
The paper draws several considerations regarding the use of Web of Data (Semantic Web) technologies – such as metadata vocabularies and ontological constructs – to increase the degree of interoperability within distributed systems. A number of case studies are presented to express the knowledge in a platform- and programming-language-independent manner.
Towards Ontology Development Based on Relational Database
Ontology is defined as a formal, explicit specification of a shared conceptualization. It has been widely used in almost all fields, especially artificial intelligence, data mining, and the semantic web. It is constructed using various sets of resources. It has now become a very important task to improve the efficiency of ontology construction. To improve efficiency, an automated method of building an ontology from a database resource is needed. Since manual construction is found to be erroneous and below expectations, automatic construction of an ontology from a database was innovated. Construction rules for ontology building from relational data sources are then put forward. Finally, an ontology for "automated building of ontology from relational data sources" has been implemented.
An Approach to Owl Concept Extraction and Integration Across Multiple Ontolog...
The increase in the number of ontologies on the Semantic Web and the endorsement of OWL as the language of discourse for the Semantic Web have led to a scenario where research efforts in the field of ontology engineering may be applied to make ontology development through reuse a viable option for ontology developers. The advantages are twofold: when existing ontological artefacts from the Semantic Web are reused, semantic heterogeneity is reduced, which helps interoperability, the essence of the Semantic Web. From the perspective of ontology development, the advantages of reuse are in terms of cutting down cost as well as development time, since ontology engineering requires expert domain skills and is a time-consuming process. We have devised a framework to address the challenges associated with reusing ontologies from the Semantic Web. In this paper we present the methods adopted for extraction and integration of concepts across multiple ontologies.
A Comparative Study Ontology Building Tools for Semantic Web Applications
Ontologies have recently received popularity in the area of knowledge management and knowledge sharing, especially after the evolution of the Semantic Web and its supporting technologies. An ontology defines the terms and concepts (meaning) used to describe and represent an area of knowledge. The aim of this paper is to identify all possible existing ontologies and ontology management tools (Protégé 3.4, Apollo, IsaViz & SWOOP) that are freely available and review them in terms of: a) interoperability, b) openness, c) easiness to update and maintain, d) market status and penetration. The results of the review of ontologies are analyzed for each application area, such as transport, tourism, personal services, health and social services, natural languages and other HCI-related domains. Ontology building/management tools are used by different groups of people for performing diverse tasks. Although each tool provides different functionalities, most users use only one, because they are not able to interchange their ontologies from one tool to another. In addition, we considered the compatibility of different ontologies with different development and management tools. The paper also concerns the detection of commonalities and differences between the examined ontologies, both in the same domain (application area) and among different domains.
A Category Theoretic Model of RDF Ontology
International Journal of Web & Semantic Technology (IJWesT) Vol.6, No.3, July 2015
DOI : 10.5121/ijwest.2015.6304
S. Aliyu, S.B. Junaidu, A. F. Donfack Kana
Department of Mathematics, Faculty of Science, Ahmadu Bello University, Zaria.
ABSTRACT
Ontology languages are used in modelling the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standard provides a number of modelling languages that differ in their level of expressivity and are organized in a Semantic Web Stack in such a way that each language level builds on the expressivity of the others. There are several problems when one attempts to use independently developed ontologies. Adapting existing ontologies for new purposes requires that certain operations be performed on them. These operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontology as a step towards the formalization of ontological operations using category theory.
KEYWORDS
RDF, Ontology, Category Theory
1. INTRODUCTION
A formal representation of knowledge is based on a conceptualization of the objects, concepts, and other entities that are presumed to exist in some area of interest and the relationships that hold among them [1]. A conceptualization is an abstract, simplified view of the world that is represented for some purpose [2]. Every knowledge-based system, or knowledge-level agent, is committed to some conceptualization, explicitly or implicitly.
An ontology is an explicit specification of a conceptualization [2]. The use of ontologies in information systems is becoming more and more popular in various fields, such as web technologies, database integration, multi-agent systems, natural language processing and the semantic web. The Semantic Web is a revolution in the World Wide Web which has gained the attention of many researchers. It describes methods and technologies that enable machines to understand the semantics of data on the World Wide Web using ontologies.
Ontologies include computer-usable definitions of basic concepts in a domain and the
relationships among them. They encode knowledge in a domain and knowledge that spans
domains. In this way, knowledge reusability is promoted. The availability of machine-readable
ontologies would enable automated agents and other software to access the web more
intelligently. The agents would be able to perform tasks and locate related information
automatically on behalf of the user [3].
It will rarely be the case that a single ontology fulfils all the needs of a particular application domain. More often, multiple ontologies are combined. This has raised ontological composition problems. Ontology composition refers to those operations (e.g. instantiation, subsumption, satisfiability, alignment, merging, mapping, etc.) involved when a single ontology is modified or combined with another to form a new ontology. As noted by [3], the ontology composition problem is widely seen both as a crucial issue for the realization of the semantic web and as one of the hardest problems to solve. Ontology composition has received wide attention in the research community in recent years [4], [5], [6], [7], [8], [9], [10] and [11].
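As a toy illustration of one such composition operation (merging), and not of the categorical formalism developed in this paper, an ontology can be pictured naively as a set of subject-predicate-object triples; merging is then set union after shared concepts have been aligned by renaming. The vocabulary and the alignment table below are invented for the sketch.

```python
# Toy sketch of ontology merging over naive triple sets.
# All terms and the alignment mapping are made up for illustration.

def merge(onto_a, onto_b, alignment):
    """Merge two triple sets after renaming onto_b's terms via `alignment`,
    a dict mapping terms in onto_b to their equivalents in onto_a."""
    def rename(term):
        return alignment.get(term, term)
    renamed_b = {(rename(s), rename(p), rename(o)) for (s, p, o) in onto_b}
    return onto_a | renamed_b  # set union drops duplicated assertions

onto_a = {("Dog", "subClassOf", "Animal")}
onto_b = {("Hound", "subClassOf", "Creature"), ("Hound", "eats", "Meat")}

# Align "Hound" with "Dog" and "Creature" with "Animal", then merge.
merged = merge(onto_a, onto_b, {"Hound": "Dog", "Creature": "Animal"})
```

After alignment, the shared assertion appears only once in the merged ontology, while the assertion unique to the second source is kept. Real alignment, of course, is exactly the hard, semi-automated step the paper refers to.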
[5] observed that another important problem to be solved while combining ontologies is the automation of the process. Techniques that rely strongly on human input are able to score better on precision, but are less scalable and more labour-intensive than methods that rely strongly on automation. [3] also noted that the sharing of information based on its intended semantics is still a fundamental challenge.
It is believed that a formal view on ontologies can contribute a lot to solving ontology composition problems. As noted by [4], it is essential that the chosen formalism emphasizes relationships between things, allows mapping in an appropriate manner, allows the coexistence of heterogeneous entities and also offers a good set of operations to put entities together.
This paper uses category theory to define a formal syntax and semantics for RDF ontologies, the foundation on which the other components of the semantic web stack are built. Category theory has been successful in situations where interoperability is crucial, as in the formal specification of systems and in software architecture. A direct gain of this formalization is the modularization and reuse of the framework.
The rest of this paper is structured as follows: Section 2 introduces the Semantic Web Stack, ontological languages and category theory. Section 3 describes the RDF ontological model. Section 4 then describes the categorical modelling of our RDF ontology with examples. Section 5 introduces our interpretation model and then models the semantics of our categorical RDF ontology. Section 6 discusses related works and, finally, Section 7 gives a brief conclusion to this paper.
2. PRELIMINARIES
2.1 Semantic Web Stack
The Semantic Web was designed as an information space, with the goal that it should be useful
not only for human-human communication, but also that machines would be able to participate
and help [18]. One of the major obstacles to this is that most information on the Web is designed for human consumption, and even if it was derived from some formally defined representation such as a relational database, or some early knowledge representation technique such as a semantic network or a frame-based system, the use of the data is not evident to a robot browsing the Web.
The Semantic Web provides languages for expressing information in a machine-processable form. The Semantic Web is a long-term project started by the W3C with the stated purpose of
realizing the idea of having data on the Web defined and linked in a way that it can be used by
machines not just for display purposes but for automation, integration, and reuse of data across
various applications [12]. The Semantic Web is designed to allow reasoning and inference
capabilities to be added to the pure descriptions of knowledge in a domain. This includes stating facts which can also be extended to the formation of complicated relationships. This allows intelligent software to act on this descriptive information. The Semantic Web stack, as illustrated in figure 1, is a layered cake illustrating the key technologies that make the semantic web vision possible.
Figure 1: Semantic Web Stack (Source: http://www1.cse.wustl.edu/)
The semantic web stack has at its base the URI (Universal Resource Identifier), a compact string
of characters used to identify or name a resource. IRI (Internationalized Resource Identifier) is a
form of URI that uses characters beyond ASCII, thus becoming more useful in an international
context. Also at the same level as the URI is Unicode, which is the universal standard encoding system and provides a unified system for representing textual data. Immediately above
that layer is the XML (Extensible Markup Language). XML provides a standard way to compose
information so that it can be easily shared. XML allows users to add arbitrary structure to their
documents but says nothing about what the structures mean. This leads us to RDF—the Resource
Description Framework. The W3C developed this new logical language to facilitate
interoperability of applications which generate and process machine-understandable
representations of data resources on the Web. In RDF, a document makes assertions that
particular things have properties (such as "is a brother of," "is wife of") with certain values. This
structure turns out to be a natural way to describe the majority of data processed by machines.
Within this structure, the subject and object are each identified by a Uniform Resource Identifier
(URI). Because RDF neither makes assumptions about any particular domain nor defines the
semantics of any domain, RDFS was developed to fill that role. RDFS is a language for
defining the semantics of a particular domain. RDFS is RDF's vocabulary description language.
RDFS, however, is rather inexpressive; for instance, it cannot constrain a relationship to a
cardinality other than m:n (many-to-many). Clearly, for realistic applications something more
expressive than RDFS was needed.
For such reasons, the Web Ontology Language (OWL) was developed. OWL is a Knowledge
Representation language proposed by the W3C as a standard to codify ontologies in a prospective
Semantic Web. OWL is based on Description Logics. We can represent a knowledge domain
computationally in an OWL ontology in order to apply automated reasoning: inferring knowledge,
answering queries, classifying entities against the ontology, integrating knowledge from
different resources, and so on.
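To make these layers concrete, the sketch below represents RDF-style statements as plain Python triples and checks them against an RDFS-style vocabulary of domain and range constraints. All names (ex:John, ex:isBrotherOf, and so on) are hypothetical, and the code illustrates the idea only; it does not implement the W3C specifications.

```python
# RDF-style triples as (subject, predicate, object) tuples.
triples = {
    ("ex:John", "ex:isBrotherOf", "ex:Mary"),
    ("ex:Mary", "ex:isWifeOf", "ex:Ade"),
}

# An RDFS-style vocabulary (hypothetical): each property is given
# a domain class and a range class.
schema = {
    "ex:isBrotherOf": ("ex:Person", "ex:Person"),
    "ex:isWifeOf": ("ex:Person", "ex:Person"),
}

# Class-membership assertions (a simplified rdf:type).
types = {"ex:John": "ex:Person", "ex:Mary": "ex:Person", "ex:Ade": "ex:Person"}

def conforms(triple):
    """Check a triple against the declared domain and range of its predicate."""
    s, p, o = triple
    if p not in schema:
        return False
    domain, range_ = schema[p]
    return types.get(s) == domain and types.get(o) == range_

assert all(conforms(t) for t in triples)
```

The XML layer would only fix the serialization of such triples; it is the RDFS vocabulary that says what they mean.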
2.2 Category Theory
A category is a collection of data that satisfies particular properties. Category theory is the
mathematical theory of structures, and its greatest significance lies in its capacity to express
relationships between structures. Examples of categories include a database schema, which is a
system of tables linked by foreign keys, and an ontology representing the concepts of a particular
domain.
Category Theory is designed to describe various structural concepts in a uniform way. A category
models entities of a certain sort and the relationships between them [17]. A categorical structure
is similar to a graph in which the nodes are entities and the arrows are relationships. For
example, Figure 2 shows a categorical structure drawn as a graph, where the entities are the
nodes A, B and C and the relationships are the arrows f, g and h.
Figure 2: A categorical structure (objects A, B, C; morphisms f, g, h)
For example, the statement from [17]:
"self email is an email from a person = self email is an email to a person"
can be categorically modelled as:
Figure 3: A Categorical Statement Model
A Categorical structure is built on the following constituents:
a) A collection of things called objects, denoted by: A,B,C,... varying over
objects.
b) A collection of arrows called morphisms or maps, usually denoted by: f, g,
h,... varying over morphisms.
c) A relation between morphisms and pairs of objects, called the typing of the
morphisms. For a morphism f and objects A, B, the relation is denoted f: A → B. We
also say that A → B is the type of f and that f is a morphism from A to B.
A and B are referred to as the domain and codomain of f respectively.
For each pair of morphisms f: A → B and g: B → C,
a composite map g ∘ f: A → C exists.
A categorical structure must also satisfy the following rules, laws or axioms.
Identity laws: if f: A → B, then 1B ∘ f = f and f ∘ 1A = f.
Associative law: if f: A → B, g: B → C and h: C → D, then (h ∘ g) ∘ f =
h ∘ (g ∘ f).
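These laws can be checked mechanically for a small finite category. The sketch below (plain Python, with hypothetical object and morphism names) encodes the triangle of Figure 2 and verifies the composition and identity requirements.

```python
# A small finite category: objects, typed morphisms, and a composition table.
objects = {"A", "B", "C"}

# morphism name -> (domain, codomain); id_X are the identity morphisms.
morphisms = {
    "f": ("A", "B"),
    "g": ("B", "C"),
    "h": ("A", "C"),   # h plays the role of g o f
    "id_A": ("A", "A"),
    "id_B": ("B", "B"),
    "id_C": ("C", "C"),
}

def compose(g_name, f_name):
    """Composition g o f, defined only when cod(f) == dom(g)."""
    f_dom, f_cod = morphisms[f_name]
    g_dom, g_cod = morphisms[g_name]
    assert f_cod == g_dom, "composition undefined"
    if f_name.startswith("id_"):
        return g_name            # g o id = g
    if g_name.startswith("id_"):
        return f_name            # id o f = f
    return {("g", "f"): "h"}[(g_name, f_name)]  # explicit composition table

# Identity laws: 1_B o f = f and f o 1_A = f.
assert compose("id_B", "f") == "f"
assert compose("f", "id_A") == "f"
# Composition is closed and correctly typed: g o f = h with h: A -> C.
assert compose("g", "f") == "h" and morphisms["h"] == ("A", "C")
```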
3. RDF ONTOLOGY
The Resource Description Framework (RDF) is a framework for representing information on the
Web [13]. The core structure of the abstract syntax is a set of triples, each consisting of a subject,
a predicate and an object. A set of such triples is called an RDF graph. An RDF graph can be
visualized as a node and directed-arc diagram in which each triple is represented as a node-arc-
node link [13]. There can be three kinds of nodes in an RDF graph: IRIs, literals and blank nodes.
Asserting an RDF triple says that some relationship, indicated by the predicate, holds between the
resources (nodes) denoted by the subject and object. The statement corresponding to an RDF
triple is known as an RDF simple statement [13]. An RDF ontological structure can now be
defined as follows:
Definition 1 (An RDF Ontology/Graph): An RDF Ontological structure is a 3-tuple <R, P, h>
where:
R is a union of pairwise disjoint sets of IRIs (which identify resources), literals and blank nodes;
P is a set of properties;
h is a mapping from P into the powerset of (R × R), i.e. an operation that associates
to each property in P its pairs of domain and range objects.
Assume pairwise disjoint sets of IRIs, denoted I, blank nodes, denoted B, and literals, denoted L.
An RDF triple (simple statement) is then a tuple <s, p, o> ∈ (I ∪ B) × I × (I ∪ B ∪ L), and an
RDF graph is a set of RDF triples [14].
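This set-theoretic condition can be sketched directly in code. The sketch below (plain Python; the example identifiers are hypothetical) checks candidate triples against (I ∪ B) × I × (I ∪ B ∪ L):

```python
# Pairwise disjoint sets of IRIs (I), blank nodes (B) and literals (L).
I = {"ex:Samaru", "ex:Zaria", "ex:in"}
B = {"_:b1"}
L = {"1 Million"}

def is_rdf_triple(s, p, o):
    """A triple is well formed iff <s, p, o> is in (I u B) x I x (I u B u L)."""
    return s in I | B and p in I and o in I | B | L

graph = {("ex:Samaru", "ex:in", "ex:Zaria")}   # an RDF graph is a set of triples
assert all(is_rdf_triple(*t) for t in graph)
assert not is_rdf_triple("1 Million", "ex:in", "ex:Zaria")  # a literal cannot be a subject
```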
RDF also provides a construct that refers to a collection of resources, known as an RDF container,
and a way to make statements about statements, referred to as reification.
This paper defines simple categorical RDF statements and handles the container construct with a
repeated-property construct, i.e. using a resource to form multiple statements with the same
predicate. For reification, we will build a model of the original statement and this model will be a
new resource to which additional properties can be attached.
Based on the description of a categorical structure in Section 2, a category can now be defined
as follows:
Definition 2 (A Category): A Category C is a structure (O, M, f, ∘, id), where:
O is a collection of objects;
M is a collection of morphisms, say g ∈ M, such that g: A → B where A, B ∈ O;
f: M → O × O is an operation which associates to each morphism its domain object
and codomain object;
∘ is an associative operation of morphism composition;
id is a collection of identity morphisms, one for each object of O.
Definition 2 can now be used to model ontological languages categorically.
4. CATEGORICAL RDF ONTOLOGY MODEL
Definition 3 (A Categorical RDF Ontology): A Categorical RDF Ontology C is a structure
(R, P, f, ∘, id), where:
R is a collection of resources;
P is a collection of morphisms (e.g. g: A → B, where g ∈ P and A, B ∈ R);
f: P → W, where W ⊆ R × R, is an operation which associates to each morphism
its domain object and codomain object;
∘ is an associative operation of morphism composition;
id is a collection of identity morphisms, one for each object of R.
From Definition 3, the vocabulary to be used is obtained from the union of the two sets R and P.
In order not to reinvent the wheel, the vocabulary used in this paper is that provided by the RDF
Syntax specification. The semantics of this syntax is based on the RDF Semantics specification,
but the interpretation, i.e. the mapping of the syntax to its intended meaning, is defined
categorically in our model.
Definition 4 (Categorical RDF Triple): A Categorical RDF Triple or simple statement is a
category p: S → O, where S and O are categorical objects representing resources and p is a
morphism that maps from S to O.
RDF has an abstract syntax that reflects a simple graph-based data model [13]. Table 1 shows the
relationship between the abstract syntax and the Categorical RDF Ontology model.
Table 1: Categorical RDF Ontology resource, property and statement description

Basic Data Model | Abstract Syntax                                            | Categorical RDF Ontology
Resource         | Labelled node with URI reference (subject (s) or object (o)) | Objects, e.g. A, B, S, O, ...
Properties       | Arc with URI reference (p)                                 | Morphisms, e.g. f, g, h, p, ...
Statement        | Triples: (s, p, o)                                         | p: S → O
To illustrate our definitions, consider the following example:
Example 1. Consider a place-description graph or ontology in which a particular place,
Samaru, is in a city, Zaria; Zaria is in a country, Nigeria; and Nigeria is within a
particular continent, Africa. The resource Samaru is also related to a particular
Geo Location, and the Geo Location has both a latitude value and a longitude value.
Samaru as a place is also associated with a population value. A graphical RDF model of
this place description is pictured in Figure 3.
Figure 3: Graph-based RDF place description. Samaru is in Zaria, Zaria is in Nigeria, and
Nigeria is in Africa; Samaru has a location, a Geo Location node with latitude 12°N and
longitude 8°E, and a population of 1 Million.
From figure 3, the following Categorical RDF Ontology constituents can be obtained:
The set of resources, R = {Samaru, Zaria, Nigeria, Africa, Geo Location, 1 Million, 8°E, 12°N}.
The set of predicates or properties, P = {population, location, longitude, latitude, in}.
The subset of the Cartesian product on R, represented by W =
{<Samaru, Zaria>, <Zaria, Nigeria>, <Nigeria, Africa>, <Samaru, Geo Location>,
<Samaru, 1 Million>, <Geo Location, 8°E>, <Geo Location, 12°N>}.
Thus, mapping a property in P to an element of W forms an RDF statement such as: Samaru is in
Zaria. Note that, in forming statements from W, not every pair <A, B> need be used; some may be
meaningless in the context of the ontology domain.
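Example 1 can be encoded directly in the notation of Definition 3. The sketch below (plain Python; degree symbols are simplified to 8E and 12N) lists R, P and the operation f assigning each property its pairs from W, then derives the statements it induces.

```python
# Resources, properties, and the subset W of R x R from Example 1.
R = {"Samaru", "Zaria", "Nigeria", "Africa", "Geo Location",
     "1 Million", "8E", "12N"}
P = {"population", "location", "longitude", "latitude", "in"}

# f: P -> powerset(R x R) assigns each property its (domain, codomain) pairs.
f = {
    "in": {("Samaru", "Zaria"), ("Zaria", "Nigeria"), ("Nigeria", "Africa")},
    "location": {("Samaru", "Geo Location")},
    "population": {("Samaru", "1 Million")},
    "longitude": {("Geo Location", "8E")},
    "latitude": {("Geo Location", "12N")},
}

# Each pair assigned to a property yields an RDF statement p: S -> O.
statements = {(s, p, o) for p, pairs in f.items() for (s, o) in pairs}
assert ("Samaru", "in", "Zaria") in statements
assert all(s in R and o in R and p in P for (s, p, o) in statements)
```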
5. CATEGORICAL RDF ONTOLOGY SEMANTIC MODEL
In order to describe the semantics of the Categorical RDF Ontology Model, an important
construct in category theory known as a functor must be defined. [15] defines a functor as
follows.
Definition 5 (A Functor): A functor F: C → D is a pair of functions F0 and F1 for which:
if f: A → B in C, then F1(f): F0(A) → F0(B) in D;
for any object A of C, F1(idA) = idF0(A);
if g ∘ f is defined in C, then F1(g) ∘ F1(f) is defined in D and F1(g ∘ f) = F1(g) ∘ F1(f).
In other words, a functor is a function from one category to another. This means that, for every
object or morphism in category C, there is an image, mapping, transformation or translation in
the other category D via the functor.
A functor is therefore a structure-preserving map between categories, in the same way that a
homomorphism is a structure-preserving map between graphs [15].
The above definition can be illustrated in a diagrammatic format as:
Figure 4: Functor definition. F maps the objects A, B and the morphism f: A → B in category C
to F0(A), F0(B) and F1(f): F0(A) → F0(B) in category D.
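A functor between two small finite categories can also be checked mechanically. The sketch below (plain Python, hypothetical names) maps the triangle category of Figure 2 into a second category and verifies the typing and composition conditions of Definition 5.

```python
# Source category C: morphism name -> (domain, codomain).
C = {"f": ("A", "B"), "g": ("B", "C"), "h": ("A", "C")}   # h = g o f
# Target category D, with primed names.
D = {"f'": ("A'", "B'"), "g'": ("B'", "C'"), "h'": ("A'", "C'")}

# F0 maps objects, F1 maps morphisms.
F0 = {"A": "A'", "B": "B'", "C": "C'"}
F1 = {"f": "f'", "g": "g'", "h": "h'"}

def preserves_typing():
    """If f: A -> B in C, then F1(f): F0(A) -> F0(B) in D."""
    return all(D[F1[m]] == (F0[dom], F0[cod]) for m, (dom, cod) in C.items())

def preserves_composition():
    """F1(g o f) = F1(g) o F1(f): here g o f = h in C and g' o f' = h' in D."""
    return F1["h"] == "h'"

assert preserves_typing() and preserves_composition()
```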
We intend to attach meanings to the Categorical RDF ontological constructs (say, category C) by
mapping them to their interpretations (say, category D) via an interpretation function, which is
in this case a functor.
According to [16], an interpretation is a mapping from IRIs and literals into a set of
interpretations, together with some constraints upon the set and the mapping. Here, the IRIs and
literals are those of our Categorical RDF Ontology, the set of interpretations is our Categorical
Interpretation Model, and the mapping is a functor between the two categories.
Before defining our Interpretation Model, it is important to review the concept of subcategory as
described by [15]:
Definition 6 (subcategory): A subcategory D of a category C is a category for which:
All the objects of D are objects of C and all the arrows of D are arrows of C.
The source and target of an arrow of D are the same as its source and target in C.
If A is an object of D then its identity arrow idA in C is in D.
If f: A → B and g: B → C in D, then the composite (in C) g ∘ f is in D and is the
composite in D.
Definition 6 is now used to define our Categorical RDF Ontology Interpretation Model in
Definition 7.
Definition 7 (A Categorical RDF Ontology Interpretation Model):
A Categorical RDF Ontology Interpretation Model CI is a 3-tuple structure <RI, PI, FI>, where
RI is a collection of IRIs representing referred resources, PI is a collection representing
referred properties, and FI is a collection of three operations (f1, f2, f3) which describe three
subcategories of CI, defined as:
<Z, PI, f1>, where:
o Z is the powerset of RI × RI (i.e. of the Cartesian product of resources).
o PI is the collection of referred properties.
o f1 is an operation that assigns to each element of PI an element of Z
(i.e. f1: PI → Z).
<Y, RI, f2>, where:
o Y is the union of referred resources and properties, RI ∪ PI.
o RI is the collection of referred resources.
o f2 is an operation that maps an IRI into the union of resources and properties
(i.e. f2: RI → Y).
<L, RI, f3>, where:
o L represents the literals, which are disjoint from the IRIs.
o RI is the collection of referred resources.
o f3 is an operation that partially maps a literal into the collection of resources
(i.e. f3: L → RI).
Now, having defined our interpretation model, the function from any Categorical RDF expression
or statement into our Interpretation model describes the semantic model defined as follows:
Definition 8 (A Categorical RDF Ontology Semantic Model):
A Categorical Ontology Semantic Model is a 3-tuple structure <CV, CI, F> where:
CV is a Categorical RDF Ontology expression;
CI is our Interpretation Model;
F is our interpretation function (i.e. a functor).
Let E be a Categorical RDF expression consisting of names as objects, triples or statements as
binary mappings between objects, and graphs as sets of triples. Then its meaning is given by the
following rules, defined algorithmically:

if (E is a literal) then
    F(E) ← f3(E); // interpretation of E
else if (E is a resource) then
    F(E) ← f2(E);
else if (E is a statement <s, p, o>) {
    if ((F(p) ∈ PI) && (<F(s), F(o)> ∈ f1(F(p))))
        F(E) ← true;
    else
        F(E) ← false;
}
else { // i.e. when E is a set of multiple triples (say a graph, G)
    F(E) ← true;
    foreach E' in G { // where E' ∈ G
        if (F(E') = false)
            F(E) ← false;
    }
}
Note that these rules change as the expressivity of the ontology language changes.
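These rules can be prototyped directly. The sketch below (plain Python, hypothetical vocabulary) implements the interpretation function F over a small model: f1 assigns each property its extension, f2 and f3 interpret resources and literals, and a graph is true exactly when all of its triples are.

```python
# A small interpretation model (hypothetical vocabulary).
PI = {"in"}                                               # referred properties
f1 = {"in": {("Samaru", "Zaria"), ("Zaria", "Nigeria")}}  # property extensions
f2 = {"ex:Samaru": "Samaru", "ex:Zaria": "Zaria",
      "ex:Nigeria": "Nigeria", "ex:in": "in"}             # resource interpretation
f3 = {"1 Million": "1 Million"}                           # literal interpretation (partial)

def F(E):
    """Interpretation function over literals, resources, triples and graphs."""
    if isinstance(E, str):                   # a name: literal or resource
        return f3[E] if E in f3 else f2[E]
    if isinstance(E, tuple):                 # a statement <s, p, o>
        s, p, o = E
        return F(p) in PI and (F(s), F(o)) in f1[F(p)]
    # otherwise a graph: a set of triples, true iff every triple is true
    return all(F(t) for t in E)

assert F(("ex:Samaru", "ex:in", "ex:Zaria")) is True
assert F({("ex:Samaru", "ex:in", "ex:Zaria"),
          ("ex:Zaria", "ex:in", "ex:Nigeria")}) is True
assert F(("ex:Zaria", "ex:in", "ex:Samaru")) is False
```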
6. RELATED WORK
Several other approaches and techniques exist for the effective and efficient management of
ontologies. According to [5], most of these techniques are semi-automatic approaches to
ontological operations and focus on constructing software tools that can be used to combine
independently developed ontologies. In contrast, the approach taken in this paper is to develop a
systematic mathematical theory for the management of ontological operations. Other works that have
also considered a mathematical approach include that of [5], which described an algebra for
composing ontologies by defining a recursive finite typing of RDF collections for expressing
ontologies written in heterogeneous languages, then constructing a well-founded algebraic
scheme over a universe of ontologies that respects well-known algebraic operations and identities.
The semantics of that model was defined as a well-founded set in ZF set theory. The work of [5]
has the limitation of considering only the RDF level of the Semantic Web stack. [4] viewed an
ontology as a structural construct in which operations between two or more ontologies are
performed structurally, with no consideration of the semantics of the data involved.
7. CONCLUSION
This paper has modelled the RDF ontology using category theory. We reviewed related work, most
of which focuses on the practical implementation of software applications to manage ontological
operations, with limited theoretical or conceptual understanding underlying such operations. As
a result, these works fail to state precisely the details of the operations they describe. Hence,
we developed a theoretical basis for defining ontological operations precisely by first modelling
the base (RDF) constructs. Reification and container constructs are not left out of our model. We
can capture most of the RDF modelling constructs categorically, and we intend to build upwards
along the Semantic Web stack to fully capture the requirements of the Semantic Web and to manage
efficiently its information, represented as ontologies, both syntactically and semantically.
Thus, with a fully modelled ontology, an ontological statement can be reduced to its mathematical
equivalent.
REFERENCES
[1] Genesereth, M. R., & Nilsson, N. J. (1987). Logical Foundations of Artificial Intelligence. San Mateo,
CA: Morgan Kaufmann Publishers.
[2] Gruber, T. R. (1993). A Translation Approach to Portable Ontology Specifications. Stanford,
California: Knowledge Systems Laboratory, Computer Science Department,Stanford University.
[3] Antoniou, G. (2008). A Semantic Web Primer. Massachusetts Institute of Technology.
[4] Cafezeiro, I., & Haeusler, E. H. (2007). Semantic Interoperability via Category Theory.
[5] Kaushik, S. W. (2006). An Algebra for Composing Ontologies. Proceedings of the 4th International
Conference on Formal Ontology in Information Systems (pp. 269-275). Baltimore, Maryland, USA:
Amsterdam Ios Press.
[6] Klein, M. (2001). Combining and relating ontologies: an analysis of problems and solutions.
Proceedings of the 17th International Joint Conference on Artificial Intelligence (pp. 53-62). Seattle:
Workshop: Ontologies and Information Sharing.
[7] Lakshmi Tulasi, R., et al. (2014). Survey on Techniques for Ontology Interoperability in Semantic.
Global Journal of Computer Science and Technology, 3-5.
[8] Zimmermann, A. K. (2006). Formalizing Ontology Alignment and its Operations with Category
Theory. Proceedings of the 4th International Conference on formal Ontology in Information Systems
(pp. 277-288). Baltimore, Maryland, USA: IOS Press.
[9] Euzenat, J. (2008). Algebras of ontology alignment relations. Proceedings of the 7th International
Semantic Web Conference, (pp. 387-402). Karlsruhe, Germany.
[10] Mitra, P., & Wiederhold, G. (2004). An Ontology Composition Algebra. International Handbooks on
Information Systems, Springer-Verlag, 93-117.
[11] Mocan, A. C. (2006). Formal Model for Ontology Mapping Creation. Proceedings of the 5th
International Semantic Web Conference (pp. 459-472). Athens, USA: Springer.
[12] Herman, I. (2013). Semantic Web Activity Statement. http://www.w3.org/2001/sw/Activity.
[13] Cyganiak, R., Wood, D., & Lanthaler, M. (2014, February 25). RDF 1.1 Concepts and Abstract Syntax.
Retrieved January 20, 2015, from http://www.w3.org/TR/rdf11-concepts/
[14] Hartig, O., & Thompson, B. (2014). Foundations of an Alternative Approach to Reification in RDF.
arXiv:1406.3399v1 [cs.DB], 3.
[15] Barr, M., & Wells, C. (1998). Category Theory for Computing Science. Montreal: McGill University.
[16] Patrick J. Hayes, P. F.-S. (2014, February 25). RDF 1.1 Semantics. Retrieved January 20, 2015, from
W3C: http://www.w3.org/TR/rdf11-mt/
[17] Spivak, D. I. (2012, January 13). Categorical Databases. Mathematics Department, Massachusetts
Institute of Technology. Retrieved April 1, 2013, from
http://math.mit.edu/~dspivak/informatics/talks/CTDBIntroductoryTalk
[18] Berners-Lee, T. (1998, September). Semantic Web Roadmap. Retrieved January 30, 2015, from
http://www.w3.org/DesignIssues/Semantic.html