This document discusses knowledge representation and reasoning (KR&R) systems for autonomous underwater vehicles (AUVs). It proposes a "global KR&R system" using semantic web technologies to coordinate missions between multiple AUVs. The system would allow each AUV to maintain its own ontology while still enabling knowledge sharing through common upper ontologies. The document then presents a case study of collaborative AUVs carrying out mine clearing missions and divides them into search, inspection, and execution roles. Data structures and algorithms for the proposed KR&R system are discussed to efficiently represent and share knowledge between the AUVs.
Prolog Used to Represent and Reason Qualitatively over a Space Domain (ijaia)
Spatial reasoning is a relevant topic in artificial intelligence, with applications in geographical information systems, robotics, content-based image retrieval, and traffic engineering. Formal representation of this knowledge allows it to be processed by a computer. Prolog is a programming language used in artificial intelligence that is well suited to representing knowledge and performing searches by posing queries against a knowledge base. Prolog can be used to develop a variety of applications, such as checking consistency or performing other kinds of reasoning. This article proposes using Prolog as a representation model and reasoning engine to describe the topological relations between several objects in a geographic space, using the RCC model. This approach simplifies the construction of the program and allows us to focus on the spatial problem.
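The fact-plus-query style described above can be sketched in Python rather than Prolog (a minimal illustration only; the region names and the stored relations below are invented, and a real RCC reasoner would also use the full composition table):

```python
# RCC-8 base relations: DC (disconnected), EC (externally connected),
# PO (partial overlap), TPP/NTPP ((non-)tangential proper part),
# their inverses TPPi/NTPPi, and EQ (equal).
# Stored facts, analogous to Prolog clauses rcc(park, lake, ntpp). etc.
facts = {
    ("park", "lake"): "NTPP",   # the lake lies strictly inside the park
    ("park", "road"): "EC",     # the road touches the park boundary
    ("lake", "road"): "DC",     # the lake and the road are disconnected
}

def relation(a, b):
    """Answer a query like Prolog's rcc(A, B, R), using the symmetry of
    DC/EC/PO/EQ and the inversion of the proper-part relations."""
    if (a, b) in facts:
        return facts[(a, b)]
    if (b, a) in facts:
        r = facts[(b, a)]
        inverse = {"TPP": "TPPi", "NTPP": "NTPPi",
                   "TPPi": "TPP", "NTPPi": "NTPP"}
        return inverse.get(r, r)  # symmetric relations map to themselves
    return None  # unknown pair

print(relation("lake", "park"))  # NTPPi: the park properly contains the lake
```

In Prolog the inversion rule would itself be a clause; here it is folded into the query function for brevity.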
Explanations in Dialogue Systems through Uncertain RDF Knowledge Bases (Daniel Sonntag)
We implemented a generic dialogue shell that can be configured for and applied to domain-specific dialogue applications. The dialogue system works robustly for a new domain when the application backend can automatically infer previously unknown knowledge (facts) and provide explanations for the inference steps involved. For this purpose, we employ URDF, a query engine for uncertain and potentially inconsistent RDF knowledge bases. URDF supports rule-based, first-order predicate logic as used in OWL-Lite and OWL-DL, with simple and effective top-down reasoning capabilities. This mechanism also generates explanation graphs. These graphs can then be displayed in the GUI of the dialogue shell and help the user understand the underlying reasoning processes. We believe that proper explanations are a main factor for increasing the level of user trust in end-to-end human-computer interaction systems.
Concept hierarchy is the backbone of an ontology, and concept hierarchy acquisition has been a hot topic in ontology learning. This paper proposes a hyponymy extraction method for domain ontology concepts based on cascaded conditional random fields (CCRFs) and hierarchical clustering. It takes free text as the extraction source and uses CCRFs to identify domain concepts: the lower layer of the CCRFs identifies simple domain concepts, and its results are passed to the higher layer, which recognizes nested concepts. Hierarchical clustering is then used to identify hyponymy relations between the domain ontology concepts. The experimental results demonstrate that the proposed method is efficient.
Swoogle: Showcasing the Significance of Semantic Search (IDES Editor)
The World Wide Web hosts vast repositories of information. Retrieving the required information from the Internet is a great challenge, since computer applications understand only the structure and layout of web pages and have no access to their intended meaning. The Semantic Web is an effort to enhance the Internet so that computers can process, interpret, and communicate the information presented on the WWW, helping humans find essential knowledge. Applying ontologies is the predominant approach driving the evolution of the Semantic Web. The aim of our work is to illustrate how Swoogle, a semantic search engine, helps make computers and the WWW interoperable and more intelligent. In this paper, we discuss issues related to traditional and semantic web searching, and we outline how an understanding of the semantics of the search terms can be used to provide better results. The experimental results establish that semantic search provides more focused results than traditional search.
The potential of ontologies to reduce human intervention has a wide range of applications. This paper identifies the requirements for an ontology development platform to support an artificially intelligent web. To facilitate this process, RDF and OWL have been developed as standard formats for sharing and integrating data and knowledge, where knowledge takes the form of rich conceptual schemas called ontologies. Based on this framework, an architectural paradigm is put forward for ontology engineering and the development of ontology applications, together with a development portal designed to support ontology engineering, content authoring, and application development, with a view to maximal scalability in the size and complexity of semantic knowledge and to flexible reuse of ontology models and ontology application processes in a distributed, collaborative engineering environment.
An IoT platform is a fusion of physical resources, such as connectors, wireless networks, and smartphones, with computer technologies such as protocols and web-service technologies. The heterogeneity of the technologies used generates a high cost at the interoperability level. This paper presents a generic meta-model of IoT interoperability based on different organizational concepts such as service, compilation, activity, and architecture. This model, called M2IOTI, defines a very simple description of IoT interoperability: it is a meta-model from which one can build IoT interoperability models with different forms of organization. We show that this meta-model accommodates the heterogeneity of connected objects in semantic technologies, activities, services, and architectures, in order to offer a high level of IoT interoperability. We also introduce the PSM concept, which uses the same conceptual model to describe each existing interoperability model, such as the conceptual, behavioral, semantic, and dynamic models. We have also proposed a PIM model that regroups all the concepts common to the PSM interoperability models.
Peer to Peer Approach based Replica and Locality Awareness to Manage and Diss... (IJCNCJournal)
Distributed Hash Table (DHT) based structured peer-to-peer (P2P) systems provide an efficient method of disseminating information in a VANET environment, owing to their high performance and properties (e.g., self-organization, decentralization, and scalability). The topology of vehicular ad hoc networks (VANETs) varies dynamically, and disconnections are frequent due to the high mobility of vehicles. In such a topology, information availability is a fundamental problem, since vehicles connect to and disconnect from the network frequently. Data replication is an appropriate and adequate solution to this problem. In this contribution, a replication-based method named LAaR-Vanet is proposed to increase the accessibility of data, which also increases the success rate of lookups. This replication strategy is combined with a locality-awareness method that serves the same purpose and avoids the problems of long paths. The performance of the proposed solution is assessed by a series of in-depth simulations in urban areas. The results indicate the efficiency of the proposed approach in terms of lookup success rate, delay, and the number of logical hops.
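The DHT-with-replication idea behind such systems can be sketched with consistent hashing, where each key is owned by its hash's successor node and replicated on the next nodes around the ring (a generic illustration only; the node names and replica count below are invented, and LAaR-Vanet's actual placement and locality rules are more elaborate):

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    """Deterministic position on the hash ring (SHA-1 as an integer)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, replicas=2):
        self.replicas = replicas
        # Nodes sorted by their position on the ring.
        self.ring = sorted((h(n), n) for n in nodes)

    def lookup(self, key):
        """Return the primary owner of a key plus its replica successors."""
        positions = [p for p, _ in self.ring]
        idx = bisect_right(positions, h(key)) % len(self.ring)
        count = min(self.replicas + 1, len(self.ring))
        return [self.ring[(idx + i) % len(self.ring)][1] for i in range(count)]

ring = Ring(["vehicle-a", "vehicle-b", "vehicle-c", "vehicle-d"])
owners = ring.lookup("road-segment-17")
print(owners)  # primary owner followed by two replica holders
```

A locality-aware variant would additionally bias the replica choice toward nearby vehicles instead of taking plain ring successors.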
The Semantic Web is a vision of information that is understandable by computers. Although there is great exploitable potential, we are still in "Generation Zero" of the Semantic Web, since there are few compelling real-world applications. Heterogeneity, the volume of data, and the lack of standards are problems that could be addressed through nature-inspired methods. The paper presents the most important aspects of the Semantic Web, as well as its biggest issues; it then describes some methods inspired by nature, namely genetic algorithms, artificial neural networks, and swarm intelligence, and the way these techniques can be used to deal with Semantic Web problems.
Semantic Annotation Framework For Intelligent Information Retrieval Using KIM... (dannyijwest)
Due to the explosion of information and knowledge on the web and the wide use of search engines, the role of knowledge management (KM) is becoming more significant in organizations. Knowledge management in an organization is used to create, capture, store, share, retrieve, and manage information efficiently. The Semantic Web, an intelligent and meaningful web, tends to provide a promising platform for knowledge management systems and vice versa, since they have the potential to give each other the real substance for machine-understandable web resources, which in turn will lead to intelligent, meaningful, and efficient information retrieval on the web. Today, the challenge for the web community is to integrate the distributed heterogeneous resources on the web with the objective of an intelligent web environment focusing on data semantics and user requirements. Semantic annotation (SA), which assigns links from entities in a text to their semantic descriptions, is widely used for this purpose. Various tools, such as KIM and Amaya, may be used for semantic annotation.
The Web is a universal medium for information, data, and knowledge exchange. The Semantic Web is an extension of the World Wide Web, ``in which information is given well-defined meaning, better enabling computers and people to work in cooperation''\cite{semweb:lee}. RDF, together with SPARQL, provides a powerful mechanism for describing and interchanging metadata on the web. This paper briefly presents the two concepts, RDF and SPARQL, and three of the most popular frameworks (written in Java) that offer support for RDF: Jena, Sesame, and JRDF.
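The triple-plus-pattern idea underlying RDF and SPARQL can be sketched in a few lines of plain Python (the frameworks discussed are Java libraries; the `ex:` terms below are illustrative stand-ins, not real vocabulary URIs):

```python
# An RDF graph is a set of (subject, predicate, object) triples.
triples = {
    ("ex:Jena", "ex:writtenIn", "ex:Java"),
    ("ex:Sesame", "ex:writtenIn", "ex:Java"),
    ("ex:Jena", "ex:supports", "ex:RDF"),
}

def match(pattern):
    """Match a triple pattern in which None plays the role of a SPARQL
    variable, returning every triple compatible with the pattern."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogue of: SELECT ?fw WHERE { ?fw ex:writtenIn ex:Java }
java_frameworks = sorted(t[0] for t in match((None, "ex:writtenIn", "ex:Java")))
print(java_frameworks)  # ['ex:Jena', 'ex:Sesame']
```

Real SPARQL engines add joins over several patterns, filters, and optional clauses on top of exactly this basic matching step.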
SEMANTIC INTEGRATION FOR AUTOMATIC ONTOLOGY MAPPING (cscpconf)
In the last decade, ontologies have played a key technological role in information sharing and agent interoperability in different application domains. In the Semantic Web domain, ontologies are used to face the great challenge of representing the semantics of data, in order to bring the web to its full power and hence achieve its objective. However, using ontologies as common, shared vocabularies requires a certain degree of interoperability between them. To meet this requirement, ontology mapping is an unavoidable solution. Indeed, ontology mapping builds a meta layer that allows different applications and information systems to access and share their information, after resolving the different forms of syntactic, semantic, and lexical mismatch. In the contribution presented in this paper, we have integrated a semantic aspect based on an external lexical resource, WordNet, to design a new algorithm for fully automatic ontology mapping. This fully automatic character is the main difference between our contribution and most existing semi-automatic ontology mapping algorithms, such as Chimaera, Prompt, Onion, and Glue. To further enhance the performance of our algorithm, the mapping discovery stage is based on the combination of two sub-modules: the former analyzes the concepts' names and the latter analyzes their properties. Each of these two sub-modules is itself based on a combination of lexical and semantic similarity measures.
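The combination of lexical and semantic similarity can be illustrated with a toy matcher: string similarity on concept names, overridden by a small synonym table standing in for WordNet (the table, the 0.8 threshold, and the function names are all invented here; the paper's algorithm uses WordNet itself and also compares concept properties):

```python
from difflib import SequenceMatcher

# Hypothetical stand-in for WordNet synsets.
synonyms = {"car": {"automobile"}, "automobile": {"car"}}

def lexical(a: str, b: str) -> float:
    """Lexical similarity of two concept names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def similarity(a: str, b: str) -> float:
    if b.lower() in synonyms.get(a.lower(), set()):
        return 1.0          # WordNet-style synonymy dominates
    return lexical(a, b)    # otherwise fall back to the lexical score

def map_concepts(onto1, onto2, threshold=0.8):
    """Pair each concept in onto1 with a concept in onto2 whose combined
    similarity clears the threshold (last qualifying match wins)."""
    return {a: b for a in onto1 for b in onto2 if similarity(a, b) >= threshold}

print(map_concepts(["Car", "Person"], ["Automobile", "Person", "Address"]))
# {'Car': 'Automobile', 'Person': 'Person'}
```

A property-analysis sub-module would then confirm or reject each candidate pair, as the paper describes.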
Examines how new technologies can be applied to overcome problems in controlled vocabularies, focusing on Resource Description Framework (RDF), Simple Knowledge Organisation System (SKOS), metadata registries and web services. Part of the Cataloguing and Indexing Group in Scotland (CIGS) seminar "Toto, I've got a feeling we're not in Kansas anymore": metadata issues and Web2.0 services.
Ontology languages are used in modelling the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standard provides a number of modelling languages that differ in their level of expressivity and are organized in a Semantic Web Stack in such a way that each language level builds on the expressivity of the other. There are several problems when one attempts to use independently developed ontologies. When existing ontologies are adapted for new purposes it requires that certain operations are performed on them. These operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontology as a step towards the formalization of ontological operations using category theory.
Intelligent expert systems can make decisions for users, estimating from user preferences which destination best matches their interests. This paper provides a description of such a system and suggests a new approach for future research.
Semantic Web & Information Brokering: Opportunities, Commercialization and Ch... (Amit Sheth)
Amit Sheth, "Semantic Web & Info. Brokering Opportunities, Commercialization and Challenges," Keynote talk at the workshop on Semantic Web: Models, Architecture and Management, September 21, 2000, Lisbon, Portugal.
This was the keynote given at probably the first international event with "Semantic Web" in its title (and before the well-known SciAm article). As in TBL's use of the term Semantic Web in his 1999 book, (semantic) metadata plays a central role. The use of a Worldmodel/Ontology is consistent with our use of ontology for (Web) information integration in the 1994 CIKM paper. A summary of the talk by the event organizers and other details are at: http://knoesis.org/library/resource.php?id=735
Prof. Sheth started a Semantic Web company, Taalee, Inc., in 1999 (its product was the MediaAnywhere A/V search engine, discussed in this paper in the context of its use by a customer, Redband Broadcasting). The product included Semantic Web/populated-ontology-based semantic (faceted) search, semantic browsing, semantic personalization, semantic targeting (advertisement), etc., as described in U.S. Patent #6311194, 30 Oct. 2001 (filed 2000). MediaAnywhere had about 25 ontologies in News/Business, Sports, Entertainment, etc.
Taalee merged to become Voquette in 2001 (product was called SCORE), Semagix in 2004 (product was called Semagix Freedom), and then Fortent in 2006 (products included Know Your Customers).
USING ONTOLOGIES TO OVERCOME DRAWBACKS OF DATABASES AND VICE VERSA: A SURVEY (cseij)
Several databases (DBs) typically exist for the same domain. The evolution of the classical web into the Semantic Web has contributed to the emergence of the notion of ontology, which provides a shared and consensual vocabulary. For a given domain, it is therefore worthwhile to take advantage of existing databases to build an ontology, since most of the data are already stored in these databases. Many DBs can thus be integrated to enable the reuse of existing data for the Semantic Web. Even for existing ontologies, keeping their information relevant requires regular updating, and these databases can be useful sources for enriching them. On the other hand, the larger the ratio of the size of the instances to the size of working memory, the more difficult managing these instances in memory becomes. Finding a way to store these instances in a structured manner that satisfies the performance and reliability requirements of many applications becomes an obligation; consequently, defining query languages to support these structures becomes a challenge for the SW community. We will show through this paper how ontologies can benefit from DBs to increase system performance and facilitate their design cycle. DBs, in turn, suffer from several drawbacks, namely the complexity of the design cycle and a lack of semantics. Since ontologies are rich in semantics, DBs can profit from this advantage to overcome their own drawbacks.
Building collaborative Machine Learning platform for Dataverse network. Lecture by Slava Tykhonov (DANS-KNAW, the Netherlands), DANS seminar series, 29.03.2022
Semantic Annotation: The Mainstay of Semantic Web (Editor IJCATR)
Given that the realization of the Semantic Web depends on a critical mass of accessible metadata and on representing data with formal knowledge, the metadata generated must be specific, easy to understand, and well defined. Semantic annotation of web documents is the proven way to make the Semantic Web vision a reality. This paper introduces the Semantic Web and its vision (stack layers), along with some concept definitions that help in understanding semantic annotation. Additionally, it introduces semantic annotation categories, tools, domains, and models.
1. Murillo P., de la Cruz M., Prieto H / Procedia Computer Science 00 (2015) 000–000
Knowledge Representation and Reasoning in Autonomous
Underwater Vehicles
Pablo Murillo, Marta de la Cruz, Héctor Prieto
Abstract
Collaborative robots require an efficient and robust Knowledge Representation and Reasoning (KR&R) system. In this paper, a KR&R system based on Semantic Web technologies is presented, and an upper KR&R system is proposed to coordinate missions in aquatic environments. Since aquatic environments are dynamic, Autonomous Underwater Vehicles (AUVs) need to deal with non-programmed situations, which requires the system to manage incompleteness and uncertainty. In this work, different Semantic Web technologies are studied in order to build a “global KR&R system”. Our case study focuses on collaborative AUVs that carry out mine-treatment missions in the ocean. To make this possible, three different types of AUV are proposed: search AUVs (sAUV), inspection AUVs (iAUV), and execution AUVs (eAUV). The “global KR&R system” manages mission-related knowledge. Protégé has been used to implement the proposed system, with Pellet as the reasoner.
KR&R; AUV; Semantic Web; Ontology; RDF; OWL; SPARQL; SWRL; RIF; Reasoning; Protégé; Pellet
1. Introduction
Knowledge Representation and Reasoning (KR&R) is a field of Artificial Intelligence (AI) that represents information about the world in a form a computer system can use to solve complex tasks, such as diagnosing a medical condition or holding a dialogue in natural language. It has a high impact in applications such as data mining, search engines, and recommendation systems [1].
KR&R draws on psychology, incorporating how humans solve problems and represent knowledge in order to design formalisms that make building complex systems easier [1].
In [2] it is said that “an Autonomous Underwater Vehicle (AUV) is a robot which travels underwater without
requiring input from an operator”.
This document focuses on how to represent knowledge in underwater environments, in other words on KR&R techniques for AUVs, and on the methods and algorithms for exploiting this knowledge in cooperative robots. A use case is presented in the last section to exemplify the algorithms and protocols studied in the previous sections.
2. State of the Art
This section presents a brief overview of the history of robots and the state of the art in KR&R and the Semantic Web,
and how they are useful in robotics.
In 1993 Randall Davis of MIT described five roles that a KR&R framework plays [3]:
• A knowledge representation (KR) is fundamentally a surrogate for the thing itself, which lets an entity
determine consequences by thinking instead of acting, i.e., by reasoning about the world rather than taking
action in it.
• It is a set of ontological commitments, i.e., an answer to the question: in what terms should I think about
the world?
• It is a fragmentary theory of intelligent reasoning, with three components:
• the representation's fundamental conception of intelligent reasoning;
• the set of inferences the representation sanctions; and
• the set of inferences it recommends.
• It is a medium for pragmatically efficient computation, i.e., the computational environment in which thinking is accomplished.
• It is a medium of human expression, i.e., a language in which we say things about the world.
2.1 KR&R Systems
The first step when deploying a knowledge representation system is choosing a suitable formalism for
expressing knowledge in a way that allows more knowledge to be inferred from it. A formalism must be defined over an explicit
formal language, ranging from First-Order Logic (FOL, the most expressive language considered here)
to subsets of it (such as if-then rule logics) or other, less expressive languages. The choice of expressive
power of the formalism is decisive for our future goals, and it must be adapted to each use case [3].
A KR&R system will be composed of:
• Data “structures”: high-level ones, because “low-level” data structures depend on the OS and
hardware on which the KR system runs.
• KR&R primitives: FOL, subsets of FOL, semantic networks, frames and rules, etc.
• Algorithms: e.g. general search, querying, meta-representation, handling incompleteness and
inconsistencies, non-monotonic reasoning tools [3][4].
Each of these elements must be chosen carefully, or even developed, depending on the characteristics and
requirements of each use case [4].
Historically there have been not only different knowledge representation primitives, but many different
approaches to the KR problem. Psychology, philosophy and other fields went through phases in which the
KR problem was considered from different points of view, but since the 1980s semantic theorists have supported a
language-based construction of meaning [4].
The first computerized KR work focused on general problem solvers (like GPS, Newell and Simon, 1959); these
systems had constrained “toy” domains because of the amorphous problem definitions they worked with [5].
It was the failure of these efforts that led to the cognitive revolution in psychology and to the phase of AI focused on
knowledge representation that resulted in expert systems in the 1970s and 80s [4] (considered the first truly
successful form of AI software) [4][6], production systems and, later in the mid 80s, frame languages. Until
then, the most widely used “non-FOL” KR primitive was the semantic network, but many expert systems (built in the LISP
programming language, which was modelled after the lambda calculus) often used lambda calculus as a form of
functional knowledge representation [4][7]. Frames and rules were the next kind of primitive.
Progressively, integrated frame- and rule-based systems appeared because there was an obvious synergy
between the two approaches; an example is the Knowledge Engineering Environment (KEE), developed by IntelliCorp
in 1983, which drove the integration of object-oriented programming with frames and rules [4][8].
At the same time, there was another line of research, less commercially focused, driven by mathematical
logic and automated theorem proving. Its technique was to define languages modelled after First-Order Logic (FOL).
The best-known example is Prolog, but there are also many special-purpose theorem-proving environments;
another of the most historically influential languages in this line was KL-ONE (mid 80s) [4].
Currently, one of the most active areas of knowledge representation research is the set of projects associated with the
Semantic Web [4][9].
2.2 KR&R in Semantic Web
“The Semantic Web is an extension of the Web through standards by the World Wide Web Consortium (W3C). The
standards promote common data formats and exchange protocols on the Web, most fundamentally the Resource
Description Framework [...] The Semantic Web provides a common framework that allows data to be shared and
reused across application, enterprise, and community boundaries” [10].
HTML has limitations for expressing semantics, so the Semantic Web goes further: it
involves publishing in languages specifically designed for data: the Resource Description Framework (RDF), the Web
Ontology Language (OWL), and the Extensible Markup Language (XML) [10].
The architecture of the Semantic Web is shown in the next figure.
Figure 1: The semantic Web Architecture [10]
A brief definition [10] of each level is given here; the most important languages are described in the next
section.
• XML contributes an elemental syntax for content structure within documents.
• RDF is a language for expressing data models.
• RDF Schema is a vocabulary for describing properties and classes of RDF-based resources, with semantics.
• OWL is a language that adds more vocabulary for describing properties and classes, for example
relations between classes.
• SPARQL is a protocol and query language for Semantic Web data sources.
• RIF defines a format for interchanging rules. It is an XML language for expressing Web rules.
KR&R is a key technology for the Semantic Web. Languages based on the frame model with automatic
classification provide a layer of semantics on top of the existing Internet. Rather than making typical searches via
simple text, it becomes possible to define logical queries and find pages that match them. The automated
reasoning component in these systems is an engine known as the classifier. A classifier can deduce new classes and
dynamically change the ontology as new information becomes available. This capability is ideal for the ever-changing
and evolving information space of the Internet [1].
2.3 KR&R in Robotics
The field of KR&R is starting to be integrated into robotics, to make robots able to represent, use, exchange and
reason about knowledge [1]. Some examples are:
• The representation of higher-level concepts in semantic maps, which integrate several types of knowledge,
for example geometric, topological, functional, categorical and temporal knowledge.
• The use of ontologies that enable robots to obtain information from the Web.
Guaranteeing a high level of communication and synergy between robotics and the field of KR&R is essential in order
to reuse the large amount of knowledge, experience and tools already acquired [1].
The two main points in this context are:
• Knowledge should be represented explicitly inside the robot, using a suitable representation
formalism.
• The elements in this representation must be grounded in real physical objects, parameters and events in
the robot’s operational environment [1].
2.4 Problem Statement
In the use case of cooperative AUVs for MCM missions, robots require sophisticated and robust concept- and
knowledge-management capabilities if they are to individually acquire knowledge, communicate it, and learn
it from one another, exhibiting intelligent group behavior [2][11].
Semantic approaches have proven added value for other multi-agent cooperative systems, two of which are referenced in
[12][13]. Semantic Web technologies and tools provide a bridge between individual perception and the upper symbolic
levels (the semantic language which robots ultimately use to store knowledge and communicate or learn it from one
another), grounding sensory and symbolic information into shared semantic knowledge in a hierarchical structure [13].
This means that, even when each robot maintains its own ontologies, and uses them to store and use its own
knowledge, the robots are able to share knowledge in a form that receivers only need to “translate” through the existing upper
ontologies, and only when they do not understand some of the “definitions” the sender used.
However, most current KR systems for AUVs target simple mono-domain applications, such as
gathering data from sensors for offline, manual post-processing, and so need only a
very simple KR&R system. When higher levels of autonomy and distributed work are necessary, as in our use
case of MCM missions using collaborative AUVs following a previously planned mission, robots
require access to higher levels of knowledge representation in order to increase group adaptation and
efficiency during mission accomplishment. They therefore need a global KR&R system if they are to efficiently infer,
share or request global mission-related knowledge between agents [2][11]. Semantic Web technologies and
research have focused from the start on the possibility of multi-ontology systems [14][13], so they are able to deal with
heterogeneous data which may need to be combined for many purposes.
2.5 Conclusions
The field of KR&R, and especially its semantic approaches, is starting to be an important part of many fields of
robotics. Semantic approaches and ontologies are widely used in much recent AUV research, some of it
referenced in [14][15][16], and in the projects referenced in [17][18], to make robots able to efficiently represent,
use, exchange and reason about shared knowledge [1] and to increase autonomy levels and cooperation
efficiency.
In this work we analyze the advantages that might be achieved by using Semantic
Web technologies to improve mission accomplishment for MCM collaborative AUVs.
3. Proposed KR&R System
In this section a use case is presented in order to analyse KR&R requirements. The use case is set
in a subaquatic environment.
The target is to deactivate mines lost at sea during the Second World War. To achieve this mission, a set of teams
of AUVs will cooperate. Three types of AUV can be distinguished:
1. SAUV (search AUV): this AUV has the task of mapping the environment and marking potential targets.
2. IAUV (inspection AUV): once the SAUV has finished its mission, this AUV has the task of determining whether
the potential targets are clear objectives or not.
3. EAUV (execution AUV): once both the SAUV and the IAUV have finished their missions, this AUV has to go to
the objective and detonate the mine if possible.
The initial state of the swarm will require some kind of training or pre-programming of the needed global
mission and sub-mission procedures and/or parameters.
Each AUV will have its own mono-domain KR&R system, which will not be analyzed in the present work; instead, we
describe mechanisms for maintaining a distributed KR&R system that can work with all the mono-domain
knowledge in a dynamic way. For example, we might have a global knowledge base and reasoning tools for
dynamically representing and using the knowledge associated with the global accomplishment state of the current
mission and sub-missions.
First, the algorithms necessary for developing this mission are presented and analyzed. Each subsection
has the following structure: first, different algorithms for that part are mentioned; then one of them is
chosen and explained; finally, related works are cited. In the final subsection, the use case is
developed with examples of each algorithm.
3.1 Data structures
As mentioned in the state of the art, “low-level” data structures depend on the OS and
hardware used in each case. Efficiently managing the system’s physical memory and storage drives, and efficiently
storing and retrieving data from them, is therefore transparent to the upper layers of the system architecture.
Operating systems typically provide some kind of directory tree and file-management system in order to abstract these
mechanisms from the user, providing the necessary tools to compile or interpret software that efficiently uses
the system capabilities. For example, the Robot Operating System (ROS) project uses a Linux kernel (UNIX i-node
file system) with C++ (among others) as the source language of its libraries, providing the means to develop and test software
on many Unix-based platforms [30].
When using Semantic Web technologies, knowledge is ultimately represented as RDF triples (subject-predicate-
object), serialized in RDF files. RDF extends the resource-description capabilities of URIs by incorporating (DL) logic
knowledge into the description. Subjects and predicates must be URI resource identifiers, and objects may be a URI or
a literal (plain or XML-typed). A set of triples forms an RDF graph, and only subjects or objects may be
blank nodes, used as graph-scoped identifiers. The RDF language namespace contains the RDF
vocabulary, with several predefined elements for building graphs. The elements rdf:Statement,
rdf:subject, rdf:predicate and rdf:object allow a statement (triple) to be decomposed into its parts, and those
parts (or the whole statement) to be used as a resource for making assertions about it (e.g. using it as the subject of another
statement), thus composing nested RDF graphs [31].
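As an illustration of this reification mechanism, the Turtle fragment below shows a plain triple and its reified form; the ex: prefix and all identifiers are invented for the example.

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex:  <http://example.org/auv#> .

# A plain triple: subject, predicate, object.
ex:sauv1 ex:detected ex:target42 .

# The same statement reified, so it can itself be described.
ex:stmt1 rdf:type      rdf:Statement ;
         rdf:subject   ex:sauv1 ;
         rdf:predicate ex:detected ;
         rdf:object    ex:target42 ;
         ex:confidence "0.87" .
```

The reified statement ex:stmt1 can now appear as the subject or object of further triples, which is what makes nested graphs possible.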
RDF triples and graphs form sets of related logical assertions, so RDF may be considered as rich data
descriptors: meta-data for resources (data).
RDF Schema is a semantic extension of the RDF vocabulary that allows taxonomies of classes and
properties to be described. In RDF Schema, resources can be divided into groups called classes (rdfs:Class). Classes are
themselves resources (rdfs:Resource), so they are identified by URIs and can be described using properties (rdfs:domain,
rdfs:range, rdfs:subClassOf, rdfs:subPropertyOf, rdfs:member, rdfs:isDefinedBy, ...). The
members of a class are its instances, which is stated using the rdf:type property. Note that a class and its set of
instances need not be identical. The RDF Schema class and property system is similar to the type systems of
object-oriented programming languages such as Java. RDF Schema differs from them in that, instead of defining a
class in terms of the properties its instances may have, it describes properties in terms of the classes of
resource to which they apply, using the rdfs:domain and rdfs:range properties. With this approach it is
easy for others to subsequently define additional properties with a specific domain or range, without
redefining the original description of these classes, allowing anyone to easily extend the description of
existing resources [31][32].
The RDF Schema strategy is to acknowledge that there are many techniques through which the meaning of classes
and properties can be described. Richer vocabulary or 'ontology' languages such as OWL, inference-rule languages
and other formalisms (for example temporal logics) each contribute to our ability to capture meaningful
generalizations about the data [32].
RDF files can be serialized using XML, but other, less verbose serialization formats are also used, such as Turtle
and N3 [31].
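As a toy illustration of the kind of entailment that rdfs:subClassOf and rdf:type support, the following sketch in plain Python computes the type closure of a small triple set. It is not an RDFS engine, and all class and individual names are invented for the example.

```python
# Toy RDFS-style type inference: if (s, rdf:type, C) holds and C is a
# (transitive) subclass of D, then (s, rdf:type, D) is entailed.
TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def infer_types(triples):
    """Return the triple set extended with rdf:type triples entailed by subClassOf."""
    supers = {}  # class -> set of direct superclasses
    for s, p, o in triples:
        if p == SUBCLASS:
            supers.setdefault(s, set()).add(o)

    def ancestors(cls, seen=None):
        # Walk the subclass hierarchy upward, collecting all superclasses.
        seen = seen if seen is not None else set()
        for parent in supers.get(cls, ()):
            if parent not in seen:
                seen.add(parent)
                ancestors(parent, seen)
        return seen

    inferred = set(triples)
    for s, p, o in triples:
        if p == TYPE:
            for sup in ancestors(o):
                inferred.add((s, TYPE, sup))
    return inferred

kb = {(":sAUV", SUBCLASS, ":AUV"),
      (":AUV", SUBCLASS, ":Robot"),
      (":sauv1", TYPE, ":sAUV")}
closure = infer_types(kb)
# :sauv1 is entailed to be an :AUV and, transitively, a :Robot.
```

Real RDFS entailment covers more rules (domain, range, subPropertyOf), but the subclass rule above is the one most often exercised in practice.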
“Unfortunately, not everything from RDF can be expressed in DL. For example, classes of classes are not
permitted in the (chosen) DL, and some of the triple expressions would have no sense in DL. That is why OWL can
only be a syntactic extension of RDF/RDFS (note that RDFS is both a syntactic and semantic extension of RDF). To
partially overcome this problem, and also to allow layering within OWL, three species of OWL are defined” [31].
OWL is thus a syntactic extension of RDF/RDF Schema that allows ontologies to be built in order to extract logically
computable knowledge from RDF/RDFS-based knowledge bases or files. At this level, the taxonomies of RDF
resources we can describe with RDF Schema begin to form knowledge structures rather than
simple data structures.
An example of RDF based on the case study is presented in the next figure.
Figure 2: RDF graph and triples.
The next figure shows an example of an RDF-S graph.
Figure 3: RDF-S graph.
3.2 Knowledge Representation Primitives
Our use case works with ontologies, so the tools required to represent them should be chosen or developed,
and should support working with ontologies in a human-readable way [1].
When explicitly representing knowledge, it helps to assume that it is composed of elemental knowledge pieces
(statements/propositions) such as “eAUV1 is a robot”.
An ontology is a set of such atomic knowledge pieces (called axioms in OWL 2). Axioms are statements that
can be true or false for a certain “state of affairs”; this distinguishes them
from entities and expressions, which are explained below.
In our approach, OWL statements are the KR primitives. They are usually not monolithic,
but have some underlying structure: in “eAUV1 is a robot” we have an object from the real world and a
category assignment; in the statement “eAUV1 is doing the same mission as eAUV2” we see two objects from the
real world and a relation between them. In OWL 2 the objects are called individuals, the categories are called classes,
and the relations are called properties. Properties are subdivided into object properties and datatype properties. All these
atomic elements are called entities.
Entity names can be combined into expressions using constructors, forming new entities defined by their
structure. Class expressions are one of the main features of OWL 2 [31][33].
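For instance, a class expression in OWL 2 functional syntax can define “an AUV that is carrying out some mission” without naming that class beforehand. The entity names below are invented for illustration:

```
EquivalentClasses(
  :ActiveAUV
  ObjectIntersectionOf(
    :AUV
    ObjectSomeValuesFrom(:performs :Mission)
  )
)
```

A reasoner can then classify any individual with an asserted :performs link to a :Mission as an instance of :ActiveAUV, without that membership ever being asserted directly.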
“OWL is a family of knowledge representation languages for authoring ontologies. Ontologies are a formal way
to describe taxonomies and classification networks, essentially defining the structure of knowledge for various
domains: the nouns representing classes of objects and the verbs representing relations between the objects.” [19]
The next figure shows the structure of OWL 2 (OWL 2 is an extension and revision of the OWL standard published in 2004). The
ellipse in the centre represents the abstract notion of an ontology. At the top are the various concrete syntaxes that can
be used to serialize and exchange ontologies. At the bottom are the two semantic specifications that define the
meaning of OWL 2 ontologies [10].
Figure 4: The structure of OWL 2 [20]
There are three variants of OWL: OWL Lite, OWL DL and OWL Full (ordered by increasing expressiveness) [20]
[21] [22].
• OWL Lite: intended to provide simple tool support, allowing a quick migration path for systems
using thesauri and other taxonomies.
• OWL DL: designed to provide the maximum expressiveness possible while retaining computational
completeness, decidability, and the availability of practical reasoning algorithms.
• OWL Full: based on a different semantics from OWL Lite and OWL DL, and designed to preserve
some compatibility with RDF Schema.
3.2.1 OWL-DL (Deterministic)
OWL DL is the variant of OWL based on description logics (DL). A DL models concepts, roles, individuals,
and their relationships. The main modelling notion of a DL is the axiom, a logical statement which relates roles
and/or concepts [23].
OWL DL corresponds to a subset of FOL (First-Order Logic) and uses FOL terminology, despite being an implementation of a
description logic. As in FOL, a syntax defines which collections of symbols are legal expressions in a DL, and a
semantics determines their meaning. Unlike FOL, a DL may have many well-known syntactic variants [24].
Modelling knowledge with OWL DL is based on two components: the TBox (terminological box) and the ABox (assertion
box). The TBox contains the terminology (the vocabulary of an application domain), and the ABox relates the concepts
described in the TBox to individuals.
In the case of cooperative robots in dynamic environments, the TBox may take the role of an upper-ontology symbolic
library used by the whole system. Robots can then share knowledge more easily, using their own ontologies (or ABoxes) created
from their individual perception and their own contextual knowledge.
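A minimal sketch of this split, in OWL functional syntax with invented names: the TBox fixes the shared vocabulary, while each robot's ABox asserts facts about concrete individuals.

```
# TBox: terminology shared by the whole system
SubClassOf(:iAUV :AUV)
ObjectPropertyDomain(:inspects :iAUV)
ObjectPropertyRange(:inspects :PotentialTarget)

# ABox: facts produced by one robot's perception
ClassAssertion(:iAUV :iauv3)
ClassAssertion(:PotentialTarget :mine17)
ObjectPropertyAssertion(:inspects :iauv3 :mine17)
```

Because the TBox is common, another robot receiving only the ABox assertions can interpret them without any further negotiation.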
Some examples of OWL DL applications in collaborative AUVs are:
• Mine countermeasures (MCM):
“OWL DL provides much of the expressiveness of first-order logic while providing very desirable features such as
soundness, completeness and decidability upon applying reasoning.” [25]
• The project of Bückner et al.:
“Bückner et al. use semantic networks for knowledge representation. Also, image-processing languages, image-token
databases, logic-programming languages and description-logic systems can be used for KR in image
understanding.” [26]
3.2.2 PR-OWL (Probabilistic)
Other approaches combine uncertainty and semantics through a probabilistic approach, dealing with the already-
mentioned problem of uncertainty or incompleteness in the acquired knowledge. Uncertainty can be considered
ubiquitous, so any KR system intended to model real-world processes must be able to deal with it [27][28].
PR-OWL and PR-OWL 2 are possible solutions to this problem, providing the means to model uncertainty in
ontologies. They simply add new definitions to current OWL, keeping backward compatibility with it. They use a formal
semantics based on Multi-Entity Bayesian Networks (MEBN) probabilistic logic [29], which extends first-order logic with
Bayesian probability, combining the expressive power of FOL with the inference capabilities and robustness of
probabilistic representations. In essence, these OWL extensions define classes that represent the
elements needed by an MEBN: random variables, MFrags, MTheories and the associated probability distributions [27].
[27] also enumerates the most important types of incompleteness and uncertainty that an AUV
robot such as PANDORA [18] has to deal with:
• Instance incompleteness or uncertainty. Due to sensory limitations, the robot might, for example, believe that a
certain object described by some ontology is present at some place in the world with a certain probability,
but the uncertainty lies only in the robot’s “belief” about the world. Assuming the ontology represents that
belief, the uncertainty is related only to the ABox of the ontology, while the TBox might carry
no uncertainty.
• Relation incompleteness or uncertainty. The same problem appears when establishing ontological
relations between context elements grounded into different semantic concepts by sensor processing; again
the uncertainty is related to the ABox “instances” of some deterministic TBox class of the ontology.
• Inferred-relation and concept incompleteness or uncertainty. In this case the uncertainty comes from
the probabilistic inference methods rather than from sensor limitations. However, the uncertainty remains
related only to the ABox physical instances, now generated from probabilistic rules of the TBox.
• Evolving-world uncertainty. Here the uncertainty is not about beliefs but about the changing
world itself and its evolution in time. The system can naturally evolve into different states with
different probabilities, which are updated with every “actualization” of the world model.
In all these cases PR-OWL offers more effective solutions than other probabilistic ontology frameworks such as
BayesOWL, which [27] describes as: “not appropriate for domains where probabilities must be associated with
instances in the ontology, for example where there is a requirement to associate an uncertainty with sensor readings.
This effectively rules out BayesOWL for the Pandora project.”
The main drawback of using PR-OWL ontologies is the difficulty of linking the concepts in a given domain
ontology with those in a PR-OWL ontology, whereas BayesOWL can be used directly alongside a domain ontology. However,
PR-OWL 2 has considerably improved this aspect [27][28].
3.3 Querying
Once the ontologies have been chosen, the next choice concerns which querying mechanisms best suit
our use case. Queries are one of the things that make databases so powerful: a “query” is the action of
retrieving data from a database. Querying mechanisms help the system select data properly, so that it uses only
the knowledge it needs.
The main query language used in Semantic Web-based robotics is SPARQL, but some projects have
created their own mechanisms, adapted to their specific ontologies and systems. Some
examples are:
OpenEval:
OpenEval, developed at Carnegie Mellon University, is a querying mechanism with predicate-evaluator
functions. It can return a probability distribution over instances of predicates [34].
RaQueL (RoboBrain Query Library):
RoboBrain, developed jointly by Cornell University and Stanford University, is a knowledge engine for robots
(for example in smart homes and kitchens). RoboBrain’s knowledge sources are on the Internet (such as WordNet, ImageNet, Freebase and
OpenCyc). RaQueL is more than a query language: it can be used for diverse tasks such as semantic
labeling, cost functions and features for grasping and manipulation, grounding language, anticipating activities, and
fetching trajectory representations [35].
3.3.1 SPARQL
SPARQL (SPARQL Protocol and RDF Query Language) is defined in [36] as: “an RDF query language, that is, a
semantic query language for databases, able to retrieve and manipulate data stored in Resource Description
Framework format.”
SPARQL uses IRIs (which in SPARQL queries are absolute), a subset of RDF URI References that omits
spaces and includes both URIs and URLs [37].
SPARQL queries contain sets of triple patterns called basic graph patterns, which are like RDF triples
except that the subject, predicate and object may each be a variable [37].
The Turtle data format is used here to show each triple explicitly, because Turtle allows IRIs to be abbreviated
with prefixes. A simple SPARQL query is shown below.
The data, in Turtle:
@prefix tsk: <localhost:/GLOBAL/KB/term/Task#> .
@prefix auv: <localhost:/GLOBAL/KB/term/auv.rdf#> .
@prefix : <localhost:/GLOBAL/KB/assert/exampleTask.rdf#> .
:20150101-002 tsk:percentAccomplish 51 ;
    tsk:hasMember auv:sauv2 .
The query:
PREFIX tsk: <localhost:/GLOBAL/KB/term/Task#>
SELECT ?member
WHERE { ?x tsk:percentAccomplish ?percentAccomplish ;
           tsk:hasMember ?member .
        FILTER (?percentAccomplish > 50)
}
This is a simple query, but the document referenced in [37] contains many examples of different queries depending
on the information to be consulted. An example of Turtle data and SPARQL queries related to the use case will be
shown later.
Summarizing [37], SPARQL allows filters (FILTER) and optional triple patterns (OPTIONAL), and makes the
FROM clause optional. The modifiers of result sequences are similar to SQL: ORDER BY, DISTINCT, OFFSET and
LIMIT. Four query forms are allowed: SELECT (returns variables and their bindings directly), CONSTRUCT
(returns a single RDF graph specified by a graph template), ASK (returns no information about the possible
query solutions, just whether or not a solution exists) and DESCRIBE (returns a single RDF graph containing
RDF data about resources). The result formats supported by SPARQL 1.1 are XML, JSON, CSV and TSV [36]. For
more information, the documents referenced in [38], [39] and [40] may be consulted. There is an extension
of basic SPARQL, referenced in [41], that allows certain subqueries to be explicitly delegated to different SPARQL
endpoints. Finally, the document referenced in [42] describes the SPARQL update request.
The SPARQL protocol, referenced in [43], consists of two operations, queries and updates, and works over HTTP
requests and responses.
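To make the semantics of a SELECT with a FILTER concrete, here is a toy matcher in plain Python. It is not a SPARQL engine; the triples and identifiers are illustrative, loosely following the mission-task data shown earlier.

```python
# Toy evaluation of a SPARQL-like basic graph pattern with a FILTER.
# Triples are (subject, predicate, object); identifiers are illustrative.
triples = [
    (":20150101-002", "tsk:percentAccomplish", 51),
    (":20150101-002", "tsk:hasMember", "auv:sauv2"),
    (":20150101-003", "tsk:percentAccomplish", 30),
    (":20150101-003", "tsk:hasMember", "auv:sauv1"),
]

def select_members(triples, threshold):
    """SELECT ?member WHERE { ?x percentAccomplish ?p ; hasMember ?member .
    FILTER (?p > threshold) } -- evaluated naively over a triple list."""
    # Bind ?x -> ?p for the first triple pattern.
    percent = {s: o for s, p, o in triples if p == "tsk:percentAccomplish"}
    # Join with the second pattern and apply the filter.
    return [o for s, p, o in triples
            if p == "tsk:hasMember" and percent.get(s, 0) > threshold]

print(select_members(triples, 50))  # → ['auv:sauv2']
```

A real SPARQL engine generalizes this join-then-filter evaluation to arbitrary graph patterns and performs it over indexed triple stores rather than lists.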
3.4 Rules
Continuing with the definition of the tools and algorithms of our use case, it is necessary to extend the knowledge
system by introducing rule tools defined by the W3C. This subsection focuses on RIF
and SWRL.
3.4.1 RIF (Rule Interchange Format)
RIF is “a standard for exchanging rules among Semantic Web systems. RIF focused on exchange rather than trying to
develop a single one-size-fits-all rule language” [44].
Rule languages depend on the paradigm used in each system; a classification distinguishes
first-order, logic-programming and action rules. To address this, RIF defines a family of languages,
called dialects. These dialects are designed to be uniform and extensible: uniform because they are expected to
share as much as possible of the existing machinery, and extensible because developers can define a new
dialect as a syntactic extension of an existing RIF dialect [44].
Since the proposed use case uses a data structure based on RDF/S and represents ontologies in
OWL, the most appropriate RIF specification for our system is “RIF RDF and OWL Compatibility”. The
basic idea is that RIF uses its frame syntax to communicate with RDF/OWL: frames are mapped onto RDF
triples, and a joint semantics is defined for the combination [44].
3.4.2 SWRL (Semantic Web Rule Language)
SWRL is “based on a combination of the OWL DL and OWL Lite sublanguages of the OWL Web Ontology
Language with the Unary/Binary Datalog RuleML sublanguages of the Rule Markup Language” [45]. SWRL
extends the set of OWL axioms to include Horn-like rules (FOL-based rules), enabling such rules to be
combined with an OWL knowledge base.
Rules take the form of an implication between an antecedent (body) and a consequent (head). The intended meaning
can be read as: whenever the conditions defined in the antecedent hold, the conditions defined in the
consequent must also hold [45].
The antecedent (body) and consequent (head) each consist of zero or more atoms. An empty antecedent is treated as
trivially true, so the consequent must be satisfied by every interpretation; an empty consequent is treated as
trivially false, so the antecedent must not be satisfied by any interpretation. Multiple atoms are treated as a
conjunction [45].
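A Horn-like rule of this shape can be applied by naive forward chaining until no new facts appear. The sketch below, in plain Python with invented predicates and individuals, is a toy illustration of that process, not a SWRL engine.

```python
# Naive forward chaining for one Horn-like rule:
#   hasSubMission(?m, ?s) ^ assignedTo(?s, ?a) -> worksOn(?a, ?m)
# Predicate and individual names are invented for this sketch.
facts = {
    ("hasSubMission", "mission1", "subA"),
    ("hasSubMission", "mission1", "subB"),
    ("assignedTo", "subA", "eAUV1"),
    ("assignedTo", "subB", "iAUV3"),
}

def forward_chain(facts):
    """Apply the rule repeatedly until a fixpoint is reached."""
    facts = set(facts)
    while True:
        # Join the two body atoms on the shared variable ?s.
        new = {("worksOn", a, m)
               for p1, m, s in facts if p1 == "hasSubMission"
               for p2, s2, a in facts if p2 == "assignedTo" and s2 == s}
        if new <= facts:  # no new facts derived: fixpoint
            return facts
        facts |= new

derived = forward_chain(facts)
# Both AUVs are now derived to work on mission1.
```

SWRL engines such as the one built into Pellet perform essentially this kind of rule application, integrated with the OWL reasoning over classes and properties.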
In the proposed use case, we need to extend the expressivity of OWL by adding SWRL rules to our ontology,
because OWL cannot express all relations. For example, OWL cannot express the relation between a task and the
sub-missions which compose a general mission, because there is no way in OWL to express a relation between the
individuals with which another individual has relations [46].
An example of rules is shown in the next figure.
Figure 5: Graph
Figure 6: Rules
Declaration(Class(:AUV))
Declaration(Class(:Agent))
Declaration(Class(:iAUV))
Declaration(Class(:eAUV))
Declaration(Class(:Master))
Declaration(ObjectProperty(:hasMaster))
Declaration(DataProperty(tsk:percentAccomplish))
SubClassOf(:iAUV :AUV)
SubClassOf(:Master :AUV)
Declaration(NamedIndividual(:eAUV1))
ClassAssertion(:eAUV :eAUV1)
DataPropertyAssertion(tsk:percentAccomplish :eAUV1 "51"^^xsd:integer)
Declaration(NamedIndividual(:eAUV2))
ClassAssertion(:eAUV :eAUV2)
DataPropertyAssertion(tsk:percentAccomplish :eAUV2 "30"^^xsd:integer)
Declaration(NamedIndividual(:iAUV3))
ClassAssertion(:iAUV :iAUV3)
DataPropertyAssertion(tsk:percentAccomplish :iAUV3 "90"^^xsd:integer)
ObjectPropertyAssertion(:hasMaster :eAUV1 :iAUV3)
ObjectPropertyAssertion(:hasMaster :eAUV2 :iAUV3)
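In the human-readable syntax of the SWRL submission, a rule over the individuals above might be written as follows (an illustrative sketch: the conclusion atom pastHalfway and the 50% threshold are assumptions, not part of the axioms above):

```
iAUV(?m) ^ hasMaster(?a, ?m) ^ tsk:percentAccomplish(?a, ?p) ^
    swrlb:greaterThan(?p, 50) -> pastHalfway(?a)
```

Read as the implication described earlier: whenever an AUV ?a has a master ?m of class iAUV and its task accomplishment ?p exceeds 50, the consequent pastHalfway(?a) must also hold.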
3.5 Reasoning
This subsection provides a comparison between semantic web reasoners. “A reasoner is a program that infers
logical consequences from a set of explicitly asserted facts or axioms and typically provides automated support for
reasoning tasks such as classification, debugging and querying” [47]. To choose a suitable reasoner it is necessary
to analyse several dimensions. The first is the underlying reasoning characteristics: for example, the reasoning
method, the expressivity, rule support and so on. The next dimension is practical usability: whether the reasoner
implements OWL, whether it is commercial or open source, the platforms it runs on and, in our case, whether it is
available as a Protégé plugin. The last dimension concerns performance indicators that can be evaluated
empirically (for example, classification) [47].
The next table shows a brief comparison between three reasoners: FaCT++, HermiT and Pellet.
Table 1: Reasoners Comparison [47]
                                    FaCT++     HermiT       Pellet
Completeness                        Yes        Yes          Yes
Expressivity                        SROIQ(D)   SROIQ(D)     SROIQ(D)
Incremental classification
(addition/removal)                  No/No      No/No        Yes/Yes
Rule support                        No         Yes (SWRL)   Yes (SWRL)
Justifications                      No         No           Yes
ABox reasoning                      Yes        Yes          Yes (SPARQL)
OWL/OWLlink API                     Yes        Yes          Yes
Protégé plugin                      Yes        Yes          Yes
License                             LGPL       LGPL         LGPL
Open source                         Yes        Yes          Yes
Language                            C++        Java         Java
Platforms                           All        All          All
In our use case the Pellet reasoner is used, because it supports SWRL rules and SPARQL, which are the required
characteristics.
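As an illustrative sketch of why SPARQL support matters here (the prefix IRIs below are hypothetical placeholders, assumed to match the example ontology of Section 3.4.2), a query answered over the Pellet-classified knowledge base could retrieve the eAUVs that have accomplished more than half of their task:

```
PREFIX :    <http://example.org/auv#>    # hypothetical base IRI
PREFIX tsk: <http://example.org/task#>   # hypothetical task prefix
SELECT ?auv ?p
WHERE {
  ?auv a :eAUV ;
       tsk:percentAccomplish ?p .
  FILTER (?p > 50)
}
```

Over the example assertions, this would bind ?auv to :eAUV1 only, since :eAUV2 reports 30%.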
4. KR&R algorithms
This section describes current approaches to KR&R systems in the robotics field and the goals for their next steps.
Several current approaches are mentioned and linked to their possible application in our case study.
Handling incompleteness and inconsistency:
As previously mentioned, handling incompleteness in knowledge representation can be partially solved by using a
probabilistic approach such as the PR-OWL extension. Any OWL inference engine compatible with the PR-OWL
syntax should therefore be able to reason about that knowledge, producing results that may also be expressed with
the help of that syntax. However, the problem of handling incompleteness and inconsistency goes further: the
KR&R system should be able not only to express knowledge with some degree of uncertainty in its atomic
elements, but also to detect and fix inconsistencies between previously stored and inferred knowledge and newly
acquired knowledge (obtained by means other than reasoning).
It was also previously mentioned that world-model updates are needed in order to handle Evolving World
Uncertainty, and that the system will evolve into different states with each update of the world model. This is
possible, and the relational database systems at the backbone of the proposed system solve all the consistency
problems associated with it. But with probabilistic approaches, and due to the mentioned "state changes" of the
system when the world model is updated, other kinds of inconsistencies emerge between knowledge inferred in
some previous state and knowledge inferred from the same statements in the current state.
As the system may consider them representations of different real-world objects, even when they were inferred
from the same statements (but with different uncertainty degrees), they could coexist in the knowledge base without
raising errors. Detecting and fixing these partial inconsistencies should therefore be done periodically and
atomically by parallel processes or agents that interact with the knowledge base. This process is called belief
revision.
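The core of such a revision process can be sketched as follows (a toy illustration under stated assumptions, not an AGM-style implementation: statements carry a single confidence value, and the conflict test is supplied by the caller):

```python
# Hedged belief-revision sketch: when a newly acquired statement contradicts
# a stored one, keep whichever has the higher confidence and retract the other.
def revise(kb, statement, confidence, contradicts):
    """kb maps statement -> confidence; contradicts(a, b) flags conflicts."""
    for old, old_conf in list(kb.items()):
        if contradicts(old, statement):
            if old_conf >= confidence:
                return kb          # stored belief is stronger: discard the new one
            del kb[old]            # stored belief is weaker: retract it
    kb[statement] = confidence
    return kb

# Hypothetical conflict test: same subject, different reported value.
contradicts = lambda a, b: a[0] == b[0] and a[1] != b[1]
kb = {("mine7", "cleared"): 0.4}
revise(kb, ("mine7", "present"), 0.9, contradicts)
print(kb)  # {('mine7', 'present'): 0.9}
```

A production system would run this atomically and periodically, as the text suggests, rather than on every single assertion.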
In order to achieve this, a negation-as-failure operator, statement removal, default declarations and exception
declarations are all needed. But FOL (and thus DL) is essentially monotonic, in the sense that anything that could
be concluded before a clause is added can still be concluded after it is added, so it is impossible to achieve this
with DL alone [48].
On the other hand, we do not have the same problem with rule sets, which in the proposed use case will be
expressed in the Semantic Web Rule Language. There is no point in applying a rule to a randomly chosen
individual rather than to specific ones (as is done when applying generic probabilistic knowledge [49]).
In the next subsection, different approaches to the non-monotonic reasoning problem and to other architectural
problems associated with the inference engine are presented; with the help of these approaches, the incompleteness
and inconsistency handling problems should also be partially solved.
Non-monotonic reasoning:
Non-monotonic reasoning refers to reasoning about knowledge expressed in some non-monotonic logic. Non-
monotonic logics are formal logics whose consequence relation is not monotonic, which means that reasoners can
draw tentative conclusions (so-called defeasible inferences) and later retract those conclusions in the light of
further evidence [50]. A monotonic logic cannot handle belief revision or other reasoning tasks such as reasoning
by default, inductive reasoning or abductive reasoning.
The use of non-monotonic logics implies a closed-world assumption.
Other defeasible reasoning types include statistical, probabilistic and paraconsistent reasoning [51].
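A defeasible default with negation as failure can be sketched in a few lines (an illustration with hypothetical predicate names, not a full default-logic prover): under the closed-world assumption, anything not provably abnormal is assumed normal, and the conclusion is retracted when contrary evidence arrives.

```python
# Sketch of a defeasible (default) inference via negation as failure:
# an AUV is assumed operational unless it is KNOWN to be abnormal.
def operational(auv, abnormal_facts):
    """Default rule; 'not in' is the negation-as-failure test."""
    return auv not in abnormal_facts

abnormal = set()
print(operational("iAUV3", abnormal))  # True: concluded by default
abnormal.add("iAUV3")                  # further evidence arrives
print(operational("iAUV3", abnormal))  # False: the earlier conclusion is retracted
```

This retraction is exactly what a monotonic DL cannot express, which motivates the approaches below.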
In the present approach, the proposed global KR&R System must provide means to reason about knowledge
expressed in probabilistic terms with some OWL inference engine. One way to achieve this is to use an extension
or variation of the inference engine that allows it to reason with PR-OWL ontologies. Some work has been done in
this direction. For example, Pronto is a probabilistic OWL reasoner that can handle uncertainty in both
terminological and assertional DL statements (axioms); it extends the Pellet inference engine, making it capable of
default probabilistic reasoning. A deeper analysis of the Pronto prototype can be found in [52].
Other approaches to the problem of non-monotonic reasoning in systems based on DL ontologies, like [53], divide
the problem by using two inference engines for different "contexts": one context is the DL ontology-based KR&R
system, and the other is a non-monotonic logic-programming rules context (MKNF knowledge bases in this case).
Both contexts interact through so-called "bridge rules", sets of rules that project logical propositions from one
context to another. This is called a Multi-Context System (MCS), and it provides a way to integrate knowledge
generated from different sources and reason about it with a DL inference engine.
MCSs could be a well-suited solution not only for integrating different monotonic and non-monotonic logics in the
same AUV, but also for integrating other needed knowledge and routines such as:
- Static real-world object models: mission plans, maps, etc.
- Models of the AUV itself (meta-representation).
- Belief revision and other knowledge-base "maintenance" routines.
- Other AUV function-related processes and/or algorithms.
- Many more.
MCSs could make the knowledge generated by all of them usable by the OWL inference engine by "translating" it
into DL. The inference engine could then reason over that knowledge and make the knowledge it produces
readable by the other contexts [53].
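The bridge-rule idea can be sketched minimally (all names here are hypothetical illustrations, not taken from [53]): two contexts hold separate fact sets, and a bridge rule projects a proposition derived in the rules context into the DL context.

```python
# Illustrative Multi-Context System sketch: a bridge rule projects
# propositions from a non-monotonic rules context into the DL context.
dl_context = {("iAUV3", "type", "iAUV")}     # DL ontology facts
rule_context = {("eAUV1", "batteryLow")}     # facts derived by the rules context

def apply_bridge(dl, rules):
    """Bridge rule: batteryLow(x) in the rules context adds NeedsRecharge(x)
    to the DL context, where the OWL reasoner can then use it."""
    for subject, prop in rules:
        if prop == "batteryLow":
            dl.add((subject, "type", "NeedsRecharge"))
    return dl

apply_bridge(dl_context, rule_context)
print(("eAUV1", "type", "NeedsRecharge") in dl_context)  # True
```

A real MCS would run such projections in both directions, so that each context stays internally consistent while sharing conclusions.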
To summarize, the proposed KR&R system could use the Pellet inference engine with the Pronto extension to
handle some of the problems that have been exposed. Moreover, it could benefit from using an MCS not only for
these problems but also to keep all its components well integrated with one another.
5. Conclusions
In our use case, the task of coordinating a mission between collaborative AUVs can be improved and eased by
using a global KR&R system. This system is independent of each individual cognitive architecture and enhances
mission-related knowledge sharing, so the efficiency of synchronized mission execution is improved.
Using Semantic Web technologies eased the process of developing the structure of the knowledge system.
Including non-monotonic logic in the inference-engine rules and knowledge representation primitives could solve
inconsistency problems, for example through belief revision. And thanks to the use of MCSs (Multi-Context
Systems), it could be possible to make the character of the rule sets used in the proposed KR&R system completely
independent.
Another reason to use non-monotonic logic is its capacity for representing and reasoning about knowledge in
probabilistic terms.
6. Future Work
First, to assess the advantages of the proposed KR&R system when the case study involves coordinating a large
number of AUVs and tasks, more complex missions, and different MCS contexts.
Second, to study further how PR-OWL and other probabilistic KR&R approaches can improve the functioning of
the proposed system.
And last but not least, to implement non-monotonic logical contexts in the system, for example belief-revision
modules, and to implement non-monotonic logic in the rule set so that statements can be removed from the
knowledge base.
Acknowledgements
The authors would like to thank all the classmates of ATA and our teacher JF. Also, thanks to Gregorio Rubio for
his advice.
References
[1] SPARC, «Robotics 2020 Multi-Annual Roadmap» 06 February 2015. [Online]. Retrieved 15 March 2015.
Available: http://www.eu-robotics.net/cms/upload/PDF/Multi-
Annual_Roadmap_2020_Call_1_Initial_Release.pdf.
[2] S. R. H. L. Chun-Yi Su, Intelligent Robotics and Applications, 5th International Conference, ICIRA 2012,
Montreal, QC, Canada: Springer, 2012.
[3] R. Davis, H. Shrobe and P. Szolovits, “What is a Knowledge Representation?,” AI Magazine, vol. 14, no.
1, pp. 17-33, 1993.
[4] Knublauch, Holger; Oberle, Daniel; Tetlow, Phil; Evan 09 March 2006. "A Semantic Web Primer for
Object-Oriented Software Developers". W3C.
[5]. Wikipedia contributors. “General Problem Solver”. Wikipedia, The Free Encyclopedia, 16 March 2015.
Retrieved 15 March 2015. [Online]. Available:
http://en.wikipedia.org/w/index.php?title=General_Problem_Solver&oldid=599871187
[6] Russell, Stuart; Norvig, Peter (1995). Artificial Intelligence: A Modern Approach. Simon & Schuster. pp.
22–23. ISBN 0-13-103805-2.
[7] Wikipedia contributors. "Expert system." Wikipedia, The Free Encyclopedia., 16 March 2015. Retrieved 15
March 2015. [Online]. Available:
http://en.wikipedia.org/w/index.php?title=Expert_system&oldid=651655828
[8] Wikipedia contributors. “Knowledge Engineering Environment”. Wikipedia, The Free Encyclopedia. 14
March 2015. Retrieved 15 March 2015. [Online]. Available:
http://en.wikipedia.org/w/index.php?title=Knowledge_Engineering_Environment&oldid=651351502
[9] Berners-Lee, Tim; James Hendler and Ora Lassila (May 17, 2001). "The Semantic Web A new form of
Web content that is meaningful to computers will unleash a revolution of new possibilities". Scientific
American.
[10] W3C, «OWL 2 Web Ontology Language Document Overview (Second Edition),» 11 December 2012.
[Online]. Available: http://www.w3.org/TR/owl2-overview/. [Last access: 11 March 2015].
[11] Grounding Robot Sensory and Symbolic Information Using the Semantic Web Christopher Stanton and
Mary-Anne Williams Innovation and Technology Research Laboratory, Faculty of Information Technology
University of Technology, Sydney, Australia
[12] Efficient Multi-AUV Cooperation using Semantic Knowledge Representation for Underwater
Archaeology Missions Nikolaos Tsiogkas, Georgios Papadimitriou, Zeyn Saigol, David Lane Ocean Systems
Laboratory Heriot Watt University, EH14 4AS Edinburgh, Scotland, UK
[13] Semantic Approach to Dynamic Coordination in Autonomous Systems Artem Katasonov and Vagan
Terziyan University of Jyväskylä, Finland artem.katasonov@jyu.fi, vagan@jyu.
[14] Claire D’Este, Ahsan Morshed, and Ritaban Dutta, Castray Esplanade, Hobart, TAS Australia. CSIRO
Computational Informatics. Robot Sensor Data Interoperability and Tasking with Semantic Technologies.
[15] A. Elgi, and B. Rahnama Department of Computer Engineering, and Internet Technologies Research
Center, Eastern Mediterranean University, Gazimagusa, Mersin 10, TRNC, Turkey. Human-Robot Interactive
Communication Using Semantic Web Tech in Design and Implementation of Collaboratively Working Robot.
[16] Francesco Maurelli, Zeyn Saigol, Carlos C. Insaurralde, Yvan R. Petillot, David M. Lane Ocean Systems
Laboratory School of Engineering & Physical Sciences Heriot-Watt University EH14 4AS Edinburgh. Marine
world representation and acoustic communication: challenges for multi-robot collaboration.
[17] Interactive And Robotic Systems lab 7 March 2015 [Onine]. Available:
http://www.irs.uji.es/project/trident-eu-fp7-project
[18] Pandora, Persistent Autonomous Robots. 7 March 2015 [Online]. Available:
http://persistentautonomy.com/
[19] W. contributors, “Web Ontology Language,” 7 March 2015. [Online]. Available:
http://en.wikipedia.org/w/index.php?title=Web_Ontology_Language&oldid=650337140. [Accessed 11 March
2015]
[20] W3C, «OWL 2 Web Ontology Language Document Overview (Second Edition),» 11 December 2012.
[Online]. Available: http://www.w3.org/TR/owl2-overview/. [Last access: 11 March 2015]
[21] W3C, «OWL Web Ontology Language Guide,» 10 February 2004. [Online]. Available:
http://www.w3.org/TR/owl-guide/. [Last access: 11 March 2015].
[22] W3C, «OWL Web Ontology Language Reference,» 10 February 2004. [Online]. Available:
http://www.w3.org/TR/owl-ref/. [Last access: 11 March 2015].
[23] Grau, B. C.;Horrocks, I.; Motik, B.; Parsia, B.; Patel-Schneider, P. F.; Sattler, U. (2008)."OWL 2: The
next step for OWL". Web Semantics: Science, Services and Agents on the World Wide Web 6 (4): 309–
322.doi:10.1016/j.websem.2008.05.001
[24] Ian Horrocks and Ulrike Sattler Ontology Reasoning in the SHOQ(D) Description Logic, in Proceedings
of the Seventeenth International Joint Conference on Artificial Intelligence, 2001.
[25] Papadimitriou, G., Ocean Syst. Lab., Heriot-Watt Univ., Edinburgh, UK, Lane, D. Semantic Based
Knowledge Representation and Adaptive Mission Planning for MCM Missions using AUVs
[26] Gaopan Huang, Integrated Inf. Syst. Res. Center, CASIA, Beijing, China, Yuan Tian ; Guanqing Chang A
Knowledge Representation Architecture for Remote Sensing Image Understanding Systems.
[27] F. Maurelli, Z. A. Saigol, G. Papadimitriou, T. Larkworthy, V. De Carolis, D.M. Lane Ocean Systems
Laboratory School of Engineering & Physical Sciences Heriot-Watt University EH14 4AS Edinburgh.
Probabilistic Approaches in Ontologies: Joining Semantics and Uncertainty for AUV Persistent Autonomy.
[28] Kathryn Blackmond Laskey, Richard Haberlin, Paulo Costa Volgenau School of Engineering George
Mason University Fairfax, VA USA Rommel Novaes Carvalho Brazilian Office of the Comptroller General
Brasília, Brazil. PR-OWL 2 Case Study: A Maritime Domain Probabilistic Ontology
[29] Kathryn Blackmond Laskey. Department of Systems Engineering and Operations Research MS4A6, George
Mason University, Fairfax, VA 22030, USA. MEBN: A Language for First-Order Bayesian Knowledge Bases.
[30] ROS.org, «ROS/Introduction» 22 May 2014. [Online]. Available: http://wiki.ros.org/ROS/Introduction. [Last
access: 15 April 2015]
[31] Obitko.com, «Introduction to Ontologies and Semantic Web» 2007. [Online]. Available:
http://www.obitko.com/tutorials/ontologies-semantic-web/. [Last access: 15 April 2015]
[32] W3C, «RDF Schema 1.1» 25 February 2014. [Online]. Available: http://www.w3.org/TR/rdf-schema/.
[Last access: 15 April 2015]
[33] W3C, «OWL 2 Web Ontology Language Primer (Second Edition)» 11 December 2012. [Online].
Available: http://www.w3.org/TR/2012/REC-owl2-primer-20121211/. [Last access: 15 April 2015]
[34]. Thomas Kollar, Mehdi Samadi, Manuela Veloso, School of Computer Science, Carnegie Mellon
University. Enabling Robots to Find and Fetch Objects by Querying the Web.
[35] Ashutosh Saxena, Ashesh Jain, Ozan Sener, Aditya Jami, Dipendra K Misra, Hema S Koppula.
Department of Computer Science, Cornell University and Stanford University. RoboBrain: Large-Scale
Knowledge Engine for Robots.
[36] W3C, «SPARQL 1.1 Overview» 21 March 2013. [Online]. Available: http://www.w3.org/TR/sparql11-
overview/. [Last access: 15 April 2015]
[37] W3C, «SPARQL Query Language for RDF» 15 January 2008. [Online]. Available:
http://www.w3.org/TR/rdf-sparql-query/. [Last access: 15 April 2015]
[38] W3C, «SPARQL Query Results XML Format (Second Edition)» 21 March 2013. [Online]. Available:
http://www.w3.org/TR/rdf-sparql-XMLres/. [Last access: 15 April 2015]
[39] W3C, «SPARQL 1.1 Query Results JSON Format» 21 March 2013. [Online]. Available:
http://www.w3.org/TR/sparql11-results-json/ .[ Last access: 15 April 2015]
[40] W3C, «SPARQL 1.1 Query Results CSV and TSV Formats» 21 March 2013. [Online]. Available:
http://www.w3.org/TR/sparql11-results-csv-tsv/. [Last access: 15 April 2015]
[41] W3C, «SPARQL 1.1 Federated Query» 21 March 2013. [Online]. Available:
http://www.w3.org/TR/sparql11-federated-query/. [Last access: 15 April 2015]
[42] W3C, «SPARQL 1.1 Update» 21 March 2013. [Online]. Available: http://www.w3.org/TR/sparql11-
update/. [Last access: 15 April 2015]
[43] W3C, «SPARQL 1.1 Protocol» 21 March 2013. [Online]. Available: http://www.w3.org/TR/sparql11-
protocol/. [Last access: 15 April 2015]
[44] W3C, «RIF Overview» 5 February 2013. [Online]. Available: http://www.w3.org/TR/rif-overview/. [Last
access: 15 April 2015]
[45] W3C, «SWRL: A Semantic Web Rule Language» 21 May 2014. [Online]. Available:
http://www.w3.org/Submission/SWRL/. [Last access: 15 April 2015]
[46] Masaryk University, «OWL 2 and SWRL Tutorial» 21 May 2014. [Online]. Available:
http://dior.ics.muni.cz/~makub/owl/#swrl [Last Access: 15 April 2015]
[47] Kathrin Dentler, Ronald Cornet, Annete ten Teije and Nicolette de Keizer. Comparison of Reasoners for
large Ontologies in the OWL 2 EL Profile. 2011. [Online]. Available: http://www.semantic-web-
journal.net/sites/default/files/swj120_2.pdf
[48] David Poole and Alan Mackworth. Non-monotonic Reasoning. 2010. [Online]. Available:
http://artint.info/html/ArtInt_129.html
[49] Ngoc-Tung Nguyen. [Online]. http://gicl.cs.drexel.edu/images/c/cf/NNguyenPronto-
LexicographicEntailmentSlides.pdf
[50] Wikipedia Contributors. Non-monotonic logic. (2015, April 8). In Wikipedia, The Free Encyclopedia.
Retrieved 21:53, April 29, 2015, from http://en.wikipedia.org/w/index.php?title=Non-
monotonic_logic&oldid=655478606
[51] Wikipedia contributors. "Defeasible reasoning." Wikipedia, The Free Encyclopedia [Online], 23 Apr. 2015.
[Last access: 9 May 2015].
[52] Pavel Klinov, Bijan Parsia. Demonstrating Pronto: a Non-Monotonic Probabilistic OWL Reasoner.
P. Klinov1, B. Parsia1. The University of Manchester, Manchester, M13 9PL, UK
[53] Martin Homola, Matthias Knorr, Joao Leite, and Martin Slota [Online]. Available:
http://dai.fmph.uniba.sk/~homola/papers/clima2012.pdf