The Semantic Web is a vision of information that is understandable by computers. Although there is great exploitable potential, we are still in ``Generation Zero'' of the Semantic Web, since there are few compelling real-world applications. Heterogeneity, the volume of data, and the lack of standards are problems that could be addressed through nature-inspired methods. The paper presents the most important aspects of the Semantic Web, as well as its biggest issues; it then describes some methods inspired by nature - genetic algorithms, artificial neural networks, swarm intelligence - and the ways these techniques can be used to deal with Semantic Web problems.
WebSpa is a tool for the quick, intuitive (and even fun) interrogation of arbitrary SPARQL endpoints. WebSpa runs in the web browser and does not require the installation of any additional software. The tool manages a large variety of pre-defined SPARQL endpoints and allows the addition of new ones. A user account makes it possible to save both a query and its results on the local computer, as well as to edit queries later. The application is written in both Java and Flex. It uses the Jena and ARQ application programming interfaces to perform the queries, and the results are processed and displayed using Flex.
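What a SPARQL-endpoint client like WebSpa does under the hood is largely HTTP plumbing: the query text is URL-encoded and sent to the endpoint with a requested result format. The sketch below illustrates this request construction only (it does not dispatch the request); the endpoint URL is DBpedia's public endpoint and the query is an invented example.

```python
from urllib.parse import urlencode

# Illustrative endpoint and query; a real client would let the user pick both.
ENDPOINT = "https://dbpedia.org/sparql"
query = """SELECT ?label WHERE {
  <http://dbpedia.org/resource/Semantic_Web> rdfs:label ?label .
} LIMIT 5"""

def build_sparql_request(endpoint, query, fmt="application/sparql-results+json"):
    """Encode a SPARQL query as an HTTP GET URL, the first step any
    endpoint-querying tool performs before sending the request."""
    params = urlencode({"query": query, "format": fmt})
    return f"{endpoint}?{params}"

url = build_sparql_request(ENDPOINT, query)
```

Sending `url` with any HTTP client and parsing the JSON response yields the result bindings a tool would then render in a table.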
This paper describes the importance of a performant presentation tier. It presents the simplest ways of optimizing client-side code, providing source code examples of good practices. It then shows the correct approach to using CSS and HTML and the impact this has on website response time. The Ajax technology is briefly described, emphasizing the role of JavaScript and presenting methods for improving its performance. Finally, some popular tools for monitoring and testing web applications are introduced.
The Web is a universal medium for information, data and knowledge exchange. The Semantic Web is an extension of the World Wide Web, ``in which information is given well-defined meaning, better enabling computers and people to work in cooperation''\cite{semweb:lee}. RDF, together with SPARQL, provides a powerful mechanism for describing and interchanging metadata on the web. This paper briefly presents the two concepts - RDF and SPARQL - and three of the most popular Java frameworks that offer support for RDF: Jena, Sesame and JRDF.
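The RDF data model and the pattern matching at the heart of SPARQL can be illustrated with a toy in-memory triple store. This is only a conceptual sketch with invented triples; real applications would use the frameworks named above (Jena, Sesame, JRDF) rather than anything hand-rolled.

```python
# A set of (subject, predicate, object) triples - the RDF data model in miniature.
triples = {
    ("ex:Jena",   "ex:writtenIn", "ex:Java"),
    ("ex:Sesame", "ex:writtenIn", "ex:Java"),
    ("ex:Jena",   "ex:supports", "ex:RDF"),
}

def match(pattern, store):
    """Return triples matching a single (s, p, o) pattern; None acts as a
    wildcard, mimicking a SPARQL variable such as ?s."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Analogue of: SELECT ?fw WHERE { ?fw ex:writtenIn ex:Java }
java_frameworks = sorted(t[0] for t in match((None, "ex:writtenIn", "ex:Java"), triples))
```

A full SPARQL engine joins many such patterns and binds shared variables across them, but each basic graph pattern reduces to this kind of filter.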
Information residing in relational databases and delimited file systems is poorly suited to reuse and sharing over the web. These file systems do not adhere to commonly agreed principles for maintaining data harmony. For these reasons, resources throughout the web suffer from a lack of uniformity, from heterogeneity, and from redundancy. Ontologies have been widely used to solve such problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files are treated as individual concepts and grouped into a particular domain, forming a domain ontology. This domain ontology is then used to capture the CSV data and represent it in RDF format, retaining the links among files or concepts. Datatype and object properties are automatically detected from the header fields, which reduces the user involvement needed to generate mapping files. A detailed analysis has been performed on baseball tabular data, and the result shows a rich set of semantic information.
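The core of the approach above, lifting a CSV file into RDF-style triples while inferring property types from the data, can be sketched in a few lines. The CSV content, namespace prefix, and the naive value-based datatype guess are all invented for illustration; the paper's actual detection from header fields is richer.

```python
import csv
import io

# Invented sample row in the spirit of the baseball data mentioned above.
raw = io.StringIO("name,team,home_runs\nBabe Ruth,Yankees,714\n")

def infer_datatype(value):
    """Naive stand-in for datatype detection: numeric text -> xsd:integer."""
    return "xsd:integer" if value.isdigit() else "xsd:string"

def csv_to_triples(fh, concept, ns="ex:"):
    """Treat the CSV file as one concept; each row becomes an instance and
    each header becomes a property, typed by the inferred datatype."""
    out = []
    for i, row in enumerate(csv.DictReader(fh)):
        subject = f"{ns}{concept}/{i}"
        out.append((subject, "rdf:type", f"{ns}{concept}"))
        for header, value in row.items():
            out.append((subject, f"{ns}{header}", (value, infer_datatype(value))))
    return out

triples = csv_to_triples(raw, "Player")
```

Object properties (links between files/concepts) would be added the same way whenever a column's values are recognized as identifiers of rows in another file.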
Ontology languages are used to model the semantics of concepts within a particular domain and the relationships between those concepts. The Semantic Web standards provide a number of modelling languages that differ in their level of expressivity and are organized in a Semantic Web Stack in such a way that each language level builds on the expressivity of the level below it. Several problems arise when one attempts to use independently developed ontologies, and adapting existing ontologies for new purposes requires that certain operations be performed on them. These operations are currently performed in a semi-automated manner. This paper seeks to model categorically the syntax and semantics of RDF ontologies as a step towards the formalization of ontological operations using category theory.
Improve Information Retrieval and E-Learning Using Mobile Agent Based on Semantic Web Technology (IJwest)
Web-based education and e-learning have become a very important branch of new educational technology. E-learning and Web-based courses benefit learners by making access to resources and learning objects fast, just-in-time and relevant, at any time or place. Web-based learning management systems should focus on satisfying e-learners' needs, for instance by advising a learner of the most suitable resources and learning objects. Because of the many limitations of Web 2.0 for building e-learning management systems, we now use Web 3.0, known as the Semantic Web: a platform for e-learning management systems that overcomes the limitations of Web 2.0. In this paper we present "improve information retrieval and e-learning using mobile agent based on semantic web technology". The paper focuses on the design and implementation of knowledge-based, reusable, interactive, web-based industrial training activities in the sea ports and logistics sector, using an e-learning system and the Semantic Web to deliver learning objects to learners in an interactive, adaptive and flexible manner. We use the Semantic Web and mobile agents to improve library and course search. The architecture presented in this paper is an adaptation model that converts syntactic search into semantic search. We apply the training at Damietta port in Egypt as a real-world case study, and we present one possible application of mobile agent technology based on the Semantic Web to the management of Web Services. This model improves the information retrieval and e-learning system.
The logic-based, machine-understandable framework of the Semantic Web often challenges naive users when they try to query ontology-based knowledge bases. Existing research efforts have approached this problem by introducing Natural Language (NL) interfaces to ontologies, which can construct SPARQL queries from NL user queries. However, most efforts were restricted to queries expressed in English, and they often benefited from the advancement of English NLP tools; little research has been done to support querying the Arabic content on the Semantic Web with NL queries. This paper presents a domain-independent approach to translating Arabic NL queries to SPARQL through linguistic analysis. Giving special consideration to Noun Phrases (NPs), our approach uses a language parser to extract NPs and relations from Arabic parse trees and match them to the underlying ontology. It then utilizes knowledge in the ontology to group the NPs into triple-based representations. A SPARQL query is finally generated by extracting targets and modifiers and interpreting them into SPARQL. The interpretation of advanced semantic features, including negation and conjunctive and disjunctive modifiers, is also supported. The approach was evaluated using two datasets consisting of OWL test data and queries, and the results confirmed its feasibility for translating Arabic NL queries to SPARQL.
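The final generation step described above, serializing triple-based representations with a designated target variable into SPARQL, is largely mechanical once the linguistic analysis has done its work. The sketch below shows that serialization only; the triple patterns and ontology terms are invented, and the hard parts (Arabic parsing, ontology matching) are assumed to have happened upstream.

```python
def triples_to_sparql(target, patterns):
    """Serialize triple-based representations into a SELECT query.
    `target` is the variable extracted as the question's focus."""
    body = " .\n  ".join(" ".join(t) for t in patterns)
    return f"SELECT {target} WHERE {{\n  {body} .\n}}"

# e.g. a query asking for the capitals of African countries might, after
# NP extraction and ontology matching, yield these patterns:
q = triples_to_sparql("?capital", [
    ("?country", "rdf:type", "ex:AfricanCountry"),
    ("?country", "ex:capital", "?capital"),
])
```

Negation and disjunctive modifiers would extend this serializer with `FILTER NOT EXISTS` and `UNION` blocks respectively.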
The world is witnessing an unprecedented information revolution, with rapid growth of databases in all domains. Databases may share content and schemas yet use different elements and structures to express the same concepts and relations, which can cause semantic and structural conflicts. This paper proposes a new technique, named XDEHD, for integrating heterogeneous eXtensible Markup Language (XML) schemas. The returned mediated schema contains all concepts and relations of the sources without duplication. The technique divides into three steps. First, it extracts all subschemas from the sources by decomposing the source schemas; each subschema contains three levels: ancestor, root and leaf. Second, it matches and compares the subschemas and returns the related candidate subschemas; a semantic closeness function measures how similarly the concepts of the subschemas are modelled in the sources. Finally, it creates the mediated schema by integrating the candidate subschemas, obtaining a minimal and complete unified schema; an association strength function computes how closely each pair in a candidate subschema is related across all data sources, and an element repetition function calculates how many times each element is repeated between the candidate subschemas.
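The abstract does not fully specify its semantic closeness function, but one plausible ingredient of any such measure is element-name similarity between subschemas. The stand-in below scores names with a string-similarity ratio and keeps pairs above a threshold; the element names and the threshold are illustrative only, not the paper's actual definition.

```python
from difflib import SequenceMatcher

def closeness(a, b):
    """Case-insensitive name similarity in [0, 1]; a crude stand-in for a
    real semantic closeness measure over subschema concepts."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def candidate_pairs(schema_a, schema_b, threshold=0.7):
    """Keep element pairs whose closeness clears the threshold, mimicking
    the selection of related candidate subschemas."""
    return [(x, y) for x in schema_a for y in schema_b
            if closeness(x, y) >= threshold]

pairs = candidate_pairs(["author", "title"], ["Author", "price"])
```

A real implementation would combine such lexical scores with structural context (the ancestor/root/leaf levels described above) before committing to a match.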
Semantic Annotation: The Mainstay of Semantic Web (Editor, IJCATR)
Given that realizing the Semantic Web depends on a critical mass of accessible metadata and on representing data with formal knowledge, the metadata generated must be specific, easy to understand and well-defined. Semantic annotation of web documents is the most promising way to make the Semantic Web vision a reality. This paper introduces the Semantic Web and its vision (stack layers), along with some concept definitions that help in understanding semantic annotation. Additionally, the paper surveys semantic annotation categories, tools, domains and models.
Concept hierarchy is the backbone of an ontology, and concept hierarchy acquisition has been a hot topic in the field of ontology learning. This paper proposes a hyponymy extraction method for domain ontology concepts based on cascaded conditional random fields (CCRFs) and hierarchy clustering. It takes free text as the extraction target and adopts CCRFs to identify the domain concepts. First, the low layer of the CCRFs is used to identify simple domain concepts; the results are then sent to the high layer, in which nested concepts are recognized. Next, hierarchy clustering is adopted to identify the hyponymy relations between domain ontology concepts. The experimental results demonstrate that the proposed method is efficient.
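The clustering step above can be sketched with a naive single-link agglomerative pass that merges concept clusters whenever any cross-cluster pair is similar enough. The similarity function here (shared-word overlap) and the example concepts are stand-ins; a real ontology-learning system would use the context-based features the CCRF stage produces.

```python
def similarity(a, b):
    """Jaccard overlap of the words in two concept terms - an illustrative
    stand-in for a learned concept-similarity measure."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

def cluster(concepts, threshold=0.3):
    """Single-link agglomerative clustering: repeatedly merge any two
    clusters containing a pair of concepts above the threshold."""
    clusters = [[c] for c in concepts]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(similarity(a, b) >= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

groups = cluster(["neural network", "network protocol", "ontology learning"])
```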
Tutorial at OAI5 (cern.ch/oai5). Abstract: This tutorial will provide a practical overview of current practices in modelling complex or compound digital objects. It will examine some of the key scenarios around creating complex objects and will explore a number of approaches to packaging and transport. Taking research papers, or scholarly works, as an example, the tutorial will explore the different ways in which these, and their descriptive metadata, can be treated as complex objects. Relevant application profiles and metadata formats will be introduced and compared, such as Dublin Core (in particular the DCMI Abstract Model) and MODS, alongside content packaging standards such as METS, MPEG-21 DIDL and IMS CP. Finally, we will consider some future issues and the activities that are seeking to address them. The tutorial will be of interest to librarians and technical staff with an interest in metadata or complex objects, and in their creation, management and re-use.
Although the use of Semantic Web technologies in the learning development field is a new research area, some authors have already proposed ideas of how such applications might operate. Specifically, from an analysis of the literature in the field, we have identified three types of existing applications that employ these technologies to support learning. These applications aim at: enhancing the reusability of learning objects by linking them to an ontological description of the domain or, more generally, describing relevant dimensions of the learning process in an ontology; providing a comprehensive authoring system to retrieve and organize web material into a learning course; and constructing advanced strategies to present annotated resources to the user, in the form of browsing facilities, narrative generation and the final rendering of a course. In contrast to the approaches cited above, here we propose an approach that is modeled on narrative studies and on their transposition into the digital world. In the rest of the paper, we present the theoretical basis that inspires this approach and show some examples that are guiding our implementation and testing of these ideas within e-learning. Ontologies are recognized as the most important component in achieving semantic interoperability of e-learning resources, and the benefits of their use have already been recognized in the learning technology community. To better define the different aspects of ontology applications in e-learning, researchers have given several classifications of ontologies. We refer to a general one that differentiates between three dimensions ontologies can describe: content, context, and structure. Most present research has been dedicated to the first group of ontologies.
A well-known example of such an ontology is based on the ACM Computing Classification System (ACM CCS) and defined in Resource Description Framework Schema (RDFS). It is used in Moodle to classify learning objects with the goal of improving search. The chapter will cover the terms of the Semantic Web, the design and management of e-learning systems (Moodle), and some studies on e-learning and the Semantic Web; it will then describe the tools used in this paper and finally discuss the expected contribution. Special attention will be given to the above topics.
Some background and thoughts on Metadata Mapping and Metadata Crosswalks. A collection of online sources and related projects. Comments are more than welcome, as is reuse!
{Ontology : Resource} x {Matching : Mapping} x {Schema : Instance} :: Compone... (Amit Sheth)
Invited Talk, International Workshop on Ontology Matching
collocated with the 5th International Semantic Web Conference
ISWC-2006, November 5, 2006, Athens GA
UNIT III: MINING COMMUNITIES
Aggregating and reasoning with social network data, Advanced Representations - Extracting the evolution of a Web Community from a Series of Web Archives - Detecting Communities in Social Networks - Evaluating Communities - Core Methods for Community Detection & Mining - Applications of Community Mining Algorithms - Node Classification in Social Networks.
AUTOMATIC CONVERSION OF RELATIONAL DATABASES INTO ONTOLOGIES: A COMPARATIVE A... (IJwest)
Constructing ontologies from relational databases is an active research topic in the Semantic Web domain. While conceptual mapping rules/principles between relational databases and ontology structures are being proposed, several software modules or plug-ins are being developed to enable the automatic conversion of relational databases into ontologies. However, the correlation between the ontologies built automatically with such plug-ins and the database-to-ontology mapping principles has been given little attention. This study reviews and applies two Protégé plug-ins, namely DataMaster and OntoBase, to automatically construct ontologies from a relational database. The resulting ontologies are further analysed to match their structures against the database-to-ontology mapping principles. A comparative analysis of the matching results reveals that OntoBase outperforms DataMaster in applying the database-to-ontology mapping principles for automatically converting relational databases into ontologies.
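The classic database-to-ontology mapping principles the study checks for can be stated compactly: a table becomes a class, non-key columns become datatype properties, and foreign keys become object properties. The sketch below applies these three rules to an invented table description; the naming conventions (`hasDepartment` etc.) are illustrative, not those of DataMaster or OntoBase.

```python
# Invented relational table description.
table = {
    "name": "Employee",
    "columns": ["id", "name", "dept_id"],
    "primary_key": "id",
    "foreign_keys": {"dept_id": "Department"},
}

def table_to_ontology(t):
    """Apply the three basic mapping principles: table -> class,
    plain column -> datatype property, foreign key -> object property."""
    axioms = [f"Class: {t['name']}"]
    for col in t["columns"]:
        if col == t["primary_key"]:
            continue  # the key identifies instances; it is not a property here
        if col in t["foreign_keys"]:
            target = t["foreign_keys"][col]
            axioms.append(f"ObjectProperty: has{target} "
                          f"Domain: {t['name']} Range: {target}")
        else:
            axioms.append(f"DataProperty: {col} Domain: {t['name']}")
    return axioms

axioms = table_to_ontology(table)
```

Comparing a plug-in's output against axioms generated this way is, in essence, the matching analysis the study performs.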
Web of Data as a Solution for Interoperability. Case StudiesSabin Buraga
The paper draws several considerations regarding the use of Web of Data (Semantic Web) technologies – such as metadata vocabularies and ontological constructs – to increase the degree of interoperability within distributed systems. A number of case studies are presenting to express the knowledge in a
platform- and programming language-independent manner.
Intelligent Expert systems can provide decisions for users for estimate from user preferences to find better destination from user profits. this present provides description of above system and suggest new approach for next researches.
The World Wide Web is booming and radically vibrant due to the well established standards and widely accountable framework which guarantees the interoperability at various levels of the application and the society as a whole. So far, the web has been functioning at the random rate on the basis of the human intervention and some manual processing but the next generation web which the researchers called semantic web, edging for automatic processing and machine-level understanding. The well set notion, Semantic Web would be turn possible if only there exists the further levels of interoperability prevails among the applications and networks. In achieving this interoperability and greater functionality among the applications, the W3C standardization has already released the well defined standards such as RDF/RDF Schema and OWL. Using XML as a tool for semantic interoperability has not achieved anything effective and failed to bring the interconnection at the larger level. This leads to the further inclusion of inference layer at the top of the web architecture and its paves the way for proposing the common design for encoding the ontology representation languages in the data models such as RDF/RDFS. In this research article, we have given the clear implication of semantic web research roots and its ontological background process which may help to augment the sheer understanding of named entities in the web.
Linked Data Generation for the University Data From Legacy Database dannyijwest
Web was developed to share information among the users through internet as some hyperlinked documents.
If someone wants to collect some data from the web he has to search and crawl through the documents to
fulfil his needs. Concept of Linked Data creates a breakthrough at this stage by enabling the links within
data. So, besides the web of connected documents a new web developed both for humans and machines, i.e.,
the web of connected data, simply known as Linked Data Web. Since it is a very new domain, still a very
few works has been done, specially the publication of legacy data within a University domain as Linked
Data.
Nelson Piedra , Janneth Chicaiza
and Jorge López, Universidad Técnica Particular de Loja, Edmundo
Tovar, Universidad Politécnica de Madrid,
and Oscar Martínez, Universitas
Miguel Hernández
Explore the advantages of using linked data with OERs.
Semantic Web: Technolgies and Applications for Real-WorldAmit Sheth
Amit Sheth and Susie Stephens, "Semantic Web: Technolgies and Applications for Real-World," Tutorial at 2007 World Wide Web Conference, Banff, Canada.
Tutorial discusses technologies and deployed real-world applications through 2007.
Tutorial description at: http://www2007.org/tutorial-T11.php
Bridging the gap between the semantic web and big data: answering SPARQL que...IJECEIAES
Nowadays, the database field has gotten much more diverse, and as a result, a variety of non-relational (NoSQL) databases have been created, including JSON-document databases and key-value stores, as well as extensible markup language (XML) and graph databases. Due to the emergence of a new generation of data services, some of the problems associated with big data have been resolved. In addition, in the haste to address the challenges of big data, NoSQL abandoned several core databases features that make them extremely efficient and functional, for instance the global view, which enables users to access data regardless of how it is logically structured or physically stored in its sources. In this article, we propose a method that allows us to query non-relational databases based on the ontology-based access data (OBDA) framework by delegating SPARQL protocol and resource description framework (RDF) query language (SPARQL) queries from ontology to the NoSQL database. We applied the method on a popular database called Couchbase and we discussed the result obtained.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Nature Inspired Methods for the Semantic Web
Monica Macoveiciuc and Constantin Stan
Faculty of Computer Science, Alexandru Ioan Cuza University, Iasi
Abstract. The Semantic Web is a vision of information that is understandable by computers. Although there is great exploitable potential, we are still in “Generation Zero” of the Semantic Web, since there are few compelling real-world applications. The heterogeneity, the volume of data and the lack of standards are problems that could be addressed through some nature inspired methods. The paper presents the most important aspects of the Semantic Web, as well as its biggest issues; it then describes some methods inspired by nature - genetic algorithms, artificial neural networks, swarm intelligence - and the way these techniques can be used to deal with Semantic Web problems.
Introduction
The World Wide Web is a universal medium for information and data exchange.
Exploiting the huge amount of knowledge distributed on the Web is a significant
challenge. Humans can understand the information, but it takes great effort to
find and combine data from such a large number of sources; on the other hand,
computers can easily browse through millions of pages in no time, but they are
not capable of understanding the content. The Semantic Web is a new paradigm for the Web in which the semantics of information is explicitly defined, making it possible for machines to understand and satisfy the requests of people and other machines to use the web resources [1]. In other words, the Semantic Web is a vision of information that is understandable by computers. It contains a set of design principles and
a variety of enabling technologies. Some of the elements are expressed in formal
specifications, while others are still to be rigorously described.
The ontology is a key aspect of the Semantic Web, although it does not have
a universally accepted definition. It is described as “a formal specification of a
shared conceptualization” [2]. There is no commonly agreed ontology that every
data provider would rely on; the information is heterogeneous and distributed.
Existing reasoning techniques may not be able to deal with different ontologies describing the same piece of knowledge, with the high number of instances, with the lack of maintenance, with the unreliability of the network, or with the varying quality of the information available on the web. Given this context, soft computing has an important role in coping with such knowledge, and methods inspired by nature might be able to suggest interesting solutions for these problems.
This paper presents nature inspired techniques that can address some of the
main issues of the Semantic Web. Genetic algorithms, swarm intelligence or
neural networks could represent viable solutions for overcoming problems such
as ontology alignment, concept classification, RDF query path optimization etc.
Semantic Web
Advanced information management is the main benefit brought by the Semantic Web vision. One should stop browsing documents and start performing concrete queries. New knowledge should be inferred from the existing facts. The potential advantages of these achievements are multiple:
– information can be located based on its meaning;
– information from different sources can be combined, summarized and presented to the user in an improved format;
– information can be integrated across different sources.
1 Technologies
Semantic Web technologies can be considered in terms of layers, each of them
resting on and extending the functionality of the layers beneath it. The hierarchy
of the most important languages and technologies is described in the famous
“Layer Cake” diagram [3].
Semantic Web Layer Cake
The core technologies are RDF (Resource Description Framework) and RDFS
(RDF Schema). RDF is a markup language for describing information and re-
sources on the web. Any object that is uniquely identifiable by a URI (Uniform Resource Identifier) is considered a resource. Resources have properties
(attributes or characteristics).
The RDF model is a collection of facts, represented by statements (triples).
Each triple consists of a subject, a predicate and an object. The most common
representation of the triple is the graph-based one: subject-predicate-object is seen as a node-arc-node link. The statements are unambiguous and have a uniform structure; each concept is defined in a dedicated space on the web. For example, the statement “Jane is Tom’s mother” can be expressed in RDF as:
<rdf:Description rdf:about="www.persons.org/#jane">
  <s:isWoman>Jane</s:isWoman>
  <s:hasChild rdf:resource="www.persons.org/#tom"/>
</rdf:Description>
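The triple model can be mirrored directly in code. The following sketch (an illustration, not part of the paper) stores the statements from the example above as (subject, predicate, object) tuples and answers simple pattern queries, with None acting as a wildcard:

```python
# Each RDF statement is a (subject, predicate, object) triple.
triples = {
    ("www.persons.org/#jane", "isWoman", "Jane"),
    ("www.persons.org/#jane", "hasChild", "www.persons.org/#tom"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Who are Jane's children?
children = match(s="www.persons.org/#jane", p="hasChild")
```

Real triple stores index the three components for efficiency, but the query model is the same pattern matching.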
In order to describe general statements about classes or groups of objects, we
use RDF Schema, or RDFS. RDFS provides a basic object model, while RDF
refers to specific objects. The statement above can be described in RDFS as “A
woman is someone’s mother”.
RDF and RDFS allow us to describe aspects of a domain, but the modeling
primitives are too restrictive to be of general use. The taxonomic structure of
the domain, the restrictions and constraints cannot be described through this
model. It is also not possible to reason over inference rules. All these limitations
are overcome with the use of ontologies. Ontologies provide a common under-
standing of a domain of interest. The specification is formal, which means that
computers can perform reasoning about it. OWL (Web Ontology Language) is
a family of ontology languages, and it is the W3C specification for creating
Semantic Web applications. OWL builds upon RDF and RDFS and defines hi-
erarchies and relationships between resources. Semantic Web ontologies consist
of a taxonomy and a set of inference rules from which machines can make logical
conclusions. A taxonomy is a system of classification that groups resources into
classes and sub-classes based on their relationships and shared properties.
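Such a taxonomy can be made concrete in a few lines; the class names below are invented for illustration. Each class points to its direct superclass, and membership in all ancestor classes is inferred by following the chain:

```python
# A toy taxonomy: each class maps to its direct superclass.
subclass_of = {
    "Mother": "Woman",
    "Woman": "Person",
    "Person": "Agent",
}

def superclasses(cls):
    """All classes an instance of `cls` also belongs to (transitive closure)."""
    result = []
    while cls in subclass_of:
        cls = subclass_of[cls]
        result.append(cls)
    return result

# An instance of Mother is also a Woman, a Person and an Agent.
ancestors = superclasses("Mother")
```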
The top layers of the Layer Cake are very important in the context of Seman-
tic Web applications deployment. The trust layer deals with authentication and
reliability of data and services, through the use of digital signatures, ratings by
certification agencies, recommendations by trusted agents etc. The proof layer
allows applications to give proof of their conclusions and it includes the actual
deductive process, validation etc. Several refinements have been proposed for the
Semantic Web Layer Cake. One of them, suggested by Sir Tim Berners-Lee in 2006, includes new features, such as:
– RIF (Rule Interchange Format). It is a language for representing rules and for linking rule-based systems; the formalisms are being extended in order to encapsulate probabilistic, temporal and causal knowledge.
– RDF Extraction. GRDDL (“Gleaning Resource Descriptions from Dialects of Languages”) is a language that identifies when an XML document contains data compatible with RDF and is capable of extracting that data.
– Database Support for RDF. Oracle provides support for RDF and OWL
databases; for the moment, the focus is on storage, rather than inferencing
capabilities. There are various open source projects that offer solutions for
storage - such as Jena, as well as query languages for RDF (SPARQL being
the most important).
Revised Semantic Web Layer Cake
2 Current Problems
Although the Semantic Web vision has great potential, for more than a decade
it has been “a kind of academic exercise rather than a practical technology” [4].
One of the main reasons is the lack of a common understanding of what the
Semantic Web can offer and, more particularly, of the role of ontologies.
RDF and OWL can be confusing and complicated to understand for less-technical
people. There is a huge amount of information that needs to be annotated in order to be processed and inferred, and the two possible solutions for this are both hard to put into practice: either an automatic process should apply an algorithm that takes a piece of text and produces RDF, or people should manually annotate existing documents. The first approach - of an intelligent algorithm - is unlikely, since having such an algorithm would make RDF and OWL themselves seem redundant. Manual annotation is inefficient and prone to error.
One of the biggest issues of the Semantic Web is that it seems to be scat-
tered into small pieces. The existing initiatives and applications focus on small
domains and the access to the Semantic Web seems limited from the perspective
of the average user.
However, there is already a wide range of applications in existence or under
development. Some typical areas seem to offer a great potential (although not
fully exploited for the moment) for the development of such applications.
1. E-Science. These kinds of applications involve large data collections that require computationally intensive processing. The participants are usually distributed across the world. A representative project is the Gene Ontology (GO) [5]. GO is a major bioinformatics initiative with the aim of standardizing the representation of gene and gene product attributes across species and databases. The
Human Genome Project, finalized in 2003, is probably the most famous e-science
project.
2. Travel Information Systems. There are efforts in the direction of building
XML based specifications which would allow the interchange of information be-
tween companies. The benefits would be major for the users, since they would
be able to easily plan the whole trip - accommodation, transportation etc. The big issue for the moment is the absence of an agreed ontology for this domain.
3. Digital Libraries. Over the past years, institutions such as universities, li-
braries and museums have made their large inventories of materials available
online. Although they have the same goal, the implementations of these sys-
tems are totally different. It is difficult for an institution to access another one’s
catalogues. One solution for this problem is the use of ontologies and of some
ontology mapping techniques, that would help achieve semantic interoperability.
4. Health Care. This domain stands to gain tremendous benefit from the adoption of Semantic Web technologies, as it depends on the interoperability of information from many domains and processes for efficient decision support.
At present, the Semantic Web is increasingly used by small and large businesses. Oracle (RDF management platform), IBM, Adobe (tool for adding RDF-based metadata to most of their file formats), Software AG, and Yahoo! are among the most important corporations that have already started working with these technologies and are selling tools, as well as complete business solutions. In August 2008, Microsoft bought Powerset, a semantic search engine, for a reported $100 million.
There are also open source applications, such as Protege [6] and Kowari [7],
that provide building blocks for application development, making it more cost
effective to develop Semantic Web products.
Nature Inspired Methods in the Context of the Semantic Web
The vast amount, variety and heterogeneity of the data involved in the Semantic Web vision sometimes make it difficult for applications to cope, turning many real-world tasks into NP-hard problems. Nature inspired reasoning might be able to address and solve some of these issues.
Natural computing finds its source of inspiration in biological phenomena and in social behaviors, mainly of insects and birds. Such algorithms are able to find acceptable results for NP-hard problems within a reasonable amount of time, rather than guarantee the optimal solution. The most important methods inspired by nature include genetic algorithms, neural networks, particle swarm optimization and ant colony optimization.
3 Genetic Algorithms
Genetic Algorithms (GAs) are search techniques that model simple biological systems. Such systems use reproduction to produce offspring that can better survive in their environment. Genetic algorithms use reproduction operators (mutation and crossover) and strategies (’survival of the fittest’) inspired by these realities, in order to improve the quality of candidate solutions to a particular problem. The advantage of GAs compared to other algorithms and methods is that they make only a few assumptions about the underlying fitness landscape and, therefore, they perform well in many different problem categories.
These algorithms proceed according to a simple scheme:
1. a population of random individuals is created;
2. each individual is tested in order to determine its utility as a solution;
3. a fitness value is assigned to each individual, based on the previous evalua-
tion;
4. a selection process filters out the individuals with low fitness and allows those
with good fitness to enter the mating pool with a higher probability;
5. a reproduction process creates offspring by combining or varying the solution
candidates;
6. if the termination criterion is met, the evolution stops; otherwise, it continues
starting with step 2.
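The scheme above can be sketched in a few lines; the bitstring encoding, the parameter values and the toy fitness function (counting 1-bits) are illustrative choices, not part of the method itself:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=50,
                      mutation_rate=0.02):
    # 1. create a population of random individuals (here: bitstrings)
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # 2.-3. evaluate each individual and rank it by fitness
        ranked = sorted(pop, key=fitness, reverse=True)
        # 4. selection: only the fitter half enters the mating pool
        pool = ranked[:pop_size // 2]
        # 5. reproduction: one-point crossover plus bit-flip mutation
        pop = []
        while len(pop) < pop_size:
            a, b = random.sample(pool, 2)
            cut = random.randint(1, length - 1)
            child = a[:cut] + b[cut:]
            pop.append([bit ^ (random.random() < mutation_rate)
                        for bit in child])
        # 6. stop when the generation budget (termination criterion) is spent
    return max(pop, key=fitness)

# Toy fitness: number of 1-bits; the GA should evolve a near-all-ones string.
best = genetic_algorithm(sum)
```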
Genetic algorithms can be viable solutions for different problems that the Semantic Web is confronting, such as RDF query path optimization, ontology alignment optimization, Semantic Web services composition etc.
The possibility of querying large amounts of data from different, heterogeneous
sources, in an efficient way, is an unsolved problem at the moment. In this con-
text, an interesting research field is the determination of query paths - the order
in which the parts of a query are evaluated. The order has a major role when it
comes to the execution time of the query, thus a good algorithm for determining
the query path can contribute to quick, efficient querying. Genetic algorithms
have been already tested, with some success, in problems related to this field.
The Iterative Improvement algorithm, followed by Simulated Annealing - also
known as the ’Two-Phase Optimization’ - addresses the optimal determination
of query paths.
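A sketch of how the two phases fit together is given below; the inversion-count cost and the adjacent-swap neighborhood are toy stand-ins for a real query cost model:

```python
import math
import random

def two_phase_optimize(cost, start, neighbors, iterations=500):
    # Phase 1: iterative improvement - move to a cheaper neighbor while one exists
    current = start
    improved = True
    while improved:
        improved = False
        for n in neighbors(current):
            if cost(n) < cost(current):
                current, improved = n, True
                break
    # Phase 2: simulated annealing - occasionally accept worse solutions
    best, temp = current, 1.0
    for _ in range(iterations):
        candidate = random.choice(neighbors(current))
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
        if cost(current) < cost(best):
            best = current
        temp *= 0.99  # cooling schedule
    return best

# Toy problem: order four triples; the cost counts out-of-order pairs.
def cost(order):
    return sum(1 for i in range(len(order)) for j in range(i + 1, len(order))
               if order[i] > order[j])

def neighbors(order):
    """All orders reachable by swapping two adjacent triples."""
    out = []
    for i in range(len(order) - 1):
        n = list(order)
        n[i], n[i + 1] = n[i + 1], n[i]
        out.append(tuple(n))
    return out

best = two_phase_optimize(cost, ("t4", "t3", "t2", "t1"), neighbors)
```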
An RDF query can be seen as a chain of subject-predicate-object triples. It can
be visualized as a tree, in which the leaf nodes represent the inputs and the
internal nodes are relational algebra operations. The nodes in such a query can
be ordered in many different ways, all of them producing the same result but
with different execution times. Under these conditions, the challenge consists in
determining the order in which the nodes should be placed, in order to optimize
the response time.
It is not difficult to identify the solution space of the problem as the set of
all possible RDF trees. A population can be created by randomly selecting
some of these trees (the chromosomes). A simple mutation operator would
switch the order of two random nodes (triples) in a chromosome. A crossover
operator would pick some of the nodes from a chromosome, conserving the order,
and put them together with the missing nodes taken from a second chromosome
(also conserving the order in this second chromosome). The fitness function is
calculated based on the execution time. Long execution times are not desirable
for a GA in an RDF query execution environment, therefore the stopping con-
dition should also consider (or be complemented with) a time limit.
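The operators described above can be sketched on chromosomes represented as permutations of triple indices (an assumption for brevity; real chromosomes would encode whole query trees, and the fitness function would run the plan rather than receive its time as a parameter).

```python
import random

def mutate(chromosome):
    """Switch the order of two random nodes (triples) in a chromosome."""
    a, b = random.sample(range(len(chromosome)), 2)
    chromosome = list(chromosome)
    chromosome[a], chromosome[b] = chromosome[b], chromosome[a]
    return chromosome

def crossover(parent1, parent2):
    """Pick some nodes from the first parent, conserving their order,
    and fill in the missing nodes in the order they appear in the
    second parent (an order-preserving crossover)."""
    keep = set(random.sample(parent1, len(parent1) // 2))
    missing = iter(t for t in parent2 if t not in keep)
    return [t if t in keep else next(missing) for t in parent1]

def fitness(chromosome, execution_time):
    """Shorter execution time => higher fitness. 'execution_time' is a
    stand-in for actually executing the candidate query plan."""
    return 1.0 / (1.0 + execution_time(chromosome))
```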
Another interesting problem is ontology alignment optimization. At the moment,
there is no generally agreed standard when it comes to ontologies. The
diversity of data makes it even less likely that such a standard would be possible
in the near future - the standards do not often fit to the specific needs of all
the participants in a potential standardization process; and it is very difficult
and expensive for many organizations to reach an agreement. Thus, ontology
alignment is a key aspect in order to make the knowledge exchange possible in
the context of Semantic Web.
Many attempts have been made to solve this issue using different combinations
of matchers, such as string normalization or similarity, data type comparison,
linguistic methods, inheritance analysis, graph mapping, taxonomy analysis etc.
A solution involving genetic algorithms would be able to cope with huge amounts
of data, without requiring human intervention. There are two difficult tasks when
defining the problem from the GA point of view: the content of a tentative
solution should be encoded in a string of values, and a good fitness function should
be provided (a similarity measure function between two ontologies). Genetics
for Ontology ALignments (GOAL) [8] is a software tool for optimizing ontol-
ogy matching functions. GOAL defines the alignment evaluation process based
on four goals: optimizing the precision, optimizing the recall, optimizing the
f-measure or reducing the number of false positives. A chromosome is defined
through a method that converts a bit representation to a set of floating-point
numbers in the real range [0, 1]. The fitness function consists of selecting one of
the parameters retrieved by an alignment evaluation. The parameters are:
– precision - the percentage of items returned that are relevant;
– recall - the fraction of the relevant items that are successfully returned;
– f-measure - a harmonic mean from precision and recall;
– false positives - relationships which have been provided although they are
false.
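The four parameters can be computed by comparing a produced alignment against a reference alignment; the correspondences below are illustrative examples, not taken from GOAL.

```python
def evaluate(found, reference):
    """Return precision, recall, f-measure and the false-positive count
    for a set of found correspondences vs. a reference alignment."""
    true_positives = found & reference
    false_positives = found - reference
    precision = len(true_positives) / len(found)
    recall = len(true_positives) / len(reference)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure, len(false_positives)

# Hypothetical correspondences between concepts of two ontologies.
found = {("Person", "Human"), ("City", "Town"), ("Car", "Tree")}
reference = {("Person", "Human"), ("City", "Town"), ("Dog", "Canine")}
p, r, f, fp = evaluate(found, reference)
```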
The algorithm has its limitations, but it has managed to find the optimal
solution for different instances of the ontology mapping problem in an efficient way.
Semantic Web service composition consists in finding web services (available
in a repository) that are able to accomplish a certain task. The task is defined
in the form of a composition request that contains a set of available input pa-
rameters and a set of wanted output parameters. The parameters are not the
explicit values, but concepts from an ontology describing the semantics of the
values. A sequence of services is called a composition. If the input parameters
given in the request are provided, the services from this sequence can be subse-
quently executed and will finally produce the desired output parameters. For a
genetic algorithm approach, one needs to find a way of representing a web service
sequence as a chromosome. A simple solution is to use strings of service identi-
fiers, which can be processed by standard genetic algorithms. Considering that
the chromosomes can have variable length, the normal GA operators could be
modified in order to make the search more efficient. One such modified operator
either deletes the first service from a sequence or adds a promising service to the sequence.
The other standard GA operations can be easily applied.
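A minimal sketch of this encoding and a modified operator follows. The service repository, its input/output concepts and the 'promising service' heuristic (any service whose inputs are already satisfied) are illustrative assumptions; a real system would match concepts from an ontology.

```python
import random

# Hypothetical repository: service name -> (input concepts, output concepts).
SERVICES = {
    "geocode":   ({"address"}, {"coordinates"}),
    "weather":   ({"coordinates"}, {"forecast"}),
    "translate": ({"forecast"}, {"forecast_text"}),
}

def provided_after(chromosome, available):
    """Concepts available after executing the sequence in order."""
    known = set(available)
    for service in chromosome:
        inputs, outputs = SERVICES[service]
        if inputs <= known:          # service is executable
            known |= outputs
    return known

def mutate(chromosome, available):
    """Modified operator: either delete the first service from the
    sequence or append a promising service (inputs already satisfied)."""
    if chromosome and random.random() < 0.5:
        return chromosome[1:]
    known = provided_after(chromosome, available)
    candidates = [s for s, (ins, _) in SERVICES.items() if ins <= known]
    return chromosome + [random.choice(candidates)] if candidates else chromosome

def fitness(chromosome, available, wanted):
    """Fraction of the wanted output concepts the composition produces."""
    return len(provided_after(chromosome, available) & wanted) / len(wanted)
```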
4 Neural Networks
An artificial neural network (ANN) is a system loosely modeled on the human
brain, an emulation of the biological neural system. It consists of an
interconnected group of artificial neurons. The information is processed using a
connectionist approach to computation. Generally, an ANN is an adaptive sys-
tem, changing its structure according to the information that flows through the
network during the learning phase. ANNs can be used to model complex relationships
between inputs and outputs or to find patterns in data.
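A single artificial neuron and a layered forward pass can be sketched as follows; the weights and layer shapes are illustrative assumptions, since in a real ANN they would be learned during the training phase.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    mapped through a sigmoid activation function."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

def forward(inputs, layers):
    """Feed the inputs through a list of layers; each layer is a list
    of (weights, bias) pairs - one pair per neuron."""
    for layer in layers:
        inputs = [neuron(inputs, w, b) for w, b in layer]
    return inputs
```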
In the context of the Semantic Web, artificial neural networks can be used in the
process of ontology mapping. The heterogeneity among different ontologies is
one of the biggest issues in this field nowadays. Web applications are developed
by different parties that design their own ontologies, according to their own
views of the world. Many approaches have been proposed, in order to deal with
this heterogeneity, but each of them has its drawbacks. A centralized ontology
is very unlikely, so the efforts are now focused on distributed solutions: trying
to match the individual ontologies so that they can possibly reuse one another.
Most of the existing techniques are either rule-based or learning-based, but both
categories have their disadvantages.
A different approach combines rule-based and learning-based solutions, integrat-
ing machine learning techniques, such that the weights of a concept’s semantic
aspects can be learned from training examples, instead of being pre-defined. In
the real world, a common problem that can occur is the lack of instance data
- either in quantity or quality. This method avoids this problem, because the
learning process is carried out at the schema level, instead of the instance level.
Artificial neural networks are a good solution for the learning process, for many
reasons: instances are represented by attribute-value pairs; the target function
output is a real-valued one; fast evaluation of the learned target function is
preferable. ANNs are also known to perform well in the presence of noisy data.
If the ontologies are to be learned from uncontrolled data, such as real existing
web pages, the handling of noise becomes a real issue.
Another interesting approach to the problem of ontology mapping is the use
of interactive activation and competition (IAC) neural networks (NN) to search
for a global optimal solution to best satisfy the ontology constraints. An IAC neu-
ral network consists of a number of competitive nodes connected to each other.
Each of these nodes represents a hypothesis, while the connection between two
nodes is a constraint between their hypotheses. The connection can be either
positive (activation) - if the hypotheses support each other - or negative (com-
petition). Each connection has a weight, which is proportional to the strength
of the constraint. The activation of a node is determined by several sources:
– the initial activation;
– the input from its adjacent nodes;
– its bias;
– the external input.
The characteristics of ontology mapping and the mechanisms of the IAC network
have common properties. The constraints in ontology mapping can be interactive
or competitive between mapping hypotheses. Before applying a neural network
based algorithm for learning, a preliminary mapping is made, which estimates
both the linguistic and structural information of the ontologies. This prior
knowledge can be seen as the external input or bias of a node in the IAC network.
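One update cycle of such a network can be sketched as follows. The nodes, weights, external inputs and constants are illustrative assumptions: h1 and h2 are mutually supporting hypotheses, h3 competes with h1, and external evidence supports h1.

```python
# Activation bounds, resting level and decay rate (assumed constants).
MIN_A, MAX_A, REST, DECAY = -0.2, 1.0, 0.0, 0.1

def update(activations, weights, external):
    """One synchronous IAC update: each node receives input from its
    neighbors (only positive activations propagate), plus its external
    input, and decays toward its resting level."""
    new = {}
    for node, a in activations.items():
        net = external.get(node, 0.0)
        for (src, dst), w in weights.items():
            if dst == node:
                net += w * max(activations[src], 0.0)
        if net > 0:
            delta = net * (MAX_A - a)    # excitatory: push toward MAX_A
        else:
            delta = net * (a - MIN_A)    # inhibitory: push toward MIN_A
        new[node] = a + delta - DECAY * (a - REST)
    return new

# Two supporting hypotheses (h1, h2) and one competing hypothesis (h3).
weights = {("h1", "h2"): 0.5, ("h2", "h1"): 0.5,
           ("h1", "h3"): -0.5, ("h3", "h1"): -0.5}
acts = {"h1": 0.0, "h2": 0.0, "h3": 0.0}
for _ in range(30):
    acts = update(acts, weights, {"h1": 0.4})   # evidence supports h1
```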
5 Swarm Intelligence
Swarm intelligence is another approach to problem solving that takes inspiration
from social behaviors of insects and of other animals. Particularly, ant colony
optimization is one of the most successful techniques. Ant colony optimization
(ACO) is inspired by the ants that deposit pheromone on the ground in order
to mark some favorable path that should be followed by other members of the
colony. A similar mechanism has been transposed into an algorithm for solving
optimization problems.
Semantic Web reasoning systems deal with growing amounts of distributed, dy-
namic resources. Swarm intelligence could be used in order to implement an RDF
graph traversal algorithm. Among the main properties of swarms are adaptive-
ness, robustness and scalability. These correspond to three concepts - no central
control, locality and simplicity. Thus, the combination of reasoning and swarm
intelligence can be a viable solution for obtaining reasoning performance by ba-
sic means.
A model of a decentralized system implies the traversal of a graph in order
to calculate the deductive closure of the graph, with respect to the RDFS se-
mantics. The role of swarm intelligence is to reduce the computational cost. In
order to calculate the RDFS closure over an RDF graph, a set of rules needs
to be applied repeatedly to the triples in the graph. In the metaphor of ants,
each insect represents one of these rules, which might be (partially) instantiated.
Ants communicate with each other only locally and indirectly. Whenever the
condition of a rule matches the node an ant is on, it locally adds the newly
derived triple to the graph. Only the active reasoning rules move through the
network, not the data, which minimizes network traffic, since schema data is far
less voluminous than instance data. Having some transition capabilities between
graph-boundaries, the method converges towards the closure. This model has
been successfully implemented and the results are described in [9].
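A toy version of this model can be sketched with two RDFS entailment rules (subclass transitivity, rdfs11, and type propagation, rdfs9) acting as ants that visit random nodes. The vocabulary is abbreviated and the random walk is greatly simplified compared to the implementation in [9].

```python
import random

def rdfs11(node, graph):
    """(a subClassOf node) and (node subClassOf c) => (a subClassOf c)."""
    derived = set()
    for (s, p, o) in graph:
        if p == "subClassOf" and o == node:
            for (s2, p2, o2) in graph:
                if s2 == node and p2 == "subClassOf":
                    derived.add((s, "subClassOf", o2))
    return derived

def rdfs9(node, graph):
    """(x type node) and (node subClassOf b) => (x type b)."""
    derived = set()
    for (s, p, o) in graph:
        if p == "type" and o == node:
            for (s2, p2, o2) in graph:
                if s2 == node and p2 == "subClassOf":
                    derived.add((s, "type", o2))
    return derived

def swarm_closure(graph, ants=(rdfs11, rdfs9), steps=100):
    """Each ant carries one rule; on each step it visits a random node
    and locally adds whatever its rule derives there."""
    graph = set(graph)
    nodes = list({t[0] for t in graph} | {t[2] for t in graph})
    for _ in range(steps):
        for rule in ants:
            node = random.choice(nodes)
            graph |= rule(node, graph)
    return graph

triples = {("Student", "subClassOf", "Person"),
           ("Person", "subClassOf", "Agent"),
           ("alice", "type", "Student")}
```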
6 Conclusions
The Semantic Web vision comes with the promise of a world in which a common
understanding of the meaning of data can help humans and computers coop-
erate. However, it takes great effort to put in practice the revolutionary ideas,
since it is very difficult to agree upon standards and, afterwards, to update the
existing resources according to the potential standards. For the moment, the
Semantic Web seems to be scattered into small pieces, being available only on
a small scale and for very specific domains. On the other hand, there is a huge
amount of knowledge that can be exploited through automated processing and
adapted in order to be used. In this context, methods inspired from nature seem
to have the potential to address the currently unresolved problems of the Se-
mantic Web. These methods can deal with large amounts of data and can be used
to build highly scalable applications. Since there is no perfect, and therefore no
optimal, solution to these problems, concepts such as genetic algorithms, artificial
neural networks or swarm intelligence might be able to provide good results.
This paper presented some ideas of applying nature inspired methods in or-
der to deal with Semantic Web’s challenges. The main aspects of Semantic Web
have been described, as well as the evolution during the past decade. The ar-
eas of interest and some (potential) applications have been presented and the
most important problems have been introduced and explained. Finally, the pa-
per presented the way methods inspired from nature can address the problems
of Semantic Web. Three of the most important techniques - genetic algorithms,
swarm intelligence, artificial neural networks - have been briefly described, along
with the efforts of applying them for solving problems such as ontology mapping,
RDF path optimization, RDF graph traversal, ontology alignment optimization.
References
[1] Tim Berners-Lee, James Hendler and Ora Lassila. The Semantic Web. Scientific
American, May 2001.
[2] Tom Gruber. What is an Ontology? http://www-ksl.stanford.edu/kst/what-is-an-
ontology.html
[3] Semantic Web Layer Cake. http://c2.com/cgi/wiki?SemanticWebLayerCake
[4] Alex Iskold. Semantic Web: Difficulties with the Classic Approach.
http://www.readwriteweb.com
[5] Gene Ontology Project. http://www.geneontology.org/
[6] Protege. http://protege.stanford.edu/
[7] Kowari. http://kowari.sourceforge.net/
[8] Jorge Martinez-Gil, Enrique Alba, Jose F. Aldana Montes. Optimizing Ontology
Alignments by Using Genetic Algorithms. Proceedings of Nature inspired Reasoning
for the Semantic Web (NatuReS), 2008.
[9] Kathrin Dentler, Stefan Schlobach, Christophe Guret. Semantic Web Reasoning by
Swarm Intelligence. Vrije Universiteit Amsterdam
[10] Human Genome Project.
http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml
[11] Riccardo Leardi. Nature Inspired Methods in Chemometrics: Genetic Algorithms
and Artificial Neural Networks, Volume 23 (Data Handling in Science and Technol-
ogy). Elsevier BV, 2003.
[12] Alexander Hogenboom, Viorel Milea, Flavius Frasincar, Uzay Kaymak. Genetic
Algorithms for RDF Query Path Optimization. Proceedings of Nature inspired Rea-
soning for the Semantic Web (NatuReS), 2008.
[13] Thomas Weise, Steffen Bleul, Diana Comes, and Kurt Geihs. Different Approaches
to Semantic Web Service Composition. WowKiVS, 2009.
[14] http://en.wikipedia.org/wiki/Artificial_neural_network
[15] John Cardiff. The Evolution of the Semantic Web. Social Media Research Group,
Institute of Technology Tallaght, Dublin, Ireland.