Mapping relational databases (RDB) to the Resource Description Framework (RDF) is an active research field in databases. An enormous amount of data is kept in relational databases, while data access increasingly takes place on the Semantic Web. The data stored in RDBs must therefore be mapped efficiently to RDF so that it is available to Semantic Web users, and there is a definite need for improved technologies and efficient mapping languages from RDB to RDF. This paper presents an up-to-date survey of the RDB-to-RDF mapping languages proposed in recent times. It outlines the main features and characteristics to consider for efficient mapping in different scenarios. The main objective of this survey is to identify the limitations of existing mapping languages. It also supports comparison between the languages and helps researchers propose better mapping techniques in future work.
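To make the mapping problem concrete, the following is a minimal sketch of a direct, column-per-predicate RDB-to-RDF mapping in Python. The table name, base IRI, and row are invented for illustration; real mapping languages surveyed here (such as R2RML) are far richer, covering IRI templates, datatypes, and joins.

```python
# Direct-mapping sketch: one relational row -> RDF triples (N-Triples style).
# BASE and the "employee" table are hypothetical examples.

BASE = "http://example.org/"

def row_to_triples(table, pk, row):
    """Map one relational row to RDF triples: the primary key forms the
    subject IRI, each column becomes a predicate, each value a literal."""
    subject = f"<{BASE}{table}/{row[pk]}>"
    triples = []
    for column, value in row.items():
        predicate = f"<{BASE}{table}#{column}>"
        triples.append(f'{subject} {predicate} "{value}" .')
    return triples

row = {"id": 7, "name": "Alice", "dept": "Sales"}
for t in row_to_triples("employee", "id", row):
    print(t)
```

Note that all values are emitted as plain literals here; a real mapping language would also let the designer attach datatypes and map foreign keys to object properties.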
Presentation of the initial draft of the shared harmonized data model (CRDC-H) for the Cancer Research Data Commons (CRDC) nodes. The CRDC-H leverages existing standards where possible and is designed to meet the needs of systems like the Cancer Data Aggregator (CDA) that support integrated search and metadata-based analyses across datasets in the CRDC ecosystem.
Information on the web has been increasing tremendously in recent years, at an ever faster rate. This massive volume of data has created intricate problems for information retrieval and knowledge management. Because the data resides on the web in several forms, knowledge management on the web is a challenging task. The Semantic Web concept can be used here to let machines understand web content and offer intelligent services efficiently, with a meaningful knowledge representation. Data retrieval from traditional web sources is based on page-ranking techniques, whereas on the Semantic Web retrieval is based on concept-based learning. The proposed work aims to develop a new framework for the automatic generation of an ontology and RDF from real-time web data, extracted from multiple repositories by tracing their URIs and text documents. An improved inverted indexing technique is applied for ontology generation, and Turtle notation is used for the RDF. A program validates the data extracted from multiple repositories by removing unwanted data and considering only the document section of each web page.
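The inverted index at the heart of the approach above can be sketched in a few lines. The documents and the tokenizer below are illustrative assumptions, not the authors' implementation:

```python
# Toy inverted index: map each term to the set of document ids containing it.
import re
from collections import defaultdict

def build_inverted_index(docs):
    """docs: {doc_id: text}. Returns {term: {doc_ids}}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in re.findall(r"[a-z]+", text.lower()):
            index[term].add(doc_id)
    return index

docs = {1: "Semantic Web and RDF", 2: "RDF uses Turtle notation"}
index = build_inverted_index(docs)
print(sorted(index["rdf"]))  # both documents mention "rdf"
```

An "improved" index of the kind the abstract mentions would typically also store positions or frequencies per posting, which this sketch omits.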
A Comparative Evaluation of POS Tagging and N-Gram Measures in Arabic Corpus ... (CSCJournals)
The purpose of this evaluation is twofold: to give an overview of the extent to which the large-scale Arabic corpus resources examined serve the criteria of part-of-speech tagging in the corpus design of linguistic data, and to evaluate Arabic corpus analysis tools in terms of natural language processing statistics. The confusion-matrix statistical method shows that some Arabic monitor corpora need further development, while the International Corpus of Arabic scores highly on confusion matrices. Nine Arabic corpus analysis tools are under evaluation, and reliable statistical outcomes are retrieved in terms of statistical algorithms for association measures. This is done by relying on one million empirically designated clean Arabic data items to evaluate the association measures across the nine tools. The results presented at the end of this article indicate that the limitations could be tackled by evaluating the Arabic monitor corpus resources rather than trusting them, and by implementing new forms of programming rather than depending on already-built Arabic natural language resources and tools.
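The confusion-matrix statistic used in the evaluation above is straightforward to compute: count how often each gold tag is predicted as each other tag. The tag sequences below are made up purely to illustrate the mechanics:

```python
# Minimal confusion matrix for POS-tag evaluation.
from collections import Counter

def confusion_matrix(gold, predicted):
    """Count (gold_tag, predicted_tag) pairs across aligned sequences."""
    return Counter(zip(gold, predicted))

gold      = ["NOUN", "VERB", "NOUN", "ADJ"]
predicted = ["NOUN", "NOUN", "NOUN", "ADJ"]
cm = confusion_matrix(gold, predicted)

# Diagonal cells (gold == predicted) are correct; accuracy is their share.
accuracy = sum(n for (g, p), n in cm.items() if g == p) / len(gold)
print(cm[("VERB", "NOUN")], accuracy)  # prints: 1 0.75
```

Off-diagonal cells such as `("VERB", "NOUN")` are exactly the tagger errors a corpus evaluation of this kind inspects.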
WebSpa is a tool that allows quick, intuitive (and even fun) interrogation of arbitrary SPARQL endpoints. WebSpa runs in the web browser and does not require the installation of any additional software. The tool manages a large variety of predefined SPARQL endpoints and allows the addition of new ones. A user account makes it possible to save both a query and its results on the local computer, as well as to edit queries further. The application is written in both Java and Flex. It uses the Jena and ARQ application programming interfaces to perform the queries, and the results are processed and displayed using Flex.
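Under the hood, tools like WebSpa talk to endpoints via the SPARQL Protocol, in the simplest case a GET request with the query URL-encoded into a `query` parameter. A sketch (the DBpedia endpoint URL is just a well-known example; result-format negotiation via the `Accept` header is omitted):

```python
# Building a SPARQL Protocol GET request URL (no network call is made).
from urllib.parse import urlencode

endpoint = "https://dbpedia.org/sparql"
query = "SELECT ?s WHERE { ?s a ?type } LIMIT 5"

# The SPARQL Protocol carries the query text in the `query` parameter.
request_url = endpoint + "?" + urlencode({"query": query})
print(request_url)
```

Fetching `request_url` with any HTTP client would return the result set, typically as SPARQL XML or JSON depending on the `Accept` header sent.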
The Semantic Web is a vision of information that is understandable by computers. Although there is great exploitable potential, we are still in "Generation Zero" of the Semantic Web, since there are few compelling real-world applications. Heterogeneity, the volume of data, and the lack of standards are problems that could be addressed through nature-inspired methods. The paper presents the most important aspects of the Semantic Web as well as its biggest issues; it then describes some methods inspired by nature - genetic algorithms, artificial neural networks, swarm intelligence - and the way these techniques can be used to deal with Semantic Web problems.
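For readers unfamiliar with the first of the techniques named above, here is a bare-bones genetic algorithm maximizing the number of 1-bits in a bitstring (the classic OneMax toy problem). Every parameter here is an illustrative choice, not taken from the paper:

```python
# Minimal genetic algorithm: truncation selection, one-point crossover,
# per-bit mutation, on the OneMax fitness function.
import random

random.seed(0)  # reproducible run

def fitness(bits):
    return sum(bits)

def evolve(pop_size=20, length=16, generations=40, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

On Semantic Web problems the bitstring would instead encode, say, a candidate ontology alignment, with fitness measuring mapping quality; the evolutionary loop itself is unchanged.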
A semantic based approach for information retrieval from html documents using... (csandit)
Most internet applications are built using web technologies like HTML. Web pages are designed so that they either display data records from underlying databases or display text in an unstructured format following some fixed template. Summarizing data dispersed across different web pages is hectic and tedious, consuming considerable time and manual effort. A supervised learning technique called wrapper induction can be used across web pages to learn data extraction rules; applying these learned rules to web pages makes information extraction easier. This paper focuses on developing a tool for information extraction from unstructured data. The use of semantic web technologies greatly simplifies the process. The tool lets us query data scattered over multiple web pages in distinguished ways. This is accomplished by the following steps: extracting the data from multiple web pages, storing it as RDF triples, integrating multiple RDF files using an ontology, generating a SPARQL query from the user query, and generating a report as tables or charts from the SPARQL results. The relationships between related web pages are identified using the ontology and used to query in better ways, thus enhancing search efficacy.
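The middle steps of that pipeline can be illustrated with a toy triple store and a simple pattern matcher standing in for SPARQL. The extracted records below are hypothetical:

```python
# Records extracted from pages, stored as (subject, predicate, object)
# triples, then queried with a basic-graph-pattern-style matcher.
triples = [
    ("page1", "hasTitle", "Laptop X"),
    ("page1", "hasPrice", "999"),
    ("page2", "hasTitle", "Laptop Y"),
    ("page2", "hasPrice", "799"),
]

def match(pattern, store):
    """Return triples matching a pattern; None plays the role of a
    SPARQL variable (matches anything in that position)."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Roughly: SELECT ?page ?title WHERE { ?page hasTitle ?title }
for subj, _, title in match((None, "hasTitle", None), triples):
    print(subj, title)
```

A real SPARQL engine additionally joins several such patterns on shared variables; this sketch evaluates just one pattern.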
The Web is a universal medium for information, data and knowledge exchange. The Semantic Web is an extension of the World Wide Web, ``in which information is given well-defined meaning, better enabling computers and people to work in cooperation''\cite{semweb:lee}. RDF, together with SPARQL, provides a powerful mechanism for describing and interchanging metadata on the web. This paper briefly presents the two concepts - RDF and SPARQL - and three of the most popular Java frameworks that offer support for RDF: Jena, Sesame and JRDF.
During the last 10 to 15 years, the use of graphic symbols to support literacy development and access to text content has become increasingly widespread in special needs education and AAC (Augmentative and Alternative Communication) practices. This popularity is founded on a growing body of positive experience and research studies. It is also accompanied by the availability and use of a widening range of educational software tools (such as the Widgit Communicate series, Clicker, BoardMaker-Speaking Dynamically Pro, and EdWord). But why should these methods and resources remain confined to the domain of special needs education? As part of the ÆGIS project, graphic symbol support for access to text is being developed for the standard and open-source office suite OpenOffice.org (OO.org). This task is part of the ÆGIS ambition to include people with cognitive difficulties in the efforts towards more general accessibility in standard ICT environments. The graphic symbol support will be developed as a plug-in extension, primarily for Writer in OO.org, and will build on the Concept Coding Framework (CCF), a suggested open standard for multimodal language support defined in the WWAAC project and further developed within the SYMBERED and ÆGIS projects. When the user enters text - by ordinary letter-by-letter typing or by selecting and entering whole words (provided by Assistive Technology tools) - or loads a file, the contained words will be matched against a concept database. Graphic symbol representations will be offered according to the user's preferences, ranging from inline parallel text-plus-symbol representation, using the Ruby Annotation format, to a word lookup service. The graphic symbol support will be integrated with the improved Text-to-Speech (TTS) support within OO.org that will also be addressed in one of the ÆGIS tasks. Functionality will be evaluated and refined in three rounds of user pilot testing within ÆGIS.
This presentation describes the public data service FactForge. It is a reason-able view of a segment of the LOD cloud, and the biggest body of general knowledge on which inference is performed, supplied with a reference layer for quick access.
Is Fortran still relevant? Comparing Fortran with Java and C++ (ijseajournal)
This paper presents a comparative study to evaluate and compare Fortran with the two most popular programming languages, Java and C++. Fortran has gone through major and minor extensions in the years 2003 and 2008. (1) How much have these extensions made Fortran comparable to Java and C++? (2) What are the differences and similarities in supporting features like templates, object constructors and destructors, abstract data types, and dynamic binding? These are the main questions we try to answer in this study. An object-oriented ray tracing application is implemented in all three languages to compare them. By using only one program we ensured there was only one set of requirements, thus making the comparison homogeneous. Based on our literature survey, this is the first study carried out to compare these languages by applying software metrics to the ray tracing application and comparing these results with the similarities and differences found in practice. By providing binary analysis and profiling of the application, we motivate language implementers and compiler developers to improve Fortran object handling and processing, hence making it more prolific and general. This study facilitates and encourages the reader to further explore, study and use these languages more effectively and productively, especially Fortran.
European Pharmaceutical Contractor: SAS and R Team in Clinical Research (KCR)
Statistical analysis constitutes an essential part of all serious scientific research. Without data and a formal process of searching for evidence supporting or disproving stated hypotheses, there is nothing but mere opinion. Evidence-based medicine is no exception.
The W3C Linked Data Platform (LDP) candidate recommendation defines a standard HTTP-based protocol for read/write Linked Data. The W3C R2RML recommendation defines a language to map relational databases (RDBs) to RDF. This paper presents morph-LDP, a novel system that combines these two W3C standardization initiatives to expose relational data as read/write Linked Data for LDP-aware applications, whilst allowing legacy applications to continue using their relational databases.
Modern PHP RDF toolkits: a comparative study (Marius Butuc)
This work presents a comparative study of the RDF processing APIs implemented in PHP. We took into consideration different criteria including, but not limited to: the solution for storing RDF statements, support for SPARQL queries, performance, interoperability, and implementation maturity.
International Journal of Engineering and Science Invention (IJESI) (inventionjournals)
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews across the whole field of engineering, science and technology, including new teaching methods, assessment, validation and the impact of new technologies, and it will continue to provide information on the latest trends and developments in this ever-expanding subject. Papers are selected through double peer review to ensure originality, relevance, and readability. The articles published in the journal can be accessed online.
Bridging the gap between the semantic web and big data: answering SPARQL que... (IJECEIAES)
Nowadays, the database field has become much more diverse, and as a result a variety of non-relational (NoSQL) databases have been created, including JSON document databases and key-value stores, as well as extensible markup language (XML) and graph databases. With the emergence of this new generation of data services, some of the problems associated with big data have been resolved. However, in the haste to address the challenges of big data, NoSQL abandoned several core database features that make databases extremely efficient and functional, for instance the global view, which enables users to access data regardless of how it is logically structured or physically stored in its sources. In this article, we propose a method that allows querying non-relational databases within the ontology-based data access (OBDA) framework by delegating SPARQL protocol and RDF query language (SPARQL) queries from the ontology to the NoSQL database. We applied the method to the popular Couchbase database and discuss the results obtained.
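The core OBDA idea above, rewriting an ontology-level query into a query the underlying store understands, can be sketched very simply. The documents and the ontology-to-field mapping below are invented for illustration; a real system against Couchbase would emit N1QL rather than filter in Python:

```python
# OBDA sketch: a SPARQL-style triple pattern over an ontology property is
# answered by delegation to a JSON document store via a property->field map.
documents = [
    {"id": "d1", "type": "Person", "name": "Amina", "city": "Rabat"},
    {"id": "d2", "type": "Person", "name": "Karim", "city": "Fes"},
]

# The "mapping" an OBDA system maintains: ontology property -> document field.
mapping = {"ex:name": "name", "ex:livesIn": "city"}

def answer(pattern, docs):
    """Evaluate a (?x, property, value) pattern against the documents."""
    _, prop, value = pattern
    field = mapping[prop]
    return [d["id"] for d in docs if d.get(field) == value]

print(answer(("?x", "ex:livesIn", "Rabat"), documents))  # -> ['d1']
```

The point of the architecture is exactly this indirection: the user queries `ex:livesIn` without knowing whether the data lives in a `city` field, a relational column, or a nested JSON path.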
Selecting the right database type for your knowledge management needs (Synaptica, LLC)
This presentation looks at relational vs. graph databases and their advantages and disadvantages in storing semantic data for taxonomies and ontologies.
SOME INTEROPERABILITY ISSUES IN THE DESIGNING OF WEB SERVICES: CASE STUDY ON... (ijwscjournal)
In today's environment, most commercial web-based projects developed in industry, as well as numerous funded projects and studies undertaken as part of research-oriented initiatives in academia, suffer from major technical issues concerning how to design, develop and deploy Web Services that can run in a variety of heterogeneous environments. In this paper we address the issues of interoperability between Web Services and the metrics which can be used to measure interoperability, and we simulate an online shopping application by developing credit card verification software using Luhn's mod 10 algorithm, with a Java client written in NetBeans and the BankWebService in C# .NET.
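Luhn's mod 10 check, the verification step at the core of the case study above, is a standard algorithm; a compact version is shown here in Python rather than in the paper's Java/C# setup:

```python
# Luhn mod-10 check: from the rightmost digit, double every second digit,
# subtract 9 from any doubled result above 9, and require the total to be
# a multiple of 10.
def luhn_valid(number: str) -> bool:
    digits = [int(d) for d in number]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:       # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("79927398713"))  # classic valid test number -> True
```

The check catches any single-digit error and most transpositions of adjacent digits, which is why it is used for credit card number validation.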
ENHANCING ENGLISH WRITING SKILLS THROUGH INTERNET-PLUS TOOLS IN THE PERSPECTI... (ijfcstjournal)
This investigation delves into incorporating a hybridized memetic strategy within the framework of English
composition pedagogy, leveraging Internet Plus resources. The study aims to provide an in-depth analysis
of how this method influences students’ writing competence, their perceptions of writing, and their
enthusiasm for English acquisition. Employing an explanatory research design that combines qualitative
and quantitative methods, the study collects data through surveys, interviews, and observations of students’
writing performance before and after the intervention. Findings demonstrate a beneficial impact of
integrating the memetic approach alongside Internet Plus tools on the writing aptitude of English as a
Foreign Language (EFL) learners. Students reported increased engagement with writing, attributing it to
the use of Internet Plus tools. They also noted that the memetic approach facilitated a deeper
understanding of cultural and social contexts in writing. Furthermore, the findings highlight a significant
improvement in students’ writing skills following the intervention. This study provides significant insights
into the practical implementation of the memetic approach within English writing education, highlighting
the beneficial contribution of Internet Plus tools in enriching students' learning journeys.
A SURVEY TO REAL-TIME MESSAGE-ROUTING NETWORK SYSTEM WITH KLA MODELLING | ijfcstjournal
Routing messages over a network is one of the most fundamental concepts in communication, requiring
simultaneous transmission of messages from a source to a destination. In terms of real-time routing, it
refers to the addition of a timing constraint: messages should be received within a specified time
delay. This study involves scheduling, algorithm design and graph theory, which are essential parts of
the Computer Science (CS) discipline. Our goal is to investigate an innovative and efficient way to present
these concepts in the context of CS Education. In this paper, we will explore the fundamental modelling of
routing real-time messages on networks. We study whether it is possible to have an optimal on-line
algorithm for the Arbitrary Directed Graph network topology. In addition, we will examine the message
routing’s algorithmic complexity by breaking down the complex mathematical proofs into concrete, visual
examples. Next, we explore the Unidirectional Ring topology to find the transmission’s
“makespan”. Lastly, we propose the same network modelling through the technique of Kinesthetic Learning
Activity (KLA). We will analyse the data collected and present the results in a case study to evaluate the
effectiveness of the KLA approach compared to the traditional teaching method.
A COMPARATIVE ANALYSIS ON SOFTWARE ARCHITECTURE STYLES | ijfcstjournal
Software architecture is the structural solution that achieves the overall technical and operational
requirements of a software development. Software engineers apply software architectures in their
system development; however, they lack basic benchmarks for selecting software architecture
styles, possible components, integration methods (connectors) and the exact application of
each style.
The objective of this research work was a comparative analysis of software architecture styles by their
weaknesses and benefits, to guide selection by the programmer at design time. Finally, in this study,
the researcher identified architectural styles, weaknesses, strengths and application areas, together
with the component, connector and interface for each selected architectural style.
SYSTEM ANALYSIS AND DESIGN FOR A BUSINESS DEVELOPMENT MANAGEMENT SYSTEM BASED... | ijfcstjournal
Designing a sales system for professional services requires a comprehensive understanding of the
dynamics of sales cycles and of how key knowledge for completing sales is managed. This research describes
a design model of a business development (sales) system for professional service firms based on the Saudi
Arabian commercial market, which takes into account the new advances in technology while preserving
unique or cultural practices that are an important part of the Saudi Arabian commercial market. The
design model has combined a number of key technologies, such as cloud computing and mobility, as an
integral part of the proposed system. An adaptive development process has also been used in implementing
the proposed design model.
AN ALGORITHM FOR SOLVING LINEAR OPTIMIZATION PROBLEMS SUBJECTED TO THE INTERS... | ijfcstjournal
Frank t-norms are a parametric family of continuous Archimedean t-norms whose members are also strict
functions. This family is often called the family of fundamental t-norms because of the role it plays
in several applications. In this paper, optimization of a linear objective function with fuzzy
relational inequality constraints is investigated. The feasible region is formed as the intersection of
two fuzzy inequality systems in which the Frank family of t-norms serves as the fuzzy composition.
First, the resolution of the feasible solution set is studied for the case where the two fuzzy
inequality systems are defined with max-Frank composition. Second, some related basic and theoretical
properties are derived. Then, a necessary and sufficient condition and three further necessary
conditions are presented to characterize the feasibility of the problem. Subsequently, it is shown that
a lower bound on the optimal objective value is always attainable. It is also proved that the optimal
solution of the problem always results from the unique maximum solution and a minimal solution of the
feasible region. Finally, an algorithm to solve the problem is presented and illustrated with an
example. Additionally, a method is proposed to generate random feasible max-Frank fuzzy relational
inequalities; with this method we can easily generate a feasible test problem and apply our algorithm to it.
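For readers unfamiliar with the family, the Frank t-norm with parameter s > 0, s ≠ 1 is commonly given as T_s(x, y) = log_s(1 + (s^x − 1)(s^y − 1)/(s − 1)), recovering the product t-norm in the limit s → 1. A small sketch of the composition operator the paper builds on (not the paper’s algorithm; the limiting-case handling is an assumption):

```python
import math

def frank_tnorm(x: float, y: float, s: float) -> float:
    """Frank t-norm T_s(x, y) on [0, 1] for parameter s > 0."""
    if s == 1.0:
        return x * y  # limiting case s -> 1: the product t-norm
    # log base s of 1 + (s^x - 1)(s^y - 1)/(s - 1)
    return math.log(1.0 + (s**x - 1.0) * (s**y - 1.0) / (s - 1.0), s)
```

The boundary conditions of a t-norm are easy to verify from the formula: T_s(1, y) = y and T_s(x, 0) = 0 for any parameter s.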
LBRP: A RESILIENT ENERGY HARVESTING NOISE AWARE ROUTING PROTOCOL FOR UNDER WA... | ijfcstjournal
Underwater sensor networks are among the most challenging and fascinating research areas, and they
have attracted many researchers to this field of study. In many underwater sensor applications, nodes
are resource-constrained and their energy budget is affected accordingly. Moreover, sensor nodes move
through the water environment with the water flow, which must be accounted for in protocol design.
Researchers have developed many routing protocols, but those have lost their appeal over time. It is
the demand of the age to supply an energy-efficient, scalable and robust routing protocol for
underwater sensor networks. In this work, the authors propose a routing protocol named Level Based
Routing Protocol (LBRP), aiming to offer robust, scalable and energy-efficient routing. LBRP also
ensures effective use of the total energy consumption and reliable packet transmission, which yields
additional reliability compared to other routing protocols. The authors use the level of the
forwarding node, its residual energy and the distance from the forwarding node to the sending node as
criteria in the multicasting comparisons. Throughout this work, the authors obtained a recognition
result of about 86.35% on average in node multicasting performance. Simulations were run in both
noisy and quiet environments, endorsing the better performance of the proposed protocol.
STRUCTURAL DYNAMICS AND EVOLUTION OF CAPSULE ENDOSCOPY (PILL CAMERA) TECHNOLO... | ijfcstjournal
This research paper examines and re-evaluates the technological innovation, theory, structural
dynamics and evolution of Pill Camera (Capsule Endoscopy) technology in redirecting the manner of
small bowel (intestine) examination in humans. The Pill Camera, made of a sealed biocompatible
material that withstands acid, enzymes and other chemicals in the stomach, is a technology that helps
medical practitioners, especially general physicians and gastroenterologists, to examine and
re-examine the intestine for possible bleeding or infection. Before the advent of the Pill Camera,
colonoscopy was the usual method, but research showed that some parts of the bowel cannot be reached
by the traditional method alone, hence the need for the Pill Camera. Countless deaths from stomach
diseases such as polyps, inflammatory bowel disease (Crohn’s disease), cancers, ulcers, anaemia and
tumours of the small intestine, which would ordinarily have been detected by sophisticated technology
like the Pill Camera, have become the norm in developing nations. This paper therefore examines and
re-evaluates the Pill Camera’s innovation, theory, structural dynamics and evolution, and aims to
create awareness among both medical practitioners and the public.
AN OPTIMIZED HYBRID APPROACH FOR PATH FINDING | ijfcstjournal
Path finding algorithms address the problem of finding the shortest path from a source to a
destination while avoiding obstacles. Various search algorithms exist, namely A*, Dijkstra’s and ant
colony optimization. Unlike most path finding algorithms, which require destination coordinates to
compute a path, the proposed algorithm comprises a new method that finds a path using backtracking
without requiring destination coordinates. Moreover, in existing path finding algorithms the number
of iterations required to find a path is large. Hence, to overcome this, an algorithm is proposed
that reduces the number of iterations required to traverse the path. The proposed algorithm is a
hybrid of backtracking and a new technique (a modified 8-neighbour approach). It can become an
essential part of location-based, network and gaming applications, as well as grid traversal,
navigation, mobile robotics and Artificial Intelligence.
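As a generic illustration of path finding by backtracking without destination coordinates, a plain depth-first sketch over an 8-connected grid (this is not the authors’ modified 8-neighbour method; the grid encoding and function name are assumptions for the example):

```python
def backtrack_path(grid, start):
    """Depth-first backtracking search on a grid; 'G' marks the (unknown) goal.

    grid: list of strings ('.' free, '#' obstacle, 'G' goal, 'S' start).
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    visited = set()

    def dfs(r, c, path):
        if not (0 <= r < rows and 0 <= c < cols):
            return None
        if grid[r][c] == '#' or (r, c) in visited:
            return None
        visited.add((r, c))
        path.append((r, c))
        if grid[r][c] == 'G':
            return list(path)
        # 8-neighbour moves: horizontal, vertical and diagonal.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    found = dfs(r + dr, c + dc, path)
                    if found:
                        return found
        path.pop()  # backtrack: no neighbour leads to the goal
        return None

    return dfs(start[0], start[1], [])
```

Note that the search never receives the goal’s coordinates; it discovers the goal cell while exploring, which is the property the abstract emphasizes.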
EAGRO CROP MARKETING FOR FARMING COMMUNITY | ijfcstjournal
The major occupation in India is agriculture, and the people involved in it largely belong to the
poorer classes. The farming community is often unaware of the new techniques and agro-machines that
could take the field of agriculture to greater heights. Though the farmers work hard, they are
cheated by agents in today’s market. This serves as an opportunity to solve the problems that
farmers face in the current world. eAgro crop marketing will serve as a better way for farmers to
sell their products within the country with only modest knowledge of using the website. It will
provide farmers with information about the current market rates of agro-products, their sale history
and the profits earned on a sale. The site will also help farmers learn about market information and
view the Government’s agricultural schemes for farmers.
EDGE-TENACITY IN CYCLES AND COMPLETE GRAPHS | ijfcstjournal
It is well known that tenacity is a proper measure for studying vulnerability and reliability in
graphs. Here, a modified edge-tenacity of a graph is introduced, based on the classical definition of
tenacity. Properties and bounds for this measure are presented, and the edge-tenacity is calculated
for cycle graphs and for complete graphs.
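For context, the classical tenacity on which the edge variant is modelled is usually written as follows, where τ(G − S) is the order of a largest component and ω(G − S) the number of components of G − S; the edge version replaces the vertex cut S by an edge cut F (this notation is an assumption, since the abstract does not fix one):

```latex
T(G)   = \min_{S \subset V(G)} \frac{|S| + \tau(G - S)}{\omega(G - S)},
\qquad
T_e(G) = \min_{F \subseteq E(G)} \frac{|F| + \tau(G - F)}{\omega(G - F)}
```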
COMPARATIVE STUDY OF DIFFERENT ALGORITHMS TO SOLVE N QUEENS PROBLEM | ijfcstjournal
This paper provides a brief description of the Genetic Algorithm (GA), the Simulated Annealing (SA)
algorithm, the Backtracking (BT) algorithm and the Brute Force (BF) search algorithm, and explains
how the proposed GA, the proposed SA using GA, the BT algorithm and the BF search algorithm can be
employed to find the best solution of the N Queens problem; it also compares these four algorithms.
Although primarily a review-based work, the four algorithms were written and implemented. From the
results, it was found that the proposed GA performed better than the proposed SA using GA, the BT
algorithm and the BF search algorithm, and that it also provided a better fitness value (solution)
than the other three algorithms for different values of N. It was also noticed that the proposed GA
took more time to produce a result than the proposed SA using GA.
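Of the four algorithms compared, backtracking is the simplest to reproduce; a minimal sketch (illustrative, not the paper’s implementation) that returns one placement and, brute-force style, counts all solutions:

```python
def solve_n_queens(n: int):
    """Return one N Queens solution as a list of column indices
    (index = row), or None if no solution exists."""
    cols, diag1, diag2 = set(), set(), set()
    placement = []

    def place(row):
        if row == n:
            return True
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # square attacked: try the next column
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            if place(row + 1):
                return True
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
            placement.pop()  # backtrack
        return False

    return placement if place(0) else None

def count_solutions(n: int) -> int:
    """Count all N Queens solutions by exhaustive backtracking."""
    count = 0
    def place(row, cols, diag1, diag2):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            if col not in cols and (row - col) not in diag1 and (row + col) not in diag2:
                place(row + 1, cols | {col}, diag1 | {row - col}, diag2 | {row + col})
    place(0, frozenset(), frozenset(), frozenset())
    return count
```

For reference, the exhaustive count gives the well-known solution totals, e.g. 2 for N = 4 and 92 for N = 8, while N = 2 and N = 3 have none.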
PSTECEQL: A NOVEL EVENT QUERY LANGUAGE FOR VANET’S UNCERTAIN EVENT STREAMS | ijfcstjournal
In recent years, complex event processing technology has been used to process the VANET’s temporal
and spatial event streams. However, we usually cannot get accurate data because of the sensing
accuracy limitations of the system’s devices; we can only obtain uncertain data from the complex and
constrained environment of the VANET. Because the VANET’s event streams consist of uncertain data,
they are themselves uncertain. How to effectively express and process these uncertain event streams
has become a core issue for VANET systems. To solve this problem, we propose a novel complex event
query language, PSTeCEQL (probabilistic spatio-temporal constraint event query language). Firstly,
we give the definition of the possible-world model of the VANET’s uncertain event streams. Secondly,
we propose the event query language PSTeCEQL and give its syntax and operational semantics. Finally,
we illustrate the validity of PSTeCEQL with an example.
CLUSTBIGFIM-FREQUENT ITEMSET MINING OF BIG DATA USING PRE-PROCESSING BASED ON... | ijfcstjournal
Nowadays an enormous amount of data is generated through the Internet of Things (IoT) as
technologies advance and people use them in day-to-day activities; this data, with its distinctive
characteristics and challenges, is termed Big Data. Frequent itemset mining algorithms aim to
disclose frequent itemsets from a transactional database, but as the dataset size increases it
cannot be handled by traditional frequent itemset mining. The MapReduce programming model addresses
large datasets, but its large communication cost reduces execution efficiency. The proposed approach
applies a new k-means pre-processing step to the BigFIM algorithm. ClustBigFIM uses a hybrid
approach: clustering with the k-means algorithm to generate clusters from huge datasets, and Apriori
and Eclat to mine frequent itemsets from the generated clusters using the MapReduce programming
model. Results show that the execution efficiency of the ClustBigFIM algorithm is increased by
applying the k-means clustering algorithm before the BigFIM algorithm as a pre-processing technique.
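As a toy illustration of the frequent-itemset step itself (a naive single-machine count of 1- and 2-itemsets, nothing like the MapReduce-scale BigFIM/ClustBigFIM pipeline; names and the support convention are illustrative):

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support):
    """One naive Apriori-style pass: count all 1- and 2-itemsets and keep
    those occurring in at least `min_support` transactions."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))          # de-duplicate items within a basket
        for item in items:
            counts[(item,)] += 1
        for pair in combinations(items, 2):
            counts[pair] += 1
    return {iset: c for iset, c in counts.items() if c >= min_support}
```

Real Apriori prunes candidate k-itemsets using the frequent (k−1)-itemsets; this sketch only shows the counting and support-threshold idea.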
A MUTATION TESTING ANALYSIS AND REGRESSION TESTING | ijfcstjournal
Software testing is conducted to provide the client with information about the quality of the
product under test. Software testing can also provide an objective, independent view of the software
that allows the business to appreciate and understand the risks of software implementation. In this
paper we focus on two main kinds of software testing: mutation testing and regression testing.
Mutation testing is a structural testing method, i.e. we use the structure of the code to guide the
test program. A mutation is a small change in a program; such changes are applied to model low-level
defects that arise in the process of coding systems. Ideally, mutations should model low-level
defect creation. Mutation testing is a process in which the code is modified and the mutated code is
then run against the test suites. The mutations applied to the source code are designed to mimic
common programming errors. A good unit test typically detects the mutated program and fails
accordingly. Mutation testing is used on many different platforms, including Java, C++, C# and Ruby.
Regression testing is a type of software testing that seeks to uncover new software bugs, or
regressions, in existing functional and non-functional areas of a system after changes such as
enhancements, patches or configuration changes have been made to them. When defects are found during
testing, they are fixed and that part of the software works as needed, but a fix may introduce or
uncover a different defect elsewhere in the software. Regression testing is the way to detect and
fix these unexpected bugs. Its main focus is to verify that changes to the software have not caused
adverse side effects and that the software still meets its requirements. Regression tests are run
whenever changes are made to the software, for example because of modified functions.
GREEN WSN- OPTIMIZATION OF ENERGY USE THROUGH REDUCTION IN COMMUNICATION WORK... | ijfcstjournal
Advances in micro-fabrication and communication techniques have led to an unimaginable proliferation
of WSN applications. Research is focused on reducing setup and operational energy costs. The bulk of
operational energy costs is linked to the communication activities of a WSN, so any progress towards
energy efficiency has the potential for huge savings globally. Therefore, every energy-efficient
step is an endeavour to cut costs and ‘Go Green’. In this paper, we propose a framework to reduce
communication workload through in-network compression, multiple query synthesis at the base station,
and modification of the query syntax through the introduction of static variables. These are general
approaches that can be used in any WSN irrespective of application.
A NEW MODEL FOR SOFTWARE COST ESTIMATION USING HARMONY SEARCH | ijfcstjournal
Accurate and realistic estimation has always been a great challenge in the software industry.
Software Cost Estimation (SCE) is the standard practice used to manage software projects, and the
estimate determined in the initial stages of a project underpins the planning of its other
activities. In fact, estimation is confronted with a number of uncertainties and barriers, and
assessing previous projects is essential to address them. Several models have been developed for the
analysis of software projects. The classical reference method is the COCOMO model; other methods,
such as Function Points (FP) and Lines of Code (LOC), are also applied, and experts’ opinions matter
in this regard. In recent years, the growth of highly accurate meta-heuristic algorithms and their
combinations has brought about great achievements in software engineering. Meta-heuristic
algorithms, which can analyze data from multiple dimensions and identify the optimum solution among
them, are analytical tools for data analysis. In this paper, we use the Harmony Search (HS)
algorithm for SCE. The proposed model has been assessed on a collection of 60 standard projects from
the NASA60 dataset. The experimental results show that the HS algorithm is a good way to determine
the weights of the similarity measure factors of software effort and to reduce the MRE error.
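The MRE error mentioned above is conventionally the Magnitude of Relative Error; assuming that convention, a one-line sketch:

```python
def mre(actual: float, estimated: float) -> float:
    """Magnitude of Relative Error: |actual - estimated| / actual."""
    return abs(actual - estimated) / actual
```

For example, estimating an effort of 80 against an actual effort of 100 gives an MRE of 0.2; averaging MRE over all projects yields the common MMRE summary metric.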
AGENT ENABLED MINING OF DISTRIBUTED PROTEIN DATA BANKS | ijfcstjournal
Mining biological data is an emergent area at the intersection of bioinformatics and data mining
(DM). The intelligent-agent-based model is a popular approach to constructing Distributed Data
Mining (DDM) systems that address scalable mining over large-scale distributed data. The nature of
the associations between different amino acids in proteins has also been a subject of great
interest. There is a strong need to develop new models to exploit and analyze the available
distributed biological data sources. In this study, we have designed and implemented a multi-agent
system (MAS) called Agent enriched Quantitative Association Rules Mining for Amino Acids in
distributed Protein Data Banks (AeQARM-AAPDB). Such globally strong association rules enhance
understanding of protein composition and are desirable for the synthesis of artificial proteins. A
real protein data bank is used to validate the system.
International Journal on Foundations of Computer Science & Technology (IJFCST) | ijfcstjournal
International Journal on Foundations of Computer Science & Technology (IJFCST) is a bi-monthly peer-reviewed and refereed open access journal that publishes articles contributing new results in all areas of the foundations of computer science and technology. Over the last decade, there has been an explosion in the use of computer science to solve various problems from mathematics to engineering. This journal aims to provide a platform for exchanging ideas on new emerging trends that need more focus and exposure, and will attempt to publish proposals that strengthen our goals. Topics of interest include, but are not limited to, the following:
Because technology has been used so widely in recent decades, cybercrime has become a significant
international issue as a result of the huge damage it causes to businesses and even to ordinary
users of technology. The main aim of this paper is to shed light on digital crimes and to give an
overview of what a person related to computer science should know about this new type of crime. The
paper has three sections: an introduction to digital crime, which gives fundamental information
about digital crimes; digital crime investigation, which presents different investigation models;
and a third section on cybercrime law.
DISTRIBUTION OF MAXIMAL CLIQUE SIZE UNDER THE WATTS-STROGATZ MODEL OF EVOLUTI... | ijfcstjournal
In this paper, we analyze the evolution of a small-world network and its subsequent transformation to a
random network using the idea of link rewiring under the well-known Watts-Strogatz model for complex
networks. Every link u-v in the regular network is considered for rewiring with a certain probability and if
chosen for rewiring, the link u-v is removed from the network and the node u is connected to a randomly
chosen node w (other than nodes u and v). Our objective in this paper is to analyze the distribution of the
maximal clique size per node by varying the probability of link rewiring and the degree per node (number
of links incident on a node) in the initial regular network. For a given probability of rewiring and initial
number of links per node, we observe the distribution of the maximal clique per node to follow a Poisson
distribution. We also observe the maximal clique size per node in the small-world network to be very close
to that of the average value and close to that of the maximal clique size in a regular network. There is no
appreciable decrease in the maximal clique size per node when the network transforms from a regular
network to a small-world network. On the other hand, when the network transforms from a small-world
network to a random network, the average maximal clique size value decreases significantly
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... | SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Essentials of Automations: The Art of Triggers and Actions in FME | Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... | DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Climate Impact of Software Testing at Nordic Testing Days | Kari Kakkonen
My slides at Nordic Testing Days 6.6.2024
The climate impact and sustainability of software testing are discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability and then measured continuously. Test environments can be used less, at smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Generative AI Deep Dive: Advancing from Proof of Concept to Production | Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features
available on those devices, but many features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect their personal devices and information.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! | SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 | Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Communications Mining Series - Zero to Hero - Session 1 | DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
UiPath Test Automation using UiPath Test Suite series, part 6 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and OpenAI
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
A REVIEW ON RDB TO RDF MAPPING FOR SEMANTIC WEB
International Journal in Foundations of Computer Science & Technology (IJFCST), Vol. 6, No. 2, March 2016
DOI: 10.5121/ijfcst.2016.6203
V. Sitharamulu1 and Dr. B. Raveendra Babu2
1Associate Professor, Department of Computer Science & Engg, SBIT, Khammam
2Professor and HOD, Department of Computer Science & Engg, Dean-Administration & Finance, VNR VJIET, Hyderabad
ABSTRACT
Mapping relational databases (RDB) into the Resource Description Framework (RDF) is one of the active research fields in databases. An enormous amount of data is stored in relational databases, while data is increasingly accessed through the semantic web. The data stored in RDBs must therefore be mapped efficiently to RDF so that it remains available to semantic web users, and there is a definite need for improved technologies and efficient RDB-to-RDF mapping languages. This paper presents an up-to-date survey of the RDB-to-RDF mapping languages proposed in recent times and outlines the main features or characteristics to be considered for efficient mapping in different scenarios. The main objective of this paper is to identify the limitations of existing mapping languages. The comparison between the languages should also help researchers formulate better proposals and improved mapping techniques in their future work.
KEYWORDS
Databases, Relational Databases (RDB), Semantic Web, RDB to RDF Mapping, Mapping Characteristics Comparison.
1. INTRODUCTION
Mapping of databases is one of the active research fields in relational databases. In [1], the authors stressed the need to fill the gap between RDF and the relational model for accessing data on the semantic web. The real-world applicability of the semantic web extends to areas far beyond the web itself. The application of the semantic web to data exchange and integration across different data sources is elaborated in [2-6].
Different researchers have provided improved platforms that allow semantic-web-based applications to access data from relational databases. The proposed approaches implement a wide range of techniques, from simple to highly specific and logical ways of dealing with the mapping language. This ongoing research led to standardization efforts in the W3C RDB2RDF Working Group (WG). The main goal [7] of the RDB2RDF WG standardization is efficient mapping of relational database schemas and relational data into RDF and OWL.
The organization of this paper is as follows: in Section 2, recent advances in mapping languages are presented. In Section 3, a study and comparison of mapping languages is given. In Section 4, a comparison framework over the different approaches is shown. Finally, in Section 5, we make our concluding and future remarks.
2. RECENT ADVANCES IN RDB-RDF MAPPING LANGUAGES
The idea behind this survey is to outline the main mapping languages for RDB to RDF. The literature presented should help the research community explore different application scenarios for RDB-to-RDF language mapping further. This survey reports different comparison scenarios for the RDB-to-RDF mapping framework and classifies the mapping languages into groups: direct, read-only general-purpose, read-write general-purpose and special-purpose mapping.
In [8], the W3C RDB2RDF Incubator Group provided a detailed survey on mapping RDB to RDF, covering different techniques, software tools and applications. In [9], the author defined a mechanism that bridges relational databases and OWL technologies using SQL.
In the section below, the different mapping languages are described and compared on a feature-by-feature basis.
2.1 Mapping Languages
The mapping languages can be divided as given below:
• Direct mapping
• General-purpose Read-only mapping
• General purpose Read-write mapping and
• Special-purpose mapping.
Direct Mapping:
Direct mapping [10] is a simple technique that maps data directly from RDB to RDF. In the direct approach, each element of the relational model, including link tables, is translated by direct means without the help of constructors. The virtue of this approach is its simplicity of implementation. Direct mapping is applicable where simplicity is more important than a faithful representation of a complex relational schema.
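As a concrete illustration, the table-to-class and column-to-property idea behind direct mapping can be sketched in a few lines of Python. The base URI, table name and rows below are invented for the example; real direct-mapping tools derive the URIs from the actual schema.

```python
# Minimal sketch of direct mapping (illustrative only, not any tool's code):
# each table becomes a class, each row a resource identified by its primary
# key, and each column value becomes a property of that resource.
BASE = "http://example.org/db/"  # hypothetical base URI

def direct_map(table, pk, rows):
    """Yield (subject, predicate, object) triples for one relational table."""
    for row in rows:
        subject = f"{BASE}{table}/{row[pk]}"           # row URI from primary key
        yield (subject, "rdf:type", f"{BASE}{table}")  # table -> class
        for column, value in row.items():              # column -> property
            yield (subject, f"{BASE}{table}#{column}", value)

triples = list(direct_map("Employee", "id",
                          [{"id": 1, "name": "Alice"},
                           {"id": 2, "name": "Bob"}]))
```

Each row yields one rdf:type triple plus one triple per column, so the two rows above produce six triples; no vocabulary design is needed, which is exactly the simplicity trade-off described above.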
Read-only General-purpose Mapping:
When only read access to the data is required, read-only general-purpose mapping languages such as Virtuoso, D2RQ and R2RML can be used. Virtuoso supports named graphs but not static metadata; D2RQ supports static metadata but not named graphs; R2RML supports both named graphs and static metadata.
This class of mapping language provides a solid architecture for representing RDB data as RDF. However, as the complexity of the representation increases, bidirectional data transfer becomes impossible. A great deal of practice and expertise is needed to learn and understand a read-only general-purpose mapping language. This group of mapping languages is preferred for the vast category of applications where access is limited to read-only and specific standards and compliance requirements must be supported.
Read-Write General-purpose Mapping:
Read-write general-purpose mapping provides both read and write access, i.e. bidirectional data transfer between RDB and RDF. Read-write general-purpose mapping languages differ from read-only ones in that mappings of logical tables are restricted so that they do not run into view-update problems [11]. Features that read-write general-purpose mapping languages support beyond read-only ones include integrity constraints such as not-null attributes, detection of missing data and rejection of invalid write requests. In [12-14], the authors discuss view updates, an open problem in database research.
Special-purpose Mapping:
Special-purpose mapping languages are developed to support specific features. eD2R, R2O and Triplify [15] are mapping languages of this kind. Special-purpose mapping languages differ from general-purpose languages in only a few features; in most respects they are similar to the general-purpose mapping languages. Features such as named graphs, blank nodes and metadata are not completely supported; in rare cases, an implementation of blank nodes is available in eD2R and R2O. In cases where general-purpose mapping languages are not applicable, these special-purpose mapping languages are of great use.
3. STUDY AND COMPARISON OF MAPPING LANGUAGES
The comparison of different mapping languages is presented in the subsections below, which should help in choosing a specific RDB mapping technique for a given application:
3.1 Direct Mapping
In direct mapping, RDB data is presented in a simple form for semantic web representation [10]. One of the core objectives of direct mapping is to expose the RDB on the semantic web by mapping relational tables to classes and attributes to properties in an RDF vocabulary. In [11], the author proposed a direct automatic mapping language, extending a simple heuristic for table-to-class and attribute-to-property mapping from the RDB schema. In [10], the author proposed SquirrelRDF [12] for direct mapping: the RDB schema names are used to generate the RDF vocabulary, and an RDF-based tool performs the mapping to a domain ontology. The mapping languages discussed above all use the basic technique of direct RDB-to-RDF mapping.
eD2R: eD2R [12] permits the tables in an RDB to be published on the semantic web, allowing users to access the relational data via dereferenceable URIs and SPARQL queries. Another objective of eD2R is to present RDB data as interlinked datasets exposed as RDF.
R2O: In [13], the authors proposed R2O, which is used for efficient mapping of RDB schemas to ontologies. It can also represent different complex mappings in an efficient way using a generic structure.
Relational.OWL: Relational.OWL [15] was proposed as an OWL-based representation format for schema components and relational data. Relational.OWL uses an ontology to express the schema of the data in RDF, mapping complete details of the tables, primary/foreign keys and data types into the ontology. Data exchange between databases is one of the best fields of application for Relational.OWL. Reuse of existing domain vocabularies is restricted, and the structure and syntax of the different relational schemas show through in the RDF representations. On-demand translation of SPARQL queries into SQL is provided by Relational.OWL through a SPARQL interface.
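The on-demand translation of SPARQL into SQL mentioned above can be illustrated with a deliberately simplified sketch. This is an assumed toy rewriting for a single triple pattern, not Relational.OWL's actual algorithm; the predicate URI convention and the primary-key column name `id` are both invented for the example.

```python
# Toy SPARQL-to-SQL rewriting for a single pattern ?s <Table#column> ?o.
# Assumption (invented for this example): predicate URIs follow the
# .../Table#column convention and the primary key column is named "id".
def pattern_to_sql(predicate_uri):
    table_part, column = predicate_uri.rsplit("#", 1)  # split off the column
    table = table_part.rsplit("/", 1)[-1]              # last path segment = table
    return f"SELECT id, {column} FROM {table}"

sql = pattern_to_sql("http://example.org/db/Employee#name")
```

A real mapper must additionally join patterns that share variables, translate FILTER expressions into WHERE clauses, and convert the result rows back into RDF terms.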
Virtuoso RDF Views: The Virtuoso Universal Server represents relational data on the semantic web using OpenLink Software's RDF Views feature [16-17]. By applying a declarative meta-schema language, SQL data can be mapped to RDF vocabularies.
R2RML: R2RML [16] is the mapping language for RDB-to-RDF mappings being standardized by the W3C RDB2RDF WG. Its key objective is to allow read-only data access using a vendor-independent language.
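Real R2RML mappings are Turtle documents built from triples maps with subject-URI templates and predicate-object maps. The Python sketch below only mimics that declarative idea with an invented dictionary form, to show how a single mapping document drives triple generation; the table, template and vocabulary URIs are assumptions for the example.

```python
import re

# Hypothetical R2RML-like mapping: a subject URI template plus
# predicate-to-column maps (all names and URIs invented for the example).
mapping = {
    "table": "Dept",
    "subject_template": "http://example.org/dept/{deptno}",
    "predicate_maps": {"http://example.org/vocab#name": "dname"},
}

def apply_mapping(mapping, rows):
    """Instantiate the subject template for each row, then emit one
    triple per predicate-object map."""
    triples = []
    for row in rows:
        subject = re.sub(r"\{(\w+)\}",
                         lambda m: str(row[m.group(1)]),
                         mapping["subject_template"])
        for predicate, column in mapping["predicate_maps"].items():
            triples.append((subject, predicate, row[column]))
    return triples

out = apply_mapping(mapping, [{"deptno": 10, "dname": "Sales"}])
```

Because the mapping is plain data rather than code, the same engine can serve any table once given a different mapping document, which is the central design choice of R2RML-style languages.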
R3M: In ONTOACCESS [17], the authors implemented R3M [18] as the platform's mapping language. R3M is a mapping language featuring bidirectional data transfer between RDB and RDF. R3M can be applied to map tables to classes and attributes to properties, with support for integrity constraints.
3.2 Bi-Directional Relational-to-RDF Translation
The proposals in the section above are all capable of transferring data in only one direction, offering a read-only view of relational databases. Below are some bidirectional proposals for transferring data from RDB to RDF and vice versa. In [17], the authors proposed the mapping language R3M, an extension of D2RQ with support for the SPARQL/Update language for data manipulation on ONTOACCESS. D2RQ++ [18] also allows bidirectional transfer of data between RDB and RDF using D2RQ's mapping language. D2RQ++ also eliminates data-mismatch problems and works efficiently under the open-world assumption of the semantic web.
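The write direction that R3M and D2RQ++ add can be sketched as reversing the mapping: an incoming RDF triple is translated back into an SQL statement. This is an assumed illustration, not the actual R3M or D2RQ++ implementation; the URI layout and the `id` column are invented, and the translation only works because this hypothetical mapping ties each predicate unambiguously to one column.

```python
# Sketch of write support: translate one changed RDF triple back into a
# parameterized SQL UPDATE. Assumes (for this example only) that subject
# URIs end in /Table/pk and predicates in Table#column.
def triple_to_sql(subject, predicate, obj):
    table, pk_value = subject.rsplit("/", 2)[-2:]   # .../Employee/2 -> Employee, 2
    column = predicate.rsplit("#", 1)[-1]           # ...#name -> name
    return (f"UPDATE {table} SET {column} = ? WHERE id = ?",
            (obj, int(pk_value)))

stmt, params = triple_to_sql("http://example.org/db/Employee/2",
                             "http://example.org/db/Employee#name",
                             "Carol")
```

When the mapping is ambiguous, e.g. a predicate derived from a joined view, no unique SQL statement exists, which is exactly the view-update problem discussed earlier.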
3.3 Other Research Projects
The need for an efficient platform for RDB to RDF was recognized with the launch of the RDB2RDF Incubator Group (XG) [19] for efficient mapping of data from RDB to RDF on the semantic web. In 2009, R2RML [20] provided a good basis for the RDB2RDF WG, which was taken up for further extension by many researchers. The need for updating relational data is still under investigation: in [21], implementing write operations on top of R2RML is shown to be infeasible. Other push-back services for web applications are used through Web 2.0 APIs.
4. COMPARISON FRAMEWORK
This section presents a framework comparing the different approaches with respect to their support for various features. In total, the 15 characteristics listed below are studied across the RDB-to-RDF mapping languages:
Data types (C1): The SQL data types to be mapped to the XML Schema data types used in RDF.
Logical Table to Class (C2): A logical table is mapped to a class in RDF.
Integrity Constraints (C3): Integrity constraints differentiate between primary and foreign keys, together with not-null, unique and check constraints.
Transformation Functions (C4): Transformation functions are used to convert values from RDB to RDF.
User-defined Instance URIs (C5): User-defined instance URIs are generated during the mapping or conversion of data from RDB to RDF.
Literal to URI (C6): Literal values that contain URIs can be converted into proper URIs.
Vocabulary Reuse (C7): Existing RDF vocabularies can be reused.
Named Graphs (C8): Specific parts of the RDB can be mapped to RDF named graphs.
Blank Nodes (C9): Blank nodes are used to store unprocessed information.
Static Metadata (C10): Metadata is used to describe RDF components that have no equivalent in the RDB.
One Table to n Classes (C11): A table can be mapped multiple times, with subsets of its attributes and records, to numerous classes.
Project Attributes (C12): Irrelevant or sensitive attributes, such as passwords, can be projected out of the mapping.
Select Conditions (C13): Select conditions apply constraints to exclude invalid or outdated data from the RDF.
M:N Relationships (C14): M:N relationships, i.e. link or join tables, are mapped to direct relationships in RDF.
Write Support (C15): Write support should exist for an efficient bidirectional RDB-to-RDF mapping.
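Characteristic C14 can be made concrete with a small sketch: a link table holding an M:N relationship is not exposed as a class of its own but collapsed into direct resource-to-resource triples. The WorksOn schema and all URIs below are invented for the example.

```python
# C14 sketch: rows of a hypothetical link table WorksOn(emp_id, proj_id)
# become direct employee -> project triples instead of a WorksOn class.
def map_link_table(rows):
    return [(f"http://example.org/db/Employee/{r['emp_id']}",
             "http://example.org/vocab#worksOn",
             f"http://example.org/db/Project/{r['proj_id']}")
            for r in rows]

links = map_link_table([{"emp_id": 1, "proj_id": 7},
                        {"emp_id": 1, "proj_id": 9}])
```

A direct mapping without C14 support would instead mint one resource per WorksOn row, which clutters the graph with artifacts of the relational encoding.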
The following 10 mapping languages are compared on the fifteen characteristics given above:
Direct Mapping: Of the 15 characteristics, direct mapping fully supports only one, Write Support (C15), and partially supports Logical Table to Class (C2). The remaining 13 characteristics are not supported by the direct mapping technique.
eD2R: Of the 15 characteristics, eD2R fully supports 11, namely Data Types (C1), Logical Table to Class (C2), Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), Blank Nodes (C9), One Table to n Classes (C11), Project Attributes (C12), Select Conditions (C13) and M:N Relationships (C14); Integrity Constraints (C3) is partially supported; and 3 characteristics, Named Graphs (C8), Static Metadata (C10) and Write Support (C15), are not supported by the eD2R mapping technique.
R2O: Of the 15 characteristics, R2O fully supports 10, namely Data Types (C1), Logical Table to Class (C2), Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), Named Graphs (C8), Blank Nodes (C9), Project Attributes (C12) and Select Conditions (C13); 2 characteristics, Integrity Constraints (C3) and One Table to n Classes (C11), are partially supported; and 3 characteristics, Static Metadata (C10), M:N Relationships (C14) and Write Support (C15), are not supported by the R2O mapping technique.
Relational.OWL: Of the 15 characteristics, Relational.OWL fully supports only 4, namely Data Types (C1), Blank Nodes (C9), Project Attributes (C12) and Write Support (C15); 2 characteristics, Logical Table to Class (C2) and Integrity Constraints (C3), are partially supported; and 9 characteristics, Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), Named Graphs (C8), Static Metadata (C10), One Table to n Classes (C11), Select Conditions (C13) and M:N Relationships (C14), are not supported by the Relational.OWL mapping technique.
Virtuoso RDF Views: Of the 15 characteristics, Virtuoso RDF Views fully supports 12, namely Data Types (C1), Logical Table to Class (C2), Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), Named Graphs (C8), Blank Nodes (C9), One Table to n Classes (C11), Project Attributes (C12), Select Conditions (C13) and M:N Relationships (C14); Integrity Constraints (C3) is partially supported; and 2 characteristics, Static Metadata (C10) and Write Support (C15), are not supported by the Virtuoso RDF Views mapping technique.
D2RQ: Of the 15 characteristics, D2RQ fully supports 12, namely Data Types (C1), Logical Table to Class (C2), Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), Blank Nodes (C9), Static Metadata (C10), One Table to n Classes (C11), Project Attributes (C12), Select Conditions (C13) and M:N Relationships (C14); Integrity Constraints (C3) is partially supported; and 2 characteristics, Named Graphs (C8) and Write Support (C15), are not supported by the D2RQ mapping technique.
Triplify: Of the 15 characteristics, Triplify fully supports 10, namely Data Types (C1), Logical Table to Class (C2), Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), One Table to n Classes (C11), Project Attributes (C12), Select Conditions (C13) and M:N Relationships (C14); Integrity Constraints (C3) is partially supported; and 4 characteristics, Named Graphs (C8), Blank Nodes (C9), Static Metadata (C10) and Write Support (C15), are not supported by the Triplify mapping technique.
R2RML: Of the 15 characteristics, R2RML fully supports 13, namely Data Types (C1), Logical Table to Class (C2), Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), Named Graphs (C8), Blank Nodes (C9), Static Metadata (C10), One Table to n Classes (C11), Project Attributes (C12), Select Conditions (C13) and M:N Relationships (C14); Integrity Constraints (C3) is partially supported; and only Write Support (C15) is not supported by the R2RML mapping technique.
R2D: Of the 15 characteristics, R2D fully supports 12, namely Data Types (C1), Logical Table to Class (C2), Integrity Constraints (C3), Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), Blank Nodes (C9), One Table to n Classes (C11), Project Attributes (C12), Select Conditions (C13) and M:N Relationships (C14); Write Support (C15) is partially supported; and 2 characteristics, Named Graphs (C8) and Static Metadata (C10), are not supported by the R2D mapping technique.
ONTOACCESS (R3M): Of the 15 characteristics, R3M fully supports 9, namely Data Types (C1), Integrity Constraints (C3), Transformation Functions (C4), User-defined Instance URIs (C5), Literal to URI (C6), Vocabulary Reuse (C7), One Table to n Classes (C11), Project Attributes (C12) and M:N Relationships (C14); 3 characteristics, Logical Table to Class (C2), Select Conditions (C13) and Write Support (C15), are partially supported; and 3 characteristics, Named Graphs (C8), Blank Nodes (C9) and Static Metadata (C10), are not supported by the R3M mapping technique.
5. CONCLUSIONS
This paper presents a detailed review of the characteristics of, and comparisons between, various RDB-to-RDF mapping languages. The categorization of the mapping languages is shown with the support of recent proposals in the domain. The advantages and limitations of the mapping languages used for RDB to RDF are briefly outlined for the use of the research community in this field. The complete set of features or characteristics used for describing and proposing new mapping languages is presented. The limitations identified in existing RDB-to-RDF mapping languages for the semantic web can lead researchers to more efficient proposals in future work.
REFERENCES
[1]. K. C.-C. Chang, B. He, C. Li, M. Patel, and Z. Zhang (2004), Structured Databases on the Web: Observations and Implications. SIGMOD Record.
[2]. A. Langegger, W. Wöss, and M. Blöchl (2008), A Semantic Web Middleware for Virtual Data Integration on the Web. In Proceedings of the 5th European Semantic Web Conference.
[3]. L. Ma, X. Sun, F. Cao, C. Wang, and X. Wang (2009), Semantic Enhancement for Enterprise Data Management. In Proceedings of the 8th International Semantic Web Conference.
[4]. F. Manola and E. Miller (February 2004), RDF Primer. W3C Recommendation. http://www.w3.org/TR/2004/REC-rdf-primer-20040210/.
[5]. C. Ogbuji (May 2011), SPARQL 1.1 Graph Store HTTP Protocol. http://www.w3.org/TR/2011/WD-sparql11-http-rdf-update-20110512/.
[6]. C. Patel, S. Khan, and K. Gomadam (2009), TrialX: Using Semantic Technologies to Match Patients to Relevant Clinical Trials Based on Their Personal Health Records. In Proceedings of the 8th International Semantic Web Conference.
[7]. H. Halpin and I. Herman, RDB2RDF Working Group Charter. http://www.w3.org/2009/08/rdb2rdf-charter. Last visited July 2011.
[8]. S. S. Sahoo, W. Halb, S. Hellmann, K. Idehen, T. T. Jr, S. Auer, J. Sequeda, and A. Ezzat (2009), A Survey of Current Approaches for Mapping of Relational Databases to RDF. http://www.w3.org/2005/Incubator/rdb2rdf/RDB2RDF_SurveyReport.pdf. Last visited July 2011.
[9]. G. Bumans (2010), Mapping between Relational Databases and OWL Ontologies: an Example. Computer Science and Information Technologies, Scientific Papers, University of Latvia, Vol. 756, pp. 99-117.
[10]. T. Berners-Lee (2009), Relational Databases on the Semantic Web. http://www.w3.org/DesignIssues/RDB-RDF.html. Last visited July 2011.
[11]. F. Bancilhon and N. Spyratos (1981), Update Semantics of Relational Views. ACM Transactions on Database Systems.
[12]. W. Hu and Y. Qu (2007), Discovering Simple Mappings Between Relational Database Schemas and Ontologies. In Proceedings of the 6th International and 2nd Asian Semantic Web Conference.
[13]. J. Barrasa, O. Corcho, and A. Gómez-Pérez (2003), Fund Finder: A Case Study of Database-to-Ontology Mapping. In Proceedings of the Semantic Integration Workshop.
[14]. SquirrelRDF. http://jena.sourceforge.net/SquirrelRDF/. Last visited July 2011.
[15]. J. Barrasa, O. Corcho, and A. Gómez-Pérez (2004), R2O, an Extensible and Semantically Based Database-to-Ontology Mapping Language. In Proceedings of the 2nd Workshop on Semantic Web and Databases.
[16]. C. Bizer (2003), D2R MAP - A Database to RDF Mapping Language. In Proceedings of the 12th International World Wide Web Conference.
[17]. D. Beckett and J. Grant (January 2003), SWAD-Europe Deliverable 10.2: Mapping Semantic Web Data with RDBMSes. http://www.w3.org/2001/sw/Europe/reports/scalable_rdbms_mapping_report/. Last visited July 2011.
[18]. S. Auer, S. Dietzold, J. Lehmann, S. Hellmann, and D. Aumueller (2009), Triplify - Light-Weight Linked Data Publication from Relational Databases. In Proceedings of the 18th International World Wide Web Conference.
[19]. C. P. de Laborda and S. Conrad (2005), Relational.OWL - A Data and Schema Representation Format Based on OWL. In Proceedings of the 2nd Asia-Pacific Conference on Conceptual Modelling.
[20]. O. Erling and I. Mikhailov (2007), RDF Support in the Virtuoso DBMS. In Proceedings of the SABRE Conference on Social Semantic Web.
[21]. OpenLink Software, Mapping Relational Data to RDF with Virtuoso's RDF Views. http://virtuoso.openlinksw.com/whitepapers/relational%20rdf%20views%20mapping.html. Last visited July 2011.
AUTHORS
1. V. Sitharamulu obtained his Master of Technology in Computer Science and Engineering from Rajasthan Vidyapeeth University, Udaipur. He is currently working as an Associate Professor in the Department of Computer Science & Engineering, Swarna Bharathi Institute of Science & Technology, Khammam, Telangana. He has over 15 years of teaching experience. His areas of research interest are Data Mining and the Semantic Web.
2. Dr. B. Raveendra Babu obtained his Masters in Computer Science and Engineering from Anna University, Chennai. He received his Ph.D. in Applied Mathematics from S.V. University, Tirupathi. He is now working as Professor and HOD of the Department of Computer Science and Engineering, and Dean-Administration & Finance, at VNR VJIET, Hyderabad. He has over 30 years of teaching experience and about 60 international and national publications to his credit. His areas of research interest include Data Mining, Software Engineering, Image Processing, Pattern Analysis and Information Security.