XML (Extensible Markup Language) is emerging as a tool for representing and exchanging data over the
Internet. To store and query XML data, two approaches are available: native XML databases or
XML-enabled databases. In this paper we deal with XML-enabled databases, using relational databases
to store XML documents, and we focus on mapping an XML DTD into relations. The mapping takes three
steps: 1) simplify complex DTDs; 2) build a DTD graph from the simplified DTDs; 3) generate a
relational schema. We present an inlining algorithm for generating relational schemas from available
DTDs. The algorithm also handles recursion in an XML document.
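The three mapping steps can be sketched in code. This is a minimal illustration only: the toy DTD, the element names, and the simplification rule shown (inline single-valued leaf children, give set-valued children their own relation with a parent foreign key) are assumptions for the example, not the paper's exact inlining algorithm, and this sketch does not handle recursive DTDs.

```python
# Step 1: a simplified DTD, here as {element: list of (child, cardinality)}.
# Element names are hypothetical.
dtd = {
    "book":   [("title", "1"), ("author", "*")],
    "author": [("name", "1")],
}

# Step 2: the DTD graph is just this adjacency structure. Step 3 inlines
# single-valued leaf children into the parent relation and creates a
# separate relation (with a foreign key back to the parent) for the rest.
def generate_schema(dtd, root):
    tables = {root: ["id"]}
    for child, card in dtd.get(root, []):
        if card == "1" and child not in dtd:      # inline leaf child
            tables[root].append(child)
        else:                                     # own relation + parent key
            sub = generate_schema(dtd, child)
            sub[child].append(f"{root}_id")       # foreign key to parent
            tables.update(sub)
    return tables

print(generate_schema(dtd, "book"))
# → {'book': ['id', 'title'], 'author': ['id', 'name', 'book_id']}
```

The recursive descent mirrors a traversal of the DTD graph; a real implementation must additionally detect cycles to handle recursive element definitions.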
HOLISTIC EVALUATION OF XML QUERIES WITH STRUCTURAL PREFERENCES ON AN ANNOTATE... (ijseajournal)
With the emergence of XML as the de facto format for storing and exchanging information over the Internet, the search for ever more innovative and effective querying techniques is a major concern of the XML database community. Most studies on this problem are oriented towards the evaluation of so-called exact queries which, unfortunately (especially in the case of semi-structured documents), tend to yield either abundant results (for vague queries) or empty results (for very precise queries). Observing that users who issue queries are not necessarily interested in all possible solutions, but rather in those closest to their needs, an important field of research has opened on the evaluation of preference queries. In this paper, we propose an approach for evaluating such queries when the preferences concern the structure of the document. The solution revolves around a three-phase evaluation plan: rewriting, evaluation, and merging. The rewriting phase obtains, through a partitioning-transformation operation on the initial query, a hierarchical set of preference path queries, which are evaluated holistically in the second phase by an instrumented version of the TwigStack algorithm. The merge phase synthesizes the best results.
Database systems based on the object data model were known originally as object-oriented databases (OODBs). These are mainly used for complex objects.
NOSQL IMPLEMENTATION OF A CONCEPTUAL DATA MODEL: UML CLASS DIAGRAM TO A DOCUM... (ijdms)
Relational databases have shown their limits against the exponential increase in the volume of
manipulated and processed data. New NoSQL solutions have been developed to manage big data, and they
are an interesting way to build non-relational databases that can support large amounts of data. In this
work, we use a conceptual data model (CDM), based on UML class diagrams, to create the logical structure
of a NoSQL database, taking into account the relationships and constraints that determine how data can be
stored and accessed. The NoSQL logical data model obtained is based on the Document-Oriented Model
(DOM). To eliminate joins, a total and structured nesting is applied to the collections of the
document-oriented database.
Rules of passage from the CDM to the Logical Oriented-Document Model (LODM) are also proposed in
this paper to transform the different types of associations between classes. An application example of
this NoSQL database design method is realised for the case of an organization working in the
e-commerce business sector.
Overview of Object-Oriented Concepts Characteristics (Vikas Jagtap)
Object-oriented database systems are proposed as an alternative to relational systems and are aimed at application domains where complex objects play a central role.
The approach is heavily influenced by object-oriented programming languages and can be understood as an attempt to add DBMS functionality to a programming-language environment.
The Challenges of Describing Best Tagging Practices for JATS
Jeffrey Beck, Technical Information Specialist, National Center for Biotechnology Information (NCBI), U.S. National Library of Medicine, National Institutes of Health; Co-chair, NISO Journal Article Tag Suite (JATS) Standing Committee
Data is increasing day by day, and most of it is stored in a database after manual transformations and derivations. Scientists can use data-intensive applications to study and understand the behaviour of a complex system. In a data-intensive application, a scientific model turns raw data products into new data products, with data collected from various sources such as physical, geological, environmental, chemical and biological ones. Given a generated output, it is important to be able to trace an output data product back to its source values if that output seems to have an unexpected value. Data provenance helps scientists investigate the origin of an unexpected value. In this paper our aim is to find the reason behind an unexpected value in a database using query inversion, and we propose hypotheses for building inverse queries for complex aggregation functions and multi-relation (join, set operation) functions.
ONTOLOGY BASED DOCUMENT CLUSTERING USING MAPREDUCE (ijdms)
Nowadays, document clustering is considered a data-intensive task due to the dramatic, fast increase in the number of available documents. Nevertheless, the features that represent those documents are also too large. The most common method for representing documents is the vector space model, which represents document features as a bag of words and does not represent semantic relations between words. In this paper we introduce a distributed implementation of bisecting k-means using the MapReduce programming model. The aim behind our proposed implementation is to solve the problem of clustering intensive data documents. In addition, we propose integrating the WordNet ontology with bisecting k-means in order to utilize the semantic relations between words to enhance document clustering results. Our experimental results show that using lexical categories for nouns only enhances internal evaluation measures of document clustering and reduces document features from thousands to tens of features. Our experiments were conducted using Amazon ElasticMapReduce to deploy the bisecting k-means algorithm.
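The core control loop of bisecting k-means (repeatedly split the largest cluster with a 2-means pass until K clusters exist) can be sketched on a single machine. The 1-D toy data, the fixed seed, and the simple mean-update 2-means below are assumptions for illustration; the abstract's MapReduce distribution and WordNet-based feature reduction are out of scope here.

```python
import random

def two_means(points, iters=10):
    # Plain 2-means on scalars: pick two seeds, then alternate
    # assignment and centroid update a fixed number of times.
    random.seed(0)  # fixed seed so the sketch is reproducible
    c1, c2 = random.sample(points, 2)
    for _ in range(iters):
        a = [p for p in points if abs(p - c1) <= abs(p - c2)]
        b = [p for p in points if abs(p - c1) > abs(p - c2)]
        if a: c1 = sum(a) / len(a)
        if b: c2 = sum(b) / len(b)
    return a, b

def bisecting_kmeans(points, k):
    clusters = [points]
    while len(clusters) < k:
        big = max(clusters, key=len)   # always bisect the largest cluster
        clusters.remove(big)
        a, b = two_means(big)
        clusters += [a, b]
    return clusters

print(bisecting_kmeans([1.0, 1.1, 1.2, 9.0, 9.1, 20.0], 3))
```

In the distributed setting described by the abstract, the assignment step of each 2-means pass is the natural map phase and the centroid recomputation the reduce phase.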
An object database is a database management system in which information is represented in the form of objects, as used in object-oriented programming. Object databases differ from relational databases, which are table-oriented; object-relational databases are a hybrid of both approaches.
Optimisation towards Latent Dirichlet Allocation: Its Topic Number and Collap... (IJECEIAES)
Latent Dirichlet Allocation (LDA) is a probabilistic model for grouping hidden topics in documents by a predefined number of topics. If determined incorrectly, the number of topics K results in limited word correlation with topics: too large or too small a K causes inaccuracies when grouping topics during the formation of training models. This study aims to determine the optimal number of corpus topics in the LDA method using maximum likelihood and the Minimum Description Length (MDL) approach. The experiments use Indonesian news articles with document counts of 25, 50, 90, and 600; the corresponding word counts are 3898, 7760, 13005, and 4365. The results show that the maximum likelihood and MDL approaches yield the same optimal number of topics, which is influenced by the alpha and beta parameters. In addition, the number of documents does not affect computation time, but the number of words does. Computation times for the datasets are 2.9721, 6.49637, 13.2967, and 3.7152 seconds. The optimisation model yields LDA topics usable as a classification model. The experiment shows a highest average accuracy of 61% with alpha 0.1 and beta 0.001.
Image-Based Literal Node Matching for Linked Data Integration (IJwest)
This paper proposes a method for identifying and aggregating literal nodes that have the same meaning in Linked Open Data (LOD) in order to facilitate cross-domain search. LOD has a graph structure in which most nodes are represented by Uniform Resource Identifiers (URIs), so LOD sets can be connected and searched across different domains. However, 5% of the values are literals (strings without URIs), even in DBpedia, a de facto hub of LOD. In SPARQL Protocol and RDF Query Language (SPARQL) queries, we must rely on regular expressions to match and trace literal nodes. We therefore propose a novel method in which part of the LOD graph structure is regarded as a block image and matching is computed from image features of the LOD. In experiments, we created about 30,000 literal pairs from a Japanese music category of DBpedia Japanese and Freebase, and confirmed that the proposed method determines literal identity with an F-measure of 76.1-85.0%.
SERVICE SELECTION BASED ON DOMINANT ROLE OF THE CHOREOGRAPHY (ijwscjournal)
Web services play a dominant role on the Internet for e-business, and compositions of these services are used to meet business objectives. Web service choreography describes the externally observable behavior
of these compositions. Many compositions may be available for the same functionality and cannot be distinguished on the basis of functional properties alone; quality of service (QoS) can help the user select web services and analyze their compositions. Web service choreography dictates the implementation of a workflow consisting of several tasks, each implemented by web services. These services are hosted in large numbers by different service providers on different service clusters, so mapping services to tasks is a difficult issue in a runtime environment, interoperability between services is a great problem, and service selection is a major issue. In this paper we propose a bio-inspired selection algorithm based on dominant role, together with a discovery infrastructure. We also use client behavior to mitigate failures in service compositions.
Providing web services from smartphones is a current research topic, because smartphones are used in
almost every area: today's users rely on them for mobile banking, emailing, and searching for locations
and data. Smartphones have advanced in processing power and memory, with embedded cameras and sensors,
alongside parallel advancement in wireless networks and web technologies. These advancements enable a
mobile smartphone to act as a web service provider instead of only a web service consumer. The idea of
hosting web services on a mobile host is not new; researchers have worked on mobile web service
provisioning for the last decade. This paper extends our previous research in the cellular domain to
current-generation platform technologies and standards such as the Android OS and REST. It addresses
mobile host scalability and analyses experimental results on how many concurrent users a mobile host
can serve.
SOME INTEROPERABILITY ISSUES IN THE DESIGNING OF WEB SERVICES: CASE STUDY ON... (ijwscjournal)
In today's environment, most commercial web-based projects developed in industry, as well as numerous
funded projects and studies undertaken as research initiatives in academia, suffer from major technical
issues concerning how to design, develop and deploy Web Services that can run in a variety of
heterogeneous environments. In this paper we address the issues of interoperability between Web
Services and the metrics which can be used to measure interoperability, and we simulate an online
shopping application by developing credit-card verification software using Luhn's Mod 10 algorithm,
with a Java client written in NetBeans and the BankWebService in C# .NET.
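Luhn's Mod 10 check, the algorithm named in the abstract, fits in a few lines (shown here in Python rather than the paper's Java/C# stack; the card numbers below are standard test numbers, not real data):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn mod-10 check."""
    digits = [int(d) for d in number]
    total = 0
    # Starting from the rightmost digit, double every second digit and
    # subtract 9 whenever the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111111111111111"))  # → True  (well-known test number)
print(luhn_valid("4111111111111112"))  # → False (last digit altered)
```

The check only catches transcription errors (single-digit typos and most adjacent transpositions); it is not a security measure, which is why the described system pairs it with a bank-side verification service.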
SPATIAL ANALYSIS ABOUT USERS COLLABORATION ON GEO-SOCIAL NETWORKS IN A BRAZIL... (ijwscjournal)
Geo-Social Networks (GSNs) are collaborative systems that have geolocated information as their main component. The geolocation resource integrates the virtual and real worlds, allowing both scenarios to be understood at the same time. Based on that, this work defines a process for spatial analysis of information shared on a GSN. It proposes the use of six spatial features as feedback on
collaborative behaviour in a city. The spatial analysis aims to understand whether users' collaboration changes among city census sectors; understanding how users engage with GSNs in an area helps reveal collaborative patterns per urban region. As a result, this work detected spatial patterns among users of the
GSN Foursquare in a Brazilian city. These patterns indicate that users' collaboration is influenced by extrinsic and intrinsic features of the GSN, and that understanding its users is a complex task.
A Literature Review on Trust Management in Web Services Access Control (ijwscjournal)
A Web service is a reusable component with a set of related functionalities that service requesters can
programmatically access from the service provider and manipulate through the Web. One of the main
security issues is securing web services from malicious requesters. Since trust plays an important role
in many kinds of human communication, allowing people to act under uncertainty and with the risk of
negative consequences, many researchers have proposed trust-based web services access control models to
prevent malicious requesters. In this literature review, various existing trust-based web services
access control models are studied, and we investigate how the concept of a trust level is used in the
access control policy of a service provider to allow a service requester to access web services.
Semantic web approach towards interoperability and privacy issues in social n... (ijwscjournal)
The Social Web is the set of social relations that link people through the World Wide Web, encompassing
how websites and software are designed and developed to support social relations. The new paradigms,
tools and web services introduced by the Social Web are widely accepted by Internet users. The main
drawback of these tools is that they act as independent data silos, so interoperability among
applications is a complex issue. This paper focuses on this issue and on how best we can use semantic
web technologies to achieve interoperability among applications.
ASPECTUAL PATTERNS FOR WEB SERVICES ADAPTATION (ijwscjournal)
The security policies of an application can change at runtime for reasons such as changes in user
preferences, performance concerns, or the negotiation of security levels between the interacting
parties. If these security policies are embedded in the services, modifying them requires modifying the
services, stopping them, and deploying a new version. The aspect-oriented paradigm provides the
possibility of defining separate components named aspects. In this paper, in order to fulfill security
requirements, we classify the required changes to services and describe, for each classification, how
aspects are injected. Finally, we present a pattern for each aspect of each classification.
Exploiting Entity Linking in Queries For Entity RetrievalFaegheh Hasibi
Slides for the ICTIR 2016 paper: "Exploiting Entity Linking in Queries For Entity Retrieval"
The premise of entity retrieval is to better answer search queries by returning specific entities instead of documents. Many queries mention particular entities; recognizing and linking them to the corresponding entry in a knowledge base is known as the task of entity
linking in queries. In this paper we make a first attempt at bringing together these two, i.e., leveraging entity annotations of queries in the entity retrieval model. We introduce a new probabilistic component and show how it can be applied on top of any term based entity retrieval model that can be emulated in the Markov Random Field framework, including language models, sequential dependence models, as well as their fielded variations. Using a standard entity retrieval test collection, we show that our extension brings consistent improvements over all baseline methods, including the current state-of-the-art. We further show that our extension is robust against parameter settings.
Semantic Knowledge Acquisition of Information for Syntactic web (danny, ijwest)
Information retrieval is one of the most commonly used web services. Information is knowledge: in
earlier days one had to find a resource person or a library to acquire knowledge, but today, just by
typing a keyword into a search engine, all kinds of resources are available to us. Due to this
advancement, trillions of pieces of information are available on the net. So, in this era, we need
search engines that search with us by understanding the semantics of the query given by the user. Such
a design is possible only if we provide semantics for ordinary HTML web pages. In this paper we explain
the concept of converting an HTML page into an RDFS/OWL page. This technique is combined with natural
language technology, as we have to provide the hyponyms and meronyms of the given HTML pages. Through
this automatic conversion, the concept of intelligent information retrieval is framed.
FEATURES MATCHING USING NATURAL LANGUAGE PROCESSING (IJCI JOURNAL)
Feature matching is a basic step in matching different datasets. This article proposes a new hybrid model in which a pretrained Natural Language Processing (NLP) model, BERT, is used in parallel with a statistical model based on Jaccard similarity to measure the similarity between the feature lists of two different datasets. This reduces the time required to search for correlations or to manually match each feature from one dataset to another.
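The statistical half of the described hybrid can be illustrated with set-based Jaccard similarity over two feature lists. The feature names below and the whole-name (untokenized) comparison are assumptions for the example; the BERT side of the hybrid is not shown.

```python
def jaccard(a, b):
    """Jaccard similarity |A ∩ B| / |A ∪ B| between two collections."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

# Hypothetical feature lists from two datasets to be matched.
features_a = ["customer_id", "name", "email", "signup_date"]
features_b = ["cust_id", "name", "email", "created_at"]

print(jaccard(features_a, features_b))  # → 0.333... (2 shared of 6 total)
```

Note that exact set matching scores "customer_id" vs "cust_id" as zero overlap; that gap is precisely what the semantic (BERT) component of the hybrid is meant to cover.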
Catalog-based Conversion from Relational Database into XML Schema (XSD) (CSCJournals)
In this age of information revolution, with data exchanged and transported among government, commercial, service and industrial sectors, adopting a new database model to support this trend has become very important, because traditional database models cannot support it. eXtensible Markup Language (XML) is a standard model for data interchange over the Internet and mobile device networks; it has become a common language for exchanging and sharing the data of traditional models in easy and inexpensive ways. In this research, we propose a new technique to convert relational database contents and schema into an XML schema (XSD, XML Schema Definition). The main idea of the technique is to extract the relational database catalog using the Structured Query Language (SQL). The conversion takes three steps. First, extract the relation instances (actual content) and schema catalog using SQL queries; these provide the information required to build the XML document and its schema. Second, transform the actual content into an XML document tree, converting the table columns of the relations into the elements of the XML document. Third, transform the schema catalog into an XML schema describing the structure of the XML document: we transform the datatypes of the elements and the various data constraints such as data length, not null, check and default, and we define primary and foreign keys and the referential integrity between the tables. Overall, the results of the technique are very promising; the technique is clear and does not require complex procedures that could adversely affect the accuracy of the results. We performed many experiments and report their elapsed CPU times.
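The third step (catalog rows to XSD declarations) can be sketched as a small mapping. The table, column names, type map, and output shape below are invented for illustration, not the paper's actual rules; a real catalog would come from SQL queries against the DBMS's system views (e.g. information_schema.columns).

```python
# Hypothetical SQL-type → XSD-type mapping; real systems need many more entries.
TYPE_MAP = {"int": "xs:integer", "varchar": "xs:string", "date": "xs:date"}

def column_to_xsd(name, sql_type, nullable):
    """Render one catalog row as an XSD element declaration.

    A NOT NULL column becomes a required element; a nullable column
    gets minOccurs="0" so it may be absent from the document.
    """
    min_occurs = ' minOccurs="0"' if nullable else ""
    return f'<xs:element name="{name}" type="{TYPE_MAP[sql_type]}"{min_occurs}/>'

# Toy catalog rows: (column name, SQL type, is nullable).
catalog = [("emp_id", "int", False), ("emp_name", "varchar", True)]
for col in catalog:
    print(column_to_xsd(*col))
# → <xs:element name="emp_id" type="xs:integer"/>
# → <xs:element name="emp_name" type="xs:string" minOccurs="0"/>
```

Constraints such as length, check, and default would map to XSD facets (xs:maxLength, xs:restriction, the default attribute) in the same per-row fashion.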
EFFICIENT SCHEMA BASED KEYWORD SEARCH IN RELATIONAL DATABASES (IJCSEIT Journal)
Keyword search in relational databases allows users to search for information without knowing the
database schema or using the Structured Query Language (SQL). In this paper, we address the problem of
generating and evaluating candidate networks. In candidate network generation, overhead is caused by
the growing number of joining tuples as the size of the minimal candidate network increases. To reduce
this overhead, we propose candidate network generation algorithms that generate a minimum number of
joining tuples according to the maximum number of tuple sets. We first generate a set of joining
tuples, the candidate networks (CNs). It is difficult to obtain an optimal query processing plan while
generating a large number of joins, so we also develop a dynamic CN evaluation algorithm (D_CNEval)
that generates connected tuple trees (CTTs) while reducing the size of intermediate join results. The
performance of the proposed algorithms is evaluated on the IMDB and DBLP datasets and compared with
existing algorithms.
Duplicate Detection in Hierarchical Data Using XPath (iosrjce)
Many techniques exist for identifying duplicates in relational data, but only a few solutions
focus on identifying duplicates in data with a complex hierarchical structure, such as XML. In this
paper, we present a new technique for identifying XML duplicates, called XML duplication using XPath.
The technique uses a Bayesian network to infer the probability that two XML elements are duplicates,
based on the information within the elements and other information organized in the XML. In addition,
to increase efficiency, a new pruning strategy was created; it helps gain maximum benefit over
non-pruning algorithms. This technique can be used to identify duplicates more efficiently and remove
them, so that no duplicate records remain. Through many experiments, our algorithm achieves high
accuracy and retrieval counts on several XML datasets. XML duplication using XPath is able to
outperform other duplicate-identification techniques in both efficiency and effectiveness.
Very few research works have been done on XML security over relational databases despite that XML became the de facto standard for the data representation and exchange on the internet and a lot of XML documents are stored in RDBMS. In [14], the author proposed an access control model for schema-based storage of XML documents in relational storage and translating XML access control rules to relational access control rules. However, the proposed algorithms had performance drawbacks. In this paper, we will use the same access control model of [14] and try to overcome the drawbacks of [14] by proposing an efficient technique to store the XML access control rules in a relational storage of XML DTD. The mapping of the XML DTD to relational schema is proposed in [7]. We also propose an algorithm to translate XPath queries to SQL queries based on the mapping algorithm in [7].
Very few research works have been done on XML security over relational databases despite that XML
became the de facto standard for the data representation and exchange on the internet and a lot of XML
documents are stored in RDBMS. In [14], the author proposed an access control model for schema-based
storage of XML documents in relational storage and translating XML access control rules to relational
access control rules. However, the proposed algorithms had performance drawbacks. In this paper, we will
use the same access control model of [14] and try to overcome the drawbacks of [14] by proposing an
efficient technique to store the XML access control rules in a relational storage of XML DTD. The mapping
of the XML DTD to relational schema is proposed in [7]. We also propose an algorithm to translate XPath
queries to SQL queries based on the mapping algorithm in [7].
International Journal on Web Service Computing (IJWSC), Vol.4, No.2, June 2013
DOI : 10.5121/ijwsc.2013.4202
SCHEMA BASED STORAGE OF XML DOCUMENTS IN
RELATIONAL DATABASES
Dr. Pushpa Suri¹ and Divyesh Sharma²

¹Department of Computer Science and Applications, Kurukshetra University, Kurukshetra
pushpa.suri@yahoo.co.in

²Department of Computer Science and Applications, Kurukshetra University, Kurukshetra
divyeshsharma89@gmail.com
ABSTRACT
XML (Extensible Markup Language) is emerging as a tool for representing and exchanging data over the
internet. To store and query XML data, two approaches are available: native XML databases and
XML-enabled databases. In this paper we deal with XML-enabled databases, using relational databases to
store XML documents, and focus on mapping an XML DTD into relations. The mapping takes three steps:
1) simplify complex DTDs; 2) build a DTD graph from the simplified DTDs; 3) generate the relational
schema. We present an inlining algorithm for generating relational schemas from available DTDs. The
algorithm also handles recursion in an XML document.
KEYWORDS
XML, Relational databases, Schema Based Storage
1. INTRODUCTION
XML (Extensible Markup Language) documents need efficient storage, and relational databases provide a
mature way to store and query them. Research on storing XML documents in relational databases falls into
two categories: schema-oblivious storage and schema-based storage. Schema-oblivious storage does not use
any DTD (document type definition) or XML Schema to map an XML document into
relations [4][5][6][7][8][9][10]. Schema-based storage requires a DTD attached to the XML document
itself. In this paper we discuss the mapping of an XML DTD into relations, including the handling of
recursion.
2. RELATED WORK
Shanmugasundaram [1] provides three basic inlining algorithms for mapping an XML DTD into
relations: Basic, Shared and Hybrid. In Basic inlining, a relation is created for each element in the XML
document. This leads to an excessive number of relations and, consequently, to excessive joins when
queries are processed against them. In Shared inlining, elements having a single occurrence are inlined
into their parent elements, which reduces the number of relations compared with Basic. In Hybrid, shared
elements, i.e. elements having a single occurrence but shared by many parents, are also inlined into their
parent elements. Mustafa Atay [2] extends these algorithms with the concept of endID, the maximum id of
the descendants of an element. An endID is stored for every element that is to be inlined, which is helpful
when the document is reconstructed. Lu [3] provides a separate edge table for storing elements having
multiple occurrences.
3. PROPOSED WORK
We discuss the concept of inlining in detail. Inlining stores the information of a child element with its
parent element in the same relation. Elements with indegree = 1, i.e. elements having one parent, can be
inlined into the relation of their parent element. The indegree is the number of incoming edges to a node,
where a node represents an element of the XML document.
The first step in generating a relational schema from an available DTD is to simplify the complexities of
the DTD, i.e. to resolve its +, *, | and ? operators.
+ means one or many: the element can occur one or more times in the XML document.
* means zero or many: the element can occur zero or more times in the XML document.
| means a choice between alternatives.
? means optional: the element may or may not occur.
Let e, e1, e2 denote elements. Resolve the DTD by these rules, where … stands for any intervening
content (rules 7-10 collapse repeated occurrences of the same element into a single starred occurrence):
1. e** → e*
2. (e+)+ → e*
3. e+ → e*
4. e1 e2 → (e1, e2)
5. (e, e1) (e, e2) → (e, e1, e2)
6. e? → e
7. e … e … → e*
8. e* … e → e*
9. e … e* → e*
10. e* … e* → e*
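As a rough illustration, a subset of these rules can be applied by repeated string rewriting on a content model. The sketch below is our own (the function name and string-based representation do not come from the paper) and covers only rules 1-3 and 6:

```python
import re

def simplify_content_model(model: str) -> str:
    """Apply a subset of the DTD simplification rules to a content model.

    Covers rules 1-3 and 6 (e** -> e*, (e+)+ -> e*, e+ -> e*, e? -> e);
    a full implementation would also flatten groups and collapse repeated
    occurrences of the same element (rules 4-5 and 7-10).
    """
    prev = None
    while prev != model:               # repeat until a fixed point is reached
        prev = model
        model = model.replace("**", "*")                        # rule 1
        model = re.sub(r"\(([^()+*?]+)\+\)\+", r"\1*", model)   # rule 2
        model = model.replace("+", "*")                         # rule 3
        model = model.replace("?", "")                          # rule 6
    return model

print(simplify_content_model("dept*, employee+, project+, cname"))
# dept*, employee*, project*, cname
```

Run on the content model of the company element from the example in Section 4, this yields exactly the simplified form shown there.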
The next step is to build the graph of the simplified DTD. A DTD graph is a graphical representation of
the XML document type definition: its nodes are the elements and attributes of the DTD, and its edges
represent their parent-child relationships. Cycles represent recursion between elements of the document.
Two kinds of edge appear in this graph:
Single edge: →
Multiple edge: →*
Elements having a single occurrence are denoted by a single edge, and elements having multiple
occurrences by a *edge. After building the simplified DTD graph, the last step is to generate the
relational schema from it. The algorithm for generating the relational schema is:
Algorithm 1: XML DTD Mapping
Input: DTD graph
Output: Relational schema
1. For each node in the DTD graph having indegree = 0, create a relation.
1.1. In the relation, introduce an attribute id which serves as the primary key.
1.2. Introduce an attribute Rootelem to specify whether the element is the root or not.
2. For nodes having indegree = 1, inline them with their parent elements.
3. For nodes having indegree > 1 with a single occurrence, inline them with their parent elements.
4. For nodes having multiple occurrences, i.e. a *edge in the DTD graph, with indegree = 1,
create a relation.
4.1. In the relation, introduce the attribute id.
4.2. Introduce an attribute Parentid to specify the id of the parent element.
5. For nodes having multiple occurrences with indegree > 1, create a relation.
5.1. In the relation, introduce an attribute id for identification of the node.
5.2. Introduce an attribute Parenttype to specify the parent element.
5.3. Introduce an attribute Parentid to specify the id of the parent.
5.4. Introduce an attribute Nodetype to specify the name of the node.
6. If nodes form cycles in the DTD graph, then:
6.1. If the cycle contains only single edges, create a relation for one of the elements in the cycle.
6.2. If the cycle contains one *edge and one single edge, create relations for both elements, with
an id reference to the single-edge element stored in the *edge element.
6.3. If the cycle contains two *edges, create relations for both elements.
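Steps 1-5 above can be sketched in a few lines of code. The graph encoding (a dict mapping each parent to (child, multiple) pairs) and all identifier names are our own illustration, and cycle handling (step 6) is left out:

```python
from collections import defaultdict

def generate_schema(edges, root):
    """Sketch of steps 1-5 of Algorithm 1.

    `edges` maps a parent element to (child, multiple) pairs, where
    `multiple` is True for a *edge. Cycle handling (step 6) is omitted.
    """
    indegree = defaultdict(int)
    starred = {}                       # does any incoming edge carry a '*'?
    for parent, children in edges.items():
        for child, many in children:
            indegree[child] += 1
            starred[child] = starred.get(child, False) or many

    relations = {root: ["id", "rootelem"]}                    # step 1
    for node in indegree:
        if not starred[node]:
            continue                   # steps 2-3: single occurrence, inlined
        if indegree[node] == 1:        # step 4: *edge with one parent
            relations[node] = ["id", "parentid"]
        else:                          # step 5: *edge with several parents
            relations[node] = ["id", "parenttype", "parentid", "nodetype"]
    return relations

# DTD graph of the example in Section 4 (leaf #PCDATA elements and cycles trimmed):
edges = {
    "company": [("dept", True), ("employee", True), ("project", True), ("cname", False)],
    "dept": [("dname", False), ("employee", True)],
    "employee": [("id", False), ("name", False), ("project", True)],
}
print(generate_schema(edges, "company"))
```

On this input, company gets a root relation, dept (one parent, *edge) gets id and parentid, and employee and project (several parents, *edges) additionally get parenttype and nodetype, matching the schema derived in Section 4.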
4. A COMPLETE EXAMPLE
<!DOCTYPE Document [
<!ELEMENT Document (company)>
<!ELEMENT company (dept*, employee+, project+, cname)>
<!ELEMENT dept (dname?, employee+, company)>
<!ELEMENT employee (id, name, address, designation, project+)>
<!ELEMENT project (ptitle, pno, location)>
<!ELEMENT name (firstname?, lastname?)>
<!ELEMENT ptitle (#PCDATA)>
<!ELEMENT dname (#PCDATA)>
<!ELEMENT pno (#PCDATA)>
<!ELEMENT location (#PCDATA)>
<!ELEMENT address (#PCDATA)>
<!ELEMENT designation (#PCDATA)>
<!ELEMENT cname (#PCDATA)>
<!ELEMENT firstname (#PCDATA)>
<!ELEMENT lastname (#PCDATA)>
]>
After applying the simplification rules given above, the DTD becomes:
<!ELEMENT company (dept*, employee*, project*, cname)>
<!ELEMENT dept (dname, employee*, company)>
<!ELEMENT employee (id, name, address, designation, project*)>
<!ELEMENT project (ptitle, pno, location)>
<!ELEMENT name (firstname, lastname)>
Figure 1: DTD graph
In this example, cycles occur at the root element, and two kinds of cycle exist. One cycle runs between
the employee and project elements: both have multiple occurrences and are recursive to each other. The
second cycle runs between company and dept: company is the root element of the document and dept has
multiple occurrences. Both cycles are handled according to the algorithm given above.
The corresponding relational schema becomes:
Company (id, rootelem: true, cname)
Dept (id, parentid, dname, company.id)
Employee (id, parenttype, parentid, nodetype, name.firstname, name.lastname, address, designation)
Project (id, parentid, parenttype, nodetype, pno, ptitle, location)
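This schema can be materialized directly, for example in SQLite. In the sketch below, compound attribute names are flattened (parentid for "parent id", companyid for "company.id"), and all SQL types are our assumptions, since a DTD carries no type information (#PCDATA content is simply stored as TEXT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Company  (id INTEGER PRIMARY KEY, rootelem INTEGER, cname TEXT);
CREATE TABLE Dept     (id INTEGER PRIMARY KEY, parentid INTEGER, dname TEXT,
                       companyid INTEGER REFERENCES Company(id));
CREATE TABLE Employee (id INTEGER PRIMARY KEY, parenttype TEXT, parentid INTEGER,
                       nodetype TEXT, firstname TEXT, lastname TEXT,
                       address TEXT, designation TEXT);
CREATE TABLE Project  (id INTEGER PRIMARY KEY, parentid INTEGER, parenttype TEXT,
                       nodetype TEXT, pno TEXT, ptitle TEXT, location TEXT);
""")

# A company element with one directly nested employee:
conn.execute("INSERT INTO Company VALUES (1, 1, 'Acme')")
conn.execute("INSERT INTO Employee VALUES "
             "(1, 'company', 1, 'employee', 'Ada', 'Lovelace', 'London', 'Engineer')")

# Reassemble the parent-child relationship through parenttype/parentid:
rows = conn.execute("""
    SELECT c.cname, e.firstname
    FROM Company c
    JOIN Employee e ON e.parenttype = 'company' AND e.parentid = c.id
""").fetchall()
print(rows)  # [('Acme', 'Ada')]
```

The parenttype/parentid pair lets one Employee table serve both company and dept parents, which is exactly why step 5 of the algorithm introduces those attributes for nodes with indegree > 1.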
5. CONCLUSIONS
In this paper we have given an inlining algorithm that generates the relational schema of a given XML
DTD. Although recursion in XML documents is not very common, we have discussed it and elaborated how
it is handled while generating the relations. Future research will extend this concept to query
processing of XML documents and the translation of XML queries into SQL queries.
REFERENCES
[1] J. Shanmugasundaram, K. Tufte, C. Zhang, G. He, D. DeWitt, J. Naughton, Relational Databases for
Querying XML Documents: Limitations and Opportunities, VLDB 1999, pp. 302-314.
[2] M. Atay, A. Chebotko, D. Liu, S. Lu, F. Fotouhi, Efficient schema-based XML-to-relational data
mapping, Information Systems, Elsevier, 2005.
[3] S. Lu, Y. Sun, M. Atay, F. Fotouhi, A New Inlining Algorithm for Mapping XML DTDs to Relational
Schemas, In Proceedings of the First International Workshop on XML Schema and Data Management,
in conjunction with the 22nd ACM International Conference on Conceptual Modeling, Chicago, IL,
October 2003.
[4] M. Yoshikawa, T. Amagasa, T. Shimura, XRel: A Path-Based Approach to Storage and Retrieval of
XML Documents Using Relational Databases, ACM Transactions on Internet Technology, 1(1),
pp. 110-141, August 2001.
[5] J. Qin, S. Zhao, S. Yang, W. Dau, XPEV: A Storage Approach for Well-Formed XML Documents,
FSKD, LNAI 3613, 2005, pp. 360-369.
[6] D. Florescu, D. Kossmann, A Performance Evaluation of Alternative Mapping Schemes for Storing
XML Data in a Relational Database, Rapport de Recherche No. 3680, INRIA, Rocquencourt,
France, 1999.
[7] A. Salminen, F. Wm. Tompa, Requirements for XML Document Database Systems, First ACM
Symposium on Document Engineering, Atlanta, 2001, pp. 85-94.
[8] H. Zafari, K. Hasami, M. Ebrahim Shiri, Xlight, an Efficient Relational Schema to Store and Query
XML Data, In Proceedings of the IEEE International Conference on Data Storage and Data
Engineering, 2011, pp. 254-257.
[9] M. Ibrahim Fakharaldien, J. Mohamed Zain, N. Sulaiman, XRecursive: An Efficient Method to Store
and Query XML Documents, Australian Journal of Basic and Applied Sciences, 5(12), 2011,
pp. 2910-2916.
[10] M. Sharkawi, N. Tazi, LNV: Relational Database Storage Structure for XML Documents, The 3rd
ACS/IEEE International Conference on Computer Systems and Applications, 2005, pp. 49-56.