This document discusses semantic conflicts that can occur when integrating fuzzy relational databases and proposes a methodology for resolving these conflicts. It identifies five new types of conflicts specific to fuzzy databases: membership degree conflicts, inconsistent attribute values, missing attributes, missing fuzzy attribute values, and attribute domain conflicts. The methodology resolves these conflicts in a specific order to minimize the time needed for integration. It aims to resolve fuzzy database conflicts within the context of resolving other general integration conflicts.
Towards a Query Rewriting Algorithm Over Proteomics XML Resources (CSCJournals)
Querying and sharing Web proteomics data is not an easy task. Because several data sources can answer the same sub-goals of a global query, many candidate rewritings are possible. The user query is formulated using concepts and properties from a proteomics domain ontology, while semantic mappings describe the contents of the underlying sources. In this paper, we characterize the query rewriting problem by representing the semantic mappings as an associated hypergraph; generating the candidate rewritings can then be formulated as discovering the minimal transversals of that hypergraph. We exploit and adapt algorithms from hypergraph theory to find all candidate rewritings for a query answering problem. In future work, relevant criteria could help determine optimal, high-quality rewritings according to user needs and source performance.
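The reduction the abstract describes, from candidate-rewriting generation to minimal hypergraph transversals, can be illustrated with a small brute-force sketch (this is not the paper's algorithm, and the hypergraph below is a made-up example): each hyperedge is the set of sources able to answer one sub-goal, and a minimal transversal picks at least one source per sub-goal with no redundant choice.

```python
from itertools import combinations

def minimal_transversals(edges):
    """Enumerate all minimal hitting sets of a hypergraph given as a list of edge sets.

    Iterating by increasing size guarantees that any proper subset of a candidate
    that is itself a transversal has already been recorded, so the subset check
    below suffices for minimality.
    """
    vertices = sorted(set().union(*edges))
    transversals = []
    for size in range(1, len(vertices) + 1):
        for cand in combinations(vertices, size):
            s = set(cand)
            if all(s & e for e in edges) and not any(t <= s for t in transversals):
                transversals.append(s)
    return transversals

# Hyperedges: sources able to answer each of three query sub-goals (hypothetical).
edges = [{"S1", "S2"}, {"S2", "S3"}, {"S1", "S3"}]
print(minimal_transversals(edges))
```

Each returned set is one candidate rewriting: a minimal combination of sources that covers every sub-goal of the query.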
The Statement of Conjunctive and Disjunctive Queries in Object Oriented Datab... (Editor IJCATR)
The arrival of object-oriented concepts in databases has caused relational databases to be gradually replaced by object-oriented databases in various fields. At the same time, several methods have been proposed to handle the uncertain data of the real world. One such modeling approach couples object-oriented database modeling with fuzzy logic. Many queries that users pose are expressed in terms of linguistic variables, and because classical databases cannot support such variables, fuzzy approaches are considered instead. In this study we investigate database queries in both simple and complex forms; in the complex form we use conjunctive and disjunctive queries. We then use XML labels to express queries in fuzzy form; entering the XML world also lets the software communicate reliably with its other components. We further refine conjunctive and disjunctive queries over a fuzzy object-oriented database using the concepts of dependency measure and weight, where weights are assigned to the different phrases of a query according to user emphasis. Another aim of this research is mapping fuzzy queries to fuzzy-XML. The queries are expected to be simple to implement, with results much closer to users' needs and expectations. The results show that the proposed method expresses the possible conjunctive and disjunctive queries over the database in the form of fuzzy-XML.
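As a rough illustration of the weighted conjunctive/disjunctive evaluation the abstract describes (the paper's exact dependency-measure and weighting scheme is not given here, so the standard weighted min/max operators below are an assumption), the membership degrees of individual fuzzy phrases can be combined so that each phrase's weight reflects user emphasis:

```python
def weighted_conjunction(degrees, weights):
    """Weighted fuzzy AND: a phrase with weight w < 1 can only pull the overall
    degree down to 1 - w, so de-emphasized phrases matter less."""
    return min(max(mu, 1 - w) for mu, w in zip(degrees, weights))

def weighted_disjunction(degrees, weights):
    """Weighted fuzzy OR: a phrase with weight w contributes at most min(mu, w)."""
    return max(min(mu, w) for mu, w in zip(degrees, weights))

# Membership degrees of two fuzzy phrases, e.g. "salary is high" AND "age is
# young", for one object (hypothetical values).
mu = [0.8, 0.3]
w = [1.0, 0.5]   # the user emphasizes the first phrase
print(weighted_conjunction(mu, w))  # → 0.5
```

With full weights (all 1.0) these operators reduce to the ordinary fuzzy min/max, so the weighted query is a strict generalization of the unweighted one.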
TRANSFORMATION RULES FOR BUILDING OWL ONTOLOGIES FROM RELATIONAL DATABASES (csandit)
Relational Databases (RDB) are used as the backend database by most information systems. An RDB encapsulates the conceptual model and the metadata needed for ontology construction. Schema mapping is the technique used by all existing approaches for building ontologies from RDBs; however, most of those methods use poor transformation rules that prevent the advanced database mining needed to build rich ontologies. In this paper, we propose transformation rules for building OWL ontologies from RDBs that can transform all possible cases in an RDB into ontological constructs. The proposed rules are enriched by analyzing the stored data to detect disjointness and totality constraints in hierarchies, and by calculating the participation level of tables in n-ary relations. In addition, our technique is generic and can therefore be applied to any RDB. The proposed rules were evaluated on a normalized, open RDB; the obtained ontology is richer in terms of non-taxonomic relationships.
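A minimal sketch of the basic kind of transformation rule involved (table → owl:Class, plain column → datatype property, foreign-key column → object property); the schema and names below are illustrative, not the enriched rules proposed in the paper:

```python
def table_to_owl(table, columns, foreign_keys):
    """Emit Turtle triples for one table: an owl:Class, datatype properties for
    plain columns, and object properties for foreign-key columns.

    `columns` maps column name -> XSD type name; `foreign_keys` maps column
    name -> referenced table.
    """
    lines = [f":{table} a owl:Class ."]
    for col, xsd_type in columns.items():
        if col in foreign_keys:  # FK column -> object property to the target class
            target = foreign_keys[col]
            lines.append(f":{col} a owl:ObjectProperty ; "
                         f"rdfs:domain :{table} ; rdfs:range :{target} .")
        else:                    # plain column -> datatype property
            lines.append(f":{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain :{table} ; rdfs:range xsd:{xsd_type} .")
    return "\n".join(lines)

# Hypothetical schema: Employee(name VARCHAR, dept_id FK -> Department)
print(table_to_owl("Employee",
                   {"name": "string", "dept_id": "integer"},
                   {"dept_id": "Department"}))
```

Richer rules of the kind the paper proposes would additionally inspect the stored data, e.g. to emit owl:disjointWith axioms between sibling subclasses.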
This document discusses techniques for detecting duplicate records from multiple web databases. It begins with an abstract describing an unsupervised approach that uses classifiers like the weighted component similarity summing classifier and support vector machine along with a Gaussian mixture model to iteratively identify duplicate records. The document then provides details on related work, including probabilistic matching models, supervised and unsupervised learning techniques, distance-based techniques, rule-based approaches, and methods for improving efficiency like blocking and the sorted neighborhood approach.
Clustering the results of a search helps the user get an overview of the information returned. In this paper, we view the clustering task as cataloguing the search results; by catalogue we mean a structured label list that helps the user make sense of the labels and search results. Cluster labelling is crucial because meaningless or confusing labels may mislead users into checking the wrong clusters for their query and wasting time. Additionally, labels should accurately reflect the contents of the documents within the cluster. To label clusters effectively, a new cluster labelling method is introduced, with particular emphasis on producing comprehensible and accurate cluster labels in addition to discovering the document clusters. We also present a new metric for assessing the success of cluster labelling. We adopt a comparative evaluation strategy to derive the relative performance of the proposed method with respect to two prominent search result clustering methods, Suffix Tree Clustering and Lingo, performing the experiments on the publicly available datasets Ambient and ODP-239.
Expression of Query in XML object-oriented database (Editor IJCATR)
With the invention of object-oriented databases, the concept of behavior in databases was introduced; previously, relational databases only provided a logical model of the data and paid no attention to the operations applied to it. In this paper, a method is presented for querying object-oriented databases. The method performs well when the user expresses restrictions in combinational form (disjunctive and conjunctive) and assigns each restriction a weight based on its importance. The results obtained are then sorted by their degree of membership in the response set. Queries are subsequently expressed using XML labels. The aim is to simplify queries so that the objects they return closely match the user's needs and expectations.
With the rapid development of Geographic Information Systems (GISs) and their applications, more and more geographical databases have been developed by different vendors. However, data integration and access remain a major problem for GIS application development, as no interoperability exists among different spatial databases. In this paper we propose a unified approach to spatial data querying. The paper describes a framework for integrating information from repositories containing different vector data formats and repositories containing raster datasets. The presented approach converts the different vector data formats into a single unified format (File Geodatabase, "GDB"). In addition, we employ metadata to support a wide range of user queries for retrieving relevant geographic information from heterogeneous, distributed repositories, which improves both query processing and performance.
A Survey on One Class Clustering (Iaetsd)
This document presents a new method for performing one-to-many data linkage called the One Class Clustering Tree (OCCT). The OCCT builds a tree structure whose inner nodes represent features of the first dataset and whose leaves represent similar features of the second dataset. It uses splitting criteria and pruning methods to perform the data linkage more accurately than existing indexing techniques: the OCCT approach induces a decision tree using a splitting criterion, performs pre-pruning to determine which branches to trim, and then compares entities to match them between the two datasets and produce a final result.
EFFICIENT SCHEMA BASED KEYWORD SEARCH IN RELATIONAL DATABASES (IJCSEIT Journal)
Keyword search in relational databases allows users to search for information without knowing the database schema or using the Structured Query Language (SQL). In this paper, we address the problem of generating and evaluating candidate networks. In candidate network generation, overhead is caused by the growing number of joining tuples as the size of the minimal candidate network increases. To reduce this overhead, we propose candidate network generation algorithms that generate a minimum number of joining tuples subject to a maximum tuple-set size. We first generate a set of joining tuples, the candidate networks (CNs). Since it is difficult to obtain an optimal query processing plan when generating many joins, we also develop a dynamic CN evaluation algorithm (D_CNEval) that generates connected tuple trees (CTTs) while reducing the size of intermediate join results. The performance of the proposed algorithms is evaluated on the IMDB and DBLP datasets and compared with existing algorithms.
Welcome to International Journal of Engineering Research and Development (IJERD)
The document summarizes research on vertical fragmentation, allocation, and re-fragmentation in distributed object relational database systems. It proposes an algorithm for vertical fragmentation and allocation that considers the usage of attributes and methods by queries at different sites. The algorithm forms usage matrices, calculates affinity between methods, clusters methods, and partitions the data into fragments that are allocated to sites where they see the most demand. It also describes handling update queries by redirecting them to a server for processing and then propagating the updates to relevant fragments.
Concept hierarchy is the backbone of an ontology, and concept hierarchy acquisition has been a hot topic in the field of ontology learning. This paper proposes a hyponymy extraction method for domain ontology concepts based on cascaded conditional random fields (CCRFs) and hierarchical clustering. It takes free text as the extraction object and adopts CCRFs to identify the domain concepts: first the low layer of the CCRFs is used to identify simple domain concepts, then the results are passed to the high layer, in which nested concepts are recognized. Next, hierarchical clustering is used to identify the hyponymy relations between the domain ontology concepts. The experimental results demonstrate that the proposed method is efficient.
1) This document discusses different techniques for cross-domain data fusion, including stage-based, feature-level, probabilistic, and multi-view learning methods.
2) It reviews literature on data fusion definitions, implementations, and techniques for handling data conflicts. Common steps in data fusion are data transformation, schema mapping, and duplicate detection.
3) The proposed system architecture performs data cleaning, then applies stage-based, feature-level, probabilistic, and multi-view learning fusion methods before analyzing dataset, hardware, and software requirements.
New proximity estimate for incremental update of non uniformly distributed cl... (IJDKP)
Conventional clustering algorithms mine static databases and generate a set of patterns in the form of clusters. Many real-life databases, however, keep growing incrementally, and for such dynamic databases the patterns extracted from the original database become obsolete. Conventional clustering algorithms are thus not suitable for incremental databases, since they lack the ability to modify the clustering results in line with recent updates. In this paper, the author proposes a new incremental clustering algorithm called CFICA (Cluster Feature-based Incremental Clustering Approach for numerical data) to handle numerical data, and suggests a new proximity metric called the Inverse Proximity Estimate (IPE), which considers the proximity of a data point to a cluster representative as well as its proximity to the farthest point in its vicinity. CFICA uses the proposed proximity metric to determine the membership of a data point in a cluster.
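The abstract defines IPE only qualitatively, so the sketch below is an assumed formalization, not the paper's formula: a point's proximity to a cluster combines its distance to the cluster representative (here, the centroid) with its distance to the farthest cluster member, and the point joins the cluster minimizing this combined estimate.

```python
import math

def inverse_proximity(point, centroid, members):
    """Assumed IPE-style score: distance to the representative plus distance to
    the farthest cluster member (both are small only when the point sits well
    inside a compact cluster)."""
    farthest = max(members, key=lambda m: math.dist(point, m))
    return math.dist(point, centroid) + math.dist(point, farthest)

def assign(point, clusters):
    """Assign an incoming point to the cluster with the smallest combined
    proximity estimate."""
    def score(members):
        centroid = tuple(sum(x) / len(members) for x in zip(*members))
        return inverse_proximity(point, centroid, members)
    return min(range(len(clusters)), key=lambda i: score(clusters[i]))

clusters = [[(0.0, 0.0), (1.0, 0.0)], [(10.0, 10.0), (11.0, 10.0)]]
print(assign((0.5, 0.2), clusters))  # → 0
```

In an incremental setting, the same score would be recomputed only for points affected by recent updates rather than by re-clustering the whole database.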
Study on Theoretical Aspects of Virtual Data Integration and its Applications (IJERA Editor)
Data integration is the technique of merging data residing at different sources in different locations and providing users with an integrated, reconciled view of these data. Such a unified view is called the global or mediated schema; it represents the intensional level of the integrated and reconciled data. Within data integration systems, our area of interest in this paper is characterized by an architecture based on a global schema and a set of sources or source schemas. The objective of this paper is to provide a study of the theoretical aspects of data integration systems and to present a comprehensive review of the applications of data integration in various fields, including biomedicine, the environment, and social networks. It also discusses a privacy framework for protecting users' privacy with privacy views and privacy policies.
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKING (dannyijwest)
Social networks have become one of the most popular platforms that allow users to communicate and share their interests without being in the same geographical location. The great and rapid growth of social media sites such as Facebook, LinkedIn, Twitter, etc. produces a huge amount of user-generated content, so improving information quality and integrity has become a great challenge for all social media sites, which should allow users to get the desired content or be linked to the best relation through improved search and linking techniques. Introducing semantics to social networks therefore widens the representation of the social network. In this paper, a new model of social networks based on semantic tag ranking is introduced. The model is based on the concept of multi-agent systems, and in it the representation of social links is extended with the semantic relationships found in the vocabularies known as tags in most social networks. The proposed model for the social media engine is based on enhanced Latent Dirichlet Allocation (E-LDA) as a semantic indexing algorithm, combined with Tag Rank as the social network ranking algorithm. The improvement in the E-LDA phase is achieved by running the LDA algorithm with optimal parameters and then introducing a filter to enhance the final indexing output. In the ranking phase, using Tag Rank on top of the indexing phase improves the ranking output. Simulation results of the proposed model show improvements in both indexing and ranking output.
Nature Inspired Models And The Semantic Web (Stefan Ceriu)
In this paper we present a series of nature-inspired models used as alternative solutions for Semantic Web concerns. Some of the methods presented in this article perform better than classic algorithms by improving response time and computational cost; others are just proofs of concept, first steps towards new techniques that will improve their respective fields. The intricate nature of the Semantic Web urges the need for faster, more intelligent algorithms, and nature-inspired models have proven to be well suited to such complex tasks.
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
ENHANCING KEYWORD SEARCH OVER RELATIONAL DATABASES USING ONTOLOGIES (csandit)
This document summarizes a research paper that proposes a system to enhance keyword search over relational databases using ontologies. The system builds structures during pre-processing like a reachability index to store connectivity information and an ontology concept graph. During querying, it maps keywords to concepts, uses the ontology to find related concepts and tuples, and generates top-k answer trees combining syntactic and semantic matches while limiting redundant results. The system is expected to perform better than existing approaches by reducing storage requirements through its approach to materializing neighborhood information in the reachability index.
Immune-Inspired Method for Selecting the Optimal Solution in Semantic Web Ser... (IJwest)
The increasing interest in developing efficient and effective optimization techniques has led researchers to turn their attention towards biology. Biology offers many clues for designing novel optimization techniques; such approaches exhibit self-organizing capabilities and allow promising solutions to be reached without a central coordinator. In this paper we handle the problem of dynamic web service composition using the clonal selection algorithm. To assess the optimality of a given composition, we use the QoS attributes of the services involved in the workflow as well as the semantic similarity between these components. The experimental evaluation shows that the proposed approach performs better than other approaches, such as the genetic algorithm.
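A compact sketch of the clonal selection loop on which such an approach rests (the fitness function here is a stand-in for the paper's combined QoS-and-semantic-similarity score, and all parameters are illustrative): the best candidate compositions are cloned in proportion to their fitness and mutated inversely proportionally to it.

```python
import random

def clonal_selection(fitness, n_vars, pop_size=20, n_best=5, generations=100, rng=None):
    """Maximize `fitness` over real vectors in [0, 1]^n_vars with a CLONALG-style loop."""
    rng = rng or random.Random(0)
    pop = [[rng.random() for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        clones = []
        for rank, ab in enumerate(pop[:n_best]):
            n_clones = max(1, (n_best - rank) * 2)   # better rank -> more clones
            step = 0.02 * (rank + 1)                 # better rank -> smaller mutation
            for _ in range(n_clones):
                clones.append([min(1.0, max(0.0, g + rng.gauss(0, step))) for g in ab])
        # Keep the fittest individuals; refresh the tail with newcomers for diversity.
        pop = sorted(pop + clones, key=fitness, reverse=True)[:pop_size - 2]
        pop += [[rng.random() for _ in range(n_vars)] for _ in range(2)]
    return max(pop, key=fitness)

# Toy "composition score": prefer all QoS/similarity components close to 1.
best = clonal_selection(lambda v: sum(v), n_vars=4)
print(round(sum(best), 2))
```

In the web-service setting, each vector position would instead index a concrete candidate service per workflow task, with fitness computed from its QoS attributes and semantic similarity to its neighbors.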
Semantic Annotation Framework For Intelligent Information Retrieval Using KIM... (dannyijwest)
Due to the explosion of information and knowledge on the web and the wide use of search engines to find desired information, the role of knowledge management (KM) is becoming more significant in organizations. Knowledge management in an organization is used to create, capture, store, share, retrieve and manage information efficiently. The semantic web, an intelligent and meaningful web, promises a platform for knowledge management systems and vice versa, since each has the potential to give the other real substance in the form of machine-understandable web resources, which in turn will lead to intelligent, meaningful and efficient information retrieval on the web. Today, the challenge for the web community is to integrate the distributed heterogeneous resources on the web into an intelligent web environment focused on data semantics and user requirements. Semantic Annotation (SA), which links entities in the text to their semantic descriptions, is being widely used; various tools such as KIM and Amaya may be used for semantic annotation.
Challenging Issues and Similarity Measures for Web Document Clustering (IOSR Journals)
This document discusses challenging issues and similarity measures for web document clustering. It begins with an introduction to text mining and document clustering. It then reviews related work on similarity approaches and measures. Some key challenging issues in web document clustering are discussed, such as measuring semantic similarity between words and evaluating cluster validity. Various types of similarity measures are also described, including string-based measures like Jaro-Winkler distance and corpus-based measures like latent semantic analysis. The conclusion states that accurate clustering requires a precise definition of similarity between document pairs and discusses different similarity measures that can be used.
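Of the string-based measures mentioned, Jaro-Winkler distance is easy to state precisely. Below is a self-contained implementation of the standard formulation (not code from the surveyed paper): the Jaro score counts matching characters within a sliding window and penalizes transpositions, and the Winkler variant boosts pairs sharing a common prefix.

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: fraction of matching characters, penalized by transpositions."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    # Characters match if equal and within half the longer length, minus one.
    window = max(len1, len2) // 2 - 1
    match1, match2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Transpositions: matched characters that appear in a different order.
    k = transpositions = 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                transpositions += 1
            k += 1
    t = transpositions // 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Boost the Jaro score for strings sharing a common prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

print(round(jaro_winkler("MARTHA", "MARHTA"), 4))  # → 0.9611 (classic example)
```

Corpus-based measures such as latent semantic analysis complement this: Jaro-Winkler captures surface-level typographic similarity, while LSA captures semantic relatedness between words that share no characters at all.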
SOURCE CODE RETRIEVAL USING SEQUENCE BASED SIMILARITY (IJDKP)
This document summarizes an approach to improve source code retrieval using structural information from source code. A lexical parser is developed to extract control statements and method identifiers from Java programs. A similarity measure is proposed that calculates the ratio of fully matching statements to partially matching statements in a sequence. Experiments show the retrieval model using this measure improves retrieval performance over other models by up to 90.9% relative to the number of retrieved methods.
In this paper we describe NoSQL, a series of non-relational database technologies and products developed to address the current problems RDBMS systems are facing: lack of true scalability, poor performance on high data volumes and low availability. Some of these products are already in production and perform very well: Amazon's Dynamo, Google's Bigtable, Cassandra, etc. We also provide a view on how these systems influence application development in the social and semantic Web sphere.
Design & optimization of LNG-CNG cylinder for optimum weight (ijsrd.com)
In the current automobile sector, vehicle weight is very important to increasing vehicle efficiency, and a vehicle contains many components and subassemblies. In this paper the weight of a hydrogen fuel tank is optimized by applying the composite-material concept to the existing material of the fuel tank. First, dimensional calculations for the existing pressure vessel are carried out and compared with the existing cylinder; then finite element analysis (FEA) is applied to the cylinder and the material is optimized until the stress reaches the equivalent stress of the existing cylinder, after which the cylinder dimensions are finalized. The analysis yields a weight reduction of the fuel tank.
Fault Injection Approach for Network on Chip (ijsrd.com)
Packet-based on-chip interconnection networks, or Networks-on-Chip (NoCs), are progressively replacing global on-chip interconnections in Multi-Processor System-on-Chips (MP-SoCs) thanks to better performance and lower power consumption. However, modern generations of MP-SoCs have an increasing sensitivity to faults due to progressively shrinking technology. Consequently, in order to evaluate fault sensitivity in NoC architectures, accurate test solutions are needed that allow the fault tolerance capability of NoCs to be evaluated. This work presents an innovative test architecture based on a dual-processor system that is able to test mesh-based NoCs extensively. The proposed solution improves on previously developed methods since it is based on a physical NoC implementation, which makes it possible to investigate the effects induced by several kinds of faults through on-line fault injection into all network interface and router resources during NoC run-time operation. The solution has been physically implemented on an FPGA platform using a NoC emulation model adopting standard communication protocols. The obtained results demonstrate the effectiveness of the developed solution in terms of testability and diagnostic capability, and make our solution suitable for testing large-scale NoCs.
A Report on Prevalence, Abundance and Intensity of Fish Parasites in Cat Fish... (ijsrd.com)
The present investigation concerned the occurrence of different parasites found in 38 cat fishes of the River Siang. The study of helminth parasites of cat fishes with respect to length and weight revealed that cestode infection was the highest across all samples of the fish species. The highest worm burden was located in the gut, mainly the intestine, of the fish, and some eggs were also detected in the liver of two host fishes. In this study 38 specimen fishes were examined, including both male and female specimens. The Wallago attu specimens showed the highest prevalence, about 100%, compared with the other cat fish specimens.
A Review on optimization of Dry Electro Discharge Machining Process Parameters (ijsrd.com)
This document reviews optimization of dry electric discharge machining (EDM) process parameters. It discusses how dry EDM replaces liquid dielectric with gas, improving environmental friendliness. The literature review examines studies on how factors like voltage, current, and gas pressure affect material removal rate, surface roughness, and tool wear rate. Response surface methodology and algorithms like NSGA-II and genetic algorithms have been used to develop models and optimize the conflicting objectives of maximizing material removal rate while minimizing surface roughness. Overall, the document provides an overview of dry EDM optimization research aimed at improving process performance and efficiency.
Based on Heterogeneity and Electing Probability of Nodes Improvement in LEACHijsrd.com
In heterogeneous sensor networks, certain nodes become cluster heads, which aggregate the data of their cluster nodes and transfer it to the sink. This research work proposes an improved energy-aware LEACH protocol for cluster head selection in a hierarchically clustered heterogeneous network that reorganizes the network topology efficiently. The proposed algorithm uses thresholding to improve cluster head selection. It considers sensor nodes randomly distributed in the heterogeneous wireless network, with the coordinates of the sink and the dimensions of the sensor field known in advance.
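The thresholding step the abstract refers to builds on the classic LEACH election rule, in which each node draws a random number per round and becomes cluster head if the draw falls below a threshold T(n). A minimal sketch follows; the paper's heterogeneity-aware adjustment of the election probability is not reproduced here, and the function names are illustrative:

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """Classic LEACH election threshold T(n) for round r, where p is the
    desired fraction of cluster heads per round."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_ids, p=0.1, r=0):
    """A node becomes cluster head when its uniform random draw falls below
    T(n). LEACH also excludes nodes that already served as head in the
    current epoch; that bookkeeping is omitted here for brevity."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if random.random() < t]
```

With p = 0.1 the threshold starts at 0.1 in round 0 and rises each round (e.g. 0.2 in round 5), so nodes that have not yet served become increasingly likely to be elected before the epoch resets.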
Study on Flexural Behaviour of Activated Fly Ash Concreteijsrd.com
Cement concrete is the most widely used construction material in many infrastructure projects. The development and use of mineral admixtures for cement replacement is growing in the construction industry, mainly due to considerations of cost saving, energy saving, environmental protection and conservation of resources. The present study is aimed at replacing cement in concrete with activated fly ash. The paper highlights the chemical activation of low-calcium fly ash. Today, activation of fly ash plays an important role in enhancing the effectiveness of fly ash and accelerating its pozzolanic properties. Activated fly ash certainly improves the early-age strength, durability and corrosion tolerance of concrete. Many methods, such as mechanical (physical), thermal and chemical activation, are in use to activate fly ash. Chemical activation is one of the easiest methods: fly ash can be activated by alkaline activators (i.e. highly concentrated alkaline solutions of chemicals such as gypsum, sodium silicate, calcium oxide, KOH, etc.), which enhance the effectiveness of fly ash by disintegrating the glassy layer of fly ash particles in cement concrete, thereby increasing its corrosion resistance. In the present dissertation, the quality of fly ash is improved by chemical treatment using chemical activators. The mechanical properties, such as compressive strength, split tensile strength and flexural strength of activated fly ash concrete, and the flexural strength of activated fly ash reinforced concrete beams, are studied. For this project work, chemicals such as sodium silicate and calcium oxide are used to activate the fly ash in the ratio 1:8.
Selection of Energy Efficient Clustering Algorithm upon Cluster Head Failure ...ijsrd.com
Wireless Sensor Network (WSN) applications have increased in recent times in fields such as environmental sensing, area monitoring, air pollution monitoring, forest fire detection, machine health monitoring, and landslide detection. In such applications, there is a strong need for secure communication among sensor nodes. There are different techniques to secure network data transmissions, but due to the power constraints of WSNs, a group-key-based mechanism is the most preferred one. Hence, to implement scalable, energy-efficient secure group communication, the best approach is a hierarchical one such as clustering. In most clustering-based WSN designs, the Base Station (BS) is the central point of contact with the outside world, and its failure may lead to total disconnection of communication. Critical applications like these cannot afford a BS failure, as the BS is the gateway from the sensor network to the outside world. To provide fault-tolerant immediate action, a new BS at some other physical location has to take charge. This may lead to a total change in the hierarchical network topology, which in turn requires re-clustering the entire network and forming new security keys. Therefore, there is a need to find a suitable algorithm which clusters sensor nodes in such a way that when a BS fails and a new BS takes charge, a new group key gets established with minimum computation and low energy consumption.
EFFICIENT SCHEMA BASED KEYWORD SEARCH IN RELATIONAL DATABASESIJCSEIT Journal
Keyword search in relational databases allows users to search for information without knowing the database schema or using Structured Query Language (SQL). In this paper, we address the problem of generating and evaluating candidate networks. In candidate network generation, overhead is caused by the growing number of joining tuples as the size of the minimal candidate network increases. To reduce this overhead, we propose candidate network generation algorithms that generate a minimum number of joining tuples according to the maximum number of tuple sets. We first generate a set of joining tuples, the candidate networks (CNs). It is difficult to obtain an optimal query processing plan while generating a large number of joins, so we also develop a dynamic CN evaluation algorithm (D_CNEval) to generate connected tuple trees (CTTs) by reducing the size of intermediate joining results. The performance of the proposed algorithms is evaluated on the IMDB and DBLP datasets and compared with existing algorithms.
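Evaluating a candidate network ultimately means joining the tuple sets along its edges; the size of the intermediate results that D_CNEval tries to shrink comes from exactly these joins. A minimal hash join over tuple sets can be sketched as follows; the relation and key names are hypothetical, and the paper's join-ordering strategy is not shown:

```python
def hash_join(left, right, lkey, rkey):
    """Minimal hash join between two tuple sets (lists of dicts): build a
    hash index on the left key, then probe it with each right tuple. This is
    the building block for stitching a candidate network's tuple sets into
    connected tuple trees."""
    index = {}
    for row in left:
        index.setdefault(row[lkey], []).append(row)
    # Each probe emits one (left, right) pair per matching left tuple.
    return [(l, r) for r in right for l in index.get(r[rkey], [])]
```

The order in which such joins are applied determines the size of the intermediates; joining the most selective pair first is one standard heuristic for keeping them small.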
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
The document summarizes research on vertical fragmentation, allocation, and re-fragmentation in distributed object relational database systems. It proposes an algorithm for vertical fragmentation and allocation that considers the usage of attributes and methods by queries at different sites. The algorithm forms usage matrices, calculates affinity between methods, clusters methods, and partitions the data into fragments that are allocated to sites where they see the most demand. It also describes handling update queries by redirecting them to a server for processing and then propagating the updates to relevant fragments.
Concept hierarchy is the backbone of an ontology, and concept hierarchy acquisition has been a hot topic in the field of ontology learning. This paper proposes a hyponymy extraction method for domain ontology concepts based on cascaded conditional random fields (CCRFs) and hierarchical clustering. It takes free text as the extraction object and adopts CCRFs to identify the domain concepts. First, the lower layer of the CCRFs is used to identify simple domain concepts; the results are then sent to the higher layer, in which nested concepts are recognized. Next, hierarchical clustering is adopted to identify the hyponymy relations between domain ontology concepts. The experimental results demonstrate that the proposed method is efficient.
1) This document discusses different techniques for cross-domain data fusion, including stage-based, feature-level, probabilistic, and multi-view learning methods.
2) It reviews literature on data fusion definitions, implementations, and techniques for handling data conflicts. Common steps in data fusion are data transformation, schema mapping, and duplicate detection.
3) The proposed system architecture performs data cleaning, then applies stage-based, feature-level, probabilistic, and multi-view learning fusion methods before analyzing dataset, hardware, and software requirements.
New proximity estimate for incremental update of non uniformly distributed cl...IJDKP
The conventional clustering algorithms mine static databases and generate a set of patterns in the form of clusters. Many real-life databases keep growing incrementally. For such dynamic databases, the patterns extracted from the original database become obsolete. Thus the conventional clustering algorithms are not suitable for incremental databases due to their lack of capability to modify the clustering results in accordance with recent updates. In this paper, the author proposes a new incremental clustering algorithm called CFICA (Cluster Feature-based Incremental Clustering Approach for numerical data) to handle numerical data, and suggests a new proximity metric called the Inverse Proximity Estimate (IPE), which considers the proximity of a data point to a cluster representative as well as its proximity to the farthest point in its vicinity. CFICA makes use of the proposed proximity metric to determine the membership of a data point in a cluster.
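One plausible reading of the IPE idea is sketched below; the paper's exact formula is not reproduced here, so the additive combination of the two distances and the function names are assumptions made purely for illustration:

```python
import math

def inverse_proximity_estimate(point, representative, vicinity):
    """Illustrative IPE-style score: combine the distance from the point to
    the cluster representative with the distance to the farthest point in
    the candidate cluster's vicinity. The additive combination is an
    assumption, not the paper's exact definition."""
    d_rep = math.dist(point, representative)
    d_far = max(math.dist(point, q) for q in vicinity)
    return d_rep + d_far

def assign(point, clusters):
    """Assign a point to the cluster minimising the estimate; each cluster
    is a dict with a 'rep' (representative) and 'members' list."""
    return min(clusters,
               key=lambda c: inverse_proximity_estimate(point, c["rep"], c["members"]))
```

Because the score penalises clusters whose members stretch far from the point, it can separate a nearby compact cluster from a sprawling one even when their representatives are equidistant.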
Study on Theoretical Aspects of Virtual Data Integration and its ApplicationsIJERA Editor
Data integration is the technique of merging data residing at different sources in different locations and providing users with an integrated, reconciled view of these data. Such a unified view is called the global or mediated schema. It represents the intensional level of the integrated and reconciled data. Within data integration systems, our area of interest in this paper is characterized by an architecture based on a global schema and a set of sources or source schemas. The objective of this paper is to provide a study of the theoretical aspects of data integration systems and to present a comprehensive review of the applications of data integration in various fields, including biomedicine, the environment, and social networks. It also discusses a privacy framework for protecting users' privacy with privacy views and privacy policies.
INTELLIGENT SOCIAL NETWORKS MODEL BASED ON SEMANTIC TAG RANKINGdannyijwest
Social networks have become one of the most popular platforms allowing users to communicate and share their interests without being in the same geographical location. The great and rapid growth of social media sites such as Facebook, LinkedIn, Twitter, etc. produces a huge amount of user-generated content. Thus, improving information quality and integrity becomes a great challenge for all social media sites, allowing users to get the desired content or be linked to the best relation using improved search/link techniques. Introducing semantics to social networks will therefore widen the representation of social networks. In this paper, a new model of social networks based on semantic tag ranking is introduced. This model is based on the concept of multi-agent systems. In the proposed model, the representation of social links is extended by the semantic relationships found in the vocabularies known as tags in most social networks. The proposed model for the social media engine is based on enhanced Latent Dirichlet Allocation (E-LDA) as a semantic indexing algorithm, combined with TagRank as the social network ranking algorithm. The improvement in the E-LDA phase is achieved by optimizing the LDA algorithm using optimal parameters; a filter is then introduced to enhance the final indexing output. In the ranking phase, using TagRank on top of the indexing phase improves the ranking output. Simulation results of the proposed model show improvements in both indexing and ranking output.
Nature Inspired Models And The Semantic WebStefan Ceriu
In this paper we present a series of nature inspired models used as alternative solutions for Semantic Web concerns. Some of the methods presented in this article perform better than classic algorithms by enhancing response time and computational costs. Others are just proof of concept, first steps towards new techniques that will improve their respective field. The intricate nature of the Semantic Web urges the need for faster, more intelligent algorithms and nature inspired models have been proven to be more than suitable for such complex tasks.
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
ENHANCING KEYWORD SEARCH OVER RELATIONAL DATABASES USING ONTOLOGIEScsandit
This document summarizes a research paper that proposes a system to enhance keyword search over relational databases using ontologies. The system builds structures during pre-processing like a reachability index to store connectivity information and an ontology concept graph. During querying, it maps keywords to concepts, uses the ontology to find related concepts and tuples, and generates top-k answer trees combining syntactic and semantic matches while limiting redundant results. The system is expected to perform better than existing approaches by reducing storage requirements through its approach to materializing neighborhood information in the reachability index.
Immune-Inspired Method for Selecting the Optimal Solution in Semantic Web Ser...IJwest
The increasing interest in developing efficient and effective optimization techniques has led researchers to turn their attention towards biology. It has been noticed that biology offers many clues for designing novel optimization techniques; these approaches exhibit self-organizing capabilities and permit the reachability of promising solutions without the existence of a central coordinator. In this paper we handle the problem of dynamic web service composition by using the clonal selection algorithm. In order to assess the optimality of a given composition, we use the QoS attributes of the services involved in the workflow as well as the semantic similarity between these components. The experimental evaluation shows that the proposed approach performs better in comparison with other approaches such as the genetic algorithm.
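The clonal selection principle the abstract relies on can be illustrated with one generation of a minimal CLONALG-style loop over a scalar fitness function; the parameter names and clone/mutation schedule below are generic textbook choices, not the paper's specific QoS-and-similarity objective:

```python
import random

def clonalg_step(population, fitness, n_select=5, clone_factor=3, mut_sigma=0.1):
    """One generation of a minimal clonal selection loop: select the best
    antibodies, clone them in proportion to rank, hypermutate the clones
    (higher-ranked antibodies mutate less), and keep the overall best."""
    ranked = sorted(population, key=fitness, reverse=True)[:n_select]
    clones = []
    for rank, ab in enumerate(ranked):
        for _ in range(clone_factor * (n_select - rank)):
            # Mutation strength grows with (worse) rank.
            clones.append(ab + random.gauss(0, mut_sigma * (rank + 1)))
    pool = population + clones
    return sorted(pool, key=fitness, reverse=True)[:len(population)]
```

Because the parent population stays in the selection pool, the best fitness found so far is never lost between generations, which is the property that lets the search converge without a central coordinator.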
Semantic Annotation Framework For Intelligent Information Retrieval Using KIM...dannyijwest
Due to the explosion of information and knowledge on the web and the wide use of search engines for desired information, the role of knowledge management (KM) is becoming more significant in organizations. Knowledge management in an organization is used to create, capture, store, share, retrieve and manage information efficiently. The semantic web, an intelligent and meaningful web, tends to provide a promising platform for knowledge management systems and vice versa, since they have the potential to give each other the real substance for machine-understandable web resources, which in turn will lead to intelligent, meaningful and efficient information retrieval on the web. Today, the challenge for the web community is to integrate the distributed heterogeneous resources on the web with the objective of an intelligent web environment focusing on data semantics and user requirements. Semantic Annotation (SA), which is about attaching links to the semantic descriptions of entities in a text, is being widely used. Various tools, such as KIM and Amaya, may be used for semantic annotation.
Challenging Issues and Similarity Measures for Web Document ClusteringIOSR Journals
This document discusses challenging issues and similarity measures for web document clustering. It begins with an introduction to text mining and document clustering. It then reviews related work on similarity approaches and measures. Some key challenging issues in web document clustering are discussed, such as measuring semantic similarity between words and evaluating cluster validity. Various types of similarity measures are also described, including string-based measures like Jaro-Winkler distance and corpus-based measures like latent semantic analysis. The conclusion states that accurate clustering requires a precise definition of similarity between document pairs and discusses different similarity measures that can be used.
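Of the similarity measures the survey describes, the term-frequency cosine is the simplest string-based baseline; a minimal sketch (whitespace tokenisation is an assumption kept for brevity, and corpus-based measures such as latent semantic analysis would replace the raw counts with concept-space coordinates):

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine of the term-frequency vectors of two documents: 1.0 for
    identical bags of words, 0.0 when no terms are shared."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```

A clustering algorithm then only needs this pairwise score; the survey's point is that the choice of score (string-based vs. corpus-based) largely determines cluster quality.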
SOURCE CODE RETRIEVAL USING SEQUENCE BASED SIMILARITYIJDKP
This document summarizes an approach to improve source code retrieval using structural information from source code. A lexical parser is developed to extract control statements and method identifiers from Java programs. A similarity measure is proposed that calculates the ratio of fully matching statements to partially matching statements in a sequence. Experiments show the retrieval model using this measure improves retrieval performance over other models by up to 90.9% relative to the number of retrieved methods.
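The full-versus-partial matching idea can be sketched as a position-wise comparison of two statement sequences. The exact formula is not given in the summary, so the 0.5 weight for partial matches, the "same leading keyword" partial-match test, and the normalisation below are all assumptions made for illustration:

```python
def sequence_similarity(query_seq, code_seq, partial_weight=0.5):
    """Illustrative sequence-based score: align the two extracted statement
    sequences position-wise, counting identical statements as full matches
    and statements sharing a leading keyword (e.g. both 'for') as partial
    matches, then normalise by the longer sequence length."""
    full = partial = 0
    for qs, cs in zip(query_seq, code_seq):
        if qs == cs:
            full += 1
        elif qs.split()[0] == cs.split()[0]:  # same statement type
            partial += 1
    n = max(len(query_seq), len(code_seq))
    return (full + partial_weight * partial) / n if n else 0.0
```

For example, comparing ["for i in xs", "if x > 0"] against ["for j in ys", "if x > 0"] yields one full and one partial match, scoring 0.75 under these assumptions.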
In this paper we describe NoSQL, a series of non-relational database technologies and products developed to address the current problems RDBMS systems are facing: lack of true scalability, poor performance on high data volumes and low availability. Some of these products have already been used in production and perform very well: Amazon's Dynamo, Google's Bigtable, Cassandra, etc. We also provide a view of how these systems influence application development in the social and semantic Web sphere.
Design & optimization of LNG-CNG cylinder for optimum weightijsrd.com
In the current automobile sector, the weight of the vehicle is very important to increasing the efficiency of the vehicle. There are many components and subassemblies in an automobile. In this paper, the weight of the hydrogen fuel tank is optimized by applying the composite material concept alongside the existing material of the fuel tank. Initially, dimensional calculations for the existing pressure vessel are performed and compared with the existing cylinder; then FEA (Finite Element Analysis) is applied to the cylinder and the material is optimized until the stress reaches a value equivalent to the stress of the existing cylinder. After that, the dimensions of the cylinder are finalized. The analysis yields a weight reduction of the fuel tank.
Fault Injection Approach for Network on Chipijsrd.com
Image Encryption Based on Pixel Permutation and Text Based Pixel Substitutionijsrd.com
Digital image encryption techniques play a very important role in preventing unauthorized access to images. Many methods are available for image encryption, and the majority of them are scrambling algorithms based on pixel shuffling, which cannot change the histogram of an image; hence their security performance is not good. An encryption method that combines pixel exchange with gray-level change can reach a good chaotic effect. In this paper we focus on an image encryption technique based on pixel-wise shuffling with the help of a skew tent map, and text-based pixel substitution. The PSNR, NPCR and CC obtained by our technique show that the proposed technique gives better results than existing techniques.
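The skew tent map itself, and one common way of turning its trajectory into a pixel-shuffling permutation, can be sketched as follows; the paper's exact key schedule, its parameter values, and its text-based substitution step are not reproduced here:

```python
def skew_tent(x: float, a: float = 0.4) -> float:
    """Skew tent map: x/a on [0, a], (1 - x)/(1 - a) on (a, 1].
    The control parameter a and the seed act as the secret key."""
    return x / a if x <= a else (1.0 - x) / (1.0 - a)

def chaotic_permutation(n: int, x0: float = 0.37, a: float = 0.4):
    """Derive a permutation of n pixel positions by iterating the map and
    sorting indices by the resulting chaotic values (a common construction
    for chaos-based shuffling; parameter values here are illustrative)."""
    x, seq = x0, []
    for _ in range(n):
        x = skew_tent(x, a)
        seq.append(x)
    return sorted(range(n), key=lambda i: seq[i])
```

Shuffling alone leaves the histogram unchanged, which is exactly the weakness the abstract notes; that is why the scheme pairs the permutation with a substitution step that alters pixel values.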
SQL injection is the most common and most difficult to handle attack nowadays. There are five types of SQL injection attack. This paper presents the details of SQL injection.
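The core of the vulnerability, and the standard parameterised-query defence, can be shown in a few lines; the table and values below are invented purely for the demonstration:

```python
import sqlite3

# In-memory table used to contrast vulnerable string building with a
# parameterised query (the standard defence against SQL injection).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # Vulnerable: an input like "x' OR '1'='1" rewrites the WHERE clause
    # and returns every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Safe: the driver binds the value, so it can never alter the query text.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

With the payload `x' OR '1'='1`, the unsafe version leaks the whole table while the safe version correctly returns nothing.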
Power Transient Response of EDFA as a function of Wavelength in the scenario ...ijsrd.com
In this paper, power transients are investigated as a function of the add/drop wavelength and the surviving channel wavelength. We report that power excursions vary with different wavelength allocations of the add/drop channels. The transient response is reduced by 73.39% when the add/drop channels are taken in the L band instead of the C band. Power transient response is also calculated as a function of the wavelength of the surviving channel. It has been observed that at higher wavelengths, power excursions are smaller than at shorter wavelengths of the C band.
Requirement to Improve C.B.R value in Black Cotton Soil in Saline Condition: ...ijsrd.com
The aim of this paper is to define the need to improve the C.B.R value of the subgrade in black cotton soil under saline conditions. Expansive soil has tremendous strength, but it becomes very soft when it gets wet; it expands/swells due to its mineralogical composition in the wet condition, and it cracks or consolidates when it is dry. The stability and performance of pavements are greatly influenced by the subgrade and embankment, as they serve as foundations for pavements. Expansive soils can be found on almost all the continents on Earth, and destructive results caused by this type of soil have been reported in many countries. Saline soils have an excessive concentration of natural soluble salts, mainly chlorides, sulphates and carbonates of calcium, magnesium and sodium. The magnesium in magnesium chloride may react with the cement paste in concrete, weakening the pavement structure, leading to rutting or pot-holing in granular pavements and differential shape resulting in a rough pavement. Both expansion and salinity contribute to pavement failure through subgrade failure, so a detailed study on the stabilization of black cotton soil is required. Flexible pavement design is based on the C.B.R value and the m.s.a value. If the C.B.R value is low, the required thickness of material increases; hence, for an economical thickness, the C.B.R value needs to be improved where it is low.
A Review on Laser marking by Nd-Yag Laser and Fiber Laserijsrd.com
Laser marking provides a unique combination of speed, permanence and versatility. Laser engraving is a manufacturing method for those applications where previously Electrical Discharge Machining (EDM) was the only choice. Laser engraving technology removes material layer by layer, and the thickness of the layers is usually in the range of a few microns. Many types of laser machines are available at present; therefore, for optimum use of laser energy, it is necessary to optimize the process parameters to get the best marking speed and quality. This review paper presents various important works on laser marking and its parameters, i.e. width, depth and contrast of marking.
Analysis of Heat Transfer in Spiral Plate Heat Exchanger Using Experimental a...ijsrd.com
Heat transfer is the key to several processes in industrial applications. At present, highly efficient heat transfer equipment is in demand due to increasing energy costs. To achieve maximum heat transfer, engineers continuously upgrade their knowledge and skills through past experience. The present work is a step towards demonstrating the use of computational techniques as a tool to substitute for experimental techniques. For this purpose an experimental set-up has been designed and developed. Analysis of heat transfer in a spiral plate heat exchanger is performed, and the same analysis can be done with commercially available computational fluid dynamics (CFD) software using ANSYS CFX and validated against these predictions. The analysis has been carried out in parallel and counter flow with inward and outward directions to achieve the maximum possible heat transfer. This heat transfer problem involves conditions where the Reynolds number varies continuously as the fluid traverses the flow section from inlet to exit, and the mass flow rate of the working fluid is modified with time. Extensive analysis, experimentation and systematic data reduction lead to the conclusion that the maximum heat transfer rate is obtained in the inward parallel flow configuration compared to all other counterparts, observed to vary with small differences in each section. Furthermore, an increased heat transfer rate in the spiral plate heat exchanger can be obtained by a cascading system.
Implementation of Full Adder Cell Using High Performance CMOS Technologyijsrd.com
This document presents the design and implementation of a full adder cell using a high-performance CMOS technology to improve speed and reduce power consumption. It begins with an introduction to CMOS technology and enhancements. It then discusses the design and architecture of a traditional full adder before proposing a new design using CMOS transistors. Simulation results show the proposed design has lower power consumption of around 100 microwatts, a 35% reduction compared to the existing design. The document concludes that reducing supply voltage is an effective way to lower power dissipation for low-performance applications like sensor networks.
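The Boolean behaviour that any full adder cell, including the proposed CMOS design, must implement can be stated compactly; the sketch below is a behavioural model only, and the transistor-level sizing and low-power techniques from the paper are not modelled:

```python
def full_adder(a: int, b: int, cin: int):
    """Behavioural model of a 1-bit full adder:
    sum = a XOR b XOR cin, carry-out = majority(a, b, cin)."""
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout
```

Any hardware implementation can be checked against this model exhaustively, since the cell has only eight input combinations and must satisfy a + b + cin = sum + 2·cout for each.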
Synthesis & FPGA Implementation of UART IP Soft Coreijsrd.com
This paper presents the synthesis and hardware implementation of a fully functional Universal Asynchronous Receiver Transmitter (UART) Intellectual Property core using a XILINX SPARTAN-3 XC3S400 series FPGA. The UART soft core module consists of a transmitter with a baud rate generator and a receiver module with false start bit detection. It has been implemented in the VERILOG hardware description language and synthesized using XILINX ISE development tools. All behavioral simulation of the UART module was performed using the MODELSIM simulator. After successful FPGA implementation, the transmitter and receiver modules were tested by connecting the FPGA board to HyperTerminal software via an RS232 interface at a data speed of 9.6 kbps.
Lateral Load Analysis of Shear Wall and Concrete Braced Multi-Storeyed R.C Fr...ijsrd.com
Generally, RC framed structures are designed without regard to the structural action of the masonry infill walls present. Masonry infill walls are widely used as partitions, and these buildings are generally designed as framed structures with the infill walls considered non-structural elements. An RC frame building with an open first storey is known as a soft-storey building, which performs poorly during strong earthquake shaking. Past earthquakes give evidence that collapses due to soft storeys are most frequent in RC buildings: in the soft storey, columns are severely stressed and unable to provide adequate shear resistance during the earthquake. In this study, 3D analytical models of twelve-storey buildings have been generated for different building models and analyzed using the structural analysis tool ETABS. To study the effect of infill, ground-soft and bare-frame models, and ground-soft models with a concrete core wall, shear walls and concrete bracings at different positions during an earthquake, seismic analysis using both linear static and linear dynamic (response spectrum) methods has been performed. The analytical model of the building includes all important components that influence the mass, strength, stiffness and deformability of the structure.
Mathematical Modeling 15kW Standard Induction Motor using MATLAB/SIMULINKijsrd.com
Electric motors and motor systems in industrial and infrastructure applications, together with pumps, fans and compressors in buildings, are responsible for 45% of the world's total electricity consumption. New and existing technologies offer the potential to reduce the energy demand of motor systems across the global economy by 20% to 30% with a short payback period. This paper addresses the impact of load modeling, in particular of the induction motor. The objective of the paper is to analyze the performance of a 15 kW standard induction motor and to extract parameters such as stator resistance, rotor resistance, stator and rotor inductance, torque, and speed.
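As a hedged illustration of how such extracted parameters feed into performance analysis, the per-phase Steinmetz equivalent circuit gives electromagnetic torque directly from the machine constants. The parameter values below are invented for the sketch, not the 15 kW machine's actual data.

```python
import math

# Hedged sketch: electromagnetic torque of an induction motor from its
# per-phase steady-state equivalent circuit (Steinmetz model). All
# parameter values are illustrative, not the 15 kW machine's data.

def torque(V_ph, f, poles, s, R1, R2, X1, X2):
    """Air-gap torque (N*m) at slip s, neglecting the magnetising branch."""
    w_sync = 2 * math.pi * f * 2 / poles        # synchronous speed, rad/s
    Z = math.hypot(R1 + R2 / s, X1 + X2)        # series impedance magnitude
    I2 = V_ph / Z                               # rotor-branch current
    return 3 * I2**2 * (R2 / s) / w_sync        # three-phase torque

if __name__ == "__main__":
    # 400 V line (~231 V phase), 50 Hz, 4-pole, 4% slip -- assumed values
    print(round(torque(231, 50, 4, 0.04, 0.5, 0.4, 1.2, 1.2), 1))
```

Sweeping `s` from near 0 to 1 with such a function reproduces the familiar torque-speed curve used when validating extracted parameters against a MATLAB/SIMULINK model.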
Wear Behaviour of Mg Alloy Reinforced with Aluminium Oxide and Silicon Carbid...ijsrd.com
Lightweight metals like magnesium and its alloys are increasingly used in the automotive and aerospace industries. Magnesium-based hybrid structures, which combine magnesium with another material such as aluminium, can offer optimal technical performance due to the favorable strength-to-weight ratio. Materials with improved tribological properties have become a prerequisite of advanced engineering design. Metal matrix composites (MMCs) exhibit a unified combination of good tribological properties and the high toughness of the interior bulk metal when compared with monolithic materials. Stir processing, a microstructure modification technique, has emerged as one of the processes used for fabrication of MMCs. Commercial cast or wrought Mg-Al-Zn AZ-series alloys, such as AZ91 (9 wt.% Al and 1 wt.% Zn), have been widely used in automobiles and electronic appliances. In this paper, a novel approach of making hybrid preforms with two types of reinforcement, i.e., low-cost and different-sized particles, for magnesium-based composites is planned. The paper investigates the wear behavior of magnesium alloy (similar to commercially available AZ91) based metal-matrix composites (MMCs) reinforced with Silicon Carbide (SiC) and Aluminium Oxide (Al2O3) particulates during dry/wet sliding; tribological performance of the fabricated composite will be investigated using a pin-on-disc wear and friction monitor.
An approach for transforming of relational databases to owl ontologyIJwest
The rapid growth of documents, web pages, and other types of text content is a huge challenge for modern content management systems. One of the problems in the areas of information storage and retrieval is the lack of semantic data. Ontologies can present knowledge in a sharable and reusable manner and provide an effective way to reduce data volume overhead by encoding the structure of a particular domain. Metadata in relational databases can be used to extract an ontology from a database in a specific domain. To address the problem of sharing and reusing data, approaches based on transforming a relational database into an ontology have been proposed. In this paper we propose a method for automatic ontology construction based on a relational database. Mining further components from the relational database yields knowledge with higher semantic power and more expressiveness. Triggers are one such database component: they can be transformed into the ontology model and increase the power and expressiveness of the resulting knowledge by representing part of it dynamically.
Data Integration in Multi-sources Information Systemsijceronline
Information residing in relational databases and delimited file systems is inadequate for reuse and sharing over the web, as these file systems do not adhere to commonly agreed principles for maintaining data harmony. For these reasons, such resources suffer from a lack of uniformity, from heterogeneity, and from redundancy throughout the web. Ontologies have been widely used to solve these kinds of problems, as they help in extracting knowledge out of any information system. In this article, we focus on extracting concepts and their relations from a set of CSV files. These files serve as individual concepts and are grouped into a particular domain, called the domain ontology. Furthermore, this domain ontology is used for capturing CSV data, which is represented in RDF format while retaining the links among files or concepts. Datatype and object properties are automatically detected from header fields, which reduces the user involvement needed to generate mapping files. A detailed analysis has been performed on Baseball tabular data, and the result shows a rich set of semantic information.
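The header-driven mapping described above can be sketched minimally in Python: each CSV file becomes a concept, each header field a property, each row an individual. The namespace and naming scheme below are assumptions for illustration, not the article's actual mapping rules.

```python
import csv
import io

# Hedged sketch of a header-driven CSV-to-RDF mapping: file -> class,
# header field -> property, row -> individual. The namespace is a
# hypothetical placeholder, not the article's real vocabulary.

NS = "http://example.org/baseball#"   # assumed namespace

def csv_to_triples(concept: str, text: str):
    """Yield (subject, predicate, object) triples from CSV text."""
    reader = csv.DictReader(io.StringIO(text))
    for i, row in enumerate(reader):
        subj = f"{NS}{concept}/{i}"
        yield (subj, "rdf:type", f"{NS}{concept}")      # row is an individual
        for field, value in row.items():
            yield (subj, f"{NS}{field}", value)          # header -> property

if __name__ == "__main__":
    sample = "player,team\nRuth,Yankees\n"
    for t in csv_to_triples("Player", sample):
        print(t)
```

A real pipeline would additionally decide, per header, whether a field is a datatype property (literal value) or an object property (link to another file's rows).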
This document summarizes E.F. Codd's 1970 paper which proposed the relational model for data management in large database systems. It introduces some of the key concepts of the relational model, including representing data as n-ary relations and the use of a normal form to reduce data dependencies and inconsistencies. It also discusses some of the limitations of previous hierarchical and network models of data and how the relational model provides advantages in terms of data independence and a clearer logical representation of data.
USING ONTOLOGIES TO OVERCOMING DRAWBACKS OF DATABASES AND VICE VERSA: A SURVEYcseij
This document summarizes research on using ontologies to overcome drawbacks of databases and vice versa. It discusses how ontologies can be used to store and manage large numbers of database instances to improve performance. It also explains how databases can help address issues with ontologies, such as a lack of semantics, by providing structured storage. The document reviews drawbacks of both databases and ontologies and how each can help address limitations of the other through integration. This mutual benefit is an active area of research at the intersection of databases and ontologies.
E.F. Codd (1970). Evolution of Current Generation Database Tech.docxjacksnathalie
E.F. Codd (1970). Evolution of Current Generation Database Technologies. Communications of the ACM archive, Vol. 13, Issue 6 (June 1970).
Database technology can be traced back to the flat file, whose function was simply to keep data. These flat files surfaced in the 1960s and had many disadvantages. The period from 1968 to 1980 is referred to as the hierarchical era, since the hierarchical database was developed then: IBM joined NAA to develop GUAM, which led to the first DBMS, the Information Management System (IMS). The following is the step-by-step evolution of the three major database systems since then, with regard to the specific model layers:
(a) Hierarchical Model: IMS was built by IBM in collaboration with Rockwell, and it became the major database system of the 1970s and 1980s. This model arranged files in a child-parent fashion, whereby each child file had at most one parent file.
(b) The Network Model was first developed by Charles Bachman at General Electric as the Integrated Data Store (IDS), and it was standardized in 1971 by the CODASYL body (the Conference on Data Systems Languages). Here, files are perceived as members and owners, whereby each member may have many owners. Network Schema, Data Management Language and Sub-Schema are the three components associated with this model. The CODASYL DBTG model was the most popular network model of the time.
(c) The Relational Database Model was developed in 1970 by E.F. Codd. His proposal led to two major projects at IBM's San Jose Lab. The model spawned several offshoots, such as INGRES, a University of California invention that became widespread, and POSTGRES, which later fed into Informix. The San Jose lab developed System R, which gradually evolved into DB2, a relational model product. The model uses schema and instance, the instance comprising a table with columns and rows and the schema describing the structure used. It was developed wholly on the mathematical foundations of set theory and predicate logic. The model grew stronger and fully effective in the 1980s, marked by the development of more relational DBMSs and the introduction of the SQL standard by ISO and ANSI.
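The set-theoretic foundation mentioned above is easy to demonstrate: a relation is literally a set of tuples over a schema, and selection and projection are ordinary set operations. A minimal Python sketch, with invented Employee data:

```python
# Hedged sketch: Codd's model treats a relation as a set of tuples over
# a schema; selection (sigma) and projection (pi) fall out of set theory.
# The Employee data below is invented for illustration.

schema = ("emp_id", "name", "dept")
employees = {
    (1, "Alice", "Sales"),
    (2, "Bob", "Engineering"),
    (3, "Carol", "Sales"),
}

def select(relation, predicate):
    """sigma: keep only the tuples satisfying the predicate."""
    return {t for t in relation if predicate(t)}

def project(relation, rel_schema, *cols):
    """pi: keep only the named columns (duplicates collapse, as in sets)."""
    idx = [rel_schema.index(c) for c in cols]
    return {tuple(t[i] for i in idx) for t in relation}

if __name__ == "__main__":
    sales = select(employees, lambda t: t[2] == "Sales")
    print(project(sales, schema, "name"))
```

The duplicate-collapsing behaviour of `project` is exactly the set semantics that distinguishes the pure relational model from SQL's bag semantics.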
The other database models developed similarly in stages. Peter Chen came up with the Entity-Relationship model in 1976. This was followed in 1985 by the development of Object-Oriented Database Models. In the 1990s, an important development occurred: the addition of object orientation to relational models. This time frame also witnessed the addition of more application areas such as OLAP, data warehouses, the web, enterprise resource planning and the internet. The year 1991 saw the creation of Microsoft Ships Acce ...
TRANSFORMATION RULES FOR BUILDING OWL ONTOLOGIES FROM RELATIONAL DATABASEScscpconf
This document proposes transformation rules for building OWL ontologies from relational databases. It begins by classifying database tables into six categories based on their attributes and relationships. Transformation rules are then applied to each category to map the database schema into ontological components. The rules cover various database modeling constructs such as one-to-many relationships, simple and multiple inheritance, many-to-many relationships with and without attributes, and n-ary relationships. Additionally, the proposed approach analyzes stored data to detect disjointness and totalness constraints in class hierarchies and calculate participation levels in n-ary relations. The rules aim to generate richer ontologies than existing methods by handling more complex database cases and incorporating additional semantic information from data analysis.
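One of the simpler rules in such a family can be sketched as follows: an ordinary table becomes a class, non-key columns become datatype properties, and a foreign key becomes an object property. The schema-description format and axiom strings below are assumptions for illustration, not the paper's actual rule syntax.

```python
# Hedged sketch of a single RDB-to-OWL transformation rule: table ->
# owl:Class, non-key column -> datatype property, foreign key -> object
# property. The dict-based schema format is invented for illustration.

def table_to_owl(table):
    axioms = [f"Class: {table['name']}"]
    for col, sqltype in table["columns"].items():
        if col in table.get("foreign_keys", {}):
            target = table["foreign_keys"][col]
            axioms.append(
                f"ObjectProperty: has{target}  Domain: {table['name']}  Range: {target}")
        elif col != table["primary_key"]:
            axioms.append(
                f"DataProperty: {col}  Domain: {table['name']}  Range: xsd:{sqltype}")
    return axioms

if __name__ == "__main__":
    emp = {"name": "Employee",
           "primary_key": "id",
           "columns": {"id": "integer", "name": "string", "dept_id": "integer"},
           "foreign_keys": {"dept_id": "Department"}}
    print("\n".join(table_to_owl(emp)))
```

The paper's richer cases (inheritance, many-to-many with attributes, n-ary relations) would each add further rules on top of this base case.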
Query Optimization Techniques in Graph Databasesijdms
Graph databases (GDB) have recently been arisen to overcome the limits of traditional databases for
storing and managing data with graph-like structure. Today, they represent a requirementfor many
applications that manage graph-like data,like social networks.Most of the techniques, applied to optimize
queries in graph databases, have been used in traditional databases, distribution systems,… or they are
inspired from graph theory. However, their reuse in graph databases should take care of the main
characteristics of graph databases, such as dynamic structure, highly interconnected data, and ability to
efficiently access data relationships. In this paper, we survey the query optimization techniques in graph
databases. In particular,we focus on the features they have in
OUTCOME ANALYSIS IN ACADEMIC INSTITUTIONS USING NEO4Jijcsity
Databases are an integral part of a computing system, and users heavily rely on the services they provide. When we interact with a computing system, we expect that data will be stored for future use, that it can be looked up quickly, and that we can perform complex queries against the data stored in the database. Many different database types are available, such as relational databases, object databases, key-value databases, graph databases, and RDF databases. Each type of database provides unique qualities that have applications in certain domains. Our work aims to investigate and compare the performance and scalability of relational databases with graph databases in terms of handling multilevel queries, such as finding the impact of a particular subject on the working area of graduated students. MySQL was chosen as the relational database and Neo4j as the graph database.
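The kind of multilevel query being benchmarked amounts to a bounded graph traversal, which a graph store like Neo4j answers natively while SQL needs one self-join per level. A minimal sketch, with an invented subject-batch-student edge list:

```python
from collections import deque

# Hedged sketch of a multilevel query as a bounded breadth-first
# traversal, the operation a graph database performs natively.
# The tiny subject -> batch -> student edge list is invented.

edges = {
    "DBMS": ["batch2019", "batch2020"],
    "batch2019": ["Asha", "Ravi"],
    "batch2020": ["Meera"],
}

def reachable(start, depth):
    """Return the set of nodes exactly `depth` hops from `start`."""
    found, frontier = set(), deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            found.add(node)          # reached the target level
            continue
        for nxt in edges.get(node, []):
            frontier.append((nxt, d + 1))
    return found

if __name__ == "__main__":
    print(sorted(reachable("DBMS", 2)))   # students two hops from the subject
```

In Cypher this is a single variable-length pattern; the equivalent MySQL query needs a join per hop, which is where the scalability gap the study measures comes from.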
Data Linkage is an important step that can provide valuable insights for evidence-based decision making, especially for crucial events. Performing sensible queries across heterogeneous databases containing millions of records is a complex task that requires a complete understanding of each contributing database's schema to define the structure of its information. The key aim is to approximate the structure and content of the induced data into a concise synopsis in order to extract and link meaningful data-driven facts. We identify four major research issues in Data Linkage: associated costs in pairwise matching, record matching overheads, restrictions on the semantic flow of information, and single-order classification limitations. In this paper, we give a literature review of research in Data Linkage. The purpose of this review is to establish a basic understanding of Data Linkage and to discuss the background of the Data Linkage research domain. In particular, we focus on the literature related to recent advancements in Approximate Matching algorithms at the Attribute Level and Structure Level; their efficiency, functionality and limitations are critically analysed, and open problems are exposed.
Student POST Database processing models showcase the logical s.docxorlandov3
Student POST:
Database processing models showcase the logical structure of a database. The most commonly used model is the relational database model, which sorts the data into tables consisting of rows and columns. The columns hold the attributes of the entity, and the rows hold the data of a particular instance of the entity. The major advantage of the relational model is that its tabular form makes it easier for users to understand, manage and work with the data. With the primary key and foreign key concepts, the data can be uniquely identified, stored in different entities, and retrieved effectively through the relationships. Another advantage is that SQL can be used to work with the data; it is simple to understand and the most widely used query language. A disadvantage of the relational model is the comparatively high financial cost: specific software needs to be in place, and regular maintenance requires highly skilled manpower. The complexity of the database also increases as the volume of data keeps growing. There is also a limitation on the length of fields stored as different data types in the relational model (Joseph & Paul, 2009).
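The primary-key/foreign-key retrieval described in the post can be sketched with two in-memory tables; the data and the dict-based layout are invented for illustration:

```python
# Hedged sketch of primary-key/foreign-key retrieval: rows live in
# separate entities and are recombined through the key relationship.
# The tables and data are invented for illustration.

departments = {10: {"name": "Sales"}, 20: {"name": "HR"}}   # pk: dept_id
employees = [
    {"emp_id": 1, "name": "Alice", "dept_id": 10},          # fk: dept_id
    {"emp_id": 2, "name": "Bob", "dept_id": 20},
]

def join_on_dept(emps, depts):
    """Rough equivalent of:
    SELECT e.name, d.name FROM employees e JOIN departments d
      ON e.dept_id = d.dept_id;"""
    return [(e["name"], depts[e["dept_id"]]["name"]) for e in emps]

if __name__ == "__main__":
    print(join_on_dept(employees, departments))
```

The foreign key `dept_id` is what lets the data stay normalized in separate entities yet be retrieved together, which is the advantage the post describes.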
The other processing model is the object-oriented model, which depicts a database as a collection of objects. The advantage of this model is that it is suited to working with complex data sets through the use of object IDs and object-oriented programming. Its disadvantage is that object databases are not commonly used, and their complexity can hamper database performance. Another type of database model is the Entity-Relationship model, which is mostly used for the conceptual design of a database. It pictures the entities, the attributes that fall within the domain of each entity, and the cardinality of the relationships between them. Its advantage is that the E-R diagram is easily understandable by users at first glance, so they can work with the data effectively in no time and can point out discrepancies in the data. Another advantage is that it can easily be converted to other models if required by the business. The disadvantage of the Entity-Relationship model is that industry-standard notations for the diagram are not defined, which can create confusion for users; this model is only suitable for high-level database design (S.J.D., 2020).
2Nd Student POST :
Database models or commonly referred to as schemas help represent the structure of a database and its format which is run by a DBMS. Database model uses vary depending on user specifications.
Types of database models
1.
Network model
The network model uses a structure similar to that of the hierarchical model, but it permits a record to have multiple parents rather than a strictly tree-like structure. This model emphasizes two basic concepts: records and sets. Records hold the data, and sets define the relationships between them, allowing many-to-many relationships to be represented.
This document discusses challenging issues and similarity measures for web document clustering. It begins with an introduction to text mining and document clustering. Some key challenges discussed include ambiguity in natural language, efficiently measuring semantic similarity between words, and cluster validity. Various string-based, term-based, and corpus-based similarity measures are then described that can be used for document clustering, including Jaro-Winkler distance, cosine similarity, latent semantic analysis, and pointwise mutual information. The conclusion states that accurate clustering requires a precise definition of similarity between document pairs.
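One of the term-based measures listed, cosine similarity, can be sketched over raw term-frequency vectors. This minimal version (no idf weighting or stemming, with toy documents invented for illustration) shows the core computation:

```python
import math
from collections import Counter

# Hedged sketch of cosine similarity over raw term-frequency vectors,
# one of the term-based measures the survey lists. No idf weighting or
# stemming; the toy documents are invented for illustration.

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)                 # shared-term weight
    na = math.sqrt(sum(v * v for v in va.values()))      # vector norms
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

if __name__ == "__main__":
    print(round(cosine("web document clustering",
                       "document clustering survey"), 3))
```

A clustering pipeline would apply such a measure pairwise and feed the resulting similarity matrix into the chosen clustering algorithm, which is why the survey stresses that cluster quality depends on a precise similarity definition.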
The aim of this paper is to evaluate, through indexing techniques, the performance of Neo4j and OrientDB, both graph database technologies, and to identify the strengths and weaknesses of each technology as a candidate storage mechanism for a graph structure. An index is a data structure that makes searching for a specific node faster in graph databases. The underlying data structure is usually a B-tree, but it can also be a hash table or some other structure. The pivotal point of having an index is to speed up search queries, primarily by reducing the number of nodes in a graph or table that must be examined. Graphs and graph databases are commonly associated with social networking or "graph search" style recommendations; these technologies are a core platform for Internet giants such as Hi5, Facebook, Google, Badoo, Twitter and LinkedIn. The key to understanding graph database systems, in the social networking context, is that they give equal prominence to storing both the data (users, favorites) and the relationships between them (who liked what, who 'follows' whom, which post was liked the most, what is the shortest path to 'reach' someone). In a suitable application case study, a Twitter social network of almost 5,000 nodes was imported into local servers (Neo4j and OrientDB), and the node with the searched data was retrieved first without an index (full scan) and then with an index, in order to compare the response time (statement query time) of the two graph databases and find out which of them performs better (in speed of data or information retrieval) and in which case. The main results are presented in Section 6.
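The full-scan-versus-index comparison at the heart of the study can be mimicked in miniature; here a sorted list with binary search stands in for the B-tree index, and the ~5,000 synthetic node names echo the case-study size:

```python
import bisect

# Hedged sketch of full scan vs. indexed lookup. A sorted list with
# binary search stands in for the B-tree index; the ~5,000 synthetic
# node names echo the case-study size, not its real data.

nodes = [f"user{i:05d}" for i in range(5000)]

def full_scan(target):
    """O(n): examine nodes one by one; returns number of nodes examined."""
    for steps, n in enumerate(nodes, 1):
        if n == target:
            return steps
    return -1

def indexed_lookup(target):
    """O(log n): binary search over the sorted 'index'; returns position."""
    i = bisect.bisect_left(nodes, target)
    return i if i < len(nodes) and nodes[i] == target else -1

if __name__ == "__main__":
    # Worst case for the scan: the last node in insertion order.
    print(full_scan("user04999"), indexed_lookup("user04999"))
```

The scan touches all 5,000 nodes while the binary search needs about log2(5000) ≈ 13 comparisons, which is the qualitative gap the paper's query-time measurements quantify.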
Implementation of Matching Tree Technique for Online Record LinkageIOSR Journals
This document discusses the implementation of a matching tree technique for online record linkage. It begins with an introduction to the record linkage problem and issues that arise when linking records across distributed, heterogeneous databases. It then reviews related work on record linkage techniques. The objective is to develop a matching tree approach to reduce the communication overhead of online record linkage while providing accurate matching decisions. The document outlines the proposed technique and discusses how it was implemented and evaluated using real and synthetic databases.
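The attribute-level approximate matching that record linkage depends on can be sketched as follows; `difflib`'s similarity ratio stands in for the paper's actual matching-tree technique, and the records and threshold are invented for illustration:

```python
from difflib import SequenceMatcher

# Hedged sketch of attribute-level approximate matching for record
# linkage. difflib's ratio stands in for the paper's matching-tree
# technique; the records and the 0.8 threshold are invented.

def attribute_similarity(a: str, b: str) -> float:
    """Similarity in [0, 1] between two attribute values."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_match(rec1, rec2, threshold=0.8):
    """Average per-attribute similarity against a decision threshold."""
    sims = [attribute_similarity(rec1[k], rec2[k]) for k in rec1]
    return sum(sims) / len(sims) >= threshold

if __name__ == "__main__":
    r1 = {"name": "Jon Smith", "city": "New York"}
    r2 = {"name": "John Smith", "city": "New York"}
    print(is_match(r1, r2))
```

The matching-tree idea in the paper goes further by ordering attribute comparisons so that a decision can often be reached before all attributes are fetched, reducing communication overhead.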
A consistent and efficient graphical User Interface Design and Querying Organ...CSCJournals
We propose a software layer called GUEDOS-DB on top of an Object-Relational Database Management System (ORDBMS). In this work we apply it to molecular biology, more precisely to complete organelle genomes. We aim to offer biologists the possibility of accessing, in a unified way, information spread among heterogeneous genome databanks. The goal of this paper is, first, to present a visual schema graph through a number of illustrative examples. The human-computer interaction technique adopted for this visual designing and querying makes it very easy for biologists to formulate database queries, compared with a linear textual query representation.
The document discusses database system architecture and data models. It introduces the three schema architecture which separates the conceptual, logical and internal schemas. This provides logical data independence where the conceptual schema can change without affecting external schemas or applications. It also discusses various data models like hierarchical, network, relational and object-oriented models. Key aspects of each model like structure, relationships and operations are summarized.
The document describes a new graph-oriented database called the sones GraphDB. It enables efficient storage, management, and analysis of complex, highly interconnected data. Unlike relational databases, it can directly link different types of data without additional constructs. The database combines a high-performance graph-oriented data management system with an object-oriented storage solution to allow flexible, real-time analysis of structured, semi-structured, and unstructured data.
Similar to Semantic Conflicts and Solutions in Integration of Fuzzy Relational Databases (20)
Due to the availability of the internet and the evolution of embedded devices, the Internet of Things can contribute usefully to the energy domain. The Internet of Things (IoT) will deliver a smarter grid, enabling more information and connectivity throughout the infrastructure and into homes. Through the IoT, consumers, manufacturers and utility providers will discover new ways to manage devices and ultimately conserve resources and save money, using smart meters, home gateways, smart plugs and connected appliances. In the future smart home, various devices will be able to measure and share their energy consumption and actively participate in house-wide or building-wide energy management systems. This paper discusses the different approaches being taken worldwide to connect the smart grid. Full system solutions can be developed by combining hardware and software to address some of the challenges in building a smarter and more connected grid.
A Survey Report on : Security & Challenges in Internet of Thingsijsrd.com
In the era of computing technology, Internet of Things (IoT) devices are now popular in every domain, such as e-governance, e-health, e-home, e-commerce, and e-trafficking. IoT is spreading from small to large applications in fields like smart cities, smart grids, and smart transportation. On the one hand, IoT provides facilities and services for society; on the other hand, IoT security is a crucial issue. IoT security is the area concerned with securing the connected devices and networks of the IoT. IoT is a vast area, with usability, performance, security, and reliability as its major challenges. The growth of the IoT is increasing exponentially, driven by market pressures, which proportionally increases the security threats involved. The relationship between security and the billions of devices connecting to the Internet cannot be described with existing mathematical methods. In this paper, we explore the opportunities possible in the IoT along with the security threats and challenges associated with it.
In today's emerging Internet world, everything is expected to be connected with the help of billions of smart devices. Connecting all the devices used in our day-to-day life makes life easier and trouble-free. We live in a world of smart phones, smart cars, smart gadgets, smart homes and smart cities. Different institutes and researchers are working to create this smart world for us, but the real question we need to emphasize is how to make dumb devices talk despite uncommon hardware and communication technologies, and what kind of mechanism to use so that various protocols work with less human interaction. The purpose of this paper is to identify the key application areas of IoT and a platform on which various devices with different mechanisms and protocols can communicate through an integrated architecture.
Study on Issues in Managing and Protecting Data of IOTijsrd.com
This paper discusses a variety of issues in preserving and managing data produced by the IoT. Every second, large amounts of data are added or updated in IoT databases across heterogeneous environments. Each phase of data processing for IoT data is exacting: storing data, querying, indexing, transaction management and failure handling. We also refer to the problem of data integration and protection, as data needs to fit a single layout and travel securely as it arrives in the pool from diversified sources in different structures. Finally, we present a standardized pathway to manage and defend data in a consistent manner.
Interactive Technologies for Improving Quality of Education to Build Collabor...ijsrd.com
Today with advancement in Information Communication Technology (ICT) the way the education is being delivered is seeing a paradigm shift from boring classroom lectures to interactive applications such as 2-D and 3-D learning content, animations, live videos, response systems, interactive panels, education games, virtual laboratories and collaborative research (data gathering and analysis) etc. Engineering is emerging with more innovative solutions in the field of education and bringing out their innovative products to improve education delivery. The academic institutes which were once hesitant to use such technology are now looking forward to such innovations. They are adopting the new ways as they are realizing the vast benefits of using such methods and technology. The benefits are better comprehensibility, improved learning efficiency of students, and access to vast knowledge resources, geographical reach, quick feedback, accountability and quality research. This paper focuses on how engineering can leverage the latest technology and build a collaborative learning environment which can then be integrated with the national e-learning grid.
Internet of Things - Paradigm Shift of Future Internet Application for Specia...ijsrd.com
More than 15% of the world's people live with a disability, including children below the age of 10. Due to the lack of independent support services, specially abled people rely heavily on others for their basic needs, which excludes them from being financially and socially active. The Internet of Things (IoT) can provide a support system and a better quality of life, as well as participation in routine day-to-day life. For this purpose, future solutions to current problems are introduced in this paper. Daunting challenges are identified as future research, and a glimpse of the IoT for specially abled persons is given.
A Study of the Adverse Effects of IoT on Student's Lifeijsrd.com
The Internet of Things (IoT) is a powerful invention, and if used in a positive direction the internet can prove to be very productive. But nowadays, due to social networking sites such as Facebook, WhatsApp, Twitter, Hike, etc., the internet is producing adverse effects on student life, especially for students studying at the college level. As is rightly said, something that has positive effects also has negative effects on the other hand. In this article, we discuss some adverse effects of IoT on students' lives.
Pedagogy for Effective use of ICT in English Language Learningijsrd.com
The use of information and communications technology (ICT) in education is a relatively new phenomenon and it has been the educational researchers' focus of attention for more than two decades. Educators and researchers examine the challenges of using ICT and think of new ways to integrate ICT into the curriculum. However, there are some barriers for the teachers that prevent them to use ICT in the classroom and develop supporting materials through ICT. The purpose of this study is to examine the high school English teachers’ perceptions of the factors discouraging teachers to use ICT in the classroom.
In recent years the use of private vehicles has made urban traffic more and more crowded. As a result, traffic has become one of the major problems in big cities all over the world. Traffic jams and accidents cause a huge waste of time, more fuel consumption and more pollution. Time is a very important parameter in daily life, and the main problem faced by people is real-time routing. Our solution, Virtual Eye, will provide current updates on the real-time scenario of a specific route. This paper presents a smart traffic navigation system based on the Internet of Things, featuring low cost, high compatibility, and ease of upgrade, to replace traditional traffic management systems; the proposed system can improve road traffic tremendously.
Ontological Model of Educational Programs in Computer Science (Bachelor and M...ijsrd.com
This work illustrates an ontological model of educational programs in computer science for bachelor and master degrees in Computer Science, and for the master educational program "Computer science as second competence" of the Tempus project PROMIS.
Understanding IoT Management for Smart Refrigeratorijsrd.com
1) The document discusses a proposed design for an intelligent refrigerator that leverages sensor technology and wireless communication to identify food items and order more through an internet connection when supplies are low.
2) Key aspects of the proposal include using RFID to uniquely identify each food item, storing item and usage data in an XML database, monitoring usage patterns to determine reordering needs, and executing orders through an online retailer using stored payment details.
3) Security and privacy concerns with such an internet-connected refrigerator are discussed, such as potential hacking of personal information or unauthorized device control. The proposal aims to minimize human interaction for household management.
DESIGN AND ANALYSIS OF DOUBLE WISHBONE SUSPENSION SYSTEM USING FINITE ELEMENT...ijsrd.com
Double wishbone designs allow the engineer to carefully control the motion of the wheel throughout suspension travel. 3-D model of the Lower Wishbone Arm is prepared by using CAD software for modal and stress analysis. The forces and moments are used as the boundary conditions for finite element model of the wishbone arm. By using these boundary conditions static analysis is carried out. Then making the load as a function of time; quasi-static analysis of the wishbone arm is carried out. A finite element based optimization is used to optimize the design of lower wishbone arm. Topology optimization and material optimization techniques are used to optimize lower wishbone arm design.
A Review: Microwave Energy for materials processingijsrd.com
Microwave energy is a latest largest growing technique for material processing. This paper presents a review of microwave technologies used for material processing and its use for industrial applications. Advantages in using microwave energy for processing material include rapid heating, high heating efficiency, heating uniformity and clean energy. The microwave heating has various characteristics and due to which it has been become popular for heating low temperature applications to high temperature applications. In recent years this novel technique has been successfully utilized for the processing of metallic materials. Many researchers have reported microwave energy for sintering, joining and cladding of metallic materials. The aim of this paper is to show the use of microwave energy not only for non-metallic materials but also the metallic materials. The ability to process metals with microwave could assist in the manufacturing of high performance metal parts desired in many industries, for example in automotive and aeronautical industries.
Web Usage Mining: A Survey on User's Navigation Pattern from Web Logsijsrd.com
With an expontial growth of World Wide Web, there are so many information overloaded and it became hard to find out data according to need. Web usage mining is a part of web mining, which deal with automatic discovery of user navigation pattern from web log. This paper presents an overview of web mining and also provide navigation pattern from classification and clustering algorithm for web usage mining. Web usage mining contain three important task namely data preprocessing, pattern discovery and pattern analysis based on discovered pattern. And also contain the comparative study of web mining techniques.
APPLICATION OF STATCOM to IMPROVED DYNAMIC PERFORMANCE OF POWER SYSTEMijsrd.com
Application of FACTS controller called Static Synchronous Compensator STATCOM to improve the performance of power grid with Wind Farms is investigated .The essential feature of the STATCOM is that it has the ability to absorb or inject fastly the reactive power with power grid . Therefore the voltage regulation of the power grid with STATCOM FACTS device is achieved. Moreover restoring the stability of the power system having wind farm after occurring severe disturbance such as faults or wind farm mechanical power variation is obtained with STATCOM controller . The dynamic model of the power system having wind farm controlled by proposed STATCOM is developed . To validate the powerful of the STATCOM FACTS controller, the studied power system is simulated and subjected to different severe disturbances. The results prove the effectiveness of the proposed STATCOM controller in terms of fast damping the power system oscillations and restoring the power system stability.
Making model of dual axis solar tracking with Maximum Power Point Trackingijsrd.com
Now a days solar harvesting is more popular. As the popularity become higher the material quality and solar tracking methods are more improved. There are several factors affecting the solar system. Major influence on solar cell, intensity of source radiation and storage techniques The materials used in solar cell manufacturing limit the efficiency of solar cell. This makes it particularly difficult to make considerable improvements in the performance of the cell, and hence restricts the efficiency of the overall collection process. Therefore, the most attainable maximum power point tracking method of improving the performance of solar power collection is to increase the mean intensity of radiation received from the source used. The purposed of tracking system controls elevation and orientation angles of solar panels such that the panels always maintain perpendicular to the sunlight. The measured variables of our automatic system were compared with those of a fixed angle PV system. As a result of the experiment, the voltage generated by the proposed tracking system has an overall of about 28.11% more than the fixed angle PV system. There are three major approaches for maximizing power extraction in medium and large scale systems. They are sun tracking, maximum power point (MPP) tracking or both.
A REVIEW PAPER ON PERFORMANCE AND EMISSION TEST OF 4 STROKE DIESEL ENGINE USI...ijsrd.com
This document summarizes a review paper on performance and emission testing of a 4-stroke diesel engine using ethanol-diesel blends at different pressures. The paper reviews several previous studies that tested blends of 5-30% ethanol mixed with diesel fuel. The studies found that a 10-20% ethanol blend can improve brake thermal efficiency compared to pure diesel, while also reducing emissions like NOx and smoke. Higher ethanol blends required advancing the injection timing to allow the engine to run. Ethanol-diesel blends were found to have lower density, viscosity, pour point and higher flash point compared to pure diesel. Overall, ethanol shows potential as a renewable fuel to improve engine performance and reduce emissions when blended with diesel
Study and Review on Various Current Comparatorsijsrd.com
This paper presents study and review on various current comparators. It also describes low voltage current comparator using flipped voltage follower (FVF) to obtain the single supply voltage. This circuit has short propagation delay and occupies a small chip area as compare to other current comparators. The results of this circuit has obtained using PSpice simulator for 0.18 μm CMOS technology and a comparison has been performed with its non FVF counterpart to contrast its effectiveness, simplicity, compactness and low power consumption.
Reducing Silicon Real Estate and Switching Activity Using Low Power Test Patt...ijsrd.com
Power dissipation is a challenging problem for today's system-on-chip design and test. This paper presents a novel architecture which generates the test patterns with reduced switching activities; it has the advantage of low test power and low hardware overhead. The proposed LP-TPG (test pattern generator) structure consists of modified low power linear feedback shift register (LP-LFSR), m-bit counter, gray counter, NOR-gate structure and XOR-array. The seed generated from LP-LFSR is EXCLUSIVE-OR ed with the data generated from gray code generator. The XOR result of the sequence is single input changing (SIC) sequence, in turn reduces the switching activity and so power dissipation will be very less. The proposed architecture is simulated using Modelsim and synthesized using Xilinx ISE9.2.The Xilinx chip scope tool will be used to test the logic running on FPGA.
Defending Reactive Jammers in WSN using a Trigger Identification Service.ijsrd.com
In the last decade, the greatest threat to the wireless sensor network has been Reactive Jamming Attack because it is difficult to be disclosed and defend as well as due to its mass destruction to legitimate sensor communications. As discussed above about the Reactive Jammers Nodes, a new scheme to deactivate them efficiently is by identifying all trigger nodes, where transmissions invoke the jammer nodes, which has been proposed and developed. Due to this identification mechanism, many existing reactive jamming defending schemes can be benefited. This Trigger Identification can also work as an application layer .In this paper, on one side we provide the several optimization problems to provide complete trigger identification service framework for unreliable wireless sensor networks and on the other side we also provide an improved algorithm with regard to two sophisticated jamming models, in order to enhance its robustness for various network scenarios.
LAND USE LAND COVER AND NDVI OF MIRZAPUR DISTRICT, UPRAHUL
This Dissertation explores the particular circumstances of Mirzapur, a region located in the
core of India. Mirzapur, with its varied terrains and abundant biodiversity, offers an optimal
environment for investigating the changes in vegetation cover dynamics. Our study utilizes
advanced technologies such as GIS (Geographic Information Systems) and Remote sensing to
analyze the transformations that have taken place over the course of a decade.
The complex relationship between human activities and the environment has been the focus
of extensive research and worry. As the global community grapples with swift urbanization,
population expansion, and economic progress, the effects on natural ecosystems are becoming
more evident. A crucial element of this impact is the alteration of vegetation cover, which plays a
significant role in maintaining the ecological equilibrium of our planet.Land serves as the foundation for all human activities and provides the necessary materials for
these activities. As the most crucial natural resource, its utilization by humans results in different
'Land uses,' which are determined by both human activities and the physical characteristics of the
land.
The utilization of land is impacted by human needs and environmental factors. In countries
like India, rapid population growth and the emphasis on extensive resource exploitation can lead
to significant land degradation, adversely affecting the region's land cover.
Therefore, human intervention has significantly influenced land use patterns over many
centuries, evolving its structure over time and space. In the present era, these changes have
accelerated due to factors such as agriculture and urbanization. Information regarding land use and
cover is essential for various planning and management tasks related to the Earth's surface,
providing crucial environmental data for scientific, resource management, policy purposes, and
diverse human activities.
Accurate understanding of land use and cover is imperative for the development planning
of any area. Consequently, a wide range of professionals, including earth system scientists, land
and water managers, and urban planners, are interested in obtaining data on land use and cover
changes, conversion trends, and other related patterns. The spatial dimensions of land use and
cover support policymakers and scientists in making well-informed decisions, as alterations in
these patterns indicate shifts in economic and social conditions. Monitoring such changes with the
help of Advanced technologies like Remote Sensing and Geographic Information Systems is
crucial for coordinated efforts across different administrative levels. Advanced technologies like
Remote Sensing and Geographic Information Systems
9
Changes in vegetation cover refer to variations in the distribution, composition, and overall
structure of plant communities across different temporal and spatial scales. These changes can
occur natural.
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. This Contains Custom methods, classes, constructors, packages, multithreading , try- catch block, finally block and more.
A review of the growth of the Israel Genealogy Research Association Database Collection for the last 12 months. Our collection is now passed the 3 million mark and still growing. See which archives have contributed the most. See the different types of records we have, and which years have had records added. You can also see what we have for the future.
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
Walmart Business+ and Spark Good for Nonprofits.pdfTechSoup
"Learn about all the ways Walmart supports nonprofit organizations.
You will hear from Liz Willett, the Head of Nonprofits, and hear about what Walmart is doing to help nonprofits, including Walmart Business and Spark Good. Walmart Business+ is a new offer for nonprofits that offers discounts and also streamlines nonprofits order and expense tracking, saving time and money.
The webinar may also give some examples on how nonprofits can best leverage Walmart Business+.
The event will cover the following::
Walmart Business + (https://business.walmart.com/plus) is a new shopping experience for nonprofits, schools, and local business customers that connects an exclusive online shopping experience to stores. Benefits include free delivery and shipping, a 'Spend Analytics” feature, special discounts, deals and tax-exempt shopping.
Special TechSoup offer for a free 180 days membership, and up to $150 in discounts on eligible orders.
Spark Good (walmart.com/sparkgood) is a charitable platform that enables nonprofits to receive donations directly from customers and associates.
Answers about how you can do more with Walmart!"
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
How to Fix the Import Error in the Odoo 17Celine George
An import error occurs when a program fails to import a module or library, disrupting its execution. In languages like Python, this issue arises when the specified module cannot be found or accessed, hindering the program's functionality. Resolving import errors is crucial for maintaining smooth software operation and uninterrupted development processes.
Reimagining Your Library Space: How to Increase the Vibes in Your Library No ...Diana Rendina
Librarians are leading the way in creating future-ready citizens – now we need to update our spaces to match. In this session, attendees will get inspiration for transforming their library spaces. You’ll learn how to survey students and patrons, create a focus group, and use design thinking to brainstorm ideas for your space. We’ll discuss budget friendly ways to change your space as well as how to find funding. No matter where you’re at, you’ll find ideas for reimagining your space in this session.
How to Setup Warehouse & Location in Odoo 17 InventoryCeline George
In this slide, we'll explore how to set up warehouses and locations in Odoo 17 Inventory. This will help us manage our stock effectively, track inventory levels, and streamline warehouse operations.
How to Setup Warehouse & Location in Odoo 17 Inventory
IJSRD - International Journal for Scientific Research & Development | Vol. 2, Issue 07, 2014 | ISSN (online): 2321-0613
All rights reserved by www.ijsrd.com 688
Semantic Conflicts and Solutions in Integration of Fuzzy Relational Databases
Sanjay Kumar, Dr. A. K. Sharma, Kamal Kant
Department of Computer Science & Engineering
Madan Mohan Malviya University of Technology, Gorakhpur-273010 (U.P.), India
Abstract— Database schema integration is an important discipline for constructing heterogeneous multidatabase systems. Fuzzy information has also been introduced into relational databases and has been extensively studied. However, the issues of integrating local fuzzy relations are rarely addressed. In this paper, we identify new types of conflicts that may occur in schemas and data due to the inclusion of fuzzy relational databases. We propose a methodology that resolves these new types of conflicts in a specific order so as to minimize the execution time of the integration process.
Key words: Multidatabase, Semantic conflicts, Methodology, Schema integration
I. INTRODUCTION
Database integration has been a major research area in
recent years. Many issues related to schema integration have
been extensively studied. While some technical problems
have been fully addressed, others still remain unsolved. Fuzzy database systems have the ability to represent and process uncertain and imprecise data. In this paper, we discuss the schema integration process and then investigate the problem of integrating fuzzy relational databases. We identify new types of conflicts that may occur in schemas and data due to the inclusion of fuzzy relational databases and propose a methodology to resolve these conflicts. The methodology has the following properties: (a) it puts the resolution of these new conflicts into the context of the resolution of other types of conflicts not caused by fuzzy databases; (b) it proposes a particular order in which these types of conflicts are resolved. To concentrate on the main issues, we consider only the outer-join integration operator, the most frequently used integration operator in multidatabase systems [9, 11].
Databases hold data that represent the properties of real-world objects. A set of real-world objects can be described by the constructs of a single data model and stored in one and only one database. Nevertheless, in reality, one can usually find two or more databases storing information about the same real-world objects. When two or more databases represent overlapping sets of real-world objects, there is a strong need to integrate these databases in order to support applications of cross-functional information systems. Among the issues in the database integration process, schema integration has probably received the most attention [6, 7, 12]. Many problems related to schema integration, such as name conflicts, structural conflicts, scale conflicts, and data inconsistency, have been studied. Parallel to the development of multidatabase systems, fuzzy database systems have also been making their way into mainstream database research in recent years [10].
As we know, information is often vague or ambiguous in real-world applications. In order to represent and process such imperfect information, fuzzy information has been introduced into relational databases. Besides, modeling fuzzy information in object-oriented databases and in conceptual data models such as ER, EER, and IFO has also received increasing attention. Therefore, integration of fuzzy component databases is essential for the applications of fuzzy databases and the development of integrated database systems [2]. An important aspect of database integration is the definition of a global schema that captures the description of the combined (or integrated) database. Schema integration is the process of merging the schemas of databases, and instance integration is the process of integrating the database instances [1]. In this paper, we identify the conflicts in fuzzy multidatabase systems and provide a methodology for resolving these conflicts in a specific order.
II. LITERATURE SURVEY
C. Batini and M. Lenzerini [3] describe the fundamental principle that a database allows a non-redundant, unified representation of all data managed in an organization. This is achieved only when methodologies are available to support integration across organizational and application boundaries. Methodologies for database design usually perform the design activity by separately producing several schemas, representing parts of the application, which are subsequently merged. Database schema integration is the activity of integrating the schemas of existing or proposed databases into a global, unified schema. The aim of their paper is to provide first a unifying framework for the problem of schema integration and then a comparative review of the work done thus far in this area. Such a framework, with the associated analysis of existing approaches, provides a basis for identifying the strengths and weaknesses of individual methodologies, as well as general guidelines for future improvements and extensions.
Y. Breitbart [6] proposes a new 5-layer model representing information richness, or expressivity, to assist in the integration of heterogeneous distributed database systems. The model has been used in an ESPRIT project (MIPS), which utilises an embedded KBS to assist in query reformulation and answer construction when accessing heterogeneous distributed information sources, and it has been shown to be useful. Access to a heterogeneous distributed collection of databases can be simplified by providing users with a logically integrated interface or global view. The paper identifies several kinds of structural and data inconsistencies that might exist. It describes a versatile view-definition facility for the functional data model and illustrates the use of this facility for resolving inconsistencies.
B.P. Buckles and F.E. Petry [10] introduced Fuzzy Relational Databases (FRDB) to overcome the inability of relational databases to model uncertain and incomplete data. The use of fuzzy sets and fuzzy logic to extend existing database models has been pursued since the 1980s. Early authors offered one of the first approaches to incorporating fuzzy logic into the ER model; their model allows fuzzy attributes in entities and relationships. Furthermore, the FRDB model was developed as a way to use the fuzzy EER model to design the database and to represent the modeled fuzzy knowledge in a relational database. A more complete survey of research in this area is available in the literature. Following these attempts, a new type of fuzzy SQL language was defined based on the FRDB model developed specifically for this purpose.
A. Chen [8] explains outer-join optimization, which is used in distributed relational multidatabase systems for integrating local schemas into a global schema. Queries against the global schema need to be modified, optimized, and decomposed into subqueries at local sites for processing. Since an outer-join combines local relations in different databases to form a global relation, it is expensive to process. Based on the structure of the query and the definition of the schemas, queries with outer-join, join, select, and project operations are optimized. Conditions under which an outer-join can be avoided or transformed into a one-sided outer-join are identified. By considering these conditions, the response time for query processing can be reduced.
Won Kim, I. Choi, S. Gala, and M. Scheevel [7] describe the objective of a multidatabase system: to provide a single uniform interface for accessing multiple independent databases managed by multiple independent, and possibly heterogeneous, database systems. One crucial element in the design of a multidatabase system is the design of a data definition language for specifying a schema that represents the integration of the schemas of multiple independent databases. The design of such a language in turn requires a comprehensive classification of the conflicts (i.e., discrepancies) among the schemas of the independent databases and the development of techniques for resolving (i.e., homogenizing) all of the conflicts in the classification.
III. FUZZY RELATIONAL DATABASE
The basic data values in a fuzzy relational database are the (conventional) crisp values and the fuzzy terms that represent uncertainty and imprecision. Inconsistency, imprecision, vagueness, uncertainty, and ambiguity are five basic kinds of imperfect information in database systems.
Inconsistency is a kind of semantic conflict, meaning that the same aspect of the real world is irreconcilably represented more than once in a database or in several different databases. For example, the age of George is stored as 34 and 37 simultaneously. Information inconsistency usually arises from information integration.
Intuitively, imprecision and vagueness are relevant to the content of an attribute value: a choice must be made from a given range (interval or set) of values, but we do not know exactly which one to choose at present. In general, vague information is represented by linguistic values. For example, the age of Michael is a set {18, 19, 20, 21}, a piece of imprecise information, and the age of John is the linguistic value "old", a piece of vague information.
Uncertainty is related to the degree of truth of an attribute value: we can apportion some, but not all, of our belief to a given value or a group of values. For example, the possibility that the age of Chris is 35 right now may be 98%. The random uncertainty described by probability theory is not considered here.
Ambiguity means that some elements of the model lack complete semantics, leading to several possible interpretations.
A fuzzy (sub)set F of a set U of crisp values is characterized by a membership function µF : U → [0, 1]. For each element e ∈ U, µF(e) is the degree to which e is a member of F. We say that e is in F only if µF(e) > 0. A membership function can be defined using a parameterized generic function whose curve has a trapezoidal shape, as shown in Figure 1, where the parameters a, b, c, and d are values in U(A) such that a ≤ b ≤ c ≤ d, and the interval [a, d] is the support of the membership function.
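The trapezoidal curve of Figure 1 (not reproduced here) corresponds to the standard trapezoidal membership function; the following is a minimal Python sketch, assuming the strict inequalities a < b and c < d so both slopes are well defined:

```python
def trapezoidal_mu(x, a, b, c, d):
    """Trapezoidal membership function with support [a, d] and core [b, c].

    Assumes a < b <= c < d so the two slopes are well defined.
    """
    if x < a or x > d:
        return 0.0                  # outside the support
    if x < b:
        return (x - a) / (b - a)    # rising edge
    if x <= c:
        return 1.0                  # plateau (core)
    return (d - x) / (d - c)        # falling edge
```

For instance, with support [20, 50] and core [30, 40], a value of 25 lies halfway up the rising edge and gets degree 0.5.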
IV. SCHEMA INTEGRATION AND CONFLICTS
There are several approaches to implementing schema integration in heterogeneous multidatabase systems. The first approach is to merge the individual schemas of the component databases into a single global conceptual schema by integrating their schemas [6]. This approach requires complete integration, i.e., all local schemas are mapped to the global schema. The second approach is to adopt a so-called federated database system [14]. Unlike the first approach, there is no global schema for all component databases in a federated database system; only a schema describing the data to be accessed by the application, called "a partial schema", is created over the local databases. This approach requires only partial integration. Notice that the target databases based on the global schema and the federated databases are physical databases, and there are solid mappings between component databases and target databases. Because a minor change in a component database can cause a large variation in the target databases, it is difficult to maintain such mappings.
Let r and s be component relations from different component databases and t1 and t2 be their tuples, called component tuples, respectively. If t1 and t2 describe the same real-world object, namely, they have the same attribute values on the common key, then t1 and t2 can be integrated to produce a single tuple t, called the target tuple, with the outer-join [8] operation after resolving the conflicts. According to the semantic relationship between t1[Ai] and t2[Aj], four important types of conflicts are generalized as follows [15].
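The outer-join integration step described above can be sketched as follows. This is an illustrative encoding, not from the paper: relations are lists of dicts, the key name and sample data are hypothetical, and conflicts are assumed to be already resolved before merging.

```python
def outer_join(r, s, key="ID"):
    """Outer-join two component relations (lists of dicts) on a common key.

    Attributes missing on one side appear as None (null) in the target
    tuple. Conflicts are assumed to be already resolved, so a later tuple
    with the same key simply overwrites earlier values.
    """
    attrs = {a for t in r + s for a in t}
    by_key = {}
    for t in r + s:
        merged = by_key.setdefault(t[key], {a: None for a in attrs})
        merged.update(t)
    return list(by_key.values())

# Component relations with different attribute sets (a missing-data conflict).
r = [{"ID": 1, "Name": "Vijay", "Age": 33}]
s = [{"ID": 1, "Name": "Vijay", "Major": "CS"},
     {"ID": 2, "Name": "Asha", "Major": "EE"}]
target = outer_join(r, s)
```

Tuples with the same key are fused into one target tuple, while tuples present in only one relation survive with nulls on the missing attributes, which is exactly the outer-join behaviour the paper relies on.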
- Naming conflicts. This type of conflict has two aspects: semantically related data items may be named differently, and semantically unrelated data items may be named identically.
- Data type conflicts. This case occurs when semantically related data items are represented in different data types.
- Data scaling conflicts. This case occurs when semantically related data items are represented in different databases using different units of measure.
- Missing data. This case occurs when the schemas of component databases have different attribute sets.
The conflict of missing data can be resolved by using the outer-union operation, with null values appearing in the target tuples. For the other conflicts, mappings of attribute values from the attributes of component tuples to the virtual attributes [2] of target tuples are necessary. According to the concrete conflicts, one-to-one, many-to-one, and one-to-many mappings can be identified. Naming conflicts and data type conflicts can be resolved with one-to-one mappings. Data scaling conflicts can be resolved with either a many-to-one mapping or a one-to-many mapping, depending on the actual situation. For the first two kinds of mapping, the result is still an atomic value of the virtual attribute. For the last kind, however, the result is a special value of the virtual attribute, the partial value, in which exactly one of the values is the true value [15].
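As an illustration of the mapping kinds just described (the attribute names, units, and value tables below are hypothetical, chosen only for the example), a many-to-one mapping can combine two component attributes into one virtual attribute, while a one-to-many mapping yields a partial value, a set of candidates of which exactly one is true:

```python
# Many-to-one mapping: two component attributes (feet, inches) map onto a
# single virtual attribute (height in centimetres).
def height_cm(feet, inches):
    return round((feet * 12 + inches) * 2.54, 1)

# One-to-many mapping: a coarse component value (a salary grade) maps onto
# a set of candidate values; the result is a partial value, of which
# exactly one element is the true value.
GRADE_TO_SALARY = {
    "junior": {30000, 35000},
    "senior": {50000, 55000, 60000},
}

def salary_partial(grade):
    return GRADE_TO_SALARY[grade]
```

A one-to-one mapping (e.g., renaming an attribute or casting its type) is the degenerate case where a single component value maps to a single virtual value.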
V. SEMANTIC CONFLICTS IN FUZZY MULTIDATABASE SYSTEMS
In this section, we investigate the conflicts that may occur in the schemas of fuzzy multidatabase systems. In the following discussion, let r and s be fuzzy component relations from different component databases and t1 and t2 be their tuples, called component tuples, respectively. Let r and s have a common key. Assume that there is no fuzzy value for the key, that t1 and t2 have the same key values, and that the attribute Mu is used to indicate the membership degree of tuples.
A. Membership Degree Conflicts:
Membership degree conflicts occur at the level of tuples
and can be classified into two classes:
1) Missing membership degree:
Of t1 and t2, one is associated with a membership degree
attribute, i.e., the tuple is fuzzy, while the other is not, i.e.,
the tuple is crisp.
2) Inconsistent membership degree:
That is, t1[Mu] ≠ t2[Mu]. This can occur even if the
two tuples have identical values on all other attributes.
Example: Consider the three relations Student,
Sincere_Student, and Smart_Student given in Table 1.
A conflict of missing membership degree exists between the
tuples of Student and Sincere_Student, as well as between
Student and Smart_Student, whereas a conflict of
inconsistent membership degree exists between the tuples of
Sincere_Student and Smart_Student.
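The two conflict classes above can be detected with a small helper. The sketch below assumes tuples are modelled as Python dicts in which the membership degree attribute "Mu" is simply absent from a crisp tuple; this representation is an assumption, not the paper's implementation.

```python
def membership_conflict(t1, t2):
    has1, has2 = "Mu" in t1, "Mu" in t2
    if has1 != has2:
        return "missing membership degree"       # one tuple fuzzy, one crisp
    if has1 and t1["Mu"] != t2["Mu"]:
        return "inconsistent membership degree"  # both fuzzy, degrees differ
    return None                                  # no membership conflict

t1 = {"ID": 1, "Name": "vijay"}             # crisp tuple
t2 = {"ID": 1, "Name": "vijay", "Mu": 0.8}  # fuzzy tuple
print(membership_conflict(t1, t2))
```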
B. Attribute value conflicts in identical attribute domains
Let Ai and Aj be attributes with the same domain in
relations r and s, respectively, and let t1[Ai] and t2[Aj] be
semantically related to each other.
Inconsistent crisp attribute values: t1[Ai] and t2[Aj]
are both crisp but t1[Ai] ≠ t2[Aj]. Example: the age
of "vijay" in relation r is "33", but in relation s it
is "43".
Inconsistent fuzzy attribute values: t1[Ai] and t2[Aj]
are both fuzzy but t1[Ai] ≠ t2[Aj]. Example: the age
of "vijay" in relation r is "0.7/22" but in relation s
it is "0.8/22".
Missing fuzzy attribute values: Of t1[Ai] and t2[Aj],
one is a fuzzy set while the other is crisp. Example:
the age of "vijay" in relation r is "22", but in
relation s it is "0.8/22".
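These three cases can be told apart mechanically. In the sketch below, a fuzzy value is modelled as a dict mapping elements to membership degrees, e.g. {22: 0.7} for "0.7/22", and any other value is crisp; this representation is an assumption for illustration.

```python
def classify_value_conflict(v1, v2):
    """Classify the conflict between two semantically related attribute
    values drawn from the same domain (assumed dict-based representation)."""
    fuzzy1, fuzzy2 = isinstance(v1, dict), isinstance(v2, dict)
    if fuzzy1 != fuzzy2:
        return "missing fuzzy attribute value"
    if v1 == v2:
        return None
    return ("inconsistent fuzzy attribute values" if fuzzy1
            else "inconsistent crisp attribute values")

print(classify_value_conflict(33, 43))
```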
C. Missing attributes:
Missing attributes mean that relations r and s have
different attribute sets. In other words, an attribute in one
component relation is not semantically related to any
attribute in the other component relation.
As an example, let r be a relation on the schema
{ID, Name, Age} and s a relation on the schema
{ID, Name, Major}. Attribute "Age" in r is a missing
attribute of relation s, and attribute "Major" in s is a missing
attribute of relation r.
D. Attribute Name Conflicts
Attribute name conflicts are naming conflicts. Let Ai
and Aj be attributes in r and s, respectively. This type of
conflict has two aspects:
Semantically related attributes are named
differently, i.e., synonyms.
Semantically unrelated attributes are named
identically, i.e., homonyms.
It should be noted that the conflicts of missing
attributes and attribute names are not affected by whether
the component relations are fuzzy.
E. Attribute domain conflicts
Data type conflicts and data scaling conflicts, mentioned
above, are caused by inconsistent attribute domains. When
there are fuzzy attribute values in component tuples, the
attribute domain conflicts become more complicated. Note
that there is no attribute domain conflict in membership
degree attributes.
[Semantic Conflicts and Solutions in Integration of Fuzzy Relational Databases, IJSRD Vol. 2, Issue 07, 2014, www.ijsrd.com]
Let Ai and Aj be attributes with different domains
in r and s, respectively, and let t1[Ai] and t2[Aj] be
semantically related to each other.
Data format conflicts. Although Ai and Aj have
the same data type and data unit, they have
different expressive formats. For example, t1[Ai]
and t2[Aj] both represent dates, but t1[Ai] is in the
form "22/05/98" while t2[Aj] is "05/22/98".
Data unit conflicts. Attributes Ai and Aj have the
same data type, but their units of measure are
different. For example, t1[Ai] and t2[Aj] are both
real data, but t1[Ai] is "22.4 kilogram" while t2[Aj]
is "22.9 pound".
Data type conflicts. Attributes Ai and Aj have
different data types. For example, we may have t1[Ai]
= 22 and t2[Aj] = 21.9, which are integer and real,
respectively.
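Resolving such domain conflicts amounts to mapping each local value into a chosen global domain. A minimal sketch, assuming kilograms and ISO dates as the global choices; the conversion factor and canonical formats are illustrative assumptions, not the paper's implementation:

```python
from datetime import datetime

def to_global_weight_kg(value, unit):
    """Data unit conflict: convert pounds or kilograms to kilograms."""
    factors = {"kilogram": 1.0, "pound": 0.45359237}
    return value * factors[unit]

def to_global_date(text, fmt):
    """Data format conflict: parse a local date format into ISO form."""
    return datetime.strptime(text, fmt).date().isoformat()

print(to_global_weight_kg(22.9, "pound"))      # pounds expressed in kg
print(to_global_date("22/05/98", "%d/%m/%y"))  # "22/05/98" -> "1998-05-22"
```

The inverse mappings (kg back to pounds, ISO back to the local format) would be defined analogously, which is what makes the later reconciliation of values possible.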
VI. PROPOSED METHODOLOGY
We now present a methodology for integrating two
fuzzy relations that resolves the various types of conflicts.
The methodology has the following properties: (1) it places
the resolution of fuzzy-database conflicts into the context of
resolving other types of conflicts not caused by fuzzy
databases; (2) it proposes a particular order in which these
types of conflicts are resolved. The presented integration
methodology increases the performance of integrating
component fuzzy relations.
A. Integration Methodology:
1) Identify and resolve any conflicts between attribute
names (that is, synonyms and homonyms).
2) Resolve any missing membership degree attribute
conflicts.
3) For each pair of corresponding local attributes, resolve
the attribute domain inconsistency in the following steps:
Create a global universe by resolving the
following types of domain conflicts between
the two attributes:
Data type conflicts.
Data unit conflicts.
For each of the two local attributes, determine a
mapping and an inverse mapping between its values and
those of the global attribute.
4) Integrate the data from the two local relations by using
the outer-join operator. All data inconsistencies will be
resolved in this step.
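The four steps above can be sketched end-to-end. Relations are modelled here as lists of dict tuples keyed by "ID"; the dict-based representation, the helper names, and the conflict-handling details are illustrative assumptions (the paper's Java implementation is not shown).

```python
def integrate(r, s, synonyms, to_global):
    # Step 1: resolve attribute name conflicts (rename s's attributes).
    s = [{synonyms.get(a, a): v for a, v in t.items()} for t in s]
    # Step 2: resolve missing membership degree (crisp tuples get Mu = 1).
    r = [dict(t, Mu=t.get("Mu", 1.0)) for t in r]
    s = [dict(t, Mu=t.get("Mu", 1.0)) for t in s]
    # Step 3: map local attribute values into the global domains.
    conv = lambda t: {a: v if a in ("ID", "Mu") else to_global(a, v)
                      for a, v in t.items()}
    r, s = [conv(t) for t in r], [conv(t) for t in s]
    # Step 4: outer join on the key, resolving data inconsistencies.
    out = {t["ID"]: dict(t) for t in r}
    for t in s:
        g = out.setdefault(t["ID"], dict(t))
        for a, v in t.items():
            if a == "Mu":
                g["Mu"] = max(g["Mu"], v)
            elif a in g and g[a] != v:
                g[a] = [g[a], v]   # keep both candidates as a partial value
            else:
                g[a] = v
    return list(out.values())

r = [{"ID": 1, "Name": "vijay", "Age": 33}]
s = [{"ID": 1, "Name": "vijay", "Years": 43, "Mu": 0.8}]
print(integrate(r, s, {"Years": "Age"}, lambda a, v: v))
```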
The basis for resolving the various conflicts in the
given order is that the identification of conflicts in a step
usually depends on the resolution of the conflicts in the
previous step. For example, resolving attribute name
conflicts first allows one to identify any domain conflicts
between local attributes that resolve to the same name; those
domain (universe) conflicts must in turn be resolved before
data inconsistencies can be detected.
VII. RESOLUTIONS OF SEMANTIC CONFLICTS
Among the above-mentioned conflicts, some of them,
including missing attributes, attribute name conflicts,
inconsistent crisp attribute values on identical attribute
domains and inconsistent crisp attribute values on different
attribute domains, have been investigated and resolved [2,
13]. In this section, we focus on some new types of conflicts
in connection to fuzzy databases.
Let r and s be fuzzy component relations from
different component databases. Let t1 and t2 be component
tuples belonging to r and s, respectively, and t1 and t2 have
the same crisp key values, namely, they describe the
identical object in the real world.
Now, we integrate t1 and t2 to form a tuple t. It is
clear that t has the same key and key values as t1 (or t2). The
other attribute values of t are formed after resolving the
conflicts between semantically related attribute values.
Here, we assume that there are no attribute name conflicts in
r and s, because they can be resolved beforehand.
A. Resolving Membership Degree Conflicts
Missing Membership Degree: Missing membership degree
conflicts can be resolved by giving the global relation the
Mu attribute and assigning a membership degree of 1 to each
tuple of the local crisp relation. Let t1 and t2 be tuples in
r(K, C) and s(K, C, Mu), respectively, where K stands for a
key, C represents a set of common attributes, and Mu is a
membership degree attribute.
Let t1[K] and t2[K] be crisp with t1[K] = t2[K].
Then t1 and t2 denote the same real-world object.
Assume that t1[C] and t2[C] are crisp or fuzzy
simultaneously; if they are fuzzy, they must be
equivalent to each other. Clearly there is a conflict
of missing membership degree between t1 and t2.
For the tuple t formed by integrating t1 and t2, the
schema is {K, C, Mu}, with t[K] = t1[K] = t2[K],
t[C] = t1[C] = t2[C], and t[Mu] = max(1, t2[Mu]) = 1.
Inconsistent Membership Degree: An inconsistent
membership degree conflict can be resolved by
giving the global relation's Mu attribute the
maximum of the membership degree attributes of
both relations.
Consider two relations r and s under the relation
schemas R(K, C, Mu) and S(K, C, Mu),
respectively, where the attributes K, C, and Mu are
the same as defined above.
Let t1[K] and t2[K] be crisp with t1[K] = t2[K];
then t1 and t2 denote the same real-world object.
The integrated tuple t is under the relation schema
G(K, C, Mu) such that t[K] = t1[K] = t2[K],
t[C] = t1[C] = t2[C], and t[Mu] = max(t1[Mu], t2[Mu]).
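Both membership degree conflicts collapse into one rule: a crisp tuple is treated as having degree 1, and the integrated degree is the maximum of the two. A minimal sketch, again assuming tuples are dicts in which "Mu" is absent from crisp tuples:

```python
def resolve_mu(t1, t2):
    """Integrated membership degree for two tuples describing the same
    real-world object (dict representation is an assumption)."""
    return max(t1.get("Mu", 1.0), t2.get("Mu", 1.0))

print(resolve_mu({"ID": 1}, {"ID": 1, "Mu": 0.8}))  # missing degree -> 1.0
print(resolve_mu({"Mu": 0.7}, {"Mu": 0.8}))         # inconsistent -> 0.8
```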
B. Resolving Attribute Value Conflicts In Identical
Attribute Domains
Let t1 and t2 be component tuples in r(K, C) and s(K, C),
respectively, where K is the key and C is a set of common
attributes. To simplify the discussion, membership degree
attributes are not considered here; if they are included, the
potential conflicts can be resolved by applying the methods
above.
Assume that t1[K] and t2[K] are crisp and t1[K] =
t2[K]. The schema of the integrated target relation
is then {K, C}, with t[K] = t1[K] = t2[K].
Let A ∈ C. Then:
When both local attributes are crisp, the global
attribute shall also be crisp. Let t1[A] and t2[A]
both be crisp. If t1[A] = t2[A], then after
integration t[A] = t1[A] = t2[A]. But if
t1[A] ≠ t2[A], a conflict of inconsistent crisp
attribute values occurs, and the partial value
t[A] = [t1[A], t2[A]] is adopted.
When one local attribute is crisp and the other is
fuzzy, the global attribute shall be fuzzy (its
membership function is adopted by the global
attribute). Let t1[A] be crisp and t2[A] be fuzzy;
then a conflict of missing fuzzy attribute value
occurs, and t[A] = t2[A].
When both local attributes t1[A] and t2[A] are
fuzzy and t1[A] ≠ t2[A], a conflict of inconsistent
fuzzy attribute values occurs, and t[A] = t1[A] ∪ t2[A],
i.e., the fuzzy union, is adopted for the global
attribute.
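The three cases above can be sketched as one resolution function. Fuzzy values are again modelled as dicts {element: degree}, and the fuzzy union is taken as the element-wise maximum of degrees; both choices are assumptions for illustration.

```python
def resolve_value(v1, v2):
    fuzzy1, fuzzy2 = isinstance(v1, dict), isinstance(v2, dict)
    if not fuzzy1 and not fuzzy2:
        # crisp vs crisp: keep if equal, otherwise form a partial value
        return v1 if v1 == v2 else [v1, v2]
    if fuzzy1 != fuzzy2:
        # crisp vs fuzzy: missing fuzzy attribute value, keep the fuzzy one
        return v1 if fuzzy1 else v2
    # fuzzy vs fuzzy: fuzzy union (max degree per element)
    return {e: max(v1.get(e, 0.0), v2.get(e, 0.0))
            for e in v1.keys() | v2.keys()}

print(resolve_value({22: 0.7}, {22: 0.8}))  # fuzzy union
```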
C. Resolving Attribute Value Conflicts In Inconsistent
Attribute Domains
In order to resolve attribute value conflicts in inconsistent
attribute domains, the conflicts of attribute domains should
be resolved first. For this purpose, the component relations
are converted into other relations, called virtual component
relations. The attributes in virtual component relations are
called virtual attributes [9, 23]. Note that there are no
attribute domain conflicts in virtual component relations,
because they have been resolved by mapping each attribute
involved in a domain conflict in an original component
relation to the corresponding virtual attribute. Clearly, such
mappings must also be done between a tuple in an original
component relation and the corresponding tuple in the
virtual component relation, called a virtual tuple, or more
precisely between an attribute value and a value of the
corresponding virtual attribute.
Instead of integrating original component relations,
their virtual component relations are integrated to form the
target relation.
According to the different types of attribute domain
conflicts, the above-mentioned mappings can be classified
into one-to-one, many-to-one, and one-to-many mappings.
A one-to-one mapping produces a certain result when
mapping one data item: a crisp attribute value in the original
component relation is mapped to another crisp value of the
corresponding virtual attribute, and a fuzzy attribute value
in the original component relation is mapped to another
fuzzy value of the corresponding virtual attribute.
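A one-to-one mapping into a virtual attribute can be sketched as follows: a crisp value maps to one crisp value, while a fuzzy value (dict {element: degree}) maps element-wise with its membership degrees unchanged. The helper name and the kilogram-to-pound conversion are illustrative assumptions.

```python
def map_to_virtual(value, convert):
    """Apply a one-to-one domain mapping to a crisp or fuzzy value."""
    if isinstance(value, dict):
        # fuzzy value: map each element, keep its membership degree
        return {convert(e): mu for e, mu in value.items()}
    return convert(value)

kg_to_lb = lambda kg: round(kg * 2.20462, 2)
print(map_to_virtual(22.4, kg_to_lb))       # crisp -> crisp
print(map_to_virtual({22: 0.7}, kg_to_lb))  # fuzzy -> fuzzy, degree kept
```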
VIII. RESULTS AND DISCUSSION
The proposed integration methodology is implemented on
the Java platform. In the simulation, JDK 1.8 and NetBeans
8.0 act as the front end while MySQL 5.1.36 acts as the
back end. In our proposed methodology, we integrate two
fuzzy component relations, and the semantic conflicts that
arise in this integration are resolved in a specific order to
minimize the execution time of the integration. After
integrating these fuzzy relations, we obtain a global relation.
Fig. 2: Selecting the relations to be integrated.
Fig. 3: All employee names in the global relation.
Fig. 4: Comparison of conflict resolution times.
Fig. 4 above compares the conflict resolution time
of the integration methodology with that of integration
without the methodology. In the integration methodology,
conflicts are resolved in a specific order, and this order does
not create any new conflicts during resolution.
IX. CONCLUSION
In this paper, we describe the problem of integrating fuzzy
relational databases into a multidatabase system. We
identify all new schematic and data representational
conflicts that arise in such a system due to the inclusion
of fuzzy relational databases. We propose a methodology to
resolve the new types of conflicts. This methodology
imposes a specific order in which the conflicts should be
resolved to increase performance of integration process of
fuzzy relational databases. Our study serves as the first step
towards building multidatabase systems that are capable of
processing not only crisp information but also uncertain
and imprecise information.
REFERENCES
[1] Awadhesh Kumar Sharma, A. Goswami, D. K.
Gupta. "Integration of Fuzzy Databases: Problem
& Solutions". International Journal of Computer
Applications (0975-8887), Volume 2, No. 3, May
2010.
[2] Zongmin M. Ma, Wenjun J. Zhang, Weiyin Y. Ma.
"Semantic Conflicts and Solutions in Fuzzy
Multidatabase Systems". In S. Bhalla (Ed.): DNIS
2000, LNCS 1966, pp. 80-90. Springer-Verlag
Berlin Heidelberg, 2000.
[3] Batini, C. Lenzerini, M., Navathe, S. B.: “A
Comparative Analysis of Methodologies for
database Schema Integration”. ACM Computing
Surveys 18 (1986) 323-364.
[4] Z. M. MA, LI YAN.“ A Literature Overview of
Fuzzy Conceptual Data Modeling”. Journal of
Information Science and Engineering 26, 427-441
(2010).
[5] Amel Grissa Touzi, Mohamed Ali Ben
Hassine.“New Architecture of Fuzzy Database
Management Systems”. The International Arab
Journal of Information Technology, Vol. 6, No. 3,
July 2009.
[6] Y. Breitbart, P. L. Olson and G. R. Thompson.
Database integration in a distributed heterogeneous
database system. In IEEE Int'l Conf. on Data
Engineering, Los Angeles, 1986.
[7] Won Kim, I. Choi, S. Gala and M. Scheevel. On
resolving schematic heterogeneity in multidatabase
systems. In Won Kim (editor), Modern Database
Systems: The Object Model, Interoperability, and
Beyond, pages 521-550. Addison-Wesley/ACM
Press, 1995.
[8] A. Chen. Outer join optimization in multidatabase
systems. In 2nd Int'l Symp. on Distributed and
Parallel Database Systems, 1990.
[9] Lim, E. P., Srivastava, J., Prabhakar, S., Richardson,
J. (1993). Entity identification problem in
database integration. In Proc. Intl. Conf. on Data
Engineering, 294-301.
[10] B. P. Buckles and F. E. Petry. A fuzzy model for
relational databases. Fuzzy Sets and Systems,
Volume 7, Number 3, pages 213-226, 1982.
[11]D. Heimbigner and D. McLeod. A federated
architecture for information management. ACM
TOIS, July 1985.
[12]U. Dayal and H-Y Hwang. View definition and
generalization for database integration in a
multidatabase system. IEEE TSE, 1984.
[13]H. Nakajima, T. Sogoh and M. Arao. Development
of an efficient fuzzy SQL for large scale fuzzy
relational databases. In The Fifth IFSA World
Congress, 1993.
[14]Heimbigner, D., McLeod, D.: A Federated
Architecture for Information Management..ACM
Transactions on Office Information Systems 3
(1985) 253-278.
[15] DeMichiel, L. G. (1989). Resolving database
incompatibility: an approach to performing
relational operations over mismatched domains.
IEEE Trans. on Knowledge and Data Engineering,
1(4), pp. 485-493.