With the rapid development of Geographic Information Systems (GIS) and their applications, more and
more geographical databases have been developed by different vendors. However, data integration and
access remain major obstacles to the development of GIS applications, as no interoperability exists among
different spatial databases. In this paper we propose a unified approach to spatial data querying. The paper
describes a framework for integrating information from repositories containing vector data sets in
different formats and repositories containing raster data sets. The presented approach converts the various
vector data formats into a single unified format, the File Geodatabase ("GDB"). In addition, we employ
metadata to support a wide range of user queries retrieving relevant geographic information from
heterogeneous, distributed repositories; this enhances both query processing and performance.
Clustering the results of a search helps the user get an overview of the information returned. In this paper,
we treat the clustering task as cataloguing the search results. By catalogue we mean a structured list of
labels that helps the user make sense of both the labels and the search results. Cluster labelling is crucial
because meaningless or confusing labels may mislead users into inspecting the wrong clusters for their
query and wasting time. Additionally, labels should accurately reflect the contents of the documents within
the cluster. To label clusters effectively, a new cluster labelling method is introduced, with emphasis on
producing comprehensible and accurate cluster labels in addition to discovering the document clusters
themselves. We also present a new metric for assessing the success of cluster labelling. We adopt a
comparative evaluation strategy to derive the relative performance of the proposed method with respect to
two prominent search result clustering methods, Suffix Tree Clustering and Lingo, and we perform the
experiments on the publicly available Ambient and ODP-239 datasets.
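As a minimal illustration of the cluster-labelling idea (this is an illustrative scheme, not the method proposed in the paper), one can score each term by its frequency inside a cluster weighted by how rare it is across clusters, and take the top-scoring terms as the cluster's label:

```python
from collections import Counter
import math

def label_clusters(clusters):
    """Label each cluster with its highest-scoring terms.

    `clusters` maps a cluster id to a list of documents, each a list of
    tokens.  A term's score is its in-cluster frequency weighted by how
    rare the term is across clusters (a tf-idf-like heuristic)."""
    n_clusters = len(clusters)
    # cluster-level document frequency of each term
    cluster_df = Counter()
    for docs in clusters.values():
        cluster_df.update({t for doc in docs for t in doc})

    labels = {}
    for cid, docs in clusters.items():
        tf = Counter(t for doc in docs for t in doc)
        score = {t: f * math.log((1 + n_clusters) / cluster_df[t])
                 for t, f in tf.items()}
        labels[cid] = [t for t, _ in sorted(score.items(),
                                            key=lambda kv: -kv[1])[:2]]
    return labels

clusters = {
    "c1": [["suffix", "tree", "clustering"], ["suffix", "tree", "search"]],
    "c2": [["lingo", "label", "search"], ["lingo", "label", "quality"]],
}
print(label_clusters(clusters))
```

Terms shared across clusters (here "search") are down-weighted, so each label is built from terms distinctive to its own cluster.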
New proximity estimate for incremental update of non uniformly distributed cl... (IJDKP)
Conventional clustering algorithms mine static databases and generate a set of patterns in the form of
clusters. Many real-life databases keep growing incrementally, and for such dynamic databases the
patterns extracted from the original database become obsolete. Conventional clustering algorithms are
therefore unsuitable for incremental databases, since they lack the capability to modify clustering results
in accordance with recent updates. In this paper, the author proposes a new incremental clustering
algorithm called CFICA (Cluster Feature-based Incremental Clustering Approach for numerical data) to
handle numerical data, and suggests a new proximity metric called the Inverse Proximity Estimate (IPE),
which considers the proximity of a data point to a cluster representative as well as its proximity to the
farthest point in its vicinity. CFICA uses the proposed proximity metric to determine the membership of a
data point in a cluster.
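The intuition behind an IPE-style score can be sketched as follows. This is a hedged reconstruction from the abstract's description only (distance to the representative, penalised by distance to the farthest cluster member); the exact formulation in the paper may differ:

```python
import math

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inverse_proximity_estimate(point, centroid, cluster_points):
    """Score a point against one cluster: distance to the cluster
    representative, scaled up when the point is also far from the
    cluster's farthest member, so that wide, non-uniform clusters do
    not absorb distant points too easily.  Illustrative only."""
    d_centroid = euclid(point, centroid)
    d_far = max(euclid(point, p) for p in cluster_points)
    return d_centroid * (1 + d_centroid / d_far)

# A point near a compact cluster scores much lower against it than
# against a distant, spread-out cluster:
near = inverse_proximity_estimate((2, 0), (0.5, 0), [(0, 0), (1, 0)])
far = inverse_proximity_estimate((2, 0), (15, 0), [(10, 0), (20, 0)])
```

Membership assignment would then pick, for each incoming point, the cluster minimising this score.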
USING ONTOLOGIES TO IMPROVE DOCUMENT CLASSIFICATION WITH TRANSDUCTIVE SUPPORT... (IJDKP)
Many applications of automatic document classification require accurate learning from little training
data. Semi-supervised classification uses both labeled and unlabeled data for training. This technique has
been shown to be effective in some cases; however, the use of unlabeled data is not always beneficial.
On the other hand, the emergence of web technologies has given rise to the collaborative development of
ontologies. In this paper, we propose using ontologies to improve the accuracy and efficiency of
semi-supervised document classification.
We use support vector machines, which are among the most effective algorithms studied for text. Our
algorithm enhances the performance of transductive support vector machines through the use of
ontologies. We report experimental results applying our algorithm to three different datasets. Our
experiments show an accuracy improvement of 4% on average, and up to 20%, over the traditional
semi-supervised model.
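The core benefit of ontologies for sparse training data can be illustrated with a toy example. This sketch is not the paper's transductive SVM algorithm; it only shows how expanding document terms with ontology ancestors lets a handful of labeled documents cover vocabulary they never mention. The ontology and class names are hypothetical:

```python
# Toy ontology: child -> parent concept (hypothetical, for illustration).
ONTOLOGY = {"corolla": "car", "car": "vehicle",
            "sparrow": "bird", "bird": "animal"}

def expand(tokens):
    """Return the tokens plus all of their ontology ancestors."""
    out = set(tokens)
    for t in tokens:
        while t in ONTOLOGY:
            t = ONTOLOGY[t]
            out.add(t)
    return out

def train_centroids(labeled_docs):
    """Map each class to the union of its documents' expanded tokens."""
    centroids = {}
    for tokens, label in labeled_docs:
        centroids.setdefault(label, set()).update(expand(tokens))
    return centroids

def classify(tokens, centroids):
    """Assign the class whose centroid overlaps the expanded doc most."""
    expanded = expand(tokens)
    return max(centroids, key=lambda c: len(expanded & centroids[c]))

centroids = train_centroids([(["corolla"], "vehicles"),
                             (["sparrow"], "animals")])
print(classify(["car"], centroids))
```

The word "car" never appears in the labeled data, yet the document is classified correctly because the labeled document "corolla" was generalised to "car" and "vehicle" through the ontology.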
A CONCEPTUAL METADATA FRAMEWORK FOR SPATIAL DATA WAREHOUSE (IJDKP)
Metadata describes the data to be stored in a data warehouse, and it is a mandatory element in building
an efficient one. Metadata supports data integration, lineage, data quality, and populating transformed
data into the warehouse. Spatial data warehouses are based on spatial data, mostly collected from
Geographical Information Systems (GIS) and from transactional systems specific to an application or
enterprise. Metadata design and deployment is the most critical phase in building a data warehouse,
where spatial information and data modeling must be brought together. In this paper, we present a
holistic metadata framework that drives metadata creation for a spatial data warehouse. In theory, the
proposed framework improves the efficiency of data access for frequent queries on spatial data
warehouses (SDWs); in other words, it decreases query response time while accurate information,
including spatial information, is fetched from the warehouse.
As databases have developed, the volume of stored data has grown rapidly, and much important
information lies hidden in these large amounts of data. If that information can be extracted from the
database, it creates substantial value for the organization. The question organizations ask is how to
extract this value, and the answer is data mining. Many technologies are available to data mining
practitioners, including artificial neural networks, genetic algorithms, fuzzy logic, and decision trees.
Many practitioners are wary of neural networks because of their black-box nature, even though they have
proven themselves in many situations. This paper gives an overview of artificial neural networks and
examines their standing as a preferred tool among data mining practitioners.
Enhancement techniques for data warehouse staging area (IJDKP)
Poor performance can turn a successful data warehousing project into a failure. Consequently, several
attempts have been made by researchers to deal with the problem of scheduling the Extract-Transform-Load
(ETL) process. In this paper we present several approaches for enhancing the Extract, Transform, and
Load stages of data warehousing. We focus on enhancing the performance of the extract and transform
phases by proposing two algorithms that reduce the time needed in each phase by exploiting the hidden
semantic information in the data; using this information, a large volume of useless data can be pruned at
an early design stage. We also address the problem of scheduling the execution of ETL activities, with the
goal of minimizing ETL execution time. We investigate this area by selecting three ETL scheduling
techniques and experimentally comparing their execution times in the sales domain, in order to understand
the impact of adopting each and to choose the one yielding the greatest performance improvement.
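The effect of an ETL scheduling policy can be demonstrated with a tiny simulation. The activity names and durations below are hypothetical, and the two policies compared (FIFO vs. shortest-job-first) are generic examples, not necessarily the three techniques evaluated in the paper:

```python
def simulate(schedule):
    """Run ETL activities serially; return (makespan, mean completion time)."""
    t, completions = 0.0, []
    for _name, duration in schedule:
        t += duration
        completions.append(t)
    return t, sum(completions) / len(completions)

# Hypothetical ETL activities with durations in minutes.
activities = [("extract_sales", 40), ("clean_nulls", 10),
              ("join_dims", 25), ("load_facts", 5)]

fifo = simulate(activities)                                  # arrival order
sjf = simulate(sorted(activities, key=lambda a: a[1]))       # shortest first
print(fifo, sjf)
```

Both orderings finish at the same time (the makespan is fixed for a serial pipeline), but shortest-job-first sharply lowers the mean completion time, which matters when downstream consumers wait on individual activities.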
A statistical data fusion technique in virtual data integration environment (IJDKP)
Data fusion in a virtual data integration environment starts after duplicated records from the different
integrated data sources have been detected and clustered. It refers to the process of selecting or fusing
attribute values from the clustered duplicates into a single record representing the real-world object. In
this paper, a statistical technique for data fusion is introduced, based on probabilistic scores derived from
both the data sources and the clustered duplicates.
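The fusion step can be sketched as follows. This is a simplified stand-in for the paper's probabilistic model: each source carries a reliability score (values assumed here), and for each attribute the value backed by the highest total score wins:

```python
from collections import defaultdict

def fuse(cluster, source_scores):
    """Fuse a cluster of duplicate records into one record by keeping,
    per attribute, the value whose supporting sources carry the highest
    total reliability score.  Illustrative, not the paper's exact scheme."""
    fused = {}
    attrs = {a for rec in cluster for a in rec["values"]}
    for attr in sorted(attrs):
        votes = defaultdict(float)
        for rec in cluster:
            if attr in rec["values"]:
                votes[rec["values"][attr]] += source_scores[rec["source"]]
        fused[attr] = max(votes, key=votes.get)
    return fused

# Hypothetical duplicates of one real-world person from three sources.
duplicates = [
    {"source": "A", "values": {"name": "J. Smith", "city": "Cairo"}},
    {"source": "B", "values": {"name": "John Smith", "city": "Giza"}},
    {"source": "C", "values": {"city": "Cairo"}},
]
scores = {"A": 0.9, "B": 0.7, "C": 0.5}
print(fuse(duplicates, scores))
```

Two weaker sources agreeing ("Cairo", combined 1.4) outvote one stronger source ("Giza", 0.7), which is the essential behaviour of score-based fusion.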
Recommendation system using bloom filter in mapreduce (IJDKP)
Many customers use the Web to discover product details in the form of online reviews provided by other
customers and by specialists. Recommender systems offer an important response to the information
overload problem by presenting users with more practical, personalized information. Collaborative
filtering (CF) methods are a vital component of recommender systems, generating high-quality
recommendations by leveraging the preferences of communities of similar users. Collaborative filtering
rests on the assumption that people with similar tastes choose the same items. Conventional collaborative
filtering suffers from the sparse-data problem and a lack of scalability, so a new recommender system is
needed that can handle sparse data and produce high-quality recommendations at large scale in mobile
environments. MapReduce is a programming model widely used for large-scale data analysis. The
recommendation mechanism described here for mobile commerce is user-based collaborative filtering
implemented in MapReduce, which reduces the scalability problem of conventional CF systems. One of
the essential operations in this analysis is the join. MapReduce, however, is not very efficient at joins,
because it processes all records in the datasets even when only a small fraction of them is relevant to the
join. This overhead can be reduced by applying the bloom-join algorithm: Bloom filters are constructed
and used to filter out redundant intermediate records. The proposed Bloom-filter-based algorithm reduces
the number of intermediate results and improves join performance.
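The bloom-join idea can be sketched in a few lines: build a Bloom filter over the join keys of the small side, then drop large-side records whose keys cannot possibly match before they become intermediate results. The filter sizing and datasets below are illustrative assumptions:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per key over a bit array.
    May report false positives, never false negatives."""

    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes = size, hashes
        self.bits = bytearray(size)

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p] = 1

    def might_contain(self, key):
        return all(self.bits[p] for p in self._positions(key))

# Build the filter from the small (user) side of the join...
bf = BloomFilter()
for user_id in ["u1", "u2"]:
    bf.add(user_id)

# ...then prune the large (ratings) side before emitting intermediate records.
ratings = [("u1", 5), ("u9", 3), ("u2", 4), ("u7", 1)]
survivors = [r for r in ratings if bf.might_contain(r[0])]
print(survivors)
```

Because the filter never yields false negatives, every genuinely matching record survives; the occasional false positive merely lets a few extra records through, which the exact join then discards.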
MAP/REDUCE DESIGN AND IMPLEMENTATION OF APRIORI ALGORITHM FOR HANDLING VOLUMIN... (acijjournal)
Apriori is one of the key algorithms for generating frequent itemsets. Analysing frequent itemsets is a
crucial step in analysing structured data and in finding association relationships between items, and it
serves as an elementary foundation for supervised learning, which encompasses classifier and
feature-extraction methods. Applying this algorithm is essential to understanding the behaviour of
structured data. Most structured data in scientific domains is voluminous, and processing it requires
state-of-the-art computing machines; setting up such an infrastructure is expensive. Hence a distributed
environment such as a clustered setup is employed for such scenarios. The Apache Hadoop distribution is
one of the cluster frameworks for distributed environments, and it helps by distributing voluminous data
across a number of nodes. This paper focuses on a map/reduce design and implementation of the Apriori
algorithm for structured data analysis.
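One round of candidate counting in a map/reduce Apriori can be sketched as below. This is a generic illustration of the pattern (local counting in the map phase, global aggregation and support filtering in the reduce phase), not the paper's specific design; the transactions are invented:

```python
from collections import Counter
from itertools import combinations

def map_count(partition, k):
    """Map phase: count candidate k-itemsets within one data partition."""
    counts = Counter()
    for transaction in partition:
        for itemset in combinations(sorted(transaction), k):
            counts[itemset] += 1
    return counts

def reduce_counts(counters, min_support):
    """Reduce phase: sum per-partition counts, keep frequent itemsets."""
    total = Counter()
    for c in counters:
        total.update(c)
    return {itemset: n for itemset, n in total.items() if n >= min_support}

# Two partitions, as Hadoop might split the transaction log across nodes.
partitions = [
    [{"bread", "milk"}, {"bread", "butter"}],
    [{"bread", "milk", "butter"}, {"milk"}],
]
frequent_pairs = reduce_counts((map_count(p, 2) for p in partitions), 2)
print(frequent_pairs)
```

Each node only ever sees its own partition, so the expensive counting parallelises; only the compact per-itemset counts travel to the reducer.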
One of the most important problems in modern finance is finding efficient ways to summarize and
visualize stock market data so as to give individuals and institutions useful information about market
behavior for investment decisions. Investment can be considered one of the fundamental pillars of a
national economy, and at present many investors look for criteria by which to compare stocks and select
the best, choosing strategies that maximize the earning value of the investment process. The enormous
amount of valuable data generated by the stock market has therefore attracted researchers to explore this
problem domain with different methodologies, and research in data mining has gained strong attention
owing to the importance of its applications and the ever-increasing generation of information. Data
mining tools such as association rules, rule induction methods, and the Apriori algorithm are used to find
associations between different stock market scripts, and considerable research and development has
addressed the reasons for fluctuation of the Indian stock exchange. Nowadays, however, two further
factors, gold prices and US dollar prices, strongly influence the Indian stock market. Statistical
correlation is used to find the relationship between gold prices, dollar prices, and the BSE index, which
assists the activities of stock operators, brokers, investors, and jobbers, all of whom rely on forecasting
the fluctuation of index share prices, gold prices, dollar prices, and customer transactions. Hence the
researcher has taken up these problems as a topic for research.
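The statistical correlation mentioned above is typically the Pearson coefficient, which can be computed directly. The price series below are entirely hypothetical and serve only to show the calculation:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gold = [1800, 1825, 1850, 1900]      # hypothetical gold prices
bse = [60000, 59500, 59000, 58200]   # hypothetical BSE index values

r = pearson(gold, bse)
print(r)
```

With gold rising while the index falls, the coefficient lands close to -1, which is the kind of inverse relationship the analysis above sets out to quantify.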
Data mining, or knowledge discovery, is the process of analyzing data from different perspectives and
summarizing it into useful information: information that can be used to increase revenue, cut costs, or
both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to
analyze data from many different dimensions or angles, categorize it, and summarize the relationships
identified. Technically, data mining is the process of finding correlations or patterns among dozens of
fields in large relational databases. The goal of clustering is to determine the intrinsic grouping in a set of
unlabeled data. But how do we decide what constitutes a good clustering? It can be shown that there is no
absolute "best" criterion that would be independent of the final aim of the clustering. Consequently, it is
the user who must supply this criterion, in such a way that the result of the clustering suits their needs.
For instance, we could be interested in finding representatives for homogeneous groups (data reduction),
in finding "natural clusters" and describing their unknown properties ("natural" data types), in finding
useful and suitable groupings ("useful" data classes), or in finding unusual data objects (outlier
detection). Of late, clustering techniques have been applied in areas that involve browsing gathered data
or categorizing the results that search engines return for users' queries. In this paper, we provide a
comprehensive survey of document clustering.
Elimination of data redundancy before persisting into DBMS using SVM classifi... (Nalini Manogaran)
Database management systems (DBMS) form one of the growing fields in the computing world. Grid
computing, internet sharing, distributed computing, parallel processing, and the cloud are all areas that
store huge amounts of data in a DBMS to maintain the structure of the data. Memory management is a
major concern in a DBMS because of the edit, delete, recover, and commit operations applied to records.
To use memory efficiently, redundant data should be eliminated accurately. In this paper, redundant data
is detected by the Quick Search Bad Character (QSBC) function, and the DB admin is notified so that the
redundancy can be removed. The QSBC function compares incoming data against patterns taken from an
index table created for all the data persisted in the DBMS, allowing easy comparison of redundant
(duplicate) data in the database. The experiment is conducted in SQL Server on a university student
database of 15,000 student records covering various activities, and performance is evaluated in terms of
time and accuracy.
Keywords—Data Redundancy, Database Management System, Support Vector Machine, Duplicate Detection.
I. INTRODUCTION
The growing mass of information present in digital media has become a pressing problem for data
administrators. Data repositories such as those used by digital libraries and e-commerce are usually built
from data gathered from distinct sources, with disparate schemata and structures. Problems of low
response time, availability, security, and quality assurance also become more troublesome to manage as
the amount of data grows larger. It is reasonable to state that the quality of the data an organization uses
in its systems is proportional to its efficiency in offering beneficial services to its users. In this
environment, the decision to maintain repositories with "dirty" data (i.e., with replicas, identification
errors, duplicate patterns, etc.) goes well beyond technical considerations such as the overall speed or
performance of data administration systems.
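The QSBC function named in the abstract is the bad-character rule of Sunday's Quick Search string-matching algorithm, which can be sketched as follows. How the paper wires this into its index table is not shown here; this is only the matching primitive:

```python
def quick_search(text, pattern):
    """Sunday's Quick Search: scan for `pattern` in `text`; on each
    attempt, shift the window by the bad-character value of the text
    character just past the window (m+1 if that character does not
    occur in the pattern at all)."""
    m, n = len(pattern), len(text)
    # Bad-character table: distance from each pattern character's
    # rightmost occurrence to one past the pattern's end.
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            hits.append(i)
        nxt = text[i + m] if i + m < n else None
        i += shift.get(nxt, m + 1)
    return hits

print(quick_search("abracadabra", "abra"))
```

Because the shift is driven by the character one past the window, mismatching regions are skipped in long jumps, which is why Quick Search suits scanning large stored records for duplicate patterns.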
A Novel Multi-Viewpoint based Similarity Measure for Document Clustering (IJMER)
A Novel Approach for Clustering Big Data based on MapReduce (IJECEIAES)
Clustering is one of the most important applications of data mining and has attracted the attention of researchers in statistics and machine learning. It is used in many applications such as information retrieval, image processing, and social network analytics, and it helps the user understand the similarity and dissimilarity between objects; cluster analysis lets users understand complex and large data sets more clearly. Various researchers have analyzed different types of clustering algorithms. K-means is the most popular partitioning-based algorithm, as it provides good results through accurate calculation on numerical data, but K-means works well for numerical data only. Big data is a combination of numerical and categorical data, and the K-prototype algorithm handles both by combining the distances calculated on the numeric and categorical attributes. With the growth of data from social networking websites, business transactions, scientific computation, and so on, there are now vast collections of structured, semi-structured, and unstructured data, so K-prototype needs optimization for these varieties of data to be analyzed efficiently. In this work, the K-prototype algorithm is implemented on MapReduce. Experiments show that K-prototype on MapReduce gives better performance on multiple nodes than on a single node; CPU execution time and speedup are used as the evaluation metrics. An intelligent splitter is also proposed that divides mixed big data into its numerical and categorical parts. Comparison with traditional algorithms shows that the proposed algorithm works better at large data scales.
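The mixed-type distance at the heart of K-prototypes can be sketched as follows. The record layout and the weighting value `gamma` are assumptions for illustration; the paper's implementation details may differ:

```python
def kprototype_distance(a, b, gamma=1.0):
    """K-prototypes dissimilarity between two mixed-type records:
    squared Euclidean distance over the numeric attributes plus
    `gamma` times the count of mismatching categorical attributes.
    `gamma` balances the two contributions."""
    numeric = sum((x - y) ** 2 for x, y in zip(a["num"], b["num"]))
    categorical = sum(1 for x, y in zip(a["cat"], b["cat"]) if x != y)
    return numeric + gamma * categorical

# Hypothetical records: two numeric features, two categorical features.
rec_a = {"num": [1.0, 2.0], "cat": ["red", "large"]}
rec_b = {"num": [2.0, 2.0], "cat": ["red", "small"]}
print(kprototype_distance(rec_a, rec_b, gamma=0.5))
```

In the MapReduce setting described above, each mapper would evaluate this distance between its local records and the current prototypes, with the reducer aggregating per-cluster sums to update the prototypes.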
A statistical data fusion technique in virtual data integration environmentIJDKP
Data fusion in the virtual data integration environment starts after detecting and clustering duplicated
records from the different integrated data sources. It refers to the process of selecting or fusing attribute
values from the clustered duplicates into a single record representing the real world object. In this paper, a
statistical technique for data fusion is introduced based on some probabilistic scores from both data
sources and clustered duplicates
Recommendation system using bloom filter in mapreduceIJDKP
Many clients like to use the Web to discover product details in the form of online reviews. The reviews are
provided by other clients and specialists. Recommender systems provide an important response to the
information overload problem as it presents users more practical and personalized information facilities.
Collaborative filtering methods are vital component in recommender systems as they generate high-quality
recommendations by influencing the likings of society of similar users. The collaborative filtering method
has assumption that people having same tastes choose the same items. The conventional collaborative
filtering system has drawbacks as sparse data problem & lack of scalability. A new recommender system is
required to deal with the sparse data problem & produce high quality recommendations in large scale
mobile environment. MapReduce is a programming model which is widely used for large-scale data
analysis. The described algorithm of recommendation mechanism for mobile commerce is user based
collaborative filtering using MapReduce which reduces scalability problem in conventional CF system.
One of the essential operations for the data analysis is join operation. But MapReduce is not very
competent to execute the join operation as it always uses all records in the datasets where only small
fraction of datasets are applicable for the join operation. This problem can be reduced by applying
bloomjoin algorithm. The bloom filters are constructed and used to filter out redundant intermediate
records. The proposed algorithm using bloom filter will reduce the number of intermediate results and will
improve the join performance.
MAP/REDUCE DESIGN AND IMPLEMENTATION OF APRIORIALGORITHM FOR HANDLING VOLUMIN...acijjournal
Apriori is one of the key algorithms to generate frequent itemsets. Analysing frequent itemset is a crucial
step in analysing structured data and in finding association relationship between items. This stands as an
elementary foundation to supervised learning, which encompasses classifier and feature extraction
methods. Applying this algorithm is crucial to understand the behaviour of structured data. Most of the
structured data in scientific domain are voluminous. Processing such kind of data requires state of the art
computing machines. Setting up such an infrastructure is expensive. Hence a distributed environment
such as a clustered setup is employed for tackling such scenarios. Apache Hadoop distribution is one of
the cluster frameworks in distributed environment that helps by distributing voluminous data across a
number of nodes in the framework. This paper focuses on map/reduce design and implementation of
Apriori algorithm for structured data analysis.
One of the most important problems in modern finance is finding efficient ways to summarize and visualize
the stock market data to give individuals or institutions useful information about the market behavior for
investment decisions Therefore, Investment can be considered as one of the fundamental pillars of national
economy. So, at the present time many investors look to find criterion to compare stocks together and
selecting the best and also investors choose strategies that maximize the earning value of the investment
process. Therefore the enormous amount of valuable data generated by the stock market has attracted
researchers to explore this problem domain using different methodologies. Therefore research in data
mining has gained a high attraction due to the importance of its applications and the increasing generation
information. So, Data mining tools such as association rule, rule induction method and Apriori algorithm
techniques are used to find association between different scripts of stock market, and also much of the
research and development has taken place regarding the reasons for fluctuating Indian stock exchange.
But, now days there are two important factors such as gold prices and US Dollar Prices are more
dominating on Indian Stock Market and to find out the correlation between gold prices, dollar prices and
BSE index statistical correlation is used and this helps the activities of stock operators, brokers, investors
and jobbers. They are based on the forecasting the fluctuation of index share prices, gold prices, dollar
prices and transactions of customers. Hence researcher has considered these problems as a topic for
research.
Data mining, or knowledge discovery, is the process
of analyzing data from different perspectives and summarizing it
into useful information: information that can be used to increase
revenue, cut costs, or both. Data mining software is one of a
number of analytical tools for analyzing data. It allows users to
analyze data from many different dimensions or angles, categorize
it, and summarize the relationships identified. Technically, data
mining is the process of finding correlations or patterns among
dozens of fields in large relational databases. The goal of
clustering is to determine the intrinsic grouping in a set of
unlabeled data. But how do we decide what constitutes a good
clustering? It can be shown that there is no absolute “best”
criterion that is independent of the final aim of the
clustering. Consequently, it is the user who must supply this
criterion, in such a way that the result of the clustering suits
their needs.
For instance, we could be interested in finding
representatives for homogeneous groups (data reduction), in
finding “natural clusters” and describing their unknown properties
(“natural” data types), in finding useful and suitable groupings
(“useful” data classes), or in finding unusual data objects (outlier
detection). Of late, clustering techniques have been applied in
areas that involve browsing gathered data or categorizing
the outcomes provided by search engines in reply to
user queries. In this paper, we provide a
comprehensive survey of document clustering.
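As a minimal illustration of finding intrinsic groupings in unlabeled data, the sketch below runs a plain k-means loop; the one-dimensional data, the choice of two clusters, and the squared-distance criterion are illustrative assumptions, since, as noted above, the criterion must come from the user:

```python
# Minimal k-means sketch: group unlabeled 1-D points into clusters by
# alternating nearest-center assignment and center recomputation.

def kmeans(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]      # two "natural" groups
centers, clusters = kmeans(points, centers=[0.0, 10.0])
print(sorted(len(c) for c in clusters))       # → [3, 3]
```

Swapping the squared-distance criterion for another objective yields different "good" clusterings of the same data, which is exactly the user-supplied-criterion point made above.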
Elimination of Data Redundancy before Persisting into DBMS using SVM Classification
Database Management Systems (DBMS) are one of the
growing fields in the computing world. Grid computing, internet
sharing, distributed computing, parallel processing, and cloud
computing are areas that store huge amounts of data in a DBMS to
maintain the structure of the data. Memory management is
one of the major concerns in a DBMS due to the edit, delete, recover,
and commit operations used on records. To utilize
memory efficiently, redundant data should be
eliminated accurately. In this paper, redundant data is
fetched by the Quick Search Bad Character (QSBC) function
and the DB admin is notified to remove the redundancy.
The QSBC function compares the entire data with patterns taken
from an index table created for all the data persisted in the
DBMS, easing the comparison of redundant (duplicate) data in
the database. This experiment is examined in SQL Server
software on a university student database, and performance is
evaluated in terms of time and accuracy. The database
contains data on 15,000 students involved in various activities.
Keywords—Data redundancy, Data Base Management System,
Support Vector Machine, Data Duplicate.
I. INTRODUCTION
The growing mass of information
present in digital media has become a pressing problem for
data administrators. Data repositories such as those used by
digital libraries and e-commerce agents are usually built from
data gathered from distinct sources, with disparate schemata
and structures. Problems regarding low response time,
availability, security, and quality assurance also become more
troublesome to manage as the amount of data grows larger. It is
reasonable to assume that the quality of the data an organization
uses in its systems is directly related to its efficiency in offering
beneficial services to its users. In this environment, the
problem of maintaining repositories with “dirty” data
(i.e., with replicas, identification errors, inconsistent patterns,
etc.) goes greatly beyond technical discussions such as the
overall speed or performance of data
administration systems.
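The Quick Search Bad Character idea behind the QSBC function above can be sketched as follows; the record layout and key patterns are hypothetical, and only the shift rule follows Sunday's Quick Search algorithm:

```python
# Quick Search (Sunday) pattern matching: on a mismatch, shift the window
# by the bad-character value of the text character just past the window.

def quick_search(text, pattern):
    m, n = len(pattern), len(text)
    # Bad-character table: distance from the rightmost occurrence of each
    # pattern character to the pattern's end; unseen characters shift m + 1.
    shift = {c: m - i for i, c in enumerate(pattern)}
    hits, pos = [], 0
    while pos + m <= n:
        if text[pos:pos + m] == pattern:
            hits.append(pos)
        if pos + m == n:
            break
        pos += shift.get(text[pos + m], m + 1)
    return hits

# Flag records whose key pattern already occurs in the stored data.
stored = "ID1001|Alice;ID1002|Bob;"
assert quick_search(stored, "ID1002") != []   # duplicate: already persisted
assert quick_search(stored, "ID1003") == []   # new record: safe to insert
```

In the paper's setting the patterns would come from the index table of persisted records rather than from literals as here.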
A Novel Multi-Viewpoint based Similarity Measure for Document Clustering
A Novel Approach for Clustering Big Data based on MapReduce
Clustering is one of the most important applications of data mining. It has attracted the attention of researchers in statistics and machine learning, and is used in many applications such as information retrieval, image processing, and social network analytics. It helps the user understand the similarity and dissimilarity between objects, and cluster analysis makes complex and large data sets easier to understand. Various researchers have analyzed different types of clustering algorithms. K-means is the most popular partitioning-based algorithm, as it provides good results through accurate calculation on numerical data; however, K-means works well for numerical data only. Big data is a combination of numerical and categorical data. The K-prototype algorithm deals with numerical as well as categorical data by combining the distances calculated from numeric and categorical attributes. With the growth of data due to social networking websites, business transactions, scientific calculations, etc., there are vast collections of structured, semi-structured, and unstructured data, so K-prototype needs to be optimized to analyze these varieties of data efficiently. In this work, the K-prototype algorithm is implemented on MapReduce. Experiments have shown that K-prototype implemented on MapReduce gives a better performance gain on multiple nodes than on a single node; CPU execution time and speedup are used as evaluation metrics for comparison. An intelligent splitter is proposed that splits mixed big data into numerical and categorical data. Comparison with traditional algorithms shows that the proposed algorithm works better for large-scale data.
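The combined numeric/categorical distance at the heart of K-prototype can be sketched as below; the records, the field split, and the weight gamma are illustrative assumptions:

```python
# K-prototype distance sketch: squared Euclidean distance on numeric fields
# plus a weighted count of mismatches on categorical fields.

def kprototype_distance(a, b, numeric_idx, categorical_idx, gamma=1.0):
    numeric = sum((a[i] - b[i]) ** 2 for i in numeric_idx)
    categorical = sum(a[i] != b[i] for i in categorical_idx)
    return numeric + gamma * categorical

# Mixed records: (age, income, city, segment) -- a toy example.
r1 = (30, 50.0, "Giza", "retail")
r2 = (32, 48.0, "Giza", "wholesale")
d = kprototype_distance(r1, r2, numeric_idx=[0, 1], categorical_idx=[2, 3], gamma=2.0)
print(d)  # → 10.0  (4 + 4 from numeric fields, plus 2 × 1 mismatch)
```

A MapReduce implementation would evaluate this distance in the map phase, one record per call, and recompute prototypes in the reduce phase.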
A comparative study on term weighting methods for automated Telugu text categ...
Automatic text categorization refers to the process of automatically assigning one or more categories
among predefined ones. Text categorization is challenging in Indian languages, which are rich in
morphology and have a large number of word forms and large feature spaces. This paper investigates the
performance of different classification approaches using different term weighting approaches, in order to
decide the most applicable one for the Telugu text classification problem. We investigate different
term weighting methods for a Telugu corpus in combination with Naive Bayes (NB), Support Vector
Machine (SVM), and k-Nearest Neighbor (kNN) classifiers.
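One classic term weighting scheme compared in such studies is TF-IDF; a minimal sketch, where the toy English corpus is an assumption standing in for a Telugu corpus:

```python
# TF-IDF sketch: weight = term frequency × log(N / document frequency),
# so terms concentrated in few documents get the highest weights.
import math

def tfidf(corpus):
    n = len(corpus)
    df = {}
    for doc in corpus:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    return [{t: doc.count(t) * math.log(n / df[t]) for t in set(doc)}
            for doc in corpus]

corpus = [["sports", "cricket", "cricket"],
          ["politics", "election"],
          ["sports", "election"]]
weights = tfidf(corpus)
# "cricket" appears only in document 0, so it gets the highest weight there.
```

The resulting weight vectors are what an NB, SVM, or kNN classifier would consume as features.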
An Apriori based algorithm to mine association rules with inter itemset distance
Association rules discovered from transaction databases can be large in number, and reducing them has
been an issue in recent times. Conventionally, the number of rules can be increased or decreased by
varying support and confidence. By combining an additional constraint with support, the number of
frequent itemsets can be reduced, which leads to the generation of fewer rules. Average inter itemset
distance (IID), or spread, which is the intervening separation of itemsets in the transactions, has been
used as a measure of interestingness for association rules with a view to reducing their number. In this
paper, a complete algorithm based on Apriori and using average inter itemset distance is designed and
implemented, with a view to reducing the number of frequent itemsets and association rules, and also to
finding the distribution pattern of the association rules in terms of the number of transactions in which
the frequent itemsets do not occur. The Apriori algorithm is also implemented and the results are
compared. The theoretical concepts related to inter itemset distance are also put forward.
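The two ingredients above, frequent itemset mining and the average inter itemset distance (spread), can be sketched as follows; the transaction list and the restriction to itemsets of size one and two are illustrative simplifications of the full Apriori procedure:

```python
# Frequent-itemset counting plus average inter-itemset distance (IID):
# IID is measured here as the mean number of transactions separating
# consecutive occurrences of an itemset.
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    counts = {}
    for t in transactions:
        for size in (1, 2):
            for items in combinations(sorted(t), size):
                counts[items] = counts.get(items, 0) + 1
    return {i: c for i, c in counts.items() if c >= min_support}

def average_iid(transactions, itemset):
    positions = [i for i, t in enumerate(transactions) if set(itemset) <= set(t)]
    gaps = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return sum(gaps) / len(gaps) if gaps else 0.0

transactions = [{"milk", "bread"}, {"bread"}, {"milk", "bread"}, {"milk"}]
freq = frequent_itemsets(transactions, min_support=2)
print(average_iid(transactions, ("bread", "milk")))  # occurrences at 0 and 2 → 1.0
```

An IID threshold would then act as the extra constraint alongside minimum support when pruning candidate itemsets.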
Integrating Web Services With Geospatial Data Mining Disaster Management for ...
Data Mining (DM) and Geographical Information Systems (GIS) are complementary techniques for describing, transforming, analyzing, and modeling data about real-world systems. GIS and DM are naturally synergistic technologies that can be joined to produce powerful market insight from a sea of disparate data, and Web Services would greatly simplify the development of many kinds of data integration and knowledge management applications. This research aims to develop a spatial DM web service that integrates state-of-the-art GIS and DM functionality in an open, highly extensible, web-based architecture. Interoperability of geospatial data previously focused just on data formats and standards; the recent popularity and adoption of Web Services has provided new means of interoperability for geospatial information, not just for exchanging data but for analyzing the data during exchange as well. An integrated, user-friendly spatial DM system available on the internet via a web service offers exciting new possibilities for making geospatial analysis ready for decision making and geographical research for a wide range of potential users.
SUITABILITY OF SERVICE ORIENTED ARCHITECTURE FOR SOLVING GIS PROBLEMS
Nowadays spatial data is becoming a key element for effective planning and decision making in all aspects of society. Spatial data are data related to features on the ground. A Geographic Information System (GIS) is a system that captures, analyzes, and manages any spatially referenced data. This paper analyzes the architecture and main features of Geographic Information Systems and discusses some important problems that emerge when applying GIS in organizations, focusing on lack of interoperability, agility, and business alignment. We explain that SOA, as a service-oriented software architecture model, can support the transformation of geographic information software from "system and function" to "service and application" and, as a best practice of the architectural concepts, can increase business alignment in enterprise applications.
Query Optimization Techniques in Graph Databases
Graph databases (GDB) have recently arisen to overcome the limits of traditional databases for
storing and managing data with graph-like structure. Today, they represent a requirement for many
applications that manage graph-like data, like social networks. Most of the techniques applied to
optimize queries in graph databases have been used in traditional databases and distributed systems,
or are inspired by graph theory. However, their reuse in graph databases should take care of the main
characteristics of graph databases, such as dynamic structure, highly interconnected data, and the
ability to efficiently access data relationships. In this paper, we survey the query optimization
techniques in graph databases. In particular, we focus on the features they have in common.
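The "ability to efficiently access data relationships" noted above is often realized through index-free adjacency; a minimal sketch, where the toy social graph is an assumption:

```python
# Index-free adjacency sketch: each node keeps direct references to its
# neighbours, so each hop is a set lookup rather than a join over an
# edge table.

def friends_of_friends(adjacency, start):
    one_hop = adjacency.get(start, set())
    two_hop = set()
    for friend in one_hop:
        two_hop |= adjacency.get(friend, set())
    return two_hop - one_hop - {start}

# A toy social network stored as adjacency sets.
adjacency = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
    "dave": {"bob"},
}
print(sorted(friends_of_friends(adjacency, "alice")))  # → ['dave']
```

A relational equivalent would need a self-join on an edge table for every hop, which is exactly the cost graph-database optimizers try to avoid.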
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
A Survey of Agent Based Pre-Processing and Knowledge Retrieval
Abstract: Information retrieval is a major task in the present scenario, as the quantity of data is
increasing at tremendous speed. Managing and mining knowledge for different users according to their
interests is the goal of every organization, whether it is related to grid computing, business intelligence,
distributed databases, or any other field. To achieve this goal of extracting quality information from
large databases, software agents have proved to be a strong pillar. Over the decades, researchers have
implemented the concept of multi-agents to carry out the data mining process by focusing on its various
steps, among which data pre-processing is found to be the most sensitive and crucial step, as the quality
of the knowledge to be retrieved depends entirely on the quality of the raw data. Many methods and
tools are available to pre-process data in an automated fashion using intelligent (self-learning) mobile
agents, effective in distributed as well as centralized databases, but various quality factors still need
attention to improve the quality of the retrieved knowledge. This article provides a review of the
integration of these two emerging fields, software agents and the knowledge retrieval process, with a
focus on the data pre-processing step.
Keywords: Data Mining, Multi Agents, Mobile Agents, Preprocessing, Software Agents
An elastic, efficient, and intelligent networking architecture is needed to process massive data, yet
existing network architectures are largely incapable of handling big data: massive data pushes network
resources to their limits, resulting in network congestion, poor performance, and degraded user
experience. This work presents the current state-of-the-art research challenges and potential solutions
for big data networking. More specifically, it presents the state of networking problems in big data with
respect to requirements, capacity, and operation; introduces the architectures of the MapReduce and
Hadoop paradigms within these research requirements, along with the fabric networks and software-defined
networks utilized in today's rapidly growing digital world; and compares and contrasts them to identify
relevant drawbacks and solutions.
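The MapReduce paradigm mentioned above can be sketched as the classic word-count pattern; this is a pure-Python, single-machine illustration rather than an actual Hadoop job:

```python
# MapReduce sketch: map emits (key, 1) pairs, shuffle groups pairs by key,
# reduce sums each group. On a cluster, map and reduce run in parallel.

def map_phase(records):
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    groups = {}
    for key, value in pairs:
        groups.setdefault(key, []).append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

records = ["big data networks", "big data", "software defined networks"]
counts = reduce_phase(shuffle(map_phase(records)))
print(counts["big"], counts["networks"])  # → 2 2
```

The shuffle step is where the networking pressure discussed above arises: intermediate pairs must cross the network between map and reduce nodes.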
A spatial data model for moving object databases
Moving object databases will have a significant role in geospatial information systems, as they allow
users to model continuous movements of entities in the database and perform spatio-temporal analysis.
Representing and querying moving objects requires an algebra with a comprehensive framework of
user-defined types together with a set of functions on those types. Moreover, in real-world
applications moving objects move along constrained environments like transportation networks, so an
extra algebra for modeling networks is demanded, too. These algebras can be inserted into any data
model if their designs are based on available standards such as those of the Open Geospatial
Consortium, which provides a common model for existing DBMSs. In this paper, we focus on extending
a spatial data model for constrained moving objects. Static and moving geometries in our model are
based on Open Geospatial Consortium standards. We also extend Structured Query Language for
retrieving, querying, and manipulating spatio-temporal data related to moving objects, as a simple and
expressive query language. Finally, as a proof of concept, we implement a generator that produces data
for moving objects constrained by a transportation network; such a generator primarily targets traffic
planning applications.
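The core of querying a moving object is temporal interpolation over its trajectory; below is a minimal sketch of an "at instant" operation, where the trajectory samples and the linear-motion assumption are illustrative, not the paper's algebra:

```python
# "At instant" query sketch: linearly interpolate a moving point's
# position between two timestamped trajectory samples.

def position_at(trajectory, t):
    # trajectory: list of (time, x, y) tuples sorted by time.
    for (t0, x0, y0), (t1, x1, y1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
    raise ValueError("instant outside the trajectory's lifespan")

# A vehicle moving along two straight road segments.
track = [(0, 0.0, 0.0), (10, 100.0, 0.0), (20, 100.0, 50.0)]
print(position_at(track, 5))   # → (50.0, 0.0)
print(position_at(track, 15))  # → (100.0, 25.0)
```

A network-constrained variant would interpolate along edge geometries of the transportation network rather than in free space.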
A unified approach for spatial data query
International Journal of Data Mining & Knowledge Management Process (IJDKP) Vol.3, No.6, November 2013
A UNIFIED APPROACH FOR SPATIAL DATA QUERY
Mohammed Abdalla 1, Hoda M. O. Mokhtar 2, Mohamed Noureldin 3
1, 2, 3 Faculty of Computers and Information, Cairo University, Giza, Egypt
ABSTRACT
With the rapid development in Geographic Information Systems (GISs) and their applications, more and
more geographical databases have been developed by different vendors. However, data integration and
access remain a big problem for the development of GIS applications, as no interoperability exists among
different spatial databases. In this paper we propose a unified approach for spatial data query. The paper
describes a framework for integrating information from repositories containing different vector data sets
formats and repositories containing raster datasets. The presented approach converts different vector data
formats into a single unified format (File Geo-Database “GDB”). In addition, we employ “metadata” to
support a wide range of users’ queries to retrieve relevant geographic information from heterogeneous and
distributed repositories. Such an employment enhances both query processing and performance.
KEYWORDS
Spatial data interoperability; GIS; Geo-Spatial Metadata; Spatial Data Infrastructure; Geo-database.
1. INTRODUCTION
The need to store and process large amounts of diverse data, which is often geographically
distributed, is obvious in a wide range of applications. Most GISs use specific data models and
databases for this purpose. This implies that making new data available to the system requires the
data to be transferred into the system’s specific data format and structure. However, this is a very
time consuming and tedious process. Data accessing, automatically or semi-automatically, often
makes large-scale investment in technical infrastructure and/or manpower inevitable. These
obstacles are some of the motivations behind the concept of information integration. With the
increase of location based services and geographically inspired applications, the integration of
raster and vector data becomes more and more important [24]. In general, a geo-database is a
database that is in some way referenced to locations on Earth [27]. This data is usually coupled with
what is known as attribute data. Attribute data are generally defined as additional
information, which can then be tied to spatial data. GIS data can be separated into two categories:
spatially referenced data, which is represented by vector and raster forms (including imagery);
and attribute tables, which are represented in tabular format. Within the spatially referenced data
group, GIS data can be further classified into two different types: vector and raster. Most GIS
applications mainly focus on the usage and manipulation of vector geo-databases with added
components to work with raster-based geo-databases. Basically, vector and raster models differ in
how they conceptualize, store, and represent the spatial locations of objects. The choice of vector,
raster, or combined forms for the spatial database is usually governed by the GIS system in use
and its ability to manipulate certain types of data. Nevertheless, integrated raster and vector
processing capabilities are most desirable and provide the greatest flexibility for data
manipulation and analysis. Many research papers discussed raster-vector integration as presented
in [24, 25, and 26]. In real world applications, the effective management and integration of
information across agency boundaries results in information being used more efficiently and
DOI : 10.5121/ijdkp.2013.3604
International Journal of Data Mining & Knowledge Management Process (IJDKP) Vol.3, No.6, November 2013
effectively [14]. Hence, developing interoperable platforms is a must. Several research works have
been directed towards establishing protocols and interface specifications offering support for the
discovery and retrieval of information that meets the user’s needs [3]. In [1], the authors refer to
spatial interoperability as the ability to communicate, run programs, or transfer spatial data
between diverse data sources without prior knowledge of the sources' characteristics.
Motivated by the importance of designing interoperable environments, spatial data infrastructures
(SDIs) were developed. A spatial data infrastructure (SDI) is a data infrastructure implementing a
framework of geographic data, metadata, users, and tools that interact to use spatial data in an
efficient way [3]. Another definition for SDI was presented in [7], where the authors define
an SDI as the technology, policies, standards, human resources, and related activities necessary to
acquire, process, distribute, use, maintain, and preserve spatial data. In general, an SDI is required to
discover and deliver spatial data from a data repository, via a spatial service provider, to a user.
The authors in [2] defined the basic software components of an SDI as (1) a software client: to
display, query, and analyze spatial data (this could be a browser or a Desktop GIS), (2) a
catalogue service: to discover, browse, and query metadata or spatial services, spatial datasets,
and other resources, (3) a spatial data service: to allow the delivery of the data via the Internet, (4)
processing services: such as datum and projection transformations, (5) a (spatial) data repository:
to store data, e.g. a spatial database, and (6) GIS software (client or desktop): to create and
update spatial data. Beside these software components, a range of (international) technical
standards are necessary that enable the interaction between the different software components.
Another vital component of an SDI is the “metadata” which can be viewed as a summarized
document providing content, quality, type, creation, and spatial information about a data set [8].
The importance of metadata in spatial data accessing, integration and management of distributed
GIS resources was explored in several works including [18, 19, 20, 21, 22]. Metadata can be
stored in any format including text file, Extensible Markup Language (XML), or database record.
The summarized view of the metadata enhances data sharing, availability, and reduces data
duplication. Inspired by the importance of developing an interoperable framework for spatial
queries, in this paper we present an interoperable architecture for spatial queries that utilizes
metadata to enhance the query performance. The proposed approach provides usage of modern
and open data access standards. It also helps to develop efficient ways to achieve interoperability,
including consolidation of links between data interoperability extensions and geographic
metadata.
The main contributions of the paper are summarized as follows:
• Developing an interoperable framework that converts the basic vector data formats
(AutoCAD DWG, File Geo-database, Personal Geo-database, Shape file, Coverage, and
Geography Markup Language) into a single unified “gdb” format.
• Presenting an easy-to-use tool for searching at the feature data level of spatial vector
data using metadata criteria.
• Using an XML-metadata style for expressing the feature metadata; such a
representation is thus not restricted to a particular standard or profile.
• Improving the quality and performance of spatial queries by filtering the number of
candidate results based on the features expressed in the metadata.
• GIS users face an opportunity and a challenge in manipulating and accessing the
huge volume of data available from various GIS systems. The proposed approach can
make it easier for them to find, access, and use other data sets. It also helps
them to easily advertise, distribute, reuse, and combine their data with other data sets.
• The proposed approach provides effective and efficient data management for
processing heterogeneous data. The power of the proposed model comes from
integrating sources and displaying to the human eye the proximity-based
relationships between objects of interest. Proximity can't be "seen" in the data, but it
can be seen on a map.
The rest of the paper is structured as follows: Section 2 presents an overview of related work. Section
3 defines the problem. Section 4, presents our proposed solution and architecture. In section 5 we
discuss the proposed system and the results achieved. In section 6 we discuss the analysis and testing
of our implemented system. Finally, section 7 concludes and presents directions for future work.
2. RELATED WORK
The need for geo-data from distributed GIS sources is seen in many applications including
decision making, location based services, and navigation applications. Integration of different
data models, types, and structures facilitates cross-data set analysis from both spatial and non-spatial perspectives. This need motivated several prior works on spatial data interoperability. In
[4], a fuzzy geospatial data modelling technique for generation of fuzzy application schema is
introduced. This approach aims to formalize the fuzzy model using description logic. The
formalization facilitates automated schema mapping required for the integration process. In [5], a
service-based methodology is discussed for integrating distributed geospatial data
repositories in adherence to OGC-specified open standards. The paper also describes the central
role of a geographic ontology in the development of an integrated information system that is
semantically interoperable, and its use for service description and subsequent discovery of
services. In [6], an important initiative to achieve GIS interoperability is presented: the
OpenGIS Consortium, an association seeking to define a set of
requirements, standards, and specifications that will support GIS interoperability. An approach
for designing an integrated interoperability model based on the definition of a common template
that integrates seven interoperability levels is proposed in [7]. In addition, several works targeted
SDI and geographic metadata. Spatial data infrastructures (SDIs) are used to support the
discovery and retrieval of distributed geographic information (GI) services by providing
catalogue services through interoperability standards. A methodology for improving
current GI service discovery is discussed in [8]. When searching spatial data, traditional queries
are no longer sufficient, because of the intrinsic complexity of data. As a matter of fact,
parameters such as filename and date allow users to pose queries which discriminate among data
solely on the basis of their organizational properties. In [9], a methodology for searching
geographic data is introduced which takes into account the various aspects previously discussed.
In [10], an approach to analyze geographic metadata for information search is introduced. In [11],
the shortcomings of conventional approaches to semantic data integration and of existing
metadata frameworks are discussed. On the other hand, the problem of vector and raster data
integration was also investigated. Traditional techniques for vector-to-raster conversion result in a
loss of information, since the shape of the entities must follow the shape of the pixels. Thus, the
information about the position of the entities in the vector data structure is lost with the conversion. In [12], an
algorithm was developed to reconstruct the boundaries of the vector geographical entities using
the information stored in the raster Fuzzy Geographical Entities. The authors utilize the fact that
the grades of membership represent partial membership of the pixels to the entities; this
information is thus valuable for reconstructing the entities' boundaries in the vector data structure,
generating boundaries of the obtained vector entities that are as close as possible to their original
position. In [15], a new data model named the Triangular Pyramid framework is proposed to
enhance the object-relational dynamic vector data model and represent the complete information
required by GIS-based applications. A spatial data warehouse based
technique for data exchange from the spatial data warehouse is proposed in [13]. However, the
data warehouse based approach has several disadvantages, given the huge volume of data that
must be updated regularly. Many of the problems associated with raster-to-vector and
vector-to-raster conversion are discussed in [27]. In [23], the authors examine the common
methods for converting spatial data sets between vector and raster formats and present the results
of extensive benchmark testing of the proposed procedures. Also, in [16], many of the problems
associated with raster-to-vector and vector-to-raster conversion are discussed. Raster maps are
considered an important source of information. Extracting vector data from raster maps usually
requires significant user input to achieve accurate results. In [17], an accurate road vectorization
technique that minimizes user input is discussed; it aims to extract accurate road vector data from
raster maps.
In this work we continue to explore possible approaches for vector and raster data integration to
develop an efficient spatial data query tool.
3. PROBLEM DEFINITION
The quality of any geo-spatial information system is the main feature that allows system clients to
fine-tune their search according to their specific needs and criteria. Nevertheless, disparate data
sets exist in different geo-spatial databases with different data formats and models. Accessing
and integrating this heterogeneous data remains a challenge to efficiently answer user queries. In
addition, with the increase in GIS applications that are based on geographic information,
developing a unified approach for spatial queries is a crucial requirement. Today, several formats
exist for vector data including: AutoCAD DWG, File Geo-database, Personal Geo-database,
Shape file, Coverage, and Geography Markup Language. Such diversity in data formats generates
a problem in communication and data transfer between different data sources. In addition,
geographical information may be stored using the vector or the raster data structure. The use of
either structure depends on the methods used to collect the data and on the application that will
use the information [12]. Also, such diversity in data models generates a problem in integration
and data access operations between different data repositories.
Example 1: Consider 3 different data sources (DS1, DS2, and DS3) where each source stores the
vector data in different format as shown in Figure 1.
Figure 1. Querying different data sources (three sources holding GML, CAD, and Shape File data, unified into the GDB format)
Assume a user query that requires data from all three sources. Such a query will require the user
to physically pose three different queries to access the different formats. In addition, the user’s
query will eventually return different results in different formats. Motivated by the problem
presented in Example 1, developing an interoperable platform is an optimal solution that unifies
both the issued query and the query results. To achieve such operation, we need to convert the
different spatial data formats (AutoCAD DWG, File Geo-database, Personal Geo-database, Shape
file, Coverage, and Geography Markup Language, etc.) into a unified format. In this paper we
select the File Geo-database format to be the final unified format.
Example 2: Consider two different data repositories with different data models (R1, R2). Assume
that R1 has raster datasets and R2 has vector datasets as shown in Figure 2.
Assume a user query that requires data from both repositories regardless of data model
representation.
Figure 2. Querying different data models (R1: raster data model, R2: vector data model, accessed via a metadata query)
Using the same sources presented in Example 1, and issuing the same user query but assuming
the existence of the required unified model, we then need to obtain a single unified query in
“gdb” format. Again, motivated by the problem in Example 2, the query result still requires
access to all repositories that have data in different models to retrieve all relevant data. Such
access can be improved by understanding the query statement and filtering initial data to capture
only relevant data. Such understanding and filtering process can be achieved using metadata.
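As a small illustration of this idea, the sketch below filters a candidate dataset by its XML metadata record before any spatial data is touched. The element names (featureName, featureCount) are illustrative only and do not follow any particular metadata standard or profile:

```python
# Sketch: metadata-based filtering of candidate datasets.
# Element names here are illustrative, not a fixed standard.
import xml.etree.ElementTree as ET

RECORD = """<metadata>
  <featureName>Streets</featureName>
  <featureCount>215</featureCount>
  <dataRepresentation>vector digital data</dataRepresentation>
</metadata>"""

def matches(xml_text, name_contains, min_count):
    """Return True when the metadata record satisfies the query criteria."""
    root = ET.fromstring(xml_text)
    name = root.findtext("featureName", default="")
    count = int(root.findtext("featureCount", default="0"))
    return name_contains in name and count > min_count

print(matches(RECORD, "Streets", 180))  # -> True
```

Only datasets whose records pass such a predicate need to be opened, which is the source of the performance gain discussed later.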
4. PROPOSED SOLUTION
As discussed in Section 3, querying different spatial databases that store spatial data in various
formats and models has a number of problems. In this paper we propose a new approach for
spatial query processing and data accessing. The proposed architecture is composed of six main
layers as shown in Figure 3.
Figure 3. Proposed architecture
The first layer represents different data sources with different vector data formats (.shp, .mif, .cad,
.gml, .mdb) and raster data formats. The second layer contains the spatial data converter
component that is responsible for unifying the vector data formats. The third layer contains the
resulting converted data in a single unified format. The fourth layer is the metadata searcher
component that is responsible to find and access the most suitable datasets regardless of the initial
data models and structures. The fifth layer contains the filtered items by the metadata component.
And finally, the sixth layer contains the final user query results. The main characteristic of our
proposed model is that we build a layer in our architecture that supports “interoperability”
operations by developing a spatial data converter component that converts different spatial data
formats (AutoCAD DWG, File Geo-database, Personal Geo-database, Shape file, Coverage, and
Geography Markup Language) into a single format (File Geo-database “gdb” ).
Nevertheless, the top reasons for choosing the file Geo-database as our final unified format are:
• The file geo-database format is ideal for storing and managing geospatial data.
• The file geo-database format offers structural, performance, and data management
advantages over personal geo-databases and shape files.
• Vector data can be stored in a file geo-database in a compressed, read-only format
that reduces storage requirements.
• Storing rasters in geo-database format manages raster data by subdividing it into
small, manageable areas called tiles, stored as large binary objects (BLOBs) in a
database.
• The file geo-database format provides easy data migration.
• The file geo-database format is inclusive: one environment for feature classes, raster
datasets, and tables.
• The file geo-database format is powerful: it enables modelling of spatial and attribute
relationships.
• The file geo-database format is scalable: it can support organization-wide usage and
workflows, and can be used with DBMSs like Oracle, IBM DB2, and Microsoft SQL
Server Express.
In addition, our model has a layer that provides usage of modern and open data access standards
and helps to develop efficient ways to achieve interoperability, including consolidation of links
between geographic data interoperability extensions and geographic metadata, by developing a
metadata searcher component that looks in repositories holding data in different spatial data
models, structures, and formats and finds the most suitable datasets. In the following discussion we
present our proposed spatial data conversion algorithm.
Algorithm 1: Spatial Data Converter
Input: A number of spatial databases with different vector data formats (GML, CAD,
MIF, mdb, and shp).
Output: The same spatial databases with a unified vector data format (File Geo-database).
Begin
Get the path of the input file;
Create an empty output file with the same name of the input file
and replace extension with “gdb”;
Define a new GeoProcessor object;
If(data format “gml” or “cad” or “mif”) Then{
Define a quick import object;
Set input file as input to QuickImport object;
Set the created empty output file as output to
quick import object;
Pass QuickImport object to GeoProcessor ;
}
ElseIf(data format is “mdb”) Then {
Initialize a CopyTool;
List all feature classes, data sets, and tables of the input file ;
Loop until no features, dataset, tables found
Begin
Set the feature or dataset or table as an input to the CopyTool;
Create the output path of the dataset or feature as the name of the
Created output file and append to it the name of item;
Set the item path as output to CopyTool;
Pass CopyTool object to GeoProcessor;
End loop
} ElseIf (data format is “.shp”) Then {
Define new Feature class object with the path of the shape file ;
Define an Append object;
Set input to Append object as feature class created from shape file;
Set output to Append object the path of the created output gdb
appended to it the name of feature class name;
}
EndIf
Execute conversion using GeoProcessor;
End
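The branching logic of Algorithm 1 can be sketched as follows. The tool names returned (QuickImport, Copy, Append) stand in for the GeoProcessor tools named in the algorithm; the helper itself is a hypothetical illustration of the dispatch, not the actual ArcGIS invocation:

```python
# Sketch of Algorithm 1's dispatch: pick a conversion route per source format.
# The tool names are labels for the GeoProcessor tools, not real API calls.
import os

def plan_conversion(path):
    """Decide which conversion route a source file takes to reach .gdb."""
    ext = os.path.splitext(path)[1].lower()
    out = os.path.splitext(path)[0] + ".gdb"
    if ext in (".gml", ".cad", ".mif", ".dwg"):
        tool = "QuickImport"   # interchange formats go through QuickImport
    elif ext == ".mdb":
        tool = "Copy"          # personal geo-database: copy each item
    elif ext == ".shp":
        tool = "Append"        # shape file: append its feature class
    else:
        raise ValueError("unsupported format: " + ext)
    return tool, out

print(plan_conversion("roads.shp"))  # -> ('Append', 'roads.gdb')
```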
By applying Algorithm 1 on the different data sources with different data formats in layer 1, we
obtain in layer 3 a single unified data format and structure (File Geo-database “gdb”). The
motivation behind choosing these formats for conversion is that they are very flexible in terms of
the ability to mix all sorts of geometry types in a single dataset, are openly documented, support
geo-referenced coordinate systems, and are considered stable exchange formats. A successful
conversion between the source formats (AutoCAD DWG, MapInfo, Personal Geo-database, Shape
file, Coverage, and Geography Markup Language) and the File Geo-database format preserves
shape size, origin, and orientation, so the same results are obtained: the areas occupied by entities
inside the original file and the converted one are always the same. Then, in layer 4, motivated by
the problem presented in Example 2, we developed a “Metadata Searcher”
component as shown in Figure 4. The metadata searcher component defines some properties (for
example: number of features, creation date, geographic form, feature name, and reference
system), and searches in different data sources and Repositories for items that match those
properties. The metadata feature selection component proceeds as follows.
Algorithm 2: Metadata Feature Selection
Input: A number of spatial databases with a unified vector data format (File Geo-database “.gdb”).
Output: A collection of features that match metadata criteria.
Begin
Define metadata search properties and values;
Define the path that contains the converted data “GDB”;
List all the converted gdb files
Loop until no files found
Begin
Loop FOR EACH features and datasets in gdb file
Begin
If item matches defined metadata properties and values Then
Add item to filtered item list
End If
End loop
End loop
End
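Algorithm 2's filtering loop can be sketched as below. Items are modeled as plain dictionaries standing in for features and datasets, and the property names are illustrative:

```python
# Sketch of Algorithm 2: scan every item in every converted geo-database and
# keep those matching the metadata criteria.
def select_items(geodatabases, criteria):
    """criteria maps a property name to a predicate over its value."""
    filtered = []
    for gdb in geodatabases:
        for item in gdb:
            if all(prop in item and test(item[prop])
                   for prop, test in criteria.items()):
                filtered.append(item)
    return filtered

gdbs = [
    [{"name": "Streets", "features": 215}, {"name": "Parcels", "features": 90}],
    [{"name": "Streets_old", "features": 170}],
]
hits = select_items(gdbs, {"name": lambda v: "Streets" in v,
                           "features": lambda v: v > 180})
print([h["name"] for h in hits])  # -> ['Streets']
```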
We apply Algorithm 2 in layer 4 of our proposed architecture on a number of spatial
databases with the unified “.gdb” format and raster datasets. Then, for every data source the
algorithm searches for the features and data elements that match the metadata search criteria and
saves the selected items in the list of filtered items that eventually contribute towards the user
query result.
[Flow chart omitted: start → define the catalogue path and the search criteria with their values → loop over all datasets in all repositories → if a dataset matches the search criteria, add it to the filtered list → end]
Figure 4. Metadata Searcher Component Flow Chart
Algorithm 3: Raster Query
Input: Raster dataset.
Output: Raster Result set.
Begin
Create the RasterExtractionOp object;
Declare the Raster input object;
Declare a RasterDescriptor object;
Select the field used for extraction using the RasterDescriptor;
Set the RasterDescriptor as an input to the RasterExtractionOp object;
Execute the query using the RasterExtractionOp object;
Save the results in a new Geodataset;
End
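The extraction step of Algorithm 3 can be illustrated with a minimal sketch that stands in for the RasterExtractionOp object; the grid here is a plain list of rows rather than a real raster dataset:

```python
# Sketch of raster extraction: keep only cells whose value satisfies the
# selection predicate, writing NoData elsewhere.
NODATA = None

def extract_by_value(grid, keep):
    """Return a new grid where cells failing the predicate become NoData."""
    return [[v if keep(v) else NODATA for v in row] for row in grid]

grid = [[12, 45, 7],
        [41, 3, 50]]
result = extract_by_value(grid, lambda v: v >= 40)
print(result)  # -> [[None, 45, None], [41, None, 50]]
```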
Next, layer 5 maintains the filtered items resulting from the different data sources that match the
specific metadata properties and is ready to receive the user query. The filtered raster datasets are
queried by applying Algorithm 3, and the filtered vector datasets are queried either by spatial data
query functions or attribute data statements. Finally, layer 6 contains the actual combined user
query results, composed of raster and vector datasets drawn from the filtered items, that are then
presented to the user.
5. RESULTS AND DISCUSSIONS
In this paper we present a holistic approach to unify spatial data query schemas. Various data
accessing and metadata management steps have been used and subsequently employed to
contribute towards designing a framework for efficiently answering spatial data query. In our
design we focused on the following features that the proposed system satisfies:
• Easy access to geospatial data repositories and retrieval of data in a transparent way. The
file Geo-database “gdb” format was chosen in our model for the reasons discussed in
section 4.
• Developing an interoperable framework that links both semantic interoperability and
syntactic interoperability is a promising scenario for deriving data from multiple
sources with different data formats and models.
• Metadata descriptions adopted in the proposed system are not reliant upon a specific
profile or standard. XML-based metadata was chosen to ensure flexibility for
discovering resources and features.
Taking those constraints into consideration, we built an easy to use tool that unifies different
vector formats into a single “gdb” format, accesses different spatial data models (Raster and
Vector) repositories, and processes user queries using spatial metadata that helps to enhance the
query performance. Figure 5 and Figure 6 show the initial input to the system where data is
presented in different spatial formats and models. This initial format is then unified as shown in
Figure 7.
Figure 5. Vector Data before applying the spatial data converter
Figure 6. Raster Data Repository
Figure 7. Unified "GDB" Format
Once the data is unified, the system starts processing spatial queries. It accepts the criteria defined
by the user that constrain the required output. Those constraints along with the metadata help to
locate the candidate data in different files. For instance, some users are interested in files that have
a specific number of features, a specific creation date, or a feature name that starts with or
contains a specific pattern. Augmenting metadata in the system allows the user to select all the
criteria he needs and search the catalogue path to locate matching data sets and feature classes.
Example 3: Consider a MQ1 (Metadata Query) with the following selection criteria as shown in
Figure 8:
Data Representation equals vector digital data, Feature Name contains Streets, Feature Count
greater than 180, East bounding coordinate equals 31.219267, Data Form Value equals File
Geo-database Feature Class, Creation Date equals 20121118, and Reference System equals
WGS_1984_UTM_Zone_36N.
Figure 8. Metadata Searcher Screen.
After the metadata query results are retrieved, the user has the ability to select features from the
single or multiple vector feature classes and datasets retrieved. For a single feature class, the user
poses a vector attribute data query based on specific values and selected criteria (VQ1); for
multiple feature classes and datasets, the user poses a spatial vector data query based on a selected
topological relation between features and the values used in buffering the selected features.
For VQ1 (vector attribute data query) the user can also specify the values associated to each
feature as follows: “Ename ≠ 'NULL', Width > 15, Shape_length >200 and METERS > 0”
For VQ2 (vector spatial data query) the user can also specify the topological relation and the
values used to buffer features, as follows: “Select features from “Fuel_Stations” that are within a
distance of “Buildings”, with a buffer of 190.000000 meters applied to the features in Buildings”.
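The attribute query VQ1 can be illustrated with a minimal sketch; the rows are plain dictionaries standing in for the attribute table (the row values are invented for illustration), and NULL is modeled as None:

```python
# Sketch of VQ1: Ename != NULL AND Width > 15 AND Shape_length > 200 AND METERS > 0.
# Field names follow the example; row values are illustrative.
def vq1(row):
    return (row["Ename"] is not None
            and row["Width"] > 15
            and row["Shape_length"] > 200
            and row["METERS"] > 0)

rows = [
    {"Ename": "Street_A", "Width": 20, "Shape_length": 350, "METERS": 350},
    {"Ename": None,       "Width": 30, "Shape_length": 500, "METERS": 500},
    {"Ename": "Street_B", "Width": 10, "Shape_length": 250, "METERS": 250},
]
print([r["Ename"] for r in rows if vq1(r)])  # -> ['Street_A']
```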
Using the sample dataset shown in Figure 7, the system will retrieve three feature classes that
match the user specified criteria and values as shown in Figure 8.
Figure 9. Vector Attribute table
Figure 10. Vector Query Result
Those matching classes are retrieved based on the metadata used in the query. The final results
are then displayed to the user as shown in Figure 9 and Figure 10. Motivated by Example 3,
assume that the user is interested in finding all datasets in all repositories, regardless of the data
representation model, that satisfy the criterion “East bounding coordinate equals 31.219267”. The
user then ANDs the query of Example 3 with the one in Example 4 to find and access all the
required datasets.
Example 4: Consider MQ2 (Metadata Query), where the user changes the query selection criteria to:
“Data Representation equals raster digital data, Feature Name contains call, East bounding
coordinate equals 31.219267, Data Form Value equals Raster Dataset, Creation Date greater than
20121220, and Reference System equals IMAGINE GeoTIFF ERDAS, Inc. Al”
After the metadata query results are retrieved, the user can query raster data by cell value. To
query a grid, the user has to use a logical expression such as RQ1: [Count] > 700 AND
[Temp_C] >= 40.34. It is also possible to query multiple grids by cell value.
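A minimal sketch of evaluating RQ1 against a raster attribute table follows; the rows and their values are invented for illustration, not taken from the actual dataset:

```python
# Sketch of RQ1 over a raster attribute table: each row carries a cell value,
# its Count, and a Temp_C attribute (field names from the example above).
def rq1(row):
    return row["Count"] > 700 and row["Temp_C"] >= 40.34

table = [
    {"Value": 1, "Count": 820, "Temp_C": 42.1},
    {"Value": 2, "Count": 650, "Temp_C": 45.0},
    {"Value": 3, "Count": 900, "Temp_C": 39.9},
]
print([r["Value"] for r in table if rq1(r)])  # -> [1]
```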
Figure 11. Raster Attribute table
Figure 12. Raster Query result
Figure 13. Final Query results Integrated Map
According to Example 4, with the sample dataset shown in Figure 11 and Figure 12, the system
retrieves one dataset that matches the user-specified criteria and values. This matching dataset is
retrieved based on the values set in the Example 4 query. The final results are then displayed to
the user as shown in Figure 13.
6. QUANTIFIABLE ANALYSIS AND TESTING
To justify using a centralized file geo-database as a back-end geospatial data store, and linking
geographic metadata with data interoperability extensions, we proposed a platform connecting
different data sources and formats to implement a unified approach for spatial data query. A
framework example was also implemented and tested. In this section we
investigate the design and features of the implemented system. Based on our previous discussion,
in this framework we develop two main components namely, a spatial data converter, and a
metadata searcher. In addition, we also developed the basic operations performed by those two
components as discussed earlier. The main characteristic of those developed operations is that
they hide implementation details from the user providing him with a transparent communication
with the system.
Following the architecture proposed in [1], our proposed system architecture is composed of four
layers; presentation layer, business logic layer, data access layer, and data management tier. The
function of each layer is as defined in [1]. The flyweight and façade design patterns were used for
implementing the four layers mentioned above [28][29]. The system starts with the user
inputting a physical location path for the spatial dataset. Then, the spatial data, irrespective of its
original format, is converted using the spatial data converter into the unified GDB format. Once
the unified data is ready, the user is requested to input the metadata search criteria and
parameters. Finally, based on the user's requests, the metadata searcher component retrieves the
results from the unified geo-database and returns them to the user.
Performance Test: The proposed framework was also tested using random features of sizes
5000, 10000, and 50000. Those features were first inserted and integrated into the centralized
geo-database along with their associated geographic views and attribute tables. Then, to evaluate
the performance, two queries were designed and posed against the system. We used the test queries to
test our proposed framework. The first query (Q1) aims to retrieve raster datasets and performs a
“raster query by attribute” against the result set. The other (Q2) aims to retrieve vector feature
classes and then performs a “vector attribute query” against the result set.
For both queries, we measured the average run time and used it as a metric for evaluating the
performance. Tables 1 and 2 present the results obtained from both queries.
Table 1. Performance test for retrieving features (Q1)

  Number of features | Features retrieved | Time with implemented system (ms) | Time without implemented system (ms)
  5000               | 178                | 195                               | 230
  10000              | 231                | 210                               | 360
  50000              | 343                | 350                               | 500
Table 2. Performance test for retrieving features (Q2)

  Number of features | Features retrieved | Time with implemented system (ms) | Time without implemented system (ms)
  5000               | 103                | 60                                | 105
  10000              | 189                | 198                               | 230
  50000              | 243                | 220                               | 380
The results displayed above show that the proposed solution is efficient for retrieving and
manipulating spatial data.
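The average run time reported in Tables 1 and 2 can be measured with a harness along these lines; run_query is a placeholder for posing Q1 or Q2 against the system:

```python
# Sketch: average wall-clock run time of a query over several repetitions,
# reported in milliseconds as in Tables 1 and 2.
import time

def average_runtime_ms(run_query, repeats=5):
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        run_query()               # stand-in for posing Q1 or Q2
        total += time.perf_counter() - start
    return (total / repeats) * 1000.0  # seconds -> milliseconds

# usage with a stand-in workload:
ms = average_runtime_ms(lambda: sum(range(10000)))
print(ms >= 0.0)  # -> True
```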
7. CONCLUSIONS AND FUTURE WORK
An efficient planning system needs accessible, affordable, adequate, accurate, and timely spatial
and non-spatial information. Information integration and sharing in turn need an efficient route
that gives access to those who need it. Such a route can be achieved through the implementation
of a well-structured interoperable approach to good information management. This paper
introduced the issues of data interoperability, the advantages of geographic metadata, and its
mechanism for data interoperability. In this paper we proposed an interoperable
framework for spatial data query. We developed a spatial data converter component that enables
the proposed framework to accept vector data in various formats and unify them into a single
“gdb” format, which can be integrated with different raster datasets. The GDB format gives users
the capability to easily and dynamically publish and exchange data in an open, non-proprietary,
industry-standard format, thus maximizing the re-use of geospatial data, eliminating
time-consuming data conversion, and reducing associated costs. The resulting files are then input to a
metadata selection component that uses the spatial features metadata to answer the user queries
more efficiently. For future work we plan to extend our work to consider raster data in order to
present a complete interoperable platform for spatial data. We also think that testing the system
on various queries can strengthen our work. based on the search results we still need to develop a
“ranking component” based on data mining techniques that is able to integrate with our proposed
model, to sort results based on the importance of information value to the user is must. Finally,
the current proposed approach still cannot solve the problem of semantic interoperability,
investigating this point can be a good point for future work.
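To make the metadata selection idea concrete, the sketch below filters candidate layers by their metadata (spatial extent and keywords) before any actual data is opened, which is the sense in which metadata lets queries be answered more efficiently. The `LayerMetadata` structure, the `select_layers` helper, and the sample catalog are hypothetical illustrations, not part of the implemented system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LayerMetadata:
    name: str
    extent: tuple          # bounding box (xmin, ymin, xmax, ymax)
    keywords: frozenset

def intersects(a, b):
    """True when two bounding boxes overlap."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def select_layers(catalog, query_extent, query_keywords):
    """Return names of layers whose metadata matches the query,
    so only those layers need to be opened and scanned."""
    return [m.name for m in catalog
            if intersects(m.extent, query_extent)
            and query_keywords & m.keywords]

catalog = [
    LayerMetadata("roads",  (30.0, 29.0, 32.0, 31.0), frozenset({"transport"})),
    LayerMetadata("rivers", (25.0, 20.0, 27.0, 22.0), frozenset({"hydrology"})),
]
print(select_layers(catalog, (29.5, 28.5, 31.0, 30.0), {"transport"}))
# → ['roads']
```

Because the metadata records are small and held separately from the data, this pre-filtering step is cheap compared with scanning every repository for each query.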
AUTHORS
Mohammed Abdalla is a software engineer at Hewlett-Packard Enterprise Services, Inc. He has more
than five years of experience in software development and analysis, including prior experience
developing ERP, e-commerce, mobile payment, and e-payment applications. In June 2008 he earned a
Bachelor of Computer Science degree from Cairo University with a grade of very good, and an
excellent grade for his graduation project.
Dr. Hoda M. O. Mokhtar is currently an associate professor in the Information Systems Department,
Faculty of Computers and Information, Cairo University. She received her PhD in Computer Science
in 2005 from the University of California, Santa Barbara, and her MSc. and BSc. in 2000 and 1997,
respectively, from the Computer Engineering Department, Faculty of Engineering, Cairo University.
Her research interests are database systems, moving object databases, data warehousing, and data
mining.