IJRET: International Journal of Research in Engineering and Technology is an international, peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. It brings together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
This document summarizes a research paper that proposes a new technique for web page clustering inspired by the cemetery organization behavior of ants. The technique involves three main steps:
1) Generating a term-document matrix to represent web pages and reducing the dimensionality of the matrix using Latent Semantic Indexing.
2) Transforming the web pages into a two-dimensional grid space based on the cemetery organization behavior of ants, where web pages are more likely to be placed near similar web pages.
3) Clustering the web pages represented in the two-dimensional grid space using k-means clustering. The paper claims this technique can improve web page clustering compared to other approaches.
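The first and third steps above can be sketched with standard tools. The following is a minimal illustration on a toy corpus, not the paper's implementation: the ant-based grid placement of step 2 is omitted, and the corpus, component count, and cluster count are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Toy "web pages" standing in for a real crawl (hypothetical data).
pages = [
    "python code tutorial programming",
    "programming language python guide",
    "football match score league",
    "league football goal score",
]

# Step 1: build a term-document matrix and reduce it with LSI
# (truncated SVD is the standard way to compute LSI).
X = TfidfVectorizer().fit_transform(pages)
X_lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)

# Step 3: k-means on the reduced representation.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_lsi)
print(labels)  # pages 0-1 and 2-3 should share a cluster each
```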
A Semantic Approach to Retrieving, Linking, and Integrating Heterogeneous Ge... - Craig Knoblock
This document proposes a semantic approach to retrieve, link, and integrate heterogeneous geospatial data. It models geospatial data using RDF and an ontology, links similar entities across data sources using geospatial relationships and similarity metrics, and integrates the data by eliminating redundancy and combining complementary properties with SPARQL queries. The approach aims to empower end-users to more easily extract, combine and use geospatial data from different sources.
Gravitational search algorithm with chaotic map (GSA-CM) for solving optimiza... - eSAT Journals
Abstract
The Gravitational Search Algorithm (GSA) is a recent nature-inspired heuristic algorithm based on Newton's law of gravity and mass interactions. It has attracted much attention because it delivers high performance on a variety of optimization problems. This study hybridizes GSA with chaotic equations: ten chaotic-based GSA (GSA-CM) methods, which drive the random selections with different chaotic maps, have been developed. The proposed methods were applied to the minimization of benchmark problems and the results were compared. The numerical results show that most of the proposed algorithms improve the performance of GSA and the quality of its solutions.
Keywords: Computational Intelligence, Evolutionary Computation, Heuristic Algorithms, Chaotic Maps, Optimization Methods.
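The core idea behind GSA-CM, replacing uniform random draws with values from a chaotic sequence, can be illustrated with the logistic map, one commonly used chaotic map. This sketch is illustrative only; the paper's ten maps and their coupling to GSA are not reproduced here.

```python
def logistic_map(x0, n, r=4.0):
    """Generate n chaotic values in (0, 1) with the logistic map
    x_{k+1} = r * x_k * (1 - x_k); at r = 4 the map is fully chaotic."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

# These values would replace calls to a uniform RNG inside GSA.
seq = logistic_map(0.7, 5)
print(seq)
```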
A hybrid approach for analysis of dynamic changes in spatial data - ijdms
Any geographic location undergoes changes over time. These changes can be observed with the naked eye only if they are numerous and concentrated in a small area. When the changes are small and spread over a large area, however, they are very difficult to observe or extract. At present, few methods are available for this type of problem, such as GRID and DBSCAN, and these existing mechanisms are not adequate for accurately detecting the changes that matter most, such as deforestation and land grabbing. This paper proposes a new mechanism to solve this problem: satellite images of the same location taken at different times are compared. Partitioning the satellite image into grids, as the proposed hybrid method does, exposes finer details of the image and improves the precision of clustering compared with manipulating the whole image at once, as DBSCAN does; the simplicity of DBSCAN is retained while each grid partition is processed.
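A minimal sketch of the hybrid idea: partition point data (for example, changed pixels extracted from two satellite images) into grid cells, then run DBSCAN inside each cell instead of over the whole image at once. The point data, grid size, and DBSCAN parameters here are hypothetical.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Synthetic "change points" standing in for extracted image changes.
rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(200, 2))

# Partition the 100x100 area into 50x50 grid cells.
cell = 50.0
cells = {}
for p in points:
    key = (int(p[0] // cell), int(p[1] // cell))
    cells.setdefault(key, []).append(p)

# Run DBSCAN per cell; -1 labels are noise, not clusters.
total_clusters = 0
for key, pts in cells.items():
    labels = DBSCAN(eps=5.0, min_samples=3).fit_predict(np.array(pts))
    total_clusters += len(set(labels) - {-1})

print(len(cells), "grid cells,", total_clusters, "dense change clusters")
```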
International Journal of Engineering Research and Development (IJERD) - IJERD Editor
This document summarizes a research paper that develops a data-driven stochastic model of zebrafish locomotion. The researchers analyzed tracking data from individual zebrafish to inform parameters for stochastic differential equations modeling the fish's turning and swimming speed over time. They extend previous models that assumed fixed swimming speed to incorporate experimentally observed speed regulation behavior, which is important for social interactions. The goal is to generate an empirical model of individual zebrafish movement that can form the basis for future models of group dynamics.
Principle Component Analysis Based on Optimal Centroid Selection Model for Su... - ijtsrd
Clustering large, sparse, large-scale data is an open research problem in data mining. Discovering significant information through standard clustering algorithms is inadequate because much of the data is not actionable, and existing clustering techniques are not feasible for time-varying data in high-dimensional space. Subspace clustering addresses these problems by incorporating domain knowledge and parameter-sensitive prediction; the sensitiveness of the data is also predicted through a thresholding mechanism. Usability and usefulness in 3D subspace clustering, and determining the correct dimensions consistently, remain important open issues. The resulting solutions can also help police departments and law-enforcement organisations better understand the issues in their data, track activities, and predict likelihoods. In this paper, a Centroid-based Subspace Forecasting Framework with constraints (must-link and must-not-link, informed by domain knowledge) is proposed. In the unsupervised subspace clustering algorithm, inconsistent constraints correlating to dimensions are resolved through singular value decomposition. Principal component analysis is used to estimate the actionability of particular attributes, and domain knowledge is used to refine and validate the optimal centroids dynamically. Experimental results show that the proposed framework outperforms competing subspace clustering techniques in terms of efficiency, F-measure, parameter insensitiveness and accuracy. G. Raj Kamal | A. Deepika | D. Pavithra | J. Mohammed Nadeem | V. Prasath Kumar, "Principle Component Analysis Based on Optimal Centroid Selection Model for SubSpace Clustering Model", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-4, June 2020. URL: https://www.ijtsrd.com/papers/ijtsrd31374.pdf Paper URL: https://www.ijtsrd.com/computer-science/data-miining/31374/principle-component-analysis-based-on-optimal-centroid-selection-model-for-subspace-clustering-model/g-raj-kamal
Defining Homogenous Climate zones of Bangladesh using Cluster Analysis - Premier Publishers
Climate zones of Bangladesh are identified using the mathematical methodology of cluster analysis. Monthly rainfall data from 34 climate stations, covering 1991 to 2013, are used. Five agglomerative hierarchical clustering methods, based on six commonly used proximity measures, are chosen to perform the regionalization. In addition, three popular methods, K-means, fuzzy and density-based clustering, are applied initially to decide the most suitable method for identifying homogeneous regions. The stability of the clusters is also tested against nine validity indices. The Ward method based on Euclidean distance, K-means and fuzzy clustering are found most likely to yield acceptable results in this particular case. The analysis identifies seven distinct climate zones in Bangladesh.
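The Ward-plus-Euclidean regionalization step can be sketched with SciPy on station-by-month rainfall vectors. The data below is synthetic (two artificial groups of stations), not the actual Bangladesh records, and the cluster count is fixed at two for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic station-by-month rainfall: 5 wet stations, 5 dry stations.
rng = np.random.default_rng(1)
wet = rng.normal(300, 20, size=(5, 12))
dry = rng.normal(100, 20, size=(5, 12))
X = np.vstack([wet, dry])

# Ward agglomerative clustering (Ward linkage uses Euclidean distance),
# then cut the dendrogram into two zones.
Z = linkage(X, method="ward")
zones = fcluster(Z, t=2, criterion="maxclust")
print(zones)
```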
This document summarizes a research paper that evaluates cluster quality using a modified density subspace clustering approach. It discusses how density subspace clustering can be used to identify clusters in high-dimensional datasets by detecting density-connected clusters in all subspaces. The proposed approach uses a density subspace clustering algorithm to select attribute subsets and identify the best clusters. It then calculates intra-cluster and inter-cluster distances to evaluate cluster quality and compares the results to other clustering algorithms in terms of accuracy and runtime. Experimental results showed that the proposed method improves clustering quality and performs faster than existing techniques.
Text documents clustering using modified multi-verse optimizer - IJECEIAES
In this study, a multi-verse optimizer (MVO) is utilised for the text document clustering (TDC) problem. TDC is treated as a discrete optimization problem, and an objective function based on the Euclidean distance is applied as the similarity measure. TDC is tackled by dividing the documents into clusters; documents belonging to the same cluster are similar, whereas those belonging to different clusters are dissimilar. MVO, a recent metaheuristic optimization algorithm established for continuous optimization problems, can intelligently navigate different areas of the search space and search deeply within each area using a particular learning mechanism. The proposed algorithm, called MVOTDC, adapts the convergence behaviour of the MVO operators to discrete, rather than continuous, optimization problems. To evaluate MVOTDC, a comprehensive comparative study is conducted on six text document datasets with various numbers of documents and clusters. The quality of the final results is assessed using precision, recall, F-measure, entropy, accuracy, and purity measures. Experimental results reveal that the proposed method performs competitively with state-of-the-art algorithms. Statistical analysis also shows that MVOTDC produces significant results in comparison with three well-established methods.
A Novel Multi-Viewpoint based Similarity Measure for Document Clustering - IJMER
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
CDS based energy efficient topology control algorithm in wireless sensor net... - eSAT Journals
Abstract
Wireless Sensor Networks (WSNs) are self-organized networks consisting of a large number of sensor nodes that collect data in various environments [1, 2]. The sensors run on batteries with limited lifetime, so it is a challenge to create an energy-efficient network that reduces energy consumption and interference in the network graph and thereby extends the network lifetime [2]. Topology control is a well-known technique for saving energy and extending network lifetime in WSNs, and the most widely used topology control strategy is the construction of a Connected Dominating Set (CDS) [3, 4]. In this paper, we construct a CDS-based energy-efficient topology control algorithm, GCDSTC, for WSNs. The performance analysis studies the GCDSTC algorithm in terms of complexity and compares it with the EBTC (Energy Balanced Topology Control) algorithm. The simulation results indicate that the GCDSTC algorithm reduces energy consumption and interference in the network graph, thereby enhancing the network lifetime.
Keywords: Wireless Sensor Network (WSN), Connected Dominating Set (CDS), Topology Control (TC)
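A greedy sketch of constructing a Connected Dominating Set (CDS), the backbone structure this line of work builds on. The graph, node names, and greedy rule are illustrative; this is not the paper's GCDSTC algorithm itself.

```python
# Small network graph as an adjacency map (hypothetical topology).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "e"},
    "d": {"b"},
    "e": {"c"},
}

def is_dominating(graph, ds):
    """Every node is in ds or adjacent to a node in ds."""
    covered = set(ds)
    for n in ds:
        covered |= graph[n]
    return covered == set(graph)

nodes = sorted(graph)  # deterministic tie-breaking
ds, covered = set(), set()
while covered != set(graph):
    candidates = [n for n in nodes if n not in ds]
    if ds:
        # Grow only from the neighbourhood of ds, keeping it connected.
        candidates = [n for n in candidates if any(n in graph[m] for m in ds)]
    # Greedy: add the candidate that covers the most uncovered nodes.
    best = max(candidates, key=lambda n: len(({n} | graph[n]) - covered))
    ds.add(best)
    covered |= {best} | graph[best]

print(sorted(ds))  # the backbone nodes
```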
The document summarizes research analyzing student movement patterns on a university campus through GPS tracking and data visualization. Key findings include identifying crowded areas like classrooms and cafeterias as "sinks" where students spent significant time, and using adjacency matrices to represent clustering of students within 100m of each other over time. Future work proposed simulating the real data on a synthetic campus model grid to further analyze cluster formation and validate theoretical assumptions.
Simultaneous Real time Graphical Representation of Kinematic Variables Using ... - iosrjce
IOSR Journal of Applied Physics (IOSR-JAP) is a double-blind, peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of physics and its applications. The journal welcomes high-quality papers on theoretical developments and practical applications in applied physics. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
IMPROVED NEURAL NETWORK PREDICTION PERFORMANCES OF ELECTRICITY DEMAND: MODIFY... - csandit
Accurate prediction of electricity demand can bring extensive benefits to any country, as the forecast values help the relevant authorities make better decisions about electricity generation, transmission and distribution. The literature reveals that, compared with conventional time-series techniques, improved artificial-intelligence approaches provide better prediction accuracy. However, the accuracy of predictions from intelligent approaches such as neural networks is strongly influenced by the correct selection of inputs and the number of neuro-forecasters used for prediction. This research shows how a cluster analysis performed to group similar day types can contribute to selecting a better set of neuro-forecasters in neural networks. Daily total electricity demand over five years was considered for the analysis, and each date was assigned to one of thirteen day types in a Sri Lankan context. As a stochastic trend could be seen over the years, the trend was removed by taking the first difference of the series prior to performing the k-means clustering. Three clusters were found using Silhouette plots, and thus three neuro-forecasters were used for predictions. This paper illustrates the proposed modified neural network procedure using electricity demand data.
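The pre-clustering steps described above can be sketched as: first-difference a trending daily-demand series to remove the trend, then choose the number of k-means clusters with silhouette scores. The series below is synthetic, not the Sri Lankan data, and the candidate cluster counts are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic daily demand: slow trend + three hidden day-type offsets + noise.
rng = np.random.default_rng(0)
n_days = 300
trend = np.linspace(100.0, 130.0, n_days)
offsets = np.array([0.0, 15.0, 30.0])
demand = trend + offsets[rng.integers(0, 3, n_days)] + rng.normal(0, 1, n_days)

# First difference removes the trend before clustering.
X = np.diff(demand).reshape(-1, 1)

# Pick the cluster count with the best silhouette score.
best_k, best_score = None, -1.0
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(best_k, round(best_score, 3))
```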
Data reduction techniques for high dimensional biological data - eSAT Journals
Abstract
High-dimensional biological datasets have been growing rapidly in recent years. Extracting knowledge from and analyzing high-dimensional biological data is a key challenge in which variety and veracity are the two distinguishing characteristics. The questions that arise are how to perform dimensionality reduction on this heterogeneous data, how to develop a high-performance platform to analyze high-dimensional biological data efficiently, and how to find what is useful in it. To discuss this issue in depth, this paper begins with a brief introduction to the data analytics available for biological data, followed by a discussion of big data analytics and a survey of various data reduction methods for biological data. We propose a dense clustering algorithm for standard high-dimensional biological data.
Keywords: Big Data Analytics, Dimensionality Reduction
We build a network of locations from geolocated tweets on Twitter. We then analyze the small-world and scale-free properties of this spatial network, and use modularity to reveal the structure of neighborhood communities.
PERFORMANCE EVALUATION OF GRID-BASED ROUTING PROTOCOL FOR UNDERWATER WIRELESS... - ijwmn
In this paper, we conduct a simulation study to evaluate the Multipath Grid-Based Geographic Routing Protocol (MGGR) under selected mobility models. The protocol is evaluated under three mobility models (Random Way Point, Reference Point Group, and the Meandering Current Mobility Model) using the Aqua-Sim NS2-based simulator. An extensive number of experiments were conducted to assess the packet delivery ratio, end-to-end delay, and power consumption under different operating conditions. We observed that the performance of MGGR is only marginally affected by the mobility behavior of nodes, even as node movement speed increases. Nevertheless, the evaluation also showed that MGGR exhibits only average performance when used in coastal areas: the MCMM (used primarily to model the movement of nodes in coastal areas) gave the lowest performance compared with the other models.
Study of the Class and Structural Changes Caused By Incorporating the Target ... - ijceronline
High-dimensional data undergoes several changes when processed with machine learning and pattern recognition techniques. Dimensionality reduction is a widely used pre-processing technique for analyzing and representing high-dimensional data, and it causes several structural changes in the data. When high-dimensional data is used to extract just the target class from among several spatially scattered classes, the philosophy of dimensionality reduction is to find an optimal subset of features, either from the original space or from a transformed space, using the control set of the target class, and then project the input space onto this optimal feature subspace. This paper is an exploratory analysis of the class properties and structural properties affected by target-class-guided feature subsetting in particular. K-nearest neighbors and the minimum spanning tree are employed to study the structural properties, and cluster analysis is applied to understand the properties of the target class and the other classes. The experiments are conducted on target-class-derived features of selected benchmark datasets, namely IRIS, AVIRIS Indiana Pine, and the ROSIS Pavia University dataset. Experiments are also extended to data represented in the optimal principal components obtained by transforming the subset of features, and the results are compared.
This document summarizes an article that proposes using genetic algorithms to improve the effectiveness of a text to matrix generator. It begins with an overview of information retrieval and discusses the vector space model and genetic algorithms. It then proposes a genetic approach to optimize the objective function of a text to matrix generator to increase the average number of terms. The goal is to retrieve more relevant documents by obtaining the best combination of terms from document collections using genetic algorithms. Experimental results are presented to validate that the genetic approach improves the performance of the text to matrix generator.
A SURVEY ON OPTIMIZATION APPROACHES TO TEXT DOCUMENT CLUSTERING - ijcsa
This document provides a survey of optimization approaches that have been applied to text document clustering. It discusses several clustering algorithms and categorizes them as partitioning methods, hierarchical methods, density-based methods, grid-based methods, model-based methods, frequent pattern-based clustering, and constraint-based clustering. It then describes several soft computing techniques that have been used as optimization approaches for text document clustering, including genetic algorithms, bees algorithms, particle swarm optimization, and ant colony optimization. These optimization techniques perform a global search to improve the quality and efficiency of document clustering algorithms.
Modeling monthly average daily diffuse radiation for Dhaka, Bangladesh - eSAT Journals
Abstract
The diffuse part of solar radiation is one of the elements necessary for the design and evaluation of the energy production of a solar system. However, in most cases when radiometric measurements are made, only global radiation is available. To remedy this situation, this paper presents a model of the diffuse radiation measured on a horizontal surface for the capital city of Bangladesh. The correlation established for the chosen site was compared with the work of Liu and Jordan, Page, Collares-Pereira and Rabl, Modi and Sukhatme, and Gupta et al.
Keywords: Diffuse Radiation, Clearness Index, Regression Analysis, Horizontal Radiation.
Computational Database for 3D and 2D materials to accelerate discovery - KAMAL CHOUDHARY
The density functional theory section of JARVIS (JARVIS-DFT) consists of thousands of VASP-based calculations for 3D bulk, single-layer (2D), nanowire (1D) and molecular (0D) systems. Most of the calculations are carried out with the optB88-vdW functional. JARVIS-DFT includes materials data such as energetics, diffraction patterns, radial distribution functions, band structures, densities of states, carrier effective masses, temperature- and carrier-concentration-dependent thermoelectric properties, elastic constants and gamma-point phonons.
MK-Prototypes: A Novel Algorithm for Clustering Mixed Type Data - IJMER
Clustering mixed-type data is one of the major research topics in data mining. In this paper, a new algorithm for clustering mixed-type data is proposed in which a distribution centroid represents the prototype of the categorical variables in a cluster; this is combined with the mean to represent the prototype of clusters with mixed-type variables. In the method, data are observed from different views and the variables are grouped into different views. Instances that can be viewed differently from different viewpoints are defined as multiview data. The differences among views are usually ignored during clustering; here, both view weights and variable weights are computed simultaneously. The view weight determines the closeness or density of a view, and the variable weight identifies the significance of each variable. Both weights enter the distance function used to assign objects to clusters. The proposed method extends k-prototypes so that it automatically computes both view and variable weights. The resulting MK-Prototypes algorithm is compared with two other clustering algorithms.
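The core idea of combining a mean for numeric variables with a distribution centroid for categorical ones can be sketched as below. This is an illustrative reading of the abstract, not the authors' exact MK-Prototypes formulation; the weighting scheme shown is a plain per-variable weight:

```python
from collections import Counter

def distribution_centroid(values):
    # Relative frequency of each category among a cluster's members.
    n = len(values)
    return {cat: cnt / n for cat, cnt in Counter(values).items()}

def mixed_distance(obj_num, obj_cat, centre_num, centre_cat, w_num, w_cat):
    # Numeric part: weighted squared distance to the cluster mean.
    d = sum(w * (x - c) ** 2 for w, x, c in zip(w_num, obj_num, centre_num))
    # Categorical part: 1 minus the centroid's frequency of the object's category,
    # so an object matching the cluster's dominant category is "close".
    d += sum(w * (1.0 - centre.get(x, 0.0))
             for w, x, centre in zip(w_cat, obj_cat, centre_cat))
    return d

centre_cat = [distribution_centroid(["red", "red", "blue", "red"])]
d = mixed_distance([1.0], ["red"], [0.5], centre_cat, [1.0], [1.0])
print(d)  # 0.25 numeric + 0.25 categorical = 0.5
```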
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Effectiveness of multilayer coated tool in turning of AISI 430F steel — eSAT Publishing House
This document summarizes a research paper that evaluates cluster quality using a modified density subspace clustering approach. It discusses how density subspace clustering can be used to identify clusters in high-dimensional datasets by detecting density-connected clusters in all subspaces. The proposed approach uses a density subspace clustering algorithm to select attribute subsets and identify the best clusters. It then calculates intra-cluster and inter-cluster distances to evaluate cluster quality and compares the results to other clustering algorithms in terms of accuracy and runtime. Experimental results showed that the proposed method improves clustering quality and performs faster than existing techniques.
Text documents clustering using modified multi-verse optimizer — IJECEIAES
In this study, a multi-verse optimizer (MVO) is utilised for the text document clustering (TDC) problem. TDC is treated as a discrete optimization problem, and an objective function based on the Euclidean distance is applied as the similarity measure. TDC is tackled by dividing the documents into clusters; documents belonging to the same cluster are similar, whereas those belonging to different clusters are dissimilar. MVO, a recent metaheuristic optimization algorithm established for continuous optimization problems, can intelligently navigate different areas of the search space and search deeply in each area using a particular learning mechanism. The proposed algorithm, called MVOTDC, adapts the convergence behaviour of the MVO operators to deal with discrete, rather than continuous, optimization problems. To evaluate MVOTDC, a comprehensive comparative study is conducted on six text document datasets with various numbers of documents and clusters. The quality of the final results is assessed using precision, recall, F-measure, entropy, accuracy, and purity measures. Experimental results reveal that the proposed method performs competitively with state-of-the-art algorithms. Statistical analysis also shows that MVOTDC produces significant results in comparison with three well-established methods.
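The Euclidean-distance objective described above can be written down directly; a candidate clustering is scored by summing each document vector's distance to its cluster centroid, which the optimizer then minimizes. The tiny document vectors here are hypothetical, and the MVO search itself is omitted:

```python
import numpy as np

def tdc_objective(docs, labels, k):
    """Sum of Euclidean distances from each document to its cluster centroid."""
    total = 0.0
    for c in range(k):
        members = docs[labels == c]
        if len(members) == 0:
            continue
        centroid = members.mean(axis=0)
        total += np.linalg.norm(members - centroid, axis=1).sum()
    return total

docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
good = tdc_objective(docs, np.array([0, 0, 1, 1]), 2)  # coherent partition
bad = tdc_objective(docs, np.array([0, 1, 0, 1]), 2)   # mixed partition
print(good < bad)  # the coherent partition scores lower
```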
A Novel Multi-Viewpoint based Similarity Measure for Document Clustering — IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
International Journal of Modern Engineering Research (IJMER) covers all the fields of engineering and science: Electrical Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, Computer Engineering, Agricultural Engineering, Aerospace Engineering, Thermodynamics, Structural Engineering, Control Engineering, Robotics, Mechatronics, Fluid Mechanics, Nanotechnology, Simulators, Web-based Learning, Remote Laboratories, Engineering Design Methods, Education Research, Students' Satisfaction and Motivation, Global Projects, Assessment, and many more.
CDS based energy efficient topology control algorithm in wireless sensor net... — eSAT Journals
Abstract Wireless Sensor Networks (WSNs) are self-organized networks consisting of a large number of sensor nodes that collect data in various environments [1, 2]. The sensors run on batteries with limited lifetime, so it is a challenge to create an energy-efficient network that reduces energy consumption and interference in the network graph and thereby extends the network lifetime [2]. Topology control is a well-known technique for saving energy and extending network lifetime in WSNs, and the most widely used topology control strategy is the construction of a Connected Dominating Set (CDS) [3, 4]. In this paper, we construct a CDS-based energy-efficient topology control algorithm, GCDSTC, for WSNs. The performance analysis studies the GCDSTC algorithm in terms of complexity and compares it with the EBTC (Energy Balanced Topology Control) algorithm. The simulation results indicate that the GCDSTC algorithm reduces energy consumption and interference in the network graph, thereby enhancing the network lifetime. Keywords: Wireless Sensor Network (WSN), Connected Dominating Set (CDS), Topology Control (TC).
The document summarizes research analyzing student movement patterns on a university campus through GPS tracking and data visualization. Key findings include identifying crowded areas like classrooms and cafeterias as "sinks" where students spent significant time, and using adjacency matrices to represent clustering of students within 100m of each other over time. Future work proposed simulating the real data on a synthetic campus model grid to further analyze cluster formation and validate theoretical assumptions.
Simultaneous Real time Graphical Representation of Kinematic Variables Using ...iosrjce
IOSR Journal of Applied Physics (IOSR-JAP) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of physics and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in applied physics. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
IMPROVED NEURAL NETWORK PREDICTION PERFORMANCES OF ELECTRICITY DEMAND: MODIFY... — csandit
Accurate prediction of electricity demand can bring extensive benefits to any country, as the forecast values help the relevant authorities to take decisions regarding electricity generation, transmission and distribution appropriately. The literature reveals that, compared to conventional time-series techniques, improved artificial-intelligence approaches provide better prediction accuracy. However, the accuracy of predictions using intelligent approaches such as neural networks is strongly influenced by the correct selection of inputs and the number of neuro-forecasters used for prediction. This research shows how a cluster analysis performed to group similar day types can contribute towards selecting a better set of neuro-forecasters in neural networks. Daily total electricity demand for five years was considered for the analysis, and each date was assigned to one of thirteen day types in a Sri Lankan context. As a stochastic trend could be seen over the years, the trend was removed by taking the first difference of the series prior to performing the k-means clustering. Three clusters were found using silhouette plots, and thus three neuro-forecasters were used for predictions. This paper illustrates the proposed modified neural network procedure using electricity demand data.
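The preprocessing pipeline above (first-difference the series to remove the trend, then cluster the day values with k-means) can be sketched on a synthetic series; the demand numbers, the three latent day-type offsets, and the quantile-based initialisation are all assumptions, not the Sri Lankan data, and the silhouette-based choice of k is omitted:

```python
import numpy as np

def kmeans_1d(x, k, iters=20):
    """Simple 1-D k-means with quantile initialisation."""
    centres = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centres[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = x[labels == c].mean()
    return centres

trend = np.linspace(100.0, 160.0, 300)      # deterministic stand-in for the trend
day_type = np.tile([0.0, 10.0, -5.0], 100)  # three latent day-type offsets
demand = trend + day_type

diffed = np.diff(demand)                    # first difference removes the trend
centres = np.sort(kmeans_1d(diffed, 3))     # recovers the three day-type steps
```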
Data reduction techniques for high dimensional biological data — eSAT Journals
Abstract
High-dimensional biological datasets have been growing rapidly in recent years. Extracting knowledge from and analyzing high-dimensional biological data is one of the key challenges, with variety and veracity as the two distinguishing characteristics. The questions that arise are how to perform dimensionality reduction for this heterogeneous data, how to develop a high-performance platform to efficiently analyze high-dimensional biological data, and how to find useful information in this data. To discuss this issue in depth, this paper begins with a brief introduction to the data analytics available for biological data, followed by a discussion of big data analytics and a survey of various data reduction methods for biological data. We propose a dense clustering algorithm for standard high-dimensional biological data.
Keywords: Big Data Analytics, Dimensionality Reduction
We build a network of locations from geolocated tweets. We then analyze the small-world and scale-free properties of this spatial network. We also use modularity to reveal the structure of neighborhood communities.
PERFORMANCE EVALUATION OF GRID-BASED ROUTING PROTOCOL FOR UNDERWATER WIRELESS... — ijwmn
In this paper, we conduct a simulation study to evaluate the Multipath Grid-Based Geographic Routing Protocol (MGGR) under different mobility models. The protocol has been evaluated under three mobility models (Random Waypoint, Reference Point Group, and the Meandering Current Mobility Model) using the Aqua-Sim NS2-based simulator. An extensive number of experiments was conducted to assess the packet delivery ratio, end-to-end delay, and power consumption under different operating conditions. We noticed that the performance of MGGR is only marginally affected by the mobility behavior of nodes, even as the speed of node movement increases. Nevertheless, the evaluation also showed that MGGR exhibits only average performance when used in coastal areas: the MCMM (used primarily to model the movement of nodes in coastal areas) gave the lowest performance compared to the other models.
Study of the Class and Structural Changes Caused By Incorporating the Target ... — ijceronline
High-dimensional data undergoes several changes when processed with machine learning and pattern recognition techniques. Dimensionality reduction is one successfully used pre-processing technique for analyzing and representing high-dimensional data, and it causes several structural changes in the data. When high-dimensional data is used to extract just the target class from among several spatially scattered classes, the philosophy of dimensionality reduction is to find an optimal subset of features, either from the original space or from a transformed space, using the control set of the target class, and then project the input space onto this optimal feature subspace. This paper is an exploratory analysis of the class properties and structural properties that are affected by such target-class-guided feature subsetting. K-nearest neighbors and the minimum spanning tree are employed to study the structural properties, and cluster analysis is applied to understand the properties of the target class and the other classes. The experiments are conducted on target-class-derived features of selected benchmark data sets, namely IRIS, AVIRIS Indian Pines and the ROSIS Pavia University data set. Experimentation is also extended to data represented in the optimal principal components obtained by transforming the subset of features, and the results are compared.
This document summarizes an article that proposes using genetic algorithms to improve the effectiveness of a text to matrix generator. It begins with an overview of information retrieval and discusses the vector space model and genetic algorithms. It then proposes a genetic approach to optimize the objective function of a text to matrix generator to increase the average number of terms. The goal is to retrieve more relevant documents by obtaining the best combination of terms from document collections using genetic algorithms. Experimental results are presented to validate that the genetic approach improves the performance of the text to matrix generator.
A SURVEY ON OPTIMIZATION APPROACHES TO TEXT DOCUMENT CLUSTERING — ijcsa
This document provides a survey of optimization approaches that have been applied to text document clustering. It discusses several clustering algorithms and categorizes them as partitioning methods, hierarchical methods, density-based methods, grid-based methods, model-based methods, frequent pattern-based clustering, and constraint-based clustering. It then describes several soft computing techniques that have been used as optimization approaches for text document clustering, including genetic algorithms, bees algorithms, particle swarm optimization, and ant colony optimization. These optimization techniques perform a global search to improve the quality and efficiency of document clustering algorithms.
This document describes the implementation of an embedded Bluetooth data broadcast system. The system uses an ARM9 microprocessor running Linux to broadcast information to multiple Bluetooth devices simultaneously via Bluetooth 2.0's EDR technology. It achieves both single-point and multi-point transmission of files and information updating through a Bluetooth-TCP/IP connection. Testing showed the system can reliably broadcast files to multiple devices simultaneously at rates up to 58.8KBps and update information through an FTP connection. The system provides a low-cost solution for wireless information broadcasting with applications in advertising and public information systems.
Achieving operational excellence by implementing an erp (enterprise resource ...eSAT Publishing House
This document presents a comparison of linear regression and support vector machine (SVM) models for predicting construction project duration. A linear regression model was applied to data from 75 construction projects, using the Bromilow time-cost model. This achieved 73% accuracy based on R-squared and 10% error based on MAPE. An SVM model was then applied to the same data, achieving significantly improved prediction accuracy. The document provides background on linear regression, SVM, and the data and variables used to build and evaluate the two models.
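The Bromilow time-cost model referenced above has the form T = K * C^B (duration as a power function of cost), which becomes linear in log space and can be fitted by ordinary regression. The costs, durations, and recovered coefficients below are synthetic, not the 75-project dataset:

```python
import numpy as np

# Synthetic projects generated exactly from T = K * C**B with K=120, B=0.3.
cost = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
duration = 120.0 * cost ** 0.3

# log T = log K + B * log C, so a linear fit in log space recovers K and B.
B, logK = np.polyfit(np.log(cost), np.log(duration), 1)
K = np.exp(logK)
print(round(K, 1), round(B, 2))  # recovers 120.0 and 0.3
```

On real project data the fit is not exact, and R-squared / MAPE of the predictions measure its quality, as in the comparison above.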
Synthesis, characterisation and antibacterial activity of copolymer (n vinylp...eSAT Publishing House
1. Pounding is a major cause of damage to adjacent buildings during earthquakes when they are constructed close together without sufficient separation.
2. The study analyzes seismic pounding forces between buildings of different heights and floor levels using software. It finds that pounding damage increases when buildings have different dynamic properties or are inadequately separated.
3. Pounding can cause non-structural to severe structural damage. The required minimum separation between buildings according to codes is 15-30 mm depending on building type, but may need to be larger.
Green cast demonstration of innovative lightweight construction components ma...eSAT Publishing House
This document describes the GREEN CAST project, which aims to develop an innovative, sustainable construction material made from recycled fly ash as an alternative to autoclaved aerated concrete (AAC). The material is produced using a geopolymer process, with fly ash activated in an alkaline solution and a foaming agent. Testing found the material has mechanical, thermal and acoustic properties similar to AAC but with lower environmental impact. Two full-scale demonstrator buildings were constructed using the new material and AAC blocks to compare performance, finding that the new material performs similarly as an insulating material.
This document discusses several dynamic thresholding approaches for segmenting continuous Bangla speech sentences into words or subwords. It proposes using k-means clustering, fuzzy c-means clustering (FCM), and Otsu's thresholding method to determine optimal thresholds for segmentation. K-means and FCM clustering produce better segmentation results than Otsu's method. The algorithms are implemented in MATLAB and achieve an average segmentation accuracy of 94%. Blocking black areas and boundary detection techniques are used to properly detect word boundaries in continuous speech and label the segmented units.
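Otsu's method, one of the three thresholding approaches named above, picks the threshold that maximises the between-class variance of a bimodal signal. The sketch below applies it to a synthetic short-time energy contour rather than real Bangla speech (the paper's implementation is in MATLAB, and the energy values here are made up):

```python
import numpy as np

def otsu_threshold(x, bins=64):
    """Return the threshold maximising between-class variance of x's histogram."""
    hist, edges = np.histogram(x, bins=bins)
    mids = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = mids[0], -1.0
    for i in range(1, bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * mids[:i]).sum() / w0   # class means
        m1 = (hist[i:] * mids[i:]).sum() / w1
        var_between = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, mids[i]
    return best_t

# Energy contour: low-energy pauses between two high-energy "words".
energy = np.concatenate([np.full(40, 0.05), np.full(60, 0.9),
                         np.full(30, 0.05), np.full(50, 0.8)])
t = otsu_threshold(energy)
speech = energy > t   # frames above threshold belong to word segments
```

Word boundaries then fall where `speech` switches between False and True.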
Architecture and implementation issues of multi core processors and caching –... — eSAT Publishing House
Control of inverters to support bidirectional power flow in grid connected sy... — eSAT Publishing House
Trajectory Segmentation and Sampling of Moving Objects Based On Representativ... — ijsrd.com
Moving Object Databases (MOD), although ubiquitous, still call for methods that will be able to understand, search, analyze, and browse their spatiotemporal content. In this paper, we propose a method for trajectory segmentation and sampling based on the representativeness of the (sub) trajectories in the MOD. In order to find the most representative sub trajectories, the following methodology is proposed. First, a novel global voting algorithm is performed, based on local density and trajectory similarity information. This method is applied for each segment of the trajectory, forming a local trajectory descriptor that represents line segment representativeness. The sequence of this descriptor over a trajectory gives the voting signal of the trajectory, where high values correspond to the most representative parts. Then, a novel segmentation algorithm is applied on this signal that automatically estimates the number of partitions and the partition borders, identifying homogenous partitions concerning their representativeness. Finally, a sampling method over the resulting segments yields the most representative sub trajectories in the MOD. Our experimental results in synthetic and real MOD verify the effectiveness of the proposed scheme, also in comparison with other sampling techniques.
DRSP: dimension reduction for similarity matching and pruning of time series ... (IJDKP)
The document summarizes a research paper that proposes a framework called DRSP (Dimension Reduction for Similarity Matching and Pruning) for time series data streams. DRSP addresses the challenges of large streaming data size by:
1) Performing dimension reduction using a Multi-level Segment Mean technique to compactly represent the data while retaining crucial information.
2) Incorporating a similarity matching technique to analyze if new data objects match existing streams.
3) Applying a pruning technique to filter out non-relevant data object pairs and join only relevant pairs.
The framework aims to reduce storage and computation costs for similarity matching on large time series data streams.
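The paper does not spell out the Multi-level Segment Mean technique here, but one plausible reading is a hierarchy of successively coarser segment averages, sketched below (the function names and the pairwise-averaging scheme are assumptions, not the paper's exact method):

```python
def segment_means(series, seg_len):
    """Mean of consecutive, non-overlapping segments of length seg_len."""
    return [sum(series[i:i + seg_len]) / seg_len
            for i in range(0, len(series) - seg_len + 1, seg_len)]

def multi_level_segment_mean(series, levels):
    """Build successively coarser representations by repeatedly
    averaging adjacent pairs of values; each level halves the size
    while retaining the coarse shape of the series."""
    reps = [list(series)]
    for _ in range(levels):
        prev = reps[-1]
        if len(prev) < 2:
            break
        reps.append(segment_means(prev, 2))
    return reps

reps = multi_level_segment_mean([1, 3, 5, 7, 2, 4, 6, 8], levels=2)
print(reps[1])  # [2.0, 6.0, 3.0, 7.0]
print(reps[2])  # [4.0, 5.0]
```

Similarity matching can then be run first on the cheap coarse levels, pruning pairs that are clearly dissimilar before comparing full-resolution data.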
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
This document summarizes a research paper that classified multi-date remote sensing images using NDVI values. It discusses how NDVI values were calculated from Terra satellite imagery using red and infrared band values. A similarity measure formula was proposed to classify images by comparing the NDVI values of unknown images to reference images. The formula measured similarity between image windows using the sum of absolute differences of NDVI values. Five Terra images from different dates were classified into 20 reference classes using this approach.
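The NDVI computation and the sum-of-absolute-differences similarity described above can be sketched as follows (the band values and window layout are illustrative; the paper's exact windowing scheme is not given here):

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index per pixel: (NIR - R) / (NIR + R)."""
    return [(n - r) / (n + r) if (n + r) != 0 else 0.0
            for r, n in zip(red, nir)]

def sad_similarity(ndvi_a, ndvi_b):
    """Sum of absolute NDVI differences between two windows; smaller
    values mean more similar (an illustrative reading of the paper's
    similarity formula)."""
    return sum(abs(a - b) for a, b in zip(ndvi_a, ndvi_b))

# Illustrative 1-D "windows" of red and near-infrared band values
unknown = ndvi([0.2, 0.3], [0.6, 0.5])
reference = ndvi([0.2, 0.3], [0.6, 0.5])
print(sad_similarity(unknown, reference))  # 0.0 for identical windows
```

Classification then assigns the unknown window to whichever of the 20 reference classes minimizes this sum.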
Automatic registration, integration and enhancement of India's Chandrayaan-1 ... (eSAT Journals)
Abstract: Chandrayaan-1 was India's first deep-space exploration mission to the moon. Its Terrain Mapping Camera (TMC) sent images of about 50% of the total lunar surface in its limited lifetime and covered polar areas almost completely at high resolutions of 5 m/pixel and 10 m/pixel. This image dataset has been processed and put in the public domain as individual strips of images categorized according to orbit. The authors have already developed a Lunar GIS with a set of utilities, including 3-D visualization and exploration, crater detection, and search, using datasets from NASA's Lunar Reconnaissance Orbiter Wide Angle Camera (WAC), which are of lower resolution than Chandrayaan-1. The objective of this paper is to normalize and register the Chandrayaan-1 images to the existing processed data so that all these utilities can be transparently applied to the high-resolution Chandrayaan-1 datasets. The registration process consists of identifying features in source and target images and estimating appropriate corrections for offset, rotation, and scaling parameters. Furthermore, due to the satellite's low-altitude orbit, the acquired images have pixel displacements from the actual nadir position, which need non-linear correction. This paper describes a step-by-step technique to integrate these high- and low-resolution images in a single framework. Keywords: Chandrayaan-1, Lunar mapping, Moon, Feature based Image registration, Integration, ISRO, LRO, NASA, TMC, WAC.
Hyperparameters analysis of long short-term memory architecture for crop cla... (IJECEIAES)
This document summarizes a study that analyzed hyperparameters of a long short-term memory (LSTM) architecture for crop classification using remote sensing data. The study evaluated over 1,000 combinations of four hyperparameters - optimizer, activation function, batch size, and number of LSTM layers - using a grid search algorithm on an LSTM model. The results showed that the choice of optimizer highly impacted classification performance, while other hyperparameters like the number of LSTM layers had less influence. The best performing hyperparameters set for the LSTM model in crop classification was identified.
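The grid-search procedure can be illustrated without actually training an LSTM: enumerate every combination of the four hyperparameters and keep the best-scoring one. The `evaluate` function below is a stand-in for model training, and its scores are invented solely to mimic the study's finding that the optimizer dominates performance:

```python
import itertools

# Hypothetical grid over the four hyperparameters studied
grid = {
    "optimizer": ["adam", "sgd", "rmsprop"],
    "activation": ["tanh", "relu"],
    "batch_size": [32, 64],
    "lstm_layers": [1, 2, 3],
}

def evaluate(params):
    """Stand-in for training an LSTM and returning validation
    accuracy; a real study would fit and score the model here."""
    base = {"adam": 0.9, "rmsprop": 0.85, "sgd": 0.7}[params["optimizer"]]
    return base - 0.01 * (params["lstm_layers"] - 1)

keys = list(grid)
combos = [dict(zip(keys, c)) for c in itertools.product(*grid.values())]
best = max(combos, key=evaluate)
print(len(combos))          # 36 combinations in this toy grid
print(best["optimizer"])    # prints adam
```

A real grid search would replace `evaluate` with model training and cross-validated scoring, which is what makes exhaustive search over 1,000+ combinations expensive.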
This document discusses using convolutional neural networks (CNNs) to classify and segment satellite imagery. It presents a novel approach using a CNN to perform per-pixel classification of multispectral satellite imagery and a digital surface model into five categories (vegetation, ground, roads, buildings, water). The CNN is first pre-trained with unsupervised clustering then fine-tuned for classification and segmentation. Results show the CNN approach outperforms existing methods, achieving 94.49% classification accuracy and improving segmentation by reducing salt-and-pepper effects from per-pixel classification alone.
Scalable and efficient cluster based framework for multidimensional indexing (eSAT Journals)
Abstract: Indexing high-dimensional data has utility in many real-world applications; in particular, the information retrieval process is dramatically improved. Existing techniques address the "Curse of Dimensionality" of high-dimensional data sets using the Vector Approximation File (VA-File), which results in sub-optimal performance. Compared with the VA-File, clustering yields a more compact representation, as it exploits inter-dimensional correlations. However, pruning of unwanted clusters is important, and the existing pruning techniques based on bounding rectangles and bounding hyperspheres have problems in NN search. To overcome this, Ramaswamy and Rose proposed adaptive cluster distance bounding for high-dimensional indexing, which also includes efficient spatial filtering. In this paper we implement this high-dimensional indexing approach and build a prototype application as a proof of concept. Experimental results are encouraging, and the prototype can be used in real-time applications. Index Terms: Clustering, high dimensional indexing, similarity measures, multimedia databases.
The document summarizes a research paper on crowd behavior analysis using a graph theoretic approach. It proposes a model called the Graph theoretic approach based Crowd Behavior Analysis and Classification System (GCBACS). The key points are:
1) GCBACS uses optical flow to track personnel trajectories in video frames and derive streak flow vectors to analyze crowd behavior.
2) It represents each video frame as a graph and analyzes similarities and variances between frames to classify activities as normal or abnormal.
3) Inter-personal activities are considered for analysis, allowing it to work in dense and sparse crowds. If the cumulative variance between frames exceeds a threshold, the activity is classified as abnormal.
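The threshold rule in point 3 can be sketched as follows, using a mean-squared frame difference as a stand-in for GCBACS's graph-based similarity measure (the feature vectors and threshold are illustrative):

```python
def frame_variance(prev, curr):
    """Mean squared difference between two frame feature vectors,
    a simplified stand-in for the graph-similarity measure."""
    return sum((a - b) ** 2 for a, b in zip(prev, curr)) / len(curr)

def classify(frames, threshold):
    """Label a sequence abnormal once the cumulative inter-frame
    variance exceeds the threshold, mirroring the GCBACS rule."""
    cumulative = 0.0
    for prev, curr in zip(frames, frames[1:]):
        cumulative += frame_variance(prev, curr)
        if cumulative > threshold:
            return "abnormal"
    return "normal"

calm = [[0.0, 0.0], [0.1, 0.0], [0.1, 0.1]]    # small motion between frames
burst = [[0.0, 0.0], [3.0, 3.0], [6.0, 0.0]]   # sudden large motion
print(classify(calm, threshold=1.0))   # normal
print(classify(burst, threshold=1.0))  # abnormal
```

In the real system the per-frame features would come from optical-flow-derived streak vectors rather than raw coordinates.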
SPATIO-TEMPORAL QUERIES FOR MOVING OBJECTS DATA WAREHOUSING (ijdms)
In the last decade, Moving Object Databases (MODs) have attracted a lot of attention from researchers. Several research works were conducted to extend traditional database techniques to accommodate the new requirements imposed by the continuous change in location information of moving objects. Managing, querying, storing, and mining moving objects were the key research directions. This extensive interest in moving objects is a natural consequence of the recent ubiquitous location-aware devices, such as PDAs, mobile phones, etc., as well as the variety of information that can be extracted from such new databases. In this paper we propose a Spatio-Temporal data warehousing (STDW) approach for efficiently querying location information of moving objects. The proposed schema introduces new measures like direction majority and other direction-based measures that enhance decision making based on location information.
Object Classification of Satellite Images Using Cluster Repulsion Based Kerne... (IOSR Journals)
Abstract: We investigated the classification of satellite images and multispectral remote sensing data, focusing on uncertainty analysis in the produced land-cover maps. We proposed an efficient technique for classifying multispectral satellite images using a Support Vector Machine (SVM) into road, building, and green areas. Classification is carried out in three modules: (a) preprocessing using Gaussian filtering and conversion from RGB to Lab color space; (b) object segmentation using the proposed cluster-repulsion-based kernel Fuzzy C-Means (FCM); and (c) classification using a one-to-many SVM classifier. The goal of this research is to provide efficient classification of satellite images using object-based image analysis. The proposed work is evaluated on satellite images, and its accuracy is compared to FCM-based classification. The results showed that the proposed technique achieved better results, reaching accuracies of 79%, 84%, 81% and 97.9% for road, tree, building and vehicle classification respectively.
Keywords: Satellite image, FCM clustering, Classification, SVM classifier.
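A minimal sketch of the one-to-many (one-vs-rest) classification scheme: one scorer per class, highest score wins. Here a negative-squared-distance-to-centroid scorer stands in for the per-class SVM decision values; the class names follow the paper, everything else is illustrative:

```python
def centroid(points):
    """Component-wise mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[d] for p in points) / n for d in range(len(points[0])))

class OneVsRestCentroid:
    """One score per class, highest score wins, mirroring a
    one-vs-rest (one-to-many) scheme; each per-class scorer is the
    negative squared distance to that class's centroid, standing in
    for a per-class SVM decision function."""
    def fit(self, X, y):
        self.centroids = {
            label: centroid([x for x, lbl in zip(X, y) if lbl == label])
            for label in set(y)
        }
        return self

    def predict(self, x):
        def score(label):
            c = self.centroids[label]
            return -sum((a - b) ** 2 for a, b in zip(x, c))
        return max(self.centroids, key=score)

# Toy 2-D features for three of the land-cover classes
X = [(0, 0), (0, 1), (5, 5), (6, 5), (10, 0), (9, 1)]
y = ["road", "road", "building", "building", "green", "green"]
clf = OneVsRestCentroid().fit(X, y)
print(clf.predict((5.4, 4.8)))  # building
```

With k classes, a true one-to-many SVM trains k binary classifiers (each class against the rest) and likewise picks the class with the largest decision value.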
USING ONTOLOGY BASED SEMANTIC ASSOCIATION RULE MINING IN LOCATION BASED SERVICES (IJDKP)
Recently, GPS and mobile devices have allowed collecting a huge amount of mobility data. Researchers from different communities have developed models and techniques for mobility analysis, but they mainly focused on the geometric properties of trajectories and did not consider the semantic facet of moving objects. The techniques are good at extracting patterns, but the patterns are hard to interpret in a specific application domain. This paper proposes a methodology to understand mobility data and semantically interpret trajectory patterns. The process considers four behavior types: semantic, semantic and space, semantic and time, and semantic and space-time. Finally, a system prototype was developed to evaluate the behavior models in different aspects using one of the location based services. The results showed that applying the semantic association rules could significantly reduce the number of available services and customize the services based on the rules.
This document discusses approaches for video segmentation. It describes tracking particles across frames to identify motion patterns, then clustering the particles to obtain a pixel-wise segmentation over space and time. This addresses limitations of segmentation based on motion boundaries. Reality-based 3D models can help address complex spatial motions by representing objects and their relationships in 3D space. The document also reviews direct and feature-based motion estimation methods, variational and level-set segmentation frameworks, and challenges including fitting motion models to data and handling outliers.
An Investigation on Human Organ Localization in Medical Images (IRJET Journal)
This document summarizes various algorithms for human organ localization in medical images. It discusses methods such as probabilistic patch-based label fusion with image registration to localize the left ventricle, global-to-local regression forests to localize multiple organs, and marginal and full space learning with coarse and fine levels to localize the left ventricle. Additionally, it covers probabilistic edge maps generated from structured decision forests to localize the left ventricle and atrium, as well as random decision forests and 3D visual features to detect anatomical structures in CT volumes. The document analyzes approaches such as combining random forests with Hough regression and Markov random fields to localize complex anatomical structures.
Self scale estimation of the tracking window merged with adaptive particle fi... (IJECEIAES)
Tracking a mobile object is one of the important topics in pattern recognition, but it still faces some obstacles. A reliable tracking system must adjust its tracking window in real time according to appearance changes of the tracked object. Furthermore, it has to deal with many challenges when one or multiple objects need to be tracked, for instance when the target is partially or fully occluded, the background is cluttered, or part of the target region is blurred. In this paper, we present a novel approach for single-object tracking that combines the particle filter algorithm with a kernel distribution and updates its tracking window according to object scale changes, called the multi-scale adaptive particle filter tracker. We demonstrate that using the particle filter combined with a kernel distribution inside the resampling process provides more accurate object localization within a search area. Furthermore, its average target-localization error was significantly lower, at a mean value of 21.37 pixels. We conducted several experiments on real video sequences and compared the acquired results to other existing state-of-the-art trackers to demonstrate the effectiveness of the multi-scale adaptive particle filter tracker.
This document summarizes an experiment that evaluated the efficiency of three search strategies for autonomous rendezvous in space: random, semi-autonomous, and autonomous. The experiment used LEGO robots to simulate a space tug searching for a target in a two-dimensional space, and measured the time and energy required for each strategy. It was found that the semi-autonomous strategy was most energy efficient but most time-consuming, while the autonomous strategy proved most suitable for space applications by being both reasonably energy efficient and faster. The results provide insight into optimal search algorithms for an actual space tug vehicle to implement during rendezvous and docking.
This project aims to develop a launch traffic model of past and future satellite launches. The model will provide source terms describing the injection of objects into orbit due to rocket launches. These source terms will then be input into an existing density-based model to simulate the evolution of the space debris population in low Earth orbit. So far, the researcher has analyzed current debris populations from several databases and matched approximately 10% of objects between 2005 and 2012. The goal is to analyze the results of implementing the source terms in the debris cloud simulation code.
Performance analysis of change detection techniques for land use land cover (IJECEIAES)
Remotely sensed satellite images have become essential to observe the spatial and temporal changes occurring due to either natural phenomenon or man-induced changes on the earth’s surface. Real time monitoring of this data provides useful information related to changes in extent of urbanization, environmental changes, water bodies, and forest. Through the use of remote sensing technology and geographic information system tools, it has become easier to monitor changes from past to present. In the present scenario, choosing a suitable change detection method plays a pivotal role in any remote sensing project. Previously, digital change detection was a tedious task. With the advent of machine learning techniques, it has become comparatively easier to detect changes in the digital images. The study gives a brief account of the main techniques of change detection related to land use land cover information. An effort is made to compare widely used change detection methods used to identify changes and discuss the need for development of enhanced change detection methods.
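The simplest of the change-detection techniques such surveys cover is pixel-wise image differencing, sketched below (the arrays and threshold are illustrative):

```python
def change_mask(before, after, threshold):
    """Pixel-wise image differencing, the most basic change-detection
    technique: flag pixels whose absolute difference between the two
    dates exceeds a threshold."""
    return [[1 if abs(a - b) > threshold else 0
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(before, after)]

# Toy 2x2 single-band images from two dates
before = [[10, 10], [200, 50]]
after = [[12, 10], [90, 52]]
print(change_mask(before, after, threshold=20))  # [[0, 0], [1, 0]]
```

More robust methods in the literature (change vector analysis, post-classification comparison, machine-learning classifiers) build on the same before/after comparison while compensating for illumination and registration differences.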
Abstract
The exponential growth of knowledge on the World Wide Web has underscored the need to develop economical and effective ways of organizing relevant content. In the field of web computing, document clustering plays a vital role and poses an interesting and challenging problem. Document clustering is mainly used for grouping similar documents in a search engine. The web also has a rich and dynamic collection of hyperlink information. Retrieving relevant documents from the internet is a complicated task: based on the user's query, documents are retrieved from various databases to give relevant and additional information for the given query. The documents are already clustered based on keyword extraction and stored in the database. The probabilistic relational approach to web document clustering finds the relation between two linked pages and defines a relational clustering algorithm based on a probabilistic graph representation. In document clustering, both the content information and the hyperlink structure of web pages are considered, and each document is viewed as a semantic unit. It also provides additional information to the user.
Keywords: Document Clustering, Agglomerative Clustering, Entropy, F-Measure
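A content-based grouping like the one described can be sketched with term-frequency vectors and cosine similarity. This greedy single-pass scheme is a simplified stand-in for the probabilistic relational algorithm (which also uses hyperlink structure), and all names and the threshold are illustrative:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    terms = set(a) | set(b)
    dot = sum(a[t] * b[t] for t in terms)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(docs, threshold=0.5):
    """Greedy single-pass clustering: assign each document to the
    first cluster whose seed document is similar enough, else start
    a new cluster."""
    vectors = [Counter(d.lower().split()) for d in docs]
    clusters = []
    for i, v in enumerate(vectors):
        for c in clusters:
            if cosine(v, vectors[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

docs = ["web page clustering", "clustering web documents", "flood damage report"]
print(cluster(docs))  # [[0, 1], [2]]
```

The first two documents share the terms "web" and "clustering" (cosine 2/3) and group together, while the unrelated third document starts its own cluster.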
Similar to: An innovative idea to discover the trend on multi dimensional spatio-temporal datasets
Hudhud cyclone caused extensive damage in Visakhapatnam, India in October 2014, especially to tree cover. This will likely impact the local environment in several ways: increased air pollution as fewer trees remain to absorb pollutants; higher temperatures without the tree canopy; and increased erosion and landslides. It also created large amounts of waste from destroyed trees, and proper management of this solid waste is needed to prevent the spread of disease. Suggested measures include restoring damaged plants, building fountains to reduce heat, mandating light-colored buildings, improving waste management, and educating the public on health risks. Overall, changes are needed to water, land, and waste practices to rebuild the environment after the cyclone removed the green cover.
Impact of flood disaster in a drought prone area – case study of Alampur vill... (eSAT Publishing House)
1) In September-October 2009, unprecedented heavy rainfall and dam releases caused widespread flooding in Alampur village in Mahabub Nagar district, a historically drought-prone area.
2) The flood damaged or destroyed homes, buildings, infrastructure, crops, and documents. It displaced many residents and cut off the village.
3) The socioeconomic conditions and mud-based construction of homes in the village exacerbated the flood's impacts, making damage more severe and recovery more difficult.
The document summarizes the Hudhud cyclone that struck Visakhapatnam, India in October 2014. It describes the cyclone's formation, rapid intensification to winds of 175 km/h, and landfall near Visakhapatnam. The cyclone caused extensive damage estimated at over $1 billion and at least 109 deaths in India and Nepal. Infrastructure like buildings, bridges, and power lines were destroyed. Crops and fishing boats were also damaged. The document then discusses coping strategies and improvements needed to disaster management plans to better prepare for future cyclones.
Groundwater investigation using geophysical methods: a case study of Pydibhim... (eSAT Publishing House)
This document summarizes the results of a geophysical investigation using vertical electrical sounding (VES) methods at 13 locations around an industrial area in India. The VES data was interpreted to generate geo-electric sections and pseudo-sections showing subsurface resistivity variations. Three main layers were typically identified - a high resistivity topsoil, a weathered middle layer, and a basement rock. Pseudo-sections revealed relatively more weathered areas in the northwest and southwest. Resistivity sections helped identify zones of possible high groundwater potential based on low resistivity anomalies sandwiched between more resistive layers. The study concluded the electrical resistivity method was useful for understanding subsurface geology and identifying areas prospective for groundwater exploration.
Flood related disasters concerned to urban flooding in Bangalore, India (eSAT Publishing House)
1. The document discusses urban flooding in Bangalore, India. It describes how factors like heavy rainfall, population growth, and improper land use have contributed to increased flooding in the city.
2. Flooding events in 2013 are analyzed in detail. A November rainfall caused runoff six times higher than the drainage capacity, inundating low-lying residential areas.
3. Impacts of urban flooding include disrupted daily life, damaged infrastructure, and decreased economic activity in affected areas. The document calls for improved flood management strategies to better mitigate urban flooding risks in Bangalore.
Enhancing post disaster recovery by optimal infrastructure capacity building (eSAT Publishing House)
This document discusses enhancing post-disaster recovery through optimal infrastructure capacity building. It presents a model to minimize the cost of meeting demand using auxiliary capacities when disaster damages infrastructure. The model uses genetic algorithms to select optimal capacity combinations. The document reviews how infrastructure provides vital services supporting recovery activities and discusses classifying infrastructure into six types. When disaster reduces infrastructure services, a gap forms between community demands and available support, hindering recovery. The proposed research aims to identify this gap and optimize capacity selection to fill it cost-effectively.
Effect of lintel and lintel band on the global performance of reinforced conc... (eSAT Publishing House)
This document analyzes the effect of lintels and lintel bands on the seismic performance of reinforced concrete masonry infilled frames through non-linear static pushover analysis. Four frame models are considered: a frame with a full masonry infill wall; a frame with a central opening but no lintel/band; a frame with a lintel above the opening; and a frame with a lintel band above the opening. The results show that the full infill wall model has 27% higher stiffness and 32% higher strength than the model with just an opening. Models with lintels or lintel bands have slightly higher strength and stiffness than the model with just an opening. The document concludes that lintels and lintel bands modestly improve the performance of infilled frames with openings.
Wind damage to trees in the GITAM University campus at Visakhapatnam by cyclo... (eSAT Publishing House)
1) A cyclone with wind speeds of 175-200 kph caused massive damage to the green cover of Gitam University campus in Visakhapatnam, India. Thousands of trees were uprooted or damaged.
2) A study assessed different types of damage to trees from the cyclone, including defoliation, salt spray damage, damage to stems/branches, and uprooting. Certain tree species were more vulnerable than others.
3) The results of the study can help in selecting more wind-resistant tree species for future planting and reducing damage from future storms.
Wind damage to buildings, infrastructure and landscape elements along the be... (eSAT Publishing House)
1) A visual study was conducted to assess wind damage from Cyclone Hudhud along the 27km Visakha-Bheemli Beach road in Visakhapatnam, India.
2) Residential and commercial buildings suffered extensive roof damage, while glass facades on hotels and restaurants were shattered. Infrastructure like electricity poles and bus shelters were destroyed.
3) Landscape elements faced damage, including collapsed trees that damaged pavements, and debris in parks. The cyclone wiped out over half the city's green cover and caused beach erosion around protected areas.
1) The document reviews factors that influence the shear strength of reinforced concrete deep beams, including compressive strength of concrete, percentage of tension reinforcement, vertical and horizontal web reinforcement, aggregate interlock, shear span-to-depth ratio, loading distribution, side cover, and beam depth.
2) It finds that compressive strength of concrete, tension reinforcement percentage, and web reinforcement all increase shear strength, while shear strength decreases as shear span-to-depth ratio increases.
3) The distribution and amount of vertical and horizontal web reinforcement also affects shear strength, but closely spaced stirrups do not necessarily enhance capacity or performance.
Role of voluntary teams of professional engineers in disaster management – ex... (eSAT Publishing House)
1) A team of 17 professional engineers from various disciplines called the "Griha Seva" team volunteered after the 2001 Gujarat earthquake to provide technical assistance.
2) The team conducted site visits, assessments, testing and recommended retrofitting strategies for damaged structures in Bhuj and Ahmedabad. They were able to fully assess and retrofit 20 buildings in Ahmedabad.
3) Factors observed that exacerbated the earthquake's impacts included unplanned construction, non-engineered buildings, improper prior retrofitting, and defective materials and workmanship. The professional engineers' technical expertise was crucial for effective post-disaster management.
This document discusses risk analysis and environmental hazard management. It begins by defining risk, hazard, and toxicity. It then outlines the steps involved in hazard identification, including HAZID, HAZOP, and HAZAN. The document presents a case study of a hypothetical gas collecting station, identifying potential accidents and hazards. It discusses quantitative and qualitative approaches to risk analysis, including calculating a fire and explosion index. The document concludes by discussing hazard management strategies like preventative measures, control measures, fire protection, relief operations, and the importance of training personnel on safety.
Review study on performance of seismically tested repaired shear walls (eSAT Publishing House)
This document summarizes research on the performance of reinforced concrete shear walls that have been repaired after damage. It begins with an introduction to shear walls and their failure modes. The literature review then discusses the behavior of original shear walls as well as different repair techniques tested by other researchers, including conventional repair with new concrete, jacketing with steel plates or concrete, and use of fiber reinforced polymers. The document focuses on evaluating the strength retention of shear walls after being repaired with various methods.
Monitoring and assessment of air quality with reference to dust particles (pm... (eSAT Publishing House)
This document summarizes a study on monitoring and assessing air quality with respect to dust particles (PM10 and PM2.5) in the urban environment of Visakhapatnam, India. Sampling was conducted in residential, commercial, and industrial areas from October 2013 to August 2014. The average PM2.5 and PM10 concentrations were within limits in residential areas but moderate to high in commercial and industrial areas. Exceedance factor levels indicated moderate pollution for residential areas and moderate to high pollution for commercial and industrial areas. There is a need for management measures like improved public transport and green spaces to combat particulate air pollution in the study areas.
Low cost wireless sensor networks and smartphone applications for disaster ma... (eSAT Publishing House)
This document describes a low-cost wireless sensor network and smartphone application system for disaster management. The system uses an Arduino-based wireless sensor network comprising nodes with various sensors to monitor the environment. The sensor data is transmitted to a central gateway and then to the cloud for analysis. A smartphone app connected to the cloud can detect disasters from the sensor data and send real-time alerts to users to help with early evacuation. The system aims to provide low-cost localized disaster detection and warnings to improve safety.
Coastal zones – seismic vulnerability: an analysis from east coast of India (eSAT Publishing House)
This document summarizes an analysis of seismic vulnerability along the east coast of India. It discusses the geotectonic setting of the region as a passive continental margin and reports some moderate seismic activity from offshore in recent decades. While seismic stability cannot be assumed given events like the 2004 tsunami, no major earthquakes have been recorded along this coast historically. The document calls for further study of active faults, neotectonics, and implementation of improved seismic building codes to mitigate vulnerability.
Can fracture mechanics predict damage due to disaster of structures (eSAT Publishing House)
This document discusses how fracture mechanics can be used to better predict damage and failure of structures. It notes that current design codes are based on small-scale laboratory tests and do not account for size effects, which can lead to more brittle failures in larger structures. The document outlines how fracture mechanics considers factors like size effect, ductility, and minimum reinforcement that influence the strength and failure behavior of structures. It provides examples of how fracture mechanics has been applied to problems like evaluating shear strength in deep beams and investigating a failure of an oil platform structure. The document argues that fracture mechanics provides a more scientific basis for structural design compared to existing empirical code provisions.
This document discusses the assessment of seismic susceptibility of reinforced concrete (RC) buildings. It begins with an introduction to earthquakes and the importance of vulnerability assessment in mitigating earthquake risks and losses. It then describes modeling the nonlinear behavior of RC building elements and performing pushover analysis to evaluate building performance. The document outlines modeling RC frames and developing moment-curvature relationships. It also summarizes the results of pushover analyses on sample 2D and 3D RC frames with and without shear walls. The conclusions emphasize that pushover analysis effectively assesses building properties but has limitations, and that capacity spectrum method provides appropriate results for evaluating building response and retrofitting impact.
A geophysical insight of earthquake occurred on 21st May 2014 off Paradip, B... (eSAT Publishing House)
1) A 6.0 magnitude earthquake occurred off the coast of Paradip, Odisha in the Bay of Bengal on May 21, 2014 at a depth of around 40 km.
2) Analysis of magnetic and bathymetric data from the area revealed the presence of major lineaments in NW-SE and NE-SW directions that may be responsible for seismic activity through stress release.
3) Movements along growth faults at the margins of large Bengal channels, due to large sediment loads, could also contribute to seismic events by triggering movements along the faults.
Effect of Hudhud cyclone on the development of Visakhapatnam as smart and gre... (eSAT Publishing House)
This document discusses the effects of Cyclone Hudhud on the development of Visakhapatnam as a smart and green city through a case study and preliminary surveys. The surveys found that 31% of participants had experienced cyclones, 9% floods, and 59% landslides previously in Visakhapatnam. Awareness of disaster alarming systems increased from 14% before the 2004 tsunami to 85% during Cyclone Hudhud, while awareness of disaster management systems increased from 50% before the tsunami to 94% during Hudhud. The surveys indicate that initiatives after the tsunami improved awareness and preparedness. Developing Visakhapatnam as a smart, green city should consider governance
Use PyCharm for remote debugging of WSL on a Windo... (shadow0702a)
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Generative AI Use cases applications solutions and implementation.pdfmahaffeycheryld
Generative AI solutions encompass a range of capabilities from content creation to complex problem-solving across industries. Implementing generative AI involves identifying specific business needs, developing tailored AI models using techniques like GANs and VAEs, and integrating these models into existing workflows. Data quality and continuous model refinement are crucial for effective implementation. Businesses must also consider ethical implications and ensure transparency in AI decision-making. Generative AI's implementation aims to enhance efficiency, creativity, and innovation by leveraging autonomous generation and sophisticated learning algorithms to meet diverse business challenges.
https://www.leewayhertz.com/generative-ai-use-cases-and-applications/
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
VARIABLE FREQUENCY DRIVE. VFDs are widely used in industrial applications for...PIMR BHOPAL
Variable frequency drive .A Variable Frequency Drive (VFD) is an electronic device used to control the speed and torque of an electric motor by varying the frequency and voltage of its power supply. VFDs are widely used in industrial applications for motor control, providing significant energy savings and precise motor operation.
Digital Twins Computer Networking Paper Presentation.pptxaryanpankaj78
A Digital Twin in computer networking is a virtual representation of a physical network, used to simulate, analyze, and optimize network performance and reliability. It leverages real-time data to enhance network management, predict issues, and improve decision-making processes.
Gas agency management system project report.pdfKamal Acharya
The project entitled "Gas Agency" is done to make the manual process easier by making it a computerized system for billing and maintaining stock. The Gas Agencies get the order request through phone calls or by personal from their customers and deliver the gas cylinders to their address based on their demand and previous delivery date. This process is made computerized and the customer's name, address and stock details are stored in a database. Based on this the billing for a customer is made simple and easier, since a customer order for gas can be accepted only after completing a certain period from the previous delivery. This can be calculated and billed easily through this. There are two types of delivery like domestic purpose use delivery and commercial purpose use delivery. The bill rate and capacity differs for both. This can be easily maintained and charged accordingly.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELijaia
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw...IJECEIAES
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to
precisely delineate tumor boundaries from magnetic resonance imaging (MRI)
scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating
the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The
model is rigorously trained and evaluated, exhibiting remarkable performance
metrics, including an impressive global accuracy of 99.286%, a high-class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted
IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of
our proposed model. These findings underscore the model’s competence in precise brain tumor localization, underscoring its potential to revolutionize medical
image analysis and enhance healthcare outcomes. This research paves the way
for future exploration and optimization of advanced CNN models in medical
imaging, emphasizing addressing false positives and resource efficiency.
Software Engineering and Project Management - Introduction, Modeling Concepts...Prakhyath Rai
Introduction, Modeling Concepts and Class Modeling: What is Object orientation? What is OO development? OO Themes; Evidence for usefulness of OO development; OO modeling history. Modeling
as Design technique: Modeling, abstraction, The Three models. Class Modeling: Object and Class Concept, Link and associations concepts, Generalization and Inheritance, A sample class model, Navigation of class models, and UML diagrams
Building the Analysis Models: Requirement Analysis, Analysis Model Approaches, Data modeling Concepts, Object Oriented Analysis, Scenario-Based Modeling, Flow-Oriented Modeling, class Based Modeling, Creating a Behavioral Model.
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
AI + Data Community Tour - Build the Next Generation of Apps with the Einstei...
An innovative idea to discover the trend on multi dimensional spatio-temporal datasets
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
__________________________________________________________________________________________
Volume: 03 Issue: 03 | Mar-2014, Available @ http://www.ijret.org 243
AN INNOVATIVE IDEA TO DISCOVER THE TREND ON MULTI-DIMENSIONAL SPATIO-TEMPORAL DATASETS
N. Naga Saranya¹, M. Hemalatha²
¹Assistant Professor, Department of Computer Applications, Tamil Nadu, India
²Professor, Department of Computer Science, Tamil Nadu, India
Abstract
Spatio-temporal data is any information regarding space and time. Such data is frequently updated, at rates of up to 1 TB/hr, which greatly challenges our ability to digest it; as a result, exact information is hard to extract from the data. This research therefore offers an innovative idea to discover trends on multi-dimensional spatio-temporal datasets. It briefly describes the scope and relevance of spatio-temporal data and, from that, builds in-depth knowledge of recent spatio-temporal research toward trend discovery.
Keywords: Spatio-temporal Data, Applications of Spatio-temporal data, Problem Definition, Contributions
----------------------------------------------------------------------***------------------------------------------------------------------------
1. INTRODUCTION
Data mining is the analysis of observational data sets to find
unsuspected relationships and to summarize the data in novel
ways that are both understandable and useful to the data
owner. Extracting interesting and useful patterns from a
spatio-temporal database is more difficult than extracting the
corresponding patterns from traditional (fixed) numeric and
categorical data, due to the complexity of spatio-temporal
data types, spatio-temporal relationships, and spatio-temporal
autocorrelation. Spatio-temporal data is any information
relating space and time.
Geospatial data is likewise updated at rates of around 1 TB/hr,
which greatly challenges our ability to digest it; exact
information is hard to obtain, and data may be lost. Geospatial
data takes a view of geospatial phenomena and captures their
spatiality; if the phenomena evolve over time, it captures
temporality as well. From this we can identify the processes
and events in spatio-temporal data, and knowledge extracted
from spatio-temporal data gives better prediction of those
processes and events.
Finding trends is one of the most important tasks in
spatio-temporal data analysis, particularly for analyzing
trajectory movement, animal movements, mating behavior,
harvesting, soil-quality changes, earthquake histories,
volcanic activity and prediction, etc. Spatial, temporal and/or
spatio-temporal data mining looks for patterns in data using
the spatio-temporal attributes. Trend discovery cannot be
performed well directly; the proposed research performs trend
discovery by combining the principles of clustering,
association rule mining, generalization and characterization.
The significance of spatio-temporal data lies in its continuous
updates of around 1 TB/hr, yielding terribly massive,
high-dimensional geographic and spatio-temporal datasets.
Application areas include:
Meteorology
Biology
Crop sciences
Forestry
Medicine
Geophysics
Ecology
Transportation, etc
The proposed algorithm can be applied to any kind of data
mining application; when applied to spatio-temporal databases
in particular, it gives the most significant results. When the
trend-discovery algorithm is used, retrieval is fast and the
results obtained are accurate.
The objective of this research is to develop a novel algorithm
for trend discovery on multi-dimensional spatio-temporal
databases. The objectives are:
Predict the knowledge
Retrieve hidden information
Discover trends
Future usage
2. LITERATURE REVIEW
2.1 Clustering in Spatio-Temporal Data
A literature survey has been made on clustering moving data
objects in trajectory datasets, on the basis of trajectory
representation, moving object points, similarity measures and
clustering algorithms. Data can be viewed in three forms:
spatial aspects, temporal aspects and spatio-temporal aspects.
The problem of clustering moving objects in a spatial network
environment is addressed by Jidong Chen et al. (2007). Since
the clustering process is difficult, the authors propose a
framework divided into two parts: performing various
operations on cluster blocks (CBs) for continuous maintenance,
and the periodic construction of clusters. Results show that
the proposed framework performs well in road networks.
Sheng-Tun Li and Shih-Wei Chou (2000) deploy multi-scale
wavelet transforms and a self-organizing map neural network to
mine air-pollutant data. The wavelet transform is used to
generate data- and time-variant features for detecting
air-pollutant spatial data in a specific time window; it
greatly sharpens small local regions using small-scale input
data and improves over-smoothed regions using large-scale
input data.
Moving objects in a trajectory database can be identified with
the help of free-moving trajectory context and constrained
trajectories (Ahmed Kharrat et al., 2008). Spatial shapes are
identified to evaluate the One Way Distance (OWD) in both
continuous and discrete cases, and trajectory similarity
measures are then discussed (Lin et al., 2005).
The problem of tracking moving objects across two datasets
with the help of a time-series algorithm is investigated
(Hoda M. O. Mokhtar et al., 2011), using both simulated and
real-time data. A new similarity measure based on
high-dimensional mobile trajectories is presented (Sigal
Elnekave et al., 2007). A uniform geometric model has been
proposed (O. Gervasi et al., 2005) for clustering and querying
time-dependent data, along with three-dimensional
visualization tools that help the user visualize the dynamic
clusters and the time taken for query clustering.
A study of the historical trajectories of moving objects is
performed by proposing an algorithm to discover moving
clusters (Kalnis et al., 2005). A sequence of spatial clusters
that appear in consecutive snapshots of the object movements
is termed a moving cluster; successive clusters in the
sequence tend to share many common objects. Such sequences can
be detected by matching clusters at consecutive snapshots, at
high time complexity. The problem of finding clusters that are
both dynamic and continuous within 2D Euclidean space is
analyzed (Christian S. Jensen et al., 2007). Using this
concept the user can easily generate the necessary cluster
features within reduced time constraints, although insertion
and deletion in the clustering scheme are not handled
efficiently. Experimental results show that the proposed
method outperforms other existing methods. Clusters of
mobile-object trajectories, used to find movement patterns
based on cluster centroids, are evaluated by Sigal Elnekave
et al. (2007).
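To make the moving-cluster notion concrete, the following minimal sketch (not code from any of the cited papers) chains clusters at consecutive snapshots whenever their object sets overlap sufficiently; the Jaccard threshold theta and the toy snapshots are illustrative assumptions.

```python
def moving_clusters(snapshots, theta=0.5):
    """Chain clusters across consecutive snapshots whenever two
    clusters share enough objects (Jaccard similarity >= theta).
    Each snapshot is a list of clusters; each cluster a set of ids."""
    chains = [[c] for c in snapshots[0]]
    for t in range(1, len(snapshots)):
        extended = []
        for chain in chains:
            last = chain[-1]
            for c in snapshots[t]:
                if len(last & c) / len(last | c) >= theta:
                    extended.append(chain + [c])
        chains = extended
    return chains

# Objects 1, 2 and 4 drift together through three snapshots.
snaps = [[{1, 2, 3}, {7, 8}], [{1, 2, 4}, {7, 9}], [{2, 4, 5}, {8, 9}]]
```

Running `moving_clusters(snaps)` yields a single chain of three clusters, since only the first cluster keeps enough shared objects across all snapshots.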
To predict an object's location, the movement pattern is used.
Precision and recall values measure the performance of the
clusters, and threshold values are used to remove exceptional
data points. Several existing clustering methods compute an
overall partition of the data set so as to provide some
meaningful information about clusters, but they may not be
applicable to moving-object databases with high-dimensional
spatio-temporal data sets. An interesting approach called
model-based concept clustering (MCC), for identifying moving
objects in video surveillance, is provided by Jeongkyu Lee
et al. (2007). It includes three steps: model formation,
model-based concept analysis and concept-graph generation. The
results show that MCC works well in terms of quality when
compared to other existing methods.
The goal of trajectory clustering is to find similar movement
traces. Many clustering methods have been proposed using
different distance measures between trajectories. The similar
portions of sub-trajectories are discovered by Lee. J.G et al.
(2007). Note that a trajectory may have a long and complicated
path; even if two trajectories are similar in some
sub-trajectories, they may not be similar as a whole. Similar
sub-trajectories are useful when one considers regions of
special interest in the analysis. This observation led to the
development of a new sub-trajectory clustering algorithm named
TRACLUS (Lee. J.G et al., 2007). This method discovers
clusters by grouping sub-trajectories based on density: each
trajectory is first partitioned into line segments using the
Minimum Description Length (MDL) principle, and density-based
clustering is then applied to the segments, so that the
sub-trajectories summarized by the line segments in each
cluster are represented together.
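The partitioning step above can be sketched as follows. This is a simplified stand-in, not the TRACLUS implementation: instead of the MDL cost comparison, it cuts the trajectory at points where the heading turns sharply, with the turn threshold `angle_thresh` an illustrative assumption.

```python
import math

def heading(p, q):
    """Bearing of the segment p -> q, in degrees in [0, 360)."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0])) % 360

def partition(traj, angle_thresh=30.0):
    """Partition a trajectory (list of (x, y) points) into line
    segments at points where the heading turns sharply -- a
    simplified stand-in for TRACLUS's MDL-based selection of
    characteristic points."""
    cps = [0]  # indices of characteristic points
    for i in range(1, len(traj) - 1):
        turn = abs(heading(traj[i], traj[i + 1]) -
                   heading(traj[i - 1], traj[i]))
        if min(turn, 360 - turn) > angle_thresh:
            cps.append(i)
    cps.append(len(traj) - 1)
    return [(traj[cps[k]], traj[cps[k + 1]]) for k in range(len(cps) - 1)]
```

An L-shaped path such as `[(0,0),(1,0),(2,0),(2,1),(2,2)]` is partitioned into two segments, one per straight leg; a density-based grouping of such segments across all trajectories would then form the sub-trajectory clusters.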
The Predestination method, which uses the history of a
driver's destinations along with driving behaviors to predict
the progress of a trip, is described by Krumm. J and E.
Horvitz (2006). The driving behaviors comprise the driver's
starting point, destinations, and efficiency in estimating
trip times and vehicle position. Four different probabilistic
aspects and mathematical principles are used to create a
probability grid of expected destinations. The authors
introduce an open-world model of destinations that helps the
algorithm work well with small amounts of training data at the
beginning of the training period, by accounting for user
behavior (i.e., what they visited previously) and unobserved
locations based on trends in the data and background
properties of the locations concerned. Over 3,667 different
driving trips, the method shows an error of two kilometers at
the trip's halfway point. This technique predicts the driver's
destination at any given point in time; however, with linear
functions, object movement may exhibit different movement
patterns.
A trajectory cluster contains similar periodic trajectories.
Trajectories in the same cluster contain MBBs that are as
similar as possible, close in both space and time. The
centroid of a cluster in a trajectory database groups similar
trajectories to represent the movement pattern of a given
object, since the genetic clustering algorithm takes bounded
intervals (trajectories) rather than numeric vectors as input.
The K-Means algorithm for clustering trajectories, based on
the amount of data and its corresponding similarity measures
in a spatio-temporal version, is also used (Elnekave S et al.,
2007). It makes use of a new centroid structure and a
corresponding updating method. It adapts the incremental
clustering approach: the first training data window (e.g.,
trajectories during the first month of data collection) is
clustered when no past data are available, and each subsequent
window is clustered by reusing the previous clustering
centroids, thereby improving the performance of successive
incremental clustering runs (Elnekave S et al., 2007). Only
limited updates are needed to track the movement behavior of
an object that stays relatively stable.
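The incremental idea, i.e., seeding each window's clustering with the centroids of the previous window, can be sketched minimally as below. This is an illustrative k-means-style sketch on plain feature vectors, not the trajectory-specific centroid structure of Elnekave et al.; the naive first-window seeding is an assumption.

```python
def sqdist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def refine(points, centroids, iters=5):
    """A few k-means passes starting from the given centroids."""
    for _ in range(iters):
        labels = [min(range(len(centroids)),
                      key=lambda j: sqdist(p, centroids[j]))
                  for p in points]
        for j in range(len(centroids)):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = tuple(sum(xs) / len(members)
                                     for xs in zip(*members))
    return centroids

def incremental_cluster(windows, k):
    """Cluster the first data window from scratch, then seed each
    subsequent window with the centroids of the previous one."""
    centroids = [tuple(p) for p in windows[0][:k]]  # naive seeding
    for w in windows:
        centroids = refine(w, centroids)
    return centroids
```

Because each window starts from the previous centroids, only limited updates are needed when the underlying movement behavior stays relatively stable.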
More recently, a framework called MONIC has been proposed to
model the traces of cluster transitions (Spiliopoulou et al.,
2006). Specifically, the data are clustered at varying
timestamps using the bisecting K-means algorithm to detect the
changes of clusters over time. Unlike the above two works,
which analyze the relations between clusters after the
clusters are obtained, the authors aim to predict possible
cluster evolution in order to guide the clustering. Finally,
we note that clustering of moving objects involves
future-position modeling. In addition to the linear function
model used in most work, a recent proposal considers nonlinear
object movement (Tao.Y et al., 2004).
The key aspect is to obtain a recursive motion function that
predicts the future positions of a moving object from its
positions in the recent past. However, this approach is more
complex than the widely adopted linear model and complicates
the analysis of several interesting spatio-temporal problems;
thus, we use the linear model. Little work on clustering,
however, is found in the literature devoted to kinetic data
structures (Basch.J et al., 1999).
Recently, a new framework has been proposed for segmenting
trajectories into line segments (Lee.J.G et al., 2008;
Lee.J.G et al., 2007). These line segments are grouped
together to build the clusters. However, time is not
considered in this system, so some line segments may be
clustered together even though they are not "close" when time
is taken into account. Nevertheless, such approaches for
clustering moving objects cannot solve the flock pattern query
since: (1) they use different criteria when joining the
moving-object clusters of two consecutive time instances; (2)
they employ clustering algorithms, so no strong relationship
between all elements is enforced; and (3) moving clustering
does not require the same set of moving objects to stay in a
cluster for the entire specified minimum duration.
The distance between two trajectory segmentations at time t is
defined as the distance between their rectangles at time t,
and the distance between two segmentations is the sum of the
distances between them at every time instant (Anagnostopoulos
et al., 2006). The distance between the trajectory MBRs is a
lower bound on the original distance between the raw data, an
essential property for guaranteeing the correctness of results
in most mining tasks. The similarity of trajectories over time
is computed by analyzing how the distance between the
trajectories varies (D'Auria et al., 2005): for each time
instant the positions of the moving objects at that moment are
compared, and the resulting set of distance values is
aggregated.
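The rectangle-based distance above can be sketched as follows; this is a minimal illustration, assuming 2D axis-aligned MBRs given as `(xmin, ymin, xmax, ymax)` tuples, not code from the cited work.

```python
def rect_dist(r1, r2):
    """Minimum Euclidean distance between two axis-aligned
    rectangles (xmin, ymin, xmax, ymax); 0 if they overlap."""
    dx = max(r1[0] - r2[2], r2[0] - r1[2], 0.0)
    dy = max(r1[1] - r2[3], r2[1] - r1[3], 0.0)
    return (dx * dx + dy * dy) ** 0.5

def segmentation_dist(mbrs1, mbrs2):
    """Sum of per-time-instant rectangle distances. Because each
    rectangle encloses the raw positions at that instant, the sum
    is a lower bound on the distance between the raw trajectories."""
    return sum(rect_dist(a, b) for a, b in zip(mbrs1, mbrs2))
```

Since any point inside a rectangle is at least `rect_dist` away from any point inside the other, summing over time instants preserves the lower-bound property used to prune candidates without false dismissals.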
The generic definition of clustering is usually refined
according to the type of data to be clustered and the
clustering objective. Different scalable clustering algorithms
have been proposed (Iwerks Glenn S et al., 2003; LiuWenting et
al., 2008). One clustering technique (Zhang Qing and Lin
Xuemin, 2004) uses a distance function that combines both
position and velocity differences, and employs the k-center
clustering algorithm (Gonzalez, 1985) for histogram
construction. Another represents a trajectory as a list of
minimal bounding boxes (MBBs) (Sigal Elnekave et al., 2008).
The trajectory clustering problem is further investigated by
Dino Pedreschi, Margherita D'Auria and Mirco Nanni (2004). In
this work, a clustering technique called temporal focusing was
introduced to exploit the intrinsic semantics of the temporal
dimension and thereby improve the quality of trajectory
clusters. Based on this, a density-based clustering algorithm
was applied to trajectory data, using an appropriate notion of
distance between trajectories.
Clustering of moving objects has been used to optimize
continuous spatio-temporal query execution (Rimma V. Nehme and
Elke A. Rundensteiner, 2006), where a moving cluster is
represented by a circle. A unified framework for Clustering
Moving Objects in spatial Networks (CMON) is proposed by Chen
Jidong et al. (2007). In this architecture the clustering
process is divided into the continuous maintenance of cluster
blocks (CBs) and the periodic construction of clusters with
different criteria based on the CBs. A CB groups a set of
objects on a road segment that are in close proximity to each
other now and in the near future.
Moving clusters are discovered from the historical
trajectories of objects by Panos Kalnis et al. (2005), and
important events such as collisions among micro-clusters are
also identified. Based on the historical trajectories of
moving objects, a new algorithm is proposed in which a moving
cluster is defined as a sequence of spatial clusters appearing
in consecutive snapshots of the object movements, with
subsequent spatial clusters sharing a large number of common
objects. A clustering technique (Hoda M. O. Mokhtar et al.,
2011) close to the partition-and-group framework (Lee Jae-Gil
et al., 2007) partitions a trajectory into a set of line
segments and then groups similar line segments together into
clusters.
Clustering approaches that exploit the common aspects of
sub-trajectories in a trajectory database have been proposed
(Hoda M. O. Mokhtar et al., 2011). This makes them more
accurate than techniques that cluster on whole-trajectory
behavior. The proposed algorithms use the idea of recapping to
create a list of the clusters visited by different trajectory
segments, built by an approximate clustering algorithm, and
use this visited-cluster list to predict future motion
patterns in the trajectory database.
2.2 Outlier Detection in Spatio-Temporal Data
Data objects that deviate from the general consistency of a
database are called outliers (Barnett and Lewis, 1994).
Uncommon and remarkable patterns in data are usually
considered anomalies or noise. Outlier detection methods are
classified as distribution-based, depth-based and
distance-based. First, the distribution-based approach uses
standard statistical distributions; second, the depth-based
technique organizes the data objects in an n-dimensional space
(where n is the total number of attributes); and finally, the
distance-based approach counts the database objects within a
specified distance of a target object (Ng. R.T, 2001).
Spatially referenced objects that differ from their spatial
neighborhood in their non-spatial attributes are called
spatial outliers; such objects need not differ from the entire
data set, only from other objects near their location (Shekhar
et al., 2003).
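The distance-based approach can be sketched as below, in the spirit of the classic DB(p, D) definition: a point is an outlier if more than a given fraction of the other points lie farther than a given distance from it. The parameter names `d` and `frac` and the toy points are illustrative assumptions.

```python
def euclid(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def db_outliers(points, d, frac):
    """Distance-based outliers: flag a point when more than `frac`
    of the remaining points lie farther than distance d from it."""
    outliers = []
    for i, p in enumerate(points):
        far = sum(1 for j, q in enumerate(points)
                  if j != i and euclid(p, q) > d)
        if far / (len(points) - 1) > frac:
            outliers.append(p)
    return outliers
```

With three points near the origin and one at (10, 10), only the distant point has essentially all of its neighbors beyond the distance threshold, so it alone is flagged.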
The distribution-based approach has been used extensively for
analyzing spatial outliers (Shekhar et al., 2003). It includes
two types of outlier detection methods, namely
single-dimensional and multidimensional outlier detection.
The problem of spatio-temporal outliers (STOs), objects that
differ greatly from both their spatial and temporal neighbors
in terms of their relationships, is addressed by Tao Cheng et
al. (2004). The STO approach is proposed to improve the
treatment of both the semantic and dynamic aspects of
spatio-temporal data in multi-scale environments.
Trajectory outliers are of two kinds. First, a moving object
can be an outlier with respect to its neighborhood when it
does not follow similar paths; this can be detected by
calculating the distance between two fixed moving objects.
Second, an outlying sub-trajectory may not share the common
aspects of the other sub-trajectories. This setting raises
many challenging issues and can be handled by the
partition-and-detect framework (Lee. J.G et al., 2008).
2.3 Features, Locations and Patterns from Spatio-temporal Data
The main motivation of association rule mining is to find
regularities among items, together with their support and
confidence values, in huge data sets (Diansheng Guo and Jeremy
Mennis, 2009). When generating rules with association rule
mining, the chosen thresholds, i.e., the support and
confidence values, can seriously affect the results (R.J. Kuoa
et al., 2011). To overcome this drawback, a new framework has
been proposed in which the system assigns the thresholds
automatically using association rule mining, with higher
working efficiency.
The normal rule-generation steps for ordinary data sets also
apply to spatio-temporal databases, with the addition of
spatio-temporal predicates and properties. A number of authors
have discussed rule mining in spatio-temporal data (Appice et
al., 2003; Han et al., 2001; Koperski et al., 1995; Mennis et
al., 2005).
However, finding features, locations and patterns such as
co-locations in spatio-temporal data sets makes rule
generation with association rule mining very difficult
(Shekhar and Huang, 2001). Co-located features are those
frequently found together; for example, a certain species of
bird tends to live among a certain type of tree. New
algorithms have been proposed for finding spatio-temporal
patterns and their co-locations, and for measuring them (Huang
et al., 2006; Lu et al., 2008; Shekhar et al., 2008). Finally,
features are extracted from the spatio-temporal data sets and
trends are discovered using association rule mining (Zhang
Xuewu, 2008).
Particle swarm optimization (PSO) first searches for the best
fitness of each particle, with minimum support and confidence
values serving as thresholds (Manisha Gupta, 2011). Based on
the literature, the author concludes that PSO can suggest
suitable thresholds for the particle search. Spatio-temporal
autocorrelation and regression using algebraic formulas have
also been combined with spatio-temporal association rule
mining (Corbett. J, 1985), where the spatio-temporal data
structure and its autocorrelations are computed algebraically.
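For reference, the basic PSO loop underlying these threshold-search schemes can be sketched as follows. This is a textbook sketch, not code from the cited works; the inertia and acceleration constants are conventional defaults, and the sphere function in the usage below is a stand-in for a real rule-quality fitness.

```python
import random

def pso(fitness, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimization minimizing `fitness`.
    Each particle tracks its personal best; the swarm shares a
    global best that biases every velocity update."""
    random.seed(0)  # deterministic for the example
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_f[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < fitness(g):
                    g = pos[i][:]
    return g

best = pso(lambda p: sum(x * x for x in p), dim=2)
```

On this toy sphere function the swarm converges close to the origin; in the threshold-search setting, a particle's coordinates would instead encode candidate support and confidence values.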
The measurement of autocorrelation, and of frequent item sets of autocorrelated features, has been carried out algebraically over spatio-temporal data structures such as points, lines, curves, and regions (Jiangping Chen, 2008). Several authors have discussed the role of association rule mining, combined with optimization techniques, in spatial mining, temporal mining, privacy-preserving mining, data stream mining, utility mining, and related areas (Maragatham, G., 2012). This survey helps upcoming researchers follow the growing role of association rule mining in different types of mining.
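Spatial autocorrelation computed "by algebra formula" is conventionally the global Moran's I statistic; the sketch below computes it for a list of values and a symmetric neighbour-weight matrix. This is an illustration of the standard formula, not necessarily the exact computation used in the cited work.

```python
def morans_i(values, weights):
    """Global Moran's I: spatial autocorrelation of values under weights.

    weights[i][j] > 0 when locations i and j are neighbours.
    Values near +1 indicate clustering, near -1 dispersion.
    """
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    s0 = sum(sum(row) for row in weights)          # total weight
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))  # cross-products
    den = sum(d * d for d in dev)                   # variance term
    return (n / s0) * (num / den)

# Two neighbouring high values next to two neighbouring low values on a line.
w = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
print(morans_i([10, 10, 0, 0], w))  # → 0.333... (positive autocorrelation)
```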
IJRET: International Journal of Research in Engineering and Technology | eISSN: 2319-1163 | pISSN: 2321-7308 | Volume: 03, Issue: 03, Mar-2014 | Available @ http://www.ijret.org
By applying association rules to spatio-temporal or non-spatio-temporal data, data in a specific format are delivered (Imam Mukhlash and Benhard Sitohang, 2007). A chaotic particle swarm optimization (CPSO) method with an acceleration strategy has been proposed for optimization over spatio-temporal data (Cheng-Hong Yang, 2009). CPSO combined with the acceleration strategy becomes accelerated chaotic particle swarm optimization (ACPSO). ACPSO effectively finds the cluster centers and the global fitness of each particle, but it cannot reliably find the best particle in huge volumes of data.
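The chaotic ingredient of CPSO can be sketched as follows, under stated assumptions: a logistic map at parameter 4 replaces the uniform random draws of standard PSO, with generic coefficients w = 0.5 and c1 = c2 = 1.5. The cited ACPSO differs in its acceleration details; this is only a minimal one-dimensional sketch.

```python
def logistic_map(x):
    """Logistic map x -> 4x(1-x): the chaotic sequence that CPSO
    substitutes for uniform random numbers (exact map varies by paper)."""
    return 4.0 * x * (1.0 - x)

def cpso_minimize(f, n_particles=10, iters=100, lo=-5.0, hi=5.0):
    """Minimise f over [lo, hi] with a chaotic-PSO sketch."""
    chaos = 0.7  # deterministic chaotic seed instead of random.random()
    pos, vel, pbest = [], [], []
    for _ in range(n_particles):
        chaos = logistic_map(chaos)
        pos.append(lo + chaos * (hi - lo))
        vel.append(0.0)
        pbest.append(pos[-1])
    gbest = min(pbest, key=f)
    for _ in range(iters):
        for i in range(n_particles):
            chaos = logistic_map(chaos); r1 = chaos
            chaos = logistic_map(chaos); r2 = chaos
            # standard PSO velocity update, chaotic r1/r2 instead of random
            vel[i] = (0.5 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i]
    return gbest

best = cpso_minimize(lambda x: (x - 2.0) ** 2)
print(round(best, 3))  # typically lands near the minimum at x = 2.0
```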
A combined PSO and BP neural-network system has been developed to train the network; it yields short training time, high accuracy with good precision and recall rates, and fast convergence (ZhongLiang Fu and Bin Wan, 2009). Rule generation and cluster-center discovery using optimization techniques have also been discussed by a number of authors (Samadzadegan, F. and S. Saeedi, 2009). Here the optimization method (PSO) reduces time consumption and improves performance when finding cluster centers and searching for the best fitness of each particle; however, when the local and global best particles concentrate, it may fail to work properly.
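The fitness that PSO-based clustering typically minimizes is the total squared distance from each point to its nearest candidate centre, with each particle encoding a full set of centres. A minimal sketch follows; the function name is ours, not from the cited papers.

```python
def clustering_fitness(points, centers):
    """Fitness used in PSO-based clustering: total squared distance from
    each point to its nearest candidate cluster centre (lower is better)."""
    total = 0.0
    for px, py in points:
        total += min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
    return total

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
good = [(0, 0.5), (10, 10.5)]   # centres sitting inside each cluster
bad = [(5, 5), (6, 6)]          # centres between the clusters
print(clustering_fitness(pts, good), clustering_fitness(pts, bad))
# → 1.0 164.0 : the well-placed centres score far lower (better)
```

PSO moves each particle (candidate centre set) through this fitness landscape, which is where the concentration of local and global best particles mentioned above can stall the search.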
A new framework based on association rule mining has been proposed for finding the best fitness in terms of comprehensibility and accuracy (Veenu Mangat, 2010). This system reduces time and space complexity and finds optimal values for the variables of the optimization algorithm. Frequently co-located patterns are discovered for spatio-temporal Boolean features. The spatio-temporal co-location problem is difficult to handle with spatial association rules because there is no natural notion of transactions embedded in continuous geographic space.
3. DRAWBACKS OF EXISTING SYSTEM
Clustering spatio-temporal data is a difficult problem that arises in various fields and applications.
o The major common drawbacks are the high dimensionality of the data (2-D or 3-D), initial error propagation, and high-dimensional data complexity.
o When clustering space- and time-related data, spatio-temporal clustering methods focus on the specific characteristics of distributions in 2-D or 3-D space, while general-purpose high-dimensional clustering methods have limited power in recognizing spatio-temporal patterns that involve neighbors.
o In human-computer interaction, clustering techniques may not work properly for finding complex patterns in huge volumes of spatio-temporal data.
Outlier detection - Spatio-temporal outliers conflict strongly with their spatio-temporal neighbors, even though their non-spatio-temporal values are normal for the rest of the objects of the same class.
o Spatio-temporal outlier detection may not work properly on non-spatial or non-temporal datasets. Even if the non-spatial attributes are separated out, it cannot predict the outliers in spatio-temporal datasets.
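The neighbourhood-based idea described above can be sketched as follows (hypothetical record format and thresholds, in the spirit of Adam et al., 2004): a record is flagged when its value deviates from the mean of its spatio-temporal neighbours by more than k neighbourhood standard deviations, even if it is globally unremarkable.

```python
def st_outliers(records, space_eps=1.0, time_eps=1.0, k=2.0):
    """Flag records whose value conflicts with its spatio-temporal
    neighbourhood. records: list of (x, y, t, value) tuples."""
    flagged = []
    for i, (x, y, t, v) in enumerate(records):
        # neighbours: close in space AND close in time
        neigh = [v2 for j, (x2, y2, t2, v2) in enumerate(records)
                 if j != i
                 and (x - x2) ** 2 + (y - y2) ** 2 <= space_eps ** 2
                 and abs(t - t2) <= time_eps]
        if len(neigh) < 2:
            continue  # too few neighbours to estimate the spread
        mean = sum(neigh) / len(neigh)
        var = sum((n - mean) ** 2 for n in neigh) / len(neigh)
        std = var ** 0.5
        if std > 0 and abs(v - mean) > k * std:
            flagged.append(i)
    return flagged

# Three nearby readings around 10 and one conflicting reading of 50.
records = [(0.0, 0.0, 0, 10.0), (0.5, 0.0, 0, 10.0),
           (0.0, 0.5, 0, 10.5), (0.3, 0.3, 0, 50.0)]
print(st_outliers(records))  # → [3]
```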
Rule mining in spatio-temporal data - Dependency analysis with the existing algorithms has the following drawbacks: while incorporating local search in high-dimensional spatio-temporal data, the performance is very low and the algorithms face a diversity problem. In the optimization process, the existing algorithms are unable to escape partial (local) optima.
o At that stage, the algorithm easily disturbs the speed and direction of a particle and may not work properly during its remaining iterations.
o The existing algorithms also cannot solve the issues of huge-dimensional, unsystematic datasets during particle search and moving-object search.
Trend discovery in spatio-temporal data - the drawbacks of the existing algorithms while discovering trends in spatio-temporal data are as follows:
o Anomalies occur in the existing systems when they concentrate on graph alteration; information may be badly lost, so more table scans become necessary.
o The generalization process has problems identifying the description of the data while handling path relations and their hierarchy.
o The spatio-temporal dependency process faces several problems while finding the dependencies of large spatio-temporal data.
To overcome these drawbacks, we propose new algorithms for discovering the best trends in spatio-temporal data sets.
4. PROBLEM DEFINITION
The performance of trend discovery analysis on spatio-temporal data is hindered for the following reasons:
o Clustering massive spatio-temporal data is cumbersome. Features of the data are often preselected, yet the properties of different features and feature combinations are not well investigated in huge spatio-temporal data sets. Finding appropriate features to form a cluster group is essential for better search.
o Cluster accuracy is low, so deviation and outlier detection analysis is required for high-dimensional spatio-temporal clustered data.
o Rule generation accuracy for spatio-temporal data is low when predicting the value of one attribute from another attribute's value over a period of time, so an optimization process is required for rule generation.
o A better generalization and characterization method is needed to obtain a compact description of the data.
o Trend discovery accuracy for spatio-temporal data is very low on huge, high-dimensional data sets.
5. CONTRIBUTIONS OF THIS RESEARCH
WORK
Phase 1 addresses the challenges of segmenting spatio-temporal data with the proposed PANN algorithm.
Phase 2 removes the deviations and outliers of spatio-temporal data accurately with the improved STOD technique.
Phase 3 generates the rules for dependency analysis of spatio-temporal data with the proposed IESO technique.
Phase 4 generalizes and characterizes the spatio-temporal data with the improved GAOI technique.
Phase 5 develops the proposed GPLDE algorithm for the further trend discovery of spatio-temporal data.
Fig 1 shows the entire framework of the proposed research.
Fig 1 Entire Framework of Proposed Research
6. CONCLUSIONS
This research offers an innovative approach to discovering trends in multi-dimensional spatio-temporal datasets. It briefly describes the scope and relevance of spatio-temporal data and, from the literature survey, lists a number of open issues. It also contributes several phases, where the output of each phase serves as the input of the next, giving in-depth knowledge of recent spatio-temporal research on trend discovery. The evaluation of the proposed algorithms will be presented in our next research paper.
REFERENCES
[1] Adam, N.R., V.P. Janeja and V. Atluri, 2004. Neighbourhood based detection of anomalies in high dimensional spatio-temporal sensor datasets. ACM Symposium on Applied Computing, Nicosia, Cyprus., 1: 576-583.
[2] Ai, C. and X. Chen, 2003. Efficient estimation of models with conditional moment restrictions containing unknown functions. Econometrica., 7: 1795-1843.
[3] Anagnostopoulos, A., M. Vlachos, M. Hadjieleftheriou, E. Keogh and P.S. Yu, 2006. Global Distance-Based Segmentation of Trajectories. KDD'06, Philadelphia, Pennsylvania, USA.
[4] Anselin, L. and A.K. Bera, 2002. Spatial dependence in linear regression models with an introduction to spatial econometrics. In: Ullah, A., Giles, D.E.A. (Eds.), Handbook of Applied Economic Statistics. Marcel Dekker, New York.
[5] Auroop R. Ganguly and Karsten Steinhaeuser, 2008. Data Mining for Climate Change and Impacts. IEEE International Conference on Data Mining Workshops, ICDMW., 385-394.
[6] Basile, R. and B. Gress, 2004. Semi-parametric spatial auto-covariance models of regional growth behavior in Europe. Mimeo. Dept. of Economics, UC, Riverside.
[7] Benkert, M., J. Gudmundsson, F. Hübner and T. Wolle, 2008. Reporting flock patterns. Comput. Geom. Theory Appl., 41(3): 111-125.
[8] Cheng-Hong Yang et al., 2009. Accelerated Chaotic Particle Swarm Optimization for Data Clustering. International Conference on Machine Learning and Computing, IPCSIT vol. 3, IACSIT Press, Singapore., 1: 249-253.
[9] Diansheng Guo and Jeremy Mennis, 2009. Spatial data
mining and geographic knowledge discovery—An
introduction. Computers, Environment and Urban
Systems., 33(6): 403-408.
[10] Dien J., K.M. Spencer and E. Donchin, 2005. Parsing
the "Late Positive Complex": Mental chronometry and
the ERP components that inhabit the neighborhood of
the P300. Psychophysiology.
[11] Elnekave, S., M. Last and O. Maimon, 2007. Incremental Clustering of Mobile Objects. STDM07, IEEE, Washington, DC, USA., 1: 422-432.
[12] Gudmundsson, J. and M. Van Kreveld, 2006.
Computing longest duration flocks in trajectory data. In
ACM GIS., 35–42.
[13] Hoda M. O. Mokhtar et al., 2011. A Time
Parameterized Technique for Clustering Moving Object
Trajectories, International Journal of Data Mining &
Knowledge Management Process (IJDKP)., 1(1): 14-
30.
[14] Hoda Mokhtar and Jianwen Su, 2005. A query language for moving object trajectories. In SSDBM: International Conference on Scientific and Statistical Database Management., 1: 173-182.
[15] Hoda, M. O. and Mokhtar, 2011. A Time
Parameterized Technique for Clustering Moving Object
Trajectories. International Journal of Data Mining &
Knowledge Management Process., 1(1): 1-17.
[16] Huang, Y., J. Pei and H. Xiong, 2006. Mining co-
location patterns with rare events from spatial data sets.
Geoinformatica., 10(3): 239–260.
[17] Imam Mukhlash and Benhard Sitohang, 2007. Spatial Data Preprocessing for Mining Spatial Association Rule with Conventional Association Mining Algorithms. International Conference on Electrical Engineering and Informatics, Institute Technology Bandung, Indonesia., 1: 531-534.
[18] Jensen, C., D. Lin and B. Ooi, 2007. Continuous clustering of moving objects. IEEE Trans. Knowl. Data Eng., 19(9): 1161-1174.
[19] Jiangping Chen, 2008. An Algorithm About Association Rule Mining Based On Spatial Autocorrelation. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences., XXXVII(B6b), Beijing.
[20] Jiawei Han and Micheline Kamber, 2009. Data mining
concepts and techniques. Morgan Kaufmann
Publishers., 354-359.
[21] Jidong Chen et al., 2007. Clustering Moving Objects in Spatial Networks. Proceedings of the 12th International Conference on Database Systems for Advanced Applications., 1: 611-623.
[22] Kang Hye-Young, Joon-Seok and Ki-Joune, 2009. Similarity measures for trajectory of moving objects in cellular space. In SAC '09: Proceedings of the 2009 ACM Symposium on Applied Computing., 1: 1325-1330.
[23] Kelejian, H.H. and I.R. Prucha, 2010. Specification and estimation of spatial autoregressive models with autoregressive and heteroskedastic disturbances. Journal of Econometrics., 157(1): 53-67.
[24] Kuo. R.J., C.M. Chao and Y.T. Chiu, 2011.
Application of particle swarm optimization to
association rule mining. Applied Soft Computing., 11:
326–336.
[25] Lee Jae-Gil, Han Jiawei and Whang Kyu-Young, 2007. Trajectory clustering: a partition-and-group framework. In SIGMOD '07: Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data., 1: 593-604.
[26] Lin, X. and L.-F. Lee, 2010. GMM estimation of spatial autoregressive models with unknown heteroskedasticity. Journal of Econometrics., 157(1): 34-52.
[27] Lu, Y. and J-C. Thill, 2008. Cross-scale analysis of cluster correspondence using different operational neighborhoods. Journal of Geographical Systems., 10(3): 241-261.
[28] Manisha Gupta, 2011. Application of Weighted Particle Swarm Optimization in Association Rule Mining. International Journal of Computer Science and Informatics (IJCSI), ISSN (PRINT): 2231-5292., 1(3).
[29] Maragatham G et al., 2012. A Recent Review On Association Rule Mining. Indian Journal of Computer Science and Engineering (IJCSE)., 2(6): 831-836.
[30] Margaret H. Dunham., 2009. Data Mining Introductory
and Advanced Topics. Dorling Kindersley (India) Pvt.
Ltd., 1(1): 1-311.
[31] Richard Frank, Wen Jin and Martin Ester, 2009. Efficiently Mining Regional Outliers in Spatial Data., 1-18.
[32] Sigal Elnekave and Mark Last, 2007. A Compact Representation of Spatio-Temporal Data. Seventh IEEE International Conference on Data Mining - Workshops.
[33] Sigal Elnekave et al., 2007. Predicting Future
Locations Using Clusters' Centroids. Proceedings of
the 15th annual ACM international symposium on
Advances in geographic information systems., 55(1).
[34] Urška Demšar, Paul Harris, Chris Brunsdon, A. Stewart Fotheringham and Sean McLoone, 2012. Principal Component Analysis on Spatial Data: An Overview. Annals of the Association of American Geographers., 1: 1-24.
[35] Veenu Mangat, 2010. Swarm Intelligence Based Technique for Rule Mining in the Medical Domain. International Journal of Computer Applications (0975-8887)., 4(1): 19-24.
[36] Yang, Z., C. Li and Y.K. Tse, 2006. Functional form
and spatial dependence in dynamic panels. Economics
Letters., 91: 138-145.
[37] Yiu, M.L. and N. Mamoulis, 2004. Clustering objects
on a spatial network. In Proc. ACM SIGMOD.,1: 443–
454.
[38] Zhang Xuewu, 2008. Association Rule Mining on Spatio-temporal Processes. Wireless Communications, Networking and Mobile Computing, WiCOM '08., 1: 1-4.