Abstract Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and fault diagnosis. In this paper a new approach to fault detection based on principal curves and genetic algorithms is proposed. The principal curve, introduced by Hastie, is a generalization of linear principal component analysis (PCA): a parametric curve that passes through the middle of the data. Existing principal curve algorithms employ the first principal component of the data as an initial estimate of the curve. However, this dependence on the initial line leads to a lack of flexibility, and the final curve is satisfactory only for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on genetic algorithms to find the principal curve, in which line segments are fitted and connected to form polygonal lines (PL). Second, a potential application of principal curves is discussed. An example is used to illustrate fault diagnosis of a nonlinear process using the proposed approach. Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
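As a rough illustration of the idea in the abstract above, the following sketch fits a polygonal line through data with a toy genetic algorithm. The representation (a fixed number of vertices), the fitness (mean squared point-to-polyline distance) and the GA operators are illustrative assumptions, not the authors' exact formulation.

```python
# Toy GA search for a polygonal principal curve (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

def point_segment_dist2(p, a, b):
    """Squared distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab + 1e-12), 0.0, 1.0)
    return np.sum((p - (a + t * ab)) ** 2)

def fitness(vertices, X):
    """Mean squared distance from the data to the polygonal line."""
    return np.mean([min(point_segment_dist2(p, vertices[i], vertices[i + 1])
                        for i in range(len(vertices) - 1)) for p in X])

def evolve(X, n_vertices=6, pop_size=30, n_gen=80, sigma=0.15):
    # Initialise every individual near the first principal component, then let
    # mutation bend the polygonal line through the middle of the data.
    mean = X.mean(axis=0)
    pc1 = np.linalg.svd(X - mean, full_matrices=False)[2][0]
    proj = (X - mean) @ pc1
    base = mean + np.outer(np.linspace(proj.min(), proj.max(), n_vertices), pc1)
    pop = [base + rng.normal(0, sigma, base.shape) for _ in range(pop_size)]
    for _ in range(n_gen):
        scores = [fitness(v, X) for v in pop]
        parents = [pop[i] for i in np.argsort(scores)[:pop_size // 2]]   # elitist selection
        children = [p + rng.normal(0, sigma, p.shape) for p in parents]  # Gaussian mutation
        pop = parents + children
    return min(pop, key=lambda v: fitness(v, X))

# Toy usage: noisy half-circle data, where a single straight principal component fails.
theta = rng.uniform(0, np.pi, 150)
X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.05, (150, 2))
print("fitted polygonal-line vertices:\n", np.round(evolve(X), 3))
```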
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The document analyzes crop yield data from spatial locations in Guntur District, Andhra Pradesh, India using hybrid data mining techniques. It first applies k-means clustering to the dataset, producing 5 clusters. It then applies the J48 classification algorithm to the clustered data, resulting in a decision tree that predicts cluster membership based on attributes like crop type, irrigated area, and latitude. Analysis found irrigated areas of cotton and chilies increased from 2007-2008 to 2011-2012. Association rule mining on the clustered data also found relationships between productivity and location attributes. The hybrid approach of clustering followed by classification effectively analyzed the spatial agricultural data.
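A minimal sketch of the clustering-then-classification pipeline described above, using scikit-learn's KMeans and a CART decision tree as a stand-in for J48; the synthetic feature names (crop, irrigated_area, latitude) are illustrative, not the paper's data.

```python
# Cluster the records, then learn a decision tree that predicts cluster membership.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Columns: crop code, irrigated area (ha), latitude (degrees) -- synthetic stand-ins.
X = np.c_[rng.integers(0, 4, 500),
          rng.gamma(2.0, 50.0, 500),
          rng.uniform(15.5, 16.8, 500)]

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["crop", "irrigated_area", "latitude"]))
```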
This document summarizes a research paper that proposes a dynamic approach to improving the k-means clustering algorithm. The proposed approach aims to address two weaknesses of the standard k-means algorithm: its requirement of prior knowledge of the number of clusters k, and its sensitivity to initialization. The approach determines initial cluster centroids by segmenting the data space and selecting high-frequency segments. It then uses the silhouette validity index to dynamically determine the optimal number of clusters k, rather than requiring the user to specify k. The approach is compared to the standard k-means algorithm and other modified approaches, and is shown to improve initial center selection and reduce computation time.
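A sketch of the silhouette-based choice of k mentioned in the summary above: run k-means for a range of k and keep the k with the highest mean silhouette. This does not reproduce the paper's segment-based centroid initialisation.

```python
# Choose k dynamically with the silhouette validity index.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=600, centers=4, cluster_std=1.0, random_state=0)

best_k, best_score = None, -1.0
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score = k, score

print(f"selected k = {best_k} (silhouette = {best_score:.3f})")
```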
Geoid height determination is one of the major problems of geodesy, because the use of satellite techniques in geodesy is increasing. Geoid heights can be determined using different methods according to the available data. Soft computing methods such as fuzzy logic and neural networks have become so popular that they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty assessment have enabled us to develop more precise models for our requirements. In this study, how to construct the best fuzzy model is examined. For this purpose, three different data sets were taken and two different kinds of fuzzy model (two inputs, one output and three inputs, one output) were formed for the calculation of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights obtained by GPS/levelling methods. The fuzzy approximation models were tested on the test points.
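A minimal sketch (not the paper's fuzzy model design) of a fuzzy-style approximator for geoid height from two inputs: Gaussian membership functions centred on a subset of GPS/levelling points, with the known heights at those points as singleton rule consequents (a zero-order Sugeno-like scheme). The synthetic coordinates and the linear trend stand in for the Istanbul data, which are assumptions.

```python
# Fuzzy-style weighted-average prediction of geoid height N(lat, lon).
import numpy as np

rng = np.random.default_rng(2)

def fuzzy_predict(query, centers, heights, sigma=0.05):
    """Normalised Gaussian firing strengths times singleton consequents."""
    d2 = np.sum((centers - query) ** 2, axis=1)
    mu = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian memberships
    return np.sum(mu * heights) / (np.sum(mu) + 1e-12)

# Synthetic "GPS/levelling" points standing in for the real data sets.
pts = rng.uniform([40.8, 28.6], [41.3, 29.3], size=(200, 2))       # lat, lon
N = 36.0 + 0.8 * (pts[:, 0] - 41.0) - 1.2 * (pts[:, 1] - 29.0) \
    + rng.normal(0, 0.02, 200)

train, test = pts[:150], pts[150:]
N_train, N_test = N[:150], N[150:]
pred = np.array([fuzzy_predict(q, train, N_train) for q in test])
rmse = np.sqrt(np.mean((pred - N_test) ** 2))
print(f"RMSE on held-out test points: {rmse:.3f} m")
```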
LOGNORMAL ORDINARY KRIGING METAMODEL IN SIMULATION OPTIMIZATION (orajjournal)
This paper presents a lognormal ordinary kriging (LOK) metamodel algorithm and its application to optimize a stochastic simulation problem. Kriging models were developed as an interpolation method in geology and have been used successfully for deterministic simulation optimization (SO) problems. In recent years, kriging metamodeling has attracted growing interest for stochastic problems, and SO researchers have begun using ordinary kriging for global optimization of stochastic systems. The goals of this study are to present the LOK metamodel algorithm and to analyze the results of its application step by step. The results show that LOK is a powerful alternative metamodel in simulation optimization when the data are highly skewed.
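A rough illustration of lognormal ordinary kriging: ordinary kriging is carried out on the log-transformed responses and the prediction is mapped back with exp(). The exponential covariance model and the naive back-transformation (omitting the usual variance correction term) are simplifying assumptions, not the paper's exact algorithm.

```python
# Ordinary kriging on log-transformed, skewed simulation output.
import numpy as np

rng = np.random.default_rng(3)

def cov(h, sill=1.0, rng_par=2.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / rng_par)

def ok_predict(x0, X, y):
    """Ordinary kriging prediction at x0 given sample sites X and values y."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, :n] = A[:n, n] = 1.0                       # unbiasedness constraint
    b = np.append(cov(np.linalg.norm(X - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]                   # kriging weights
    return w @ y

# Skewed (lognormal) responses on a 1-D design.
X = rng.uniform(0, 10, (30, 1))
z = np.exp(0.3 * X[:, 0] + rng.normal(0, 0.3, 30))  # strictly positive, skewed

pred = np.exp(ok_predict(np.array([5.0]), X, np.log(z)))  # naive back-transform
print(f"LOK-style prediction at x = 5: {pred:.3f}")
```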
Reconstruction of Time Series using Optimal Ordering of ICA Components (rahulmonikasharma)
This document summarizes research investigating the application of independent component analysis (ICA) and optimal ordering of independent components to reconstruct time series generated by mixed independent sources. A modified fast neural learning ICA algorithm is used to obtain independent components from observed time series. Different error measures and algorithms for determining the optimal ordering of independent components for reconstruction are compared, including exhaustive search, using the L-infinity norm of components, and excluding the least contributing component first. Experimental results on artificial and currency exchange rate time series support using the Euclidean error measure and minimizing the error profile area to obtain the optimal ordering. Reconstructions with only the first few dominant independent components are often acceptable.
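A sketch of reconstructing mixed time series from a subset of independent components, ordering the components by their contribution (here the Euclidean norm of each component's contribution to the observations); scikit-learn's FastICA stands in for the modified neural-learning ICA used in the paper.

```python
# ICA, component ordering by contribution, and partial reconstruction.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 8, 2000)
S_true = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t)), rng.normal(0, 1, t.size)]
A_true = rng.normal(0, 1, (3, 3))
X = S_true @ A_true.T                               # observed mixed time series

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                            # estimated components
A = ica.mixing_                                     # estimated mixing matrix

# Order components by the norm of their contribution to the observations.
contrib = [np.linalg.norm(np.outer(S[:, k], A[:, k])) for k in range(3)]
order = np.argsort(contrib)[::-1]

# Reconstruct using only the first m dominant components.
for m in (1, 2, 3):
    keep = order[:m]
    X_rec = S[:, keep] @ A[:, keep].T + ica.mean_
    err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
    print(f"{m} component(s): relative reconstruction error = {err:.3f}")
```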
A PSO-Based Subtractive Data Clustering Algorithm (IJORCS)
There is a tremendous proliferation in the amount of information available on the largest shared information source, the World Wide Web. Fast and high-quality clustering algorithms play an important role in helping users to effectively navigate, summarize, and organize the information. Recent studies have shown that partitional clustering algorithms such as the k-means algorithm are the most popular algorithms for clustering large datasets. The major problem with partitional clustering algorithms is that they are sensitive to the selection of the initial partitions and are prone to premature convergence to local optima. Subtractive clustering is a fast, one-pass algorithm for estimating the number of clusters and the cluster centers for any given set of data. The cluster estimates can be used to initialize iterative optimization-based clustering methods and model identification methods. In this paper, we present a hybrid Subtractive + PSO (Particle Swarm Optimization) clustering algorithm that performs fast clustering. For comparison purposes, we applied the Subtractive + PSO clustering algorithm, PSO, and Subtractive clustering on three different datasets. The results illustrate that the Subtractive + PSO clustering algorithm generates the most compact clustering results compared to the other algorithms.
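A sketch of the subtractive-clustering step that the hybrid algorithm above uses to estimate the number of clusters and initial centres (a simplified Chiu-style potential formulation with a single stopping threshold); the subsequent PSO refinement stage is omitted here.

```python
# Simplified subtractive clustering to estimate cluster count and centres.
import numpy as np

def subtractive_clustering(X, ra=0.5, rb=None, stop_ratio=0.15):
    """Chiu-style subtractive clustering; returns the estimated cluster centres."""
    rb = rb or 1.5 * ra
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    P = np.exp(-alpha * d2).sum(axis=1)             # potential of each point
    centers, P_first = [], P.max()
    while P.max() > stop_ratio * P_first:           # simplified acceptance rule
        i = int(np.argmax(P))
        centers.append(X[i].copy())
        # Revise potentials: subtract the influence of the new centre.
        P = P - P[i] * np.exp(-beta * np.sum((X - X[i]) ** 2, axis=1))
    return np.array(centers)

# Toy usage on data normalised to the unit square, as the radii assume.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(c, 0.05, (100, 2))
               for c in ([0.2, 0.2], [0.8, 0.3], [0.5, 0.8])])
print("estimated cluster centres:\n", np.round(subtractive_clustering(X), 3))
```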
This document compares hierarchical and non-hierarchical clustering algorithms. It summarizes four clustering algorithms: K-Means, K-Medoids, Farthest First Clustering (hierarchical algorithms), and DBSCAN (non-hierarchical algorithm). It describes the methodology of each algorithm and provides pseudocode. It also describes the datasets used to evaluate the performance of the algorithms and the evaluation metrics. The goal is to compare the performance of the clustering methods on different datasets.
GRAPH PARTITIONING FOR IMAGE SEGMENTATION USING ISOPERIMETRIC APPROACH: A REVIEW (Drm Kapoor)
Graph cut is a fast method for performing binary segmentation. Graph cuts have proved to be a useful multidimensional optimization tool that can enforce piecewise smoothness while preserving relevant sharp discontinuities. This paper is mainly intended as an application of the isoperimetric algorithm of graph theory to image segmentation, together with an analysis of the different parameters used in the algorithm, such as the weight-generation parameter, the parameter regulating the execution, the connectivity parameter, the cutoff, and the number of recursions. We present some basic background information on graph cuts and discuss major theoretical results, which helped to reveal both the strengths and the limitations of this surprisingly versatile combinatorial algorithm.
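A sketch of the isoperimetric partitioning step on a tiny grey-scale image: build a 4-connected lattice graph with intensity-based edge weights, ground the node of maximal degree, solve the reduced Laplacian system L0 x0 = d0, and threshold the solution at the cut with the lowest isoperimetric ratio. The weight parameter beta and the toy image are illustrative choices, not values from the review.

```python
# Isoperimetric graph partitioning of a small image (illustrative sketch).
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix, diags
from scipy.sparse.linalg import spsolve

def isoperimetric_segmentation(img, beta=10.0):
    h, w = img.shape
    n = h * w
    idx = lambda r, c: r * w + c
    W = lil_matrix((n, n))
    for r in range(h):                                # 4-connected lattice weights
        for c in range(w):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < h and cc < w:
                    wgt = np.exp(-beta * (img[r, c] - img[rr, cc]) ** 2)
                    W[idx(r, c), idx(rr, cc)] = wgt
                    W[idx(rr, cc), idx(r, c)] = wgt
    W = csr_matrix(W)
    d = np.asarray(W.sum(axis=1)).ravel()
    L = csr_matrix(diags(d) - W)                      # graph Laplacian
    g = int(np.argmax(d))                             # ground the max-degree node
    keep = np.setdiff1d(np.arange(n), [g])
    x = np.zeros(n)
    x[keep] = spsolve(L[keep][:, keep], d[keep])      # solve L0 x0 = d0
    # Sweep thresholds along sorted x; keep the cut with the lowest
    # isoperimetric ratio (cut weight / volume of the smaller side).
    order = np.argsort(x)
    best_ratio, best_k = np.inf, 1
    for k in range(1, n):
        s = np.zeros(n)
        s[order[:k]] = 1.0
        ratio = (s @ (L @ s)) / min(d[order[:k]].sum(), d[order[k:]].sum())
        if ratio < best_ratio:
            best_ratio, best_k = ratio, k
    labels = np.zeros(n, dtype=int)
    labels[order[best_k:]] = 1
    return labels.reshape(h, w)

# Toy image: dark background with a bright square.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
print(isoperimetric_segmentation(img))
```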
Scalable and efficient cluster based framework for multidimensional indexing (eSAT Journals)
Abstract Indexing high dimensional data has its utility in many real world applications; in particular, the information retrieval process is dramatically improved. Existing techniques address the "curse of dimensionality" of high dimensional data sets using the Vector Approximation File (VA-File), which results in sub-optimal performance. Compared with the VA-File, clustering produces a more compact representation of the data set because it exploits inter-dimensional correlations. However, pruning of unwanted clusters is important, and existing pruning techniques based on bounding rectangles and bounding hyperspheres have problems in NN search. To overcome this problem, Ramaswamy and Rose proposed adaptive cluster distance bounding for high dimensional indexing, which also includes efficient spatial filtering. In this paper we implement this high-dimensional indexing approach. We built a prototype application as a proof of concept. Experimental results are encouraging, and the prototype can be used in real time applications. Index Terms: Clustering, high dimensional indexing, similarity measures, and multimedia databases
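A sketch of cluster-based pruning for exact nearest-neighbour search in the spirit of the approach summarised above: a cluster can be skipped when the distance from the query to its centroid, minus the cluster radius, already exceeds the best distance found so far. This uses a simple centroid/radius bound, not Ramaswamy and Rose's tighter adaptive bound.

```python
# Cluster-based NN search with a simple distance lower bound for pruning.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(6)
X = rng.normal(0, 1, (10000, 32))                    # high-dimensional data set
q = rng.normal(0, 1, 32)                             # query point
n_clusters = 50

km = KMeans(n_clusters=n_clusters, n_init=2, random_state=0).fit(X)
labels, centroids = km.labels_, km.cluster_centers_
radius = np.array([np.linalg.norm(X[labels == c] - centroids[c], axis=1).max()
                   for c in range(n_clusters)])

best_d, best_i, scanned = np.inf, -1, 0
# Visit clusters in order of centroid distance so good candidates come early.
for c in np.argsort(np.linalg.norm(centroids - q, axis=1)):
    lower_bound = np.linalg.norm(centroids[c] - q) - radius[c]
    if lower_bound >= best_d:
        continue                                     # whole cluster pruned
    members = np.where(labels == c)[0]
    d = np.linalg.norm(X[members] - q, axis=1)
    scanned += len(members)
    if d.min() < best_d:
        best_d, best_i = d.min(), members[int(np.argmin(d))]

print(f"nearest neighbour {best_i} at distance {best_d:.3f}, "
      f"scanned {scanned}/{len(X)} points")
```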
SVD BASED LATENT SEMANTIC INDEXING WITH USE OF THE GPU COMPUTATIONS (ijscmcj)
The purpose of this article is to determine the usefulness of Graphics Processing Unit (GPU) calculations for implementing the Latent Semantic Indexing (LSI) reduction of the term-by-document matrix. The considered reduction of the matrix is based on the SVD (Singular Value Decomposition). The high computational complexity of the SVD, O(n^3), makes the reduction of a large indexing structure a difficult task. This article compares the time complexity and accuracy of the algorithms implemented in two different environments. The first environment is associated with the CPU and MATLAB R2011a; the second is related to graphics processors and the CULA library. The calculations were carried out on generally available benchmark matrices, which were combined to obtain a large resulting matrix. For both environments, computations were performed for double and single precision data.
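A sketch of the SVD-based LSI reduction discussed above (CPU/NumPy only; the GPU/CULA comparison in the article is not reproduced): factor the term-by-document matrix, keep the k leading singular triplets, and rank documents against a query in the reduced space by cosine similarity. The random matrix and the toy query are illustrative.

```python
# Rank-k LSI via truncated SVD and cosine ranking of documents.
import numpy as np

rng = np.random.default_rng(7)
A = rng.poisson(0.3, (1000, 200)).astype(float)       # term-by-document matrix
k = 50

U, s, Vt = np.linalg.svd(A, full_matrices=False)
Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]              # rank-k LSI model
doc_vecs = (np.diag(sk) @ Vtk).T                       # documents in concept space

q = np.zeros(1000)
q[[3, 17, 42]] = 1.0                                   # toy query over three terms
q_vec = Uk.T @ q                                       # fold the query into concept space

sims = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1)
                           * np.linalg.norm(q_vec) + 1e-12)
print("top 5 documents:", np.argsort(sims)[::-1][:5])
```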
The document describes a recursive algorithm for multi-step prediction with mixture models that have dynamic switching between components. It begins by introducing notations and reviewing individual models, including normal regression components and static/dynamic switching models. It then presents the mixture prediction algorithm, first for a static switching model by constructing a predictive distribution from weighted component predictions. For a dynamic switching model, it similarly takes point estimates from the previous time and substitutes them into components to make weighted averaged predictions over multiple steps. The algorithm is summarized as initializing component statistics and parameter estimates, then substituting previous estimates into components to obtain weighted mixture predictions for new data points.
KNN and ARL Based Imputation to Estimate Missing Values (ijeei-iaes)
Missing data are the absence of data items for a subject; they hide information that may be important. In practice, missing data have been one major factor affecting data quality, so missing value imputation is needed. Methods such as hierarchical clustering and k-means clustering are not robust to missing data and may lose effectiveness even with a few missing values. Therefore, to improve data quality, a method for missing value imputation is needed. In this paper, KNN-based and ARL-based imputation are introduced to impute missing values, and the accuracy of both algorithms is measured using the normalized root mean square error. The results show that ARL is the more accurate and robust method for missing value estimation.
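A sketch of KNN-based missing-value imputation evaluated with a normalised RMSE, as in the comparison described above (the ARL-based method is not shown here); the Iris data and the 10% missingness rate are illustrative choices.

```python
# KNN imputation and NRMSE over artificially hidden entries.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.impute import KNNImputer

rng = np.random.default_rng(8)
X = load_iris().data.copy()

# Randomly hide 10% of the entries.
mask = rng.random(X.shape) < 0.10
X_missing = X.copy()
X_missing[mask] = np.nan

X_imputed = KNNImputer(n_neighbors=5).fit_transform(X_missing)

# Normalised root mean square error over the hidden entries only.
nrmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2)) / (X.max() - X.min())
print(f"NRMSE of KNN imputation: {nrmse:.4f}")
```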
Statistical Data Analysis on a Data Set (Diabetes 130-US hospitals for years ...) (Seval Çapraz)
This document analyzes a dataset of diabetes records from 130 US hospitals from 1999-2008 using various statistical data analysis and machine learning techniques. It first performs dimensionality reduction using principal component analysis (PCA) and multidimensional scaling (MDS). It then clusters the data using hierarchical clustering and k-means clustering. Cluster validity is assessed using precision. Spectral clustering is also applied and validated using Dunn and Davies-Bouldin indexes, with complete linkage diameter performing best.
One fundamental problem in large scale image retrieval with the bag-of-features model is its lack of spatial information, which affects the accuracy of image retrieval. Based on the distribution of local features in an image, we propose a novel adaptive Hilbert-scan strategy which computes the weight of each path at increasingly fine resolutions. Owing to the merits of this strategy, the spatial information of objects is preserved more precisely in the Hilbert order. Extensive experiments on Caltech-256 show that our method obtains higher accuracy.
This document summarizes research on using particle swarm optimization to reconstruct microwave images of two-dimensional dielectric scatterers. It formulates the inverse scattering problem as an optimization problem to find the dielectric parameter distribution that minimizes the difference between measured and simulated scattered field data. Numerical results show that a particle swarm optimization approach can accurately reconstruct the shape and dielectric properties of a test cylindrical scatterer, with lower background reconstruction error than a genetic algorithm approach. The research demonstrates that particle swarm optimization is a suitable technique for high-dimensional microwave imaging problems.
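A generic particle swarm optimisation sketch of the kind used for the inverse problem above: minimise a least-squares misfit between "measured" and simulated data over an unknown parameter vector. The forward model here is a stand-in algebraic map, not an electromagnetic field solver, and the PSO coefficients are common textbook values rather than the paper's settings.

```python
# Generic PSO minimising a data-misfit objective (stand-in forward model).
import numpy as np

rng = np.random.default_rng(9)
M = rng.normal(0, 1, (6, 4))                          # fixed "measurement" operator

def forward(params):
    """Placeholder forward model mapping parameters to simulated data."""
    return M @ params + 0.1 * M @ (params ** 2)

true_params = np.array([0.5, -1.2, 2.0, 0.3])
measured = forward(true_params)

def misfit(p):
    return np.sum((forward(p) - measured) ** 2)

n_particles, dim, iters = 30, 4, 300
w, c1, c2 = 0.7, 1.5, 1.5                             # inertia and acceleration weights
x = rng.uniform(-3, 3, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), np.array([misfit(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    vals = np.array([misfit(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best parameters found:", np.round(gbest, 3), "misfit:", f"{misfit(gbest):.2e}")
```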
The main goal of cluster analysis is to classify elements into groups based on their similarity. Clustering has many applications such as astronomy, bioinformatics, bibliography, and pattern recognition. In this paper, a survey of clustering methods and techniques and an identification of the advantages and disadvantages of these methods are presented to give a solid background for choosing the best method to extract strong association rules.
This document discusses tracking multiple objects in video using probabilistic distributions. It proposes using particle filters to represent object positions with random particles. The method initializes particles randomly, updates their positions each frame based on probabilistic distributions, and uses maximum likelihood estimation to compute the distribution parameters. It models object motion using a beta distribution and estimates the distribution's alpha and beta parameters from each frame to predict object positions. The results show this approach can effectively track multiple moving objects, especially when there are occlusions.
This document discusses error propagation in quantitative analysis based on ratio spectra. It presents the mathematical foundations for studying noise-filtering properties and error propagation using ratio spectra and mean-centered ratio spectra methods. Computer modeling was performed to evaluate the standard deviation ratios and condition numbers for binary mixtures modeled by Gaussian doublets, showing the mean-centered ratio spectra method is more advantageous for proportional noise typical of UV-VIS instruments. The advantages and limitations of using ratio spectra methods are discussed.
Clustering using kernel entropy principal component analysis and variable ker... (IJECEIAES)
Clustering, as an unsupervised learning method, is the task of dividing data objects into clusters with common characteristics. In the present paper, we introduce an enhanced version of the existing EPCA data transformation method. By incorporating a kernel function into EPCA, the input space can be mapped implicitly into a high-dimensional feature space. Then, Shannon's entropy, estimated via the inertia contributed by every mapped object, is the key measure used to determine the optimal extracted feature space. Our proposed method works well with the clustering algorithm based on a fast search for cluster centers using local density computations. Experimental results show that the approach is feasible and efficient.
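A sketch of the kernel-entropy idea described above: map the data with an RBF kernel, eigendecompose the centred kernel matrix, and measure the Shannon entropy of the normalised eigenvalue "inertia" to judge how many extracted features to keep. This is a plain-NumPy illustration, not the paper's exact EPCA procedure or its density-peak clustering stage; the gamma value and the 95% inertia rule are assumptions.

```python
# Kernel matrix eigen-spectrum, its Shannon entropy, and a feature-count choice.
import numpy as np

rng = np.random.default_rng(10)
X = np.vstack([rng.normal(0, 0.3, (100, 2)) + c for c in ([0, 0], [3, 0], [0, 3])])

gamma = 0.5
d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * d2)                               # RBF kernel matrix

n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                        # centre in feature space

eigvals = np.clip(np.sort(np.linalg.eigvalsh(Kc))[::-1], 0, None)
inertia = eigvals / eigvals.sum()                     # contribution of each axis

entropy = -np.sum(inertia[inertia > 0] * np.log(inertia[inertia > 0]))
k = int(np.searchsorted(np.cumsum(inertia), 0.95)) + 1  # axes covering 95% inertia
print(f"Shannon entropy of the inertia spectrum: {entropy:.3f}, "
      f"features kept for 95% inertia: {k}")
```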
A comparative study of three validities computation methods for multimodel ap... (IJECEIAES)
The multimodel approach offers very satisfactory results in the modelling, diagnosis and control of complex systems. In the modelling case, this approach proceeds in three steps: the determination of the model library, the computation of the validities, and the establishment of the final model. In this context, this paper presents a comparative study of three recent methods for computing validities and highlights the method that offers the best performance in terms of precision. To achieve this goal, we apply the three methods to two simulation examples in order to compare their performance.
Finding Relationships between the Our-NIR Cluster Results (CSCJournals)
The problem of evaluating node importance in clustering has been an active research area in recent years, and many methods have been developed. Most clustering algorithms deal with general similarity measures; in real situations, however, data often change over time. Clustering this type of data not only decreases the quality of the clusters but also disregards the expectations of users, who usually require recent clustering results. In this regard we previously proposed the Our-NIR method, which improves on the method proposed by Ming-Syan Chen, as demonstrated by its node-importance results; node importance is very useful in clustering categorical data. The method still has a deficiency with respect to data labeling and outlier detection. In this paper we modify the Our-NIR method for evaluating node importance by introducing a probability distribution, and show through a comparison of results that it performs better.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more detail or to submit your article, please visit www.ijera.com
Analysis of Green's function and surface current density for rectangular micr... (eSAT Publishing House)
An input enhancement technique to maximize the performance of page replacemen... (eSAT Journals)
Abstract One of the main goals of memory management is to allow multiprogramming. Several strategies are used to allocate memory to the processes that need it. These strategies require that the entire process be in memory before execution, and some of them suffer from fragmentation. Virtual memory does not require the entire process to be in memory before execution; it loads only those pages of the process into memory that are needed for execution. This can be achieved using a paging scheme with a number of frames in memory to accommodate the pages. Whenever one of the pages in memory needs to be replaced, a page replacement algorithm is used. The performance of these page replacement algorithms depends on the total number of page faults incurred. In this paper, two new algorithms, Comparison Counting FIFO (CC-FIFO) and Distribution Counting FIFO (DC-FIFO), are proposed using an input enhancement technique. Our results and calculations show that the proposed algorithms minimize the page fault rate compared to the FIFO, LRU and Optimal page replacement algorithms. Keywords: Page replacement; Page fault; FIFO; OPT; LRU; Comparison Counting; Distribution Counting
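A sketch of a plain FIFO page-replacement simulator that counts page faults, the baseline against which the proposed CC-FIFO and DC-FIFO variants above are measured (the input-enhancement counting step itself is not reproduced here).

```python
# FIFO page replacement: count page faults for a reference string.
from collections import deque

def fifo_page_faults(reference_string, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page in frames:
            continue                                  # hit
        faults += 1                                   # miss: page fault
        if len(frames) == n_frames:
            frames.remove(queue.popleft())            # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print("FIFO page faults with 3 frames:", fifo_page_faults(refs, 3))
```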
An experimental investigation on flexural behaviour of fibre reinforced robo ... (eSAT Journals)
Abstract Nowadays, owing to the many civil engineering constructions throughout the globe, the use of natural sand is very large, and it is slowly becoming scarce. For this reason, the use of Robo sand (crushed or artificial sand) has gained momentum. This paper presents a brief study on the flexural behaviour of fibre reinforced ferrocement elements made with artificial sand (Robo sand). Nearly 30 mortar cubes and 180 flexural specimens were cast and tested, with variables such as different percentages of steel fibre, number of wire mesh layers and different span to depth ratios (a/d). The results show that, with an increase in the percentage of fibres, the compressive strength of mortar, first crack load, ultimate load in flexure, flexural stress at first crack load, flexural stress at ultimate load and energy absorption increase up to a certain extent and thereafter decrease. These strength parameters are also found to increase with the number of wire mesh layers, and to decrease with increasing a/d ratio, except for the flexural stresses at first crack load and ultimate load. The paper also presents the load-deflection behaviour and crack pattern for the variables studied, together with a comparison of the behaviour of Robo sand specimens with that of natural sand specimens. Finally, an analytical model is proposed for Mcr and Mu that includes most of the variables used in the present investigation. Keywords: Robo sand, Span to Depth ratio (a/d), Volume percentage of fibres (Vf), Number of Mesh Layers (N).
Design and development of intelligent electronics travelling aid for visually... (eSAT Journals)
Abstract The dynamic nature of the environment poses a big challenge to the visually impaired in navigation. This compels most visually impaired persons in developing countries, who generally do not get any technology support, to depend on the visual sense of others, thus undermining their independence. The intelligent electronic travelling aid for the visually impaired (IETA-VI) designed and developed by the authors and reported here intends to provide a solution to the navigational challenges of visually impaired individuals. The system uses ultrasonic detection technology to detect any obstacle in the user's path and then converts the distance to the obstacle into voice using voice synthesis technology, so as to inform the visually impaired user. The device uses GPS and GSM technologies to determine the location of the user and to send this location to his/her care givers on a mobile phone, respectively. The device also recognizes the voice signals of the visually impaired user when in distress, using voice recognition technology, and sends an SMS to the care givers giving the user's location and asking them to help. In a moment of emergency, when the visually impaired user requires the attention of his/her care givers, the device provides three alternatives. The first is an emergency key which, when pressed, automatically sends the location of the visually impaired user to the care givers. The second is that the user shouts (gives a voice command), and the device responds by sending the user's location to the care givers. The third takes care of the situation when the user is unconscious or cannot even speak: a display device shows the names and contact details of the care givers, so that someone can offer a helping hand. KEYWORDS: Ultrasonic detection, Voice recognition, Voice synthesis, GPS, GSM.
G code based data receiving and control system (eSAT Journals)
Abstract Today, technical research among engineering students has increased. To design and develop such hardware projects, real time data may need to be collected from a data acquisition system. These systems are manufactured by very few companies in India and are very expensive for students, who therefore need a low cost data acquisition system to perform their research work. In this paper we explain and implement a data acquisition (DAQ) system using the ATmega16 microcontroller. The GUI and data processing for the DAQ system have been designed in LabVIEW, and the system has been tested for different tasks. Keywords: Data Acquisition; LabVIEW; G-code; AVR ATmega-16
A research on significance of Kalman filter approach as applied in electrical... (eSAT Journals)
Abstract Recently, AC distribution systems have experienced high harmonic pollution, and electrical power system parameters are often mixed with noise. Ideally, an AC power system should have a constant frequency at a specific voltage, but the presence of connected nonlinear loads and injection into the grid from non-sinusoidal active sources has contributed greatly to the distortion of both current and voltage waveforms. This has increased system losses and consequently affected other equipment connected to the system. There is therefore a need to mitigate these effects if they cannot be eliminated altogether, hence the proposition of the Kalman filter. The Kalman filter has been very useful in electrical power engineering, particularly in harmonic estimation. It has also found its way into the analysis of power system dynamics, optimal operation and control of motors, relay operation and protection, and accurate short- and medium-term electrical load forecasting. This paper highlights the significance of the Kalman filter methodological approach as adopted in electrical power systems.
Keywords: Kalman Filter; Electrical Power System; Electrical Load; Harmonic Estimation.
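A sketch of Kalman-filter harmonic estimation of the kind the abstract refers to: the in-phase and quadrature amplitudes of one known-frequency component are tracked as a nearly static state from noisy samples. The signal model, noise levels and single-harmonic restriction are illustrative assumptions.

```python
# Kalman filter estimating the amplitudes of a 50 Hz component from noisy samples.
import numpy as np

rng = np.random.default_rng(11)
f, fs = 50.0, 2000.0                                  # 50 Hz fundamental, 2 kHz sampling
t = np.arange(0, 0.2, 1.0 / fs)
a_true, b_true = 1.0, -0.4
z = a_true * np.cos(2 * np.pi * f * t) + b_true * np.sin(2 * np.pi * f * t) \
    + rng.normal(0, 0.1, t.size)                      # noisy measurements

x = np.zeros(2)                                       # state: [a, b]
P = np.eye(2) * 10.0                                  # state covariance
Q = np.eye(2) * 1e-6                                  # process noise (slow drift)
R = 0.1 ** 2                                          # measurement noise variance

for k, tk in enumerate(t):
    P = P + Q                                         # predict (identity dynamics)
    H = np.array([np.cos(2 * np.pi * f * tk), np.sin(2 * np.pi * f * tk)])
    S = H @ P @ H + R                                 # innovation variance (scalar)
    K = P @ H / S                                     # Kalman gain
    x = x + K * (z[k] - H @ x)                        # state update
    P = P - np.outer(K, H @ P)                        # covariance update

print(f"estimated amplitudes a = {x[0]:.3f}, b = {x[1]:.3f} (true {a_true}, {b_true})")
```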
3D MRF based video tracking in the compressed domain (eSAT Journals)
Abstract Object tracking is an interesting and necessary procedure for many real time applications, but it is challenging because of sequences with abrupt motion, drastic illumination change, large pose variation, occlusion, cluttered background and camera shake. This paper presents a novel method of object tracking that combines the spatio-temporal Markov random field (ST-MRF) and online discriminative feature selection (ODFS) algorithms, which overcomes the above mentioned problems and provides better tracking. The method is also capable of tracking multiple objects in a video sequence, even in the presence of object interactions and occlusions, and achieves good results with real time performance. Keywords: Video object tracking, spatio-temporal Markov random field (ST-MRF), online discriminative feature selection (ODFS).
Nonlinear prediction of human resting state functional connectivity based on ... (eSAT Journals)
Abstract Mounting evidence demonstrates that neuronal activity derived from functional magnetic resonance imaging (fMRI) relates to the underlying anatomical circuitry measured by diffusion tensor/spectrum imaging (DTI/DSI). However, exploring the relationship between functional connectivity (FC) and structural connectivity (SC) remains challenging and has therefore motivated a number of computational models to investigate the extent to which the dynamics depend on the topology. Nevertheless, most of these models are complex and difficult to treat analytically. In this paper, for simplicity, we use four network communication measures extracted from SC, together with polynomial curve fitting, to predict FC. Our results indicate that all of these measures predict FC via the nonlinear fitting method. Moreover, compared with the linear method, the fit between predicted FC and empirical FC is higher when a nonlinear transformation is applied to the communication measures, which may help to shed light on the function-structure relationship.
Key Words: brain connectivity; fMRI; DTI/DSI; network communication measure; nonlinear fitting
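A sketch of the linear-versus-nonlinear fitting comparison described above: fit FC from a structural communication measure with polynomials of degree 1 and 3 and compare the correlation between predicted and "empirical" FC. The synthetic SC/FC relationship below is an assumption used only to make the example runnable.

```python
# Linear vs. cubic polynomial fit of FC from a communication measure.
import numpy as np

rng = np.random.default_rng(12)
sc_measure = rng.uniform(0, 1, 500)                          # e.g. a communicability score
fc_empirical = np.tanh(3.0 * sc_measure - 1.0) + rng.normal(0, 0.1, 500)

for degree in (1, 3):
    coeffs = np.polyfit(sc_measure, fc_empirical, degree)
    fc_pred = np.polyval(coeffs, sc_measure)
    r = np.corrcoef(fc_pred, fc_empirical)[0, 1]
    print(f"degree-{degree} fit: correlation with empirical FC = {r:.3f}")
```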
Developing an actual very high frequency antenna using genetic algorithms (eSAT Journals)
Abstract This work concerns an antenna for the 88-108 MHz very high frequency (VHF) broadcast audio frequency-modulation (FM) band. The antenna is intended to fit in the flat area inside the head-band of an over-ear hearing-protector headset. The space for the antenna is limited by an existing head-band design, whose unused internal area is the space studied here. A genetic algorithm is described for the multiple-objective optimization of the antenna matching and radiation pattern. The results of multiple genetic algorithm evaluations are described, and possible further improvements are outlined. Progress is made on the development of the antenna: the antenna radiation pattern evolves in a desirable way, but a difficulty in solving the antenna matching problem is identified. Research towards resolving the antenna matching problem is described in this paper. Keywords: Antenna, Modulation, Genetic Algorithm, Frequency, Head-Band, Very High Frequency Broadcast.
Performance evaluation of modified modular exponentiation for RSA algorithm (eSAT Journals)
Abstract
Authentication is a very important application of public-key cryptography. Cryptographic algorithms make use of secret keys known only to the parties that send and receive information. When the key is known, the encryption/decryption process is an easy task; without the correct key, decryption is practically impossible. The shared public key is managed by the sender to produce a message authentication code (MAC) for every transmitted message. There are many algorithms that provide security for message authentication. RSA is one of the best-known algorithms for public-key-based message authentication, but it takes more time for the encryption and/or decryption process when the key length is large. This research work evaluates the performance of the RSA algorithm with a modified modular exponentiation technique for message authentication. As a result, the modified modular-exponentiation-based RSA algorithm reduces the execution time of the encryption and decryption processes.
Key Words: Cryptography, Message authentication, RSA, Modular Exponentiation.
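The specific modification to modular exponentiation evaluated in the paper is not described here; as a baseline for comparison, the sketch below shows the standard right-to-left square-and-multiply modular exponentiation used in textbook RSA, exercised on the classic toy parameters p = 61, q = 53, e = 17.

```python
def mod_exp(base, exponent, modulus):
    # Right-to-left square-and-multiply modular exponentiation
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                      # current bit is 1: multiply into the result
            result = (result * base) % modulus
        base = (base * base) % modulus        # square for the next bit
        exponent >>= 1
    return result

# Toy RSA-style round trip (tiny primes, illustration only, not secure)
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                           # private exponent (Python 3.8+ modular inverse)
m = 65
c = mod_exp(m, e, n)                          # "encrypt"
assert mod_exp(c, d, n) == m                  # "decrypt" recovers the message
print("ciphertext:", c)
```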
Fpga implementation of wireless data logger systemeSAT Journals
Abstract This project describes the design and development of hardware and software modules for a wireless data logger system using ZigBee. FPGA firmware is developed in VHDL using Xilinx ISE 14.7; the design is simulated using the ModelSim simulator and the code is synthesized on an Altera Cyclone IV FPGA. The end user can remotely change data logger settings such as date and time. The advent of Field Programmable Gate Arrays (FPGAs) has made compact realization of embedded systems possible: because FPGAs are high-density logic devices, certain software functions can be realized efficiently in hardware. Keywords: Data logger; FPGA; ZigBee; Real time clock; ADC.
Alternative fundings to improve road safety in malaysiaeSAT Journals
Abstract Road safety is a major transport, health and social issue worldwide: an estimated 1.3 million road users are killed on the roads every year, of which 90% are in low- and middle-income countries (LMIC), where 72% of the world's population lives but only half of the world's registered vehicles are owned and driven. In terms of cost, an estimated USD 518 billion is lost yearly according to the World Health Organisation, WHO (2009). These poor road safety records require immediate action in the areas of management, institutional reform and funding. Malaysia is an established, dynamic and progressive LMIC seeking to improve its road safety performance, and until today it has depended mostly on the government's revenues to finance its road safety plans. This practice, however, may burden the yearly government budget, which also needs to cater for other sectors such as education, health and defence. To this end, this paper explores and critically evaluates the current situation of road safety, including its funding mechanisms, on a global scale as well as in Malaysia. In an effort to improve the situation, the paper aims at analysing the effectiveness of funding mechanisms in enhancing road safety. A number of examples of successful road funding mechanisms worldwide are presented together with implementation issues, with the view to suggest options to improve road safety management and financing at both national and local level in Malaysia. Index terms: Road safety funding, Road safety in Malaysia, Second generation road fund.
Water quality index for groundwater of southern part of Bangalore city (eSAT Journals)
Abstract Combining different water quality parameters into one single number leads to an easy interpretation of an index, thus providing an important tool for management and decision-making purposes. The water quality index is a statistical index based on the rank order of observations. The purpose of an index is to transform complex water quality data into information that is easily understandable and usable by the general public. As a part of this research work, 14 important water quality parameters, viz. pH, EC, Cl, Fe, F, SO4, T.H, Ca, Mg, TDS, Na, K, Zn and NO3, were used to evaluate the WQI of the southern part of Bangalore City. The water quality index varies between 19 and 122. The index is classified as Excellent for values less than 10, Good for 10 to 30, Medium for 30 to 50, Bad for 50 to 75, and Very Bad for values greater than 75; the corresponding share of samples in each class is 0%, 18%, 18%, 28%, and 36% respectively. Keywords: WQI, Bangalore
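The abstract does not state which index formulation was used; a common choice is the weighted arithmetic WQI, sketched below with illustrative permissible limits and a made-up sample (not the paper's data).

```python
# Weighted arithmetic water quality index (illustrative values, not the paper's data).
# q_i = 100 * C_i / S_i (quality rating), w_i proportional to 1 / S_i (unit weight).
standards = {"TDS": 500.0, "TH": 300.0, "Cl": 250.0, "SO4": 200.0, "NO3": 45.0}  # assumed limits
sample    = {"TDS": 640.0, "TH": 410.0, "Cl": 180.0, "SO4": 150.0, "NO3": 52.0}  # made-up sample

weights = {p: 1.0 / s for p, s in standards.items()}
w_sum = sum(weights.values())

wqi = sum(weights[p] * (100.0 * sample[p] / standards[p]) for p in standards) / w_sum
print(f"WQI = {wqi:.1f}")   # >75 would fall in the 'Very Bad' class of the abstract
```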
Finite element modeling and bending stress analysis of non standard spur geareSAT Journals
Abstract Gears are toothed wheels, transmitting power and motion from one shaft to another by means of successive engagement of teeth. Having a high degree of reliability and compactness, a high velocity ratio and the ability to transmit motion at very low velocity, gears are gaining importance as the most efficient means of transmitting power. A gearing system is susceptible to problems such as interference, backlash and undercut. Contact between portions of tooth profiles that are not conjugate is called interference. Furthermore, due to interference and in the absence of undercut, the involute tip or face of the driven gear tends to dig into the non-involute flank of the driver. The response of a spur gear and its wear is an engineering problem that has not been completely overcome yet. With the aim of overcoming such defects and increasing the efficiency of the gearing system, a non-standard spur gear, i.e. an asymmetric spur gear having different pressure angles for the drive and coast sides of the tooth, comes into the picture. This paper emphasizes the generation of an asymmetric spur gear tooth using modeling software; the bending stress at the root of the asymmetric spur gear tooth is estimated by finite element analysis using ANSYS software, and the results are compared with those of a standard spur gear tooth. Keywords: Asymmetric spur gear, Bending stress, Finite element method, Pressure angle
Fatigue life estimation of rear fuselage structure of an aircraft (eSAT Journals)
This document summarizes a study on estimating the fatigue life of the rear fuselage structure of an aircraft. The researchers created a finite element model of the rear fuselage structure in CATIA and analyzed it in MSC.PATRAN and MSC.NASTRAN to identify high stress regions. They found the maximum stress locations were at cut-out corners and rivet holes in the skin. A local model with finer meshing around the cargo door cut-out was also analyzed. Fatigue life was then estimated using Miner's rule and an S-N curve, accounting for factors like surface roughness and reliability. Damage was accumulated over the expected load cycles to predict fatigue life until crack initiation.
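Miner's rule itself is simple to state; the sketch below accumulates damage D = Σ n_i/N_i for a hypothetical load spectrum (the stress blocks, applied cycles and S-N allowable cycles are illustrative, not the study's values).

```python
# Palmgren-Miner linear damage accumulation: D = sum(n_i / N_i); crack initiation when D ~ 1.
spectrum = [
    # (applied cycles n_i per load block, allowable cycles N_i from the S-N curve) - illustrative
    (20_000, 1_000_000),
    (5_000,    200_000),
    (500,       20_000),
]

damage_per_block = sum(n / N for n, N in spectrum)
blocks_to_failure = 1.0 / damage_per_block
print(f"damage per load block = {damage_per_block:.4f}")
print(f"estimated load blocks to crack initiation = {blocks_to_failure:.0f}")
```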
A study on the marshall properties of dbm mix prepared using vg 30 and crmb-5...eSAT Journals
produce a mix which is supposed to be sufficiently sturdy, long-lasting and resistive. DBM is used as a binder course in highway pavement. Binder is a prime material in the bituminous mix, and the Marshall properties of a bituminous mix vary from binder to binder. In this work an effort has been made to evaluate the Marshall properties of dense bituminous macadam prepared using VG-30 and CRMB-55 as binder materials. The DBM mix is prepared using 2% lime as filler material and VG-30 or CRMB-55 as binder material. The Marshall method of bituminous mix design is adopted to decide the optimum bitumen content (OBC), and the Marshall properties were determined at the optimum bitumen content. On the basis of the limited laboratory studies carried out, it is concluded that CRMB-55 is the superior binder material in terms of Marshall properties.
Key Words: VG-30, CRMB-55, Lime, and DBM.
Implementation of AC induction motor control using constant V/Hz principle and... (eSAT Journals)
Abstract
This paper presents the hardware implementation of V/f control of induction motor using sine wave PWM method. Because of its
simplicity, the V/f control, also called scalar control, is the most widely used speed control method in industrial
applications. In this method the ratio between stator voltage to frequency is maintained constant so that the stator flux is
maintained constant. As the stator flux is maintained constant, the torque developed by the motor depends only on the slip speed
and is independent of the supply frequency. Thus we can control the speed and torque of induction motor by simply controlling the
slip speed of induction motor. The complete control is achieved with the help of TMS320F28027 Development Board. One of the
basic requirements of this scheme is the PWM inverter. The gating signals are generated using sine wave PWM technique.
Implementation issues such as PWM signal generation, ramp control, v/f control, inverter design are discussed. The results are
discussed based on the various waveforms.
KeyWords: v/f control, Induction motor, PWM, and Scalar control
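As a rough illustration of the constant V/f principle described above, the sketch below computes a stator voltage command proportional to the commanded frequency, with a simple low-speed boost and a clamp at rated voltage; the 400 V / 50 Hz ratings and the boost value are assumed, not taken from the paper.

```python
def vf_command(f_cmd_hz, v_rated=400.0, f_rated=50.0, v_boost=20.0):
    """Constant V/f voltage command with a simple low-speed boost (illustrative ratings)."""
    if f_cmd_hz <= 0:
        return 0.0
    v = v_boost + (v_rated - v_boost) * (f_cmd_hz / f_rated)   # keep V/f approximately constant
    return min(v, v_rated)                                     # clamp above base speed

for f in (5, 25, 50, 60):
    print(f"{f:>3} Hz -> {vf_command(f):.1f} V")
```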
Design & analysis of needle roller bearing at gudgeon pin of ginning machine (eSAT Journals)
Abstract
Needle roller bearings have a high load-carrying capacity although they use small cylindrical rollers of low cross-section; they are therefore used in applications where space limitation is an important factor along with high dynamic load capacity. In a ginning machine, where space for driving the mechanism is limited, a needle roller bearing of 20 mm diameter is used between the connecting rod and the head pin to allow relative motion between the parts. Under the continuous impacting of the repetitive load, the bearing fails within a short period of time if proper lubricant is not supplied by a grease gun every 2 to 3 hours. Observation of failed bearings shows that they fail due to brinelling. Design and analysis of the needle roller bearing is therefore vital to calculate the life of the bearing and to verify whether this bearing is suitable or not. Since the bearing is a rolling-contact bearing, the radial load applied on it must be calculated; from this the life of the bearing can be computed, and finally analysis with ANSYS software is carried out to confirm the conclusion for the bearing.
Key Words: brinelling, connecting rod, dynamic load capacity, head pin, life, radial load, repetitive load
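The paper's load and life calculations are not reproduced here; the sketch below applies the standard basic rating life formula for roller bearings, L10 = (C/P)^(10/3) million revolutions, with illustrative load and speed values.

```python
def l10_life_hours(c_dyn_n, p_radial_n, speed_rpm):
    """Basic rating life for roller bearings: L10 = (C/P)^(10/3) million revolutions."""
    l10_mrev = (c_dyn_n / p_radial_n) ** (10.0 / 3.0)
    return l10_mrev * 1e6 / (60.0 * speed_rpm)

# Illustrative numbers only (not taken from the paper):
C = 12_400.0    # dynamic load capacity, N
P = 3_000.0     # equivalent radial load, N
N = 900.0       # shaft speed, rpm
print(f"L10 life ~ {l10_life_hours(C, P, N):,.0f} hours")
```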
Weed and crop segmentation and classification using area thresholding (eSAT Journals)
Abstract In the agricultural industry, weed and crop identification and classification are of major technical and economical importance. Two classification algorithms are considered in this paper, and the better one is selected to classify weed and crop in the images. The proposed system has three main parts: segmentation, classification and error calculation. The developed algorithm, based on area thresholding, has been tested on weeds from various locations. Forty-one sample images have been tested and the weed coverage rate for some of them is illustrated; the misclassification rate is also computed. An algorithm has been developed to automate the tasks of segmentation and classification. The overall process is implemented in MATLAB. Keywords - Objects segmentation, Image processing, Plant classification, Area Thresholding
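The exact thresholding rule is not given in this abstract; the sketch below illustrates one plausible reading of area thresholding, labelling connected vegetation regions and splitting them into crop and weed by pixel area (the threshold value and the crop-is-larger assumption are illustrative only).

```python
import numpy as np
from scipy import ndimage

def classify_by_area(green_mask, area_threshold=500):
    """Label connected vegetation regions and split them into crop vs. weed by pixel area."""
    labels, n = ndimage.label(green_mask)
    areas = ndimage.sum(green_mask, labels, index=range(1, n + 1))
    crop = np.isin(labels, [i + 1 for i, a in enumerate(areas) if a >= area_threshold])
    weed = np.isin(labels, [i + 1 for i, a in enumerate(areas) if 0 < a < area_threshold])
    return crop, weed

# Toy binary vegetation mask: one large blob (treated as crop) and one small blob (weed)
mask = np.zeros((100, 100), dtype=bool)
mask[10:50, 10:50] = True      # 1600 px region
mask[70:75, 70:75] = True      # 25 px region
crop, weed = classify_by_area(mask)
print("crop pixels:", int(crop.sum()), "weed pixels:", int(weed.sum()))
```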
Wdm based fso link optimizing for 180 km using bessel filtereSAT Journals
Abstract The free space optical link is a growing field in communication due to its advantages of wide bandwidth, high security and easy installation. A wavelength division multiplexing (WDM) access network using free space optical (FSO) communication in different weather conditions, such as haze and rain, is discussed in this article, and the possibility of a communication link of up to 180 km in clear weather with a 2.5 Gbps data rate at a wavelength of 1550 nm, and up to 54 km in haze conditions using the same data rate and wavelength, is examined. Further, the effect of using two different low-pass filters (Gaussian and Bessel) at the receiver is discussed, and it is concluded that the Bessel filter is better at a 2.5 Gbps data rate for the WDM-based FSO link. Keywords: optical communications, wavelength Division Multiplexing (WDM), free space optics (FSO)
Parametric estimation of construction cost using combined bootstrap and regre... (IAEME Publication)
The document discusses a method for estimating construction costs using a combined bootstrap and regression technique. It involves using historical project data to develop a regression model relating cost to key parameters. A bootstrap resampling method is then used to generate multiple simulated datasets from the original. Regression analysis is performed on each resampled dataset to calculate coefficients and develop a cost range estimate that captures uncertainty. This allows integrating probabilistic and parametric estimation methods while requiring fewer assumptions than traditional statistical techniques. The goal is to provide more accurate conceptual cost estimates early in projects when design information is limited.
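As a sketch of the combined bootstrap-and-regression idea on made-up data, the code below resamples a small synthetic project dataset with replacement, refits an ordinary least squares model on each resample, and reports a percentile range for the predicted cost of a new project.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic historical projects: columns = [floor area, storeys], target = cost (made-up data)
X = rng.uniform([500, 1], [5000, 20], size=(40, 2))
cost = 120.0 * X[:, 0] + 30_000.0 * X[:, 1] + rng.normal(0, 50_000, 40)

def fit_ols(X, y):
    A = np.column_stack([np.ones(len(X)), X])      # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

new_project = np.array([1.0, 2500.0, 8.0])          # [intercept, area, storeys]

# Bootstrap: resample projects with replacement, refit, collect predictions
preds = []
for _ in range(2000):
    idx = rng.integers(0, len(X), len(X))
    beta = fit_ols(X[idx], cost[idx])
    preds.append(new_project @ beta)

lo, mid, hi = np.percentile(preds, [5, 50, 95])
print(f"cost estimate ~ {mid:,.0f} (90% range {lo:,.0f} - {hi:,.0f})")
```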
Comparative analysis of DOA and beamforming algorithms for smart antenna systems (eSAT Journals)
Abstract This paper revolves around the implementation of direction-of-arrival (DOA) and adaptive beamforming algorithms for smart antenna systems. It also investigates the implementation of the algorithms on various planar array geometries, viz. circular and rectangular. The MUSIC algorithm primarily finds the possible location of the desired user, and adaptive beamforming algorithms such as LMS, RLS and CMA adapt the weights of the array. DOA estimation gives the maximum peak of the spectrum with respect to the angle of arrival where the desired user is supposed to exist. After DOA estimation, the weights of the array antenna are changed with the changing received signal. This methodology is called spectral estimation; it allows the antenna pattern to steer in the desired direction estimated by DOA and simultaneously null out the interfering signals. Rate of convergence is the major criterion for comparing the adaptive beamforming algorithms. Keywords: DOA, MUSIC, LMS, RLS, CMA, SAS.
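Of the beamforming algorithms mentioned, LMS has the simplest update; the sketch below adapts the weights of a simulated 8-element uniform linear array toward a known reference signal (the array geometry, angles and step size are illustrative, not the paper's settings).

```python
import numpy as np

rng = np.random.default_rng(0)
M, d_spacing = 8, 0.5            # 8-element uniform linear array, half-wavelength spacing

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(-2j * np.pi * d_spacing * np.arange(M) * np.sin(theta))

# Desired user at 20 deg, interferer at -40 deg (illustrative angles)
n_snap = 2000
s = np.sign(rng.standard_normal(n_snap))          # desired BPSK-like reference signal
i = np.sign(rng.standard_normal(n_snap))          # interference
X = (np.outer(steering(20), s) + np.outer(steering(-40), i)
     + 0.05 * (rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap))))

# LMS adaptation of the array weights towards the known reference signal s
w = np.zeros(M, dtype=complex)
mu = 0.005
for k in range(n_snap):
    x = X[:, k]
    y = np.vdot(w, x)                 # beamformer output w^H x
    e = s[k] - y                      # error against the reference
    w = w + mu * np.conj(e) * x       # LMS weight update

print("response toward desired (20 deg):  ", abs(np.vdot(w, steering(20))))
print("response toward interferer (-40 deg):", abs(np.vdot(w, steering(-40))))
```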
A Predictive Stock Data Analysis with SVM-PCA Model
Divya Joseph and Vinai George Biju
HOV-kNN: A New Algorithm to Nearest Neighbor Search in Dynamic Space
Mohammad Reza Abbasifard, Hassan Naderi and Mohadese Mirjalili
A Survey on Mobile Malware: A War without End
Sonal Mohite and Prof. R. S. Sonar
An Efficient Design Tool to Detect Inconsistencies in UML Design Models
Mythili Thirugnanam and Sumathy Subramaniam
An Integrated Procedure for Resolving Portfolio Optimization Problems using Data Envelopment Analysis, Ant Colony Optimization and Gene Expression Programming
Chih-Ming Hsu
Emerging Technologies: LTE vs. WiMAX
Mohammad Arifin Rahman Khan and Md. Sadiq Iqbal
Introducing E-Maintenance 2.0
Abdessamad Mouzoune and Saoudi Taibi
Detection of Clones in Digital Images
Minati Mishra and Flt. Lt. Dr. M. C. Adhikary
The Significance of Genetic Algorithms in Search, Evolution, Optimization and Hybridization: A Short Review
A Novel Approach to Mathematical Concepts in Data Mining (ijdmtaiir)
This paper describes three fundamental mathematical programming approaches that are relevant to data mining: feature selection, clustering and robust representation. It covers two clustering algorithms, the k-means algorithm and the k-median algorithm. Clustering is illustrated by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for Knowledge Discovery in Databases (KDD). The results of the k-median algorithm are used to identify blood cancer patients in a medical database. K-means clustering is a data mining/machine learning algorithm used to cluster observations into groups of related observations without any prior knowledge of those relationships. The k-means algorithm is one of the simplest clustering techniques and is commonly used in medical imaging, biometrics and related fields.
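As a minimal illustration of the k-means step discussed above, the sketch below implements Lloyd's algorithm on two synthetic groups of points; the data and the choice k = 2 are made up.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's k-means: assign points to the nearest centroid, recompute means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Two well-separated synthetic groups (e.g. two kinds of patient records)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(6, 1, (50, 2))])
labels, centroids = kmeans(X, k=2)
print("cluster sizes:", np.bincount(labels))
print("centroids:\n", centroids)
```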
Kernel based similarity estimation and real time tracking of moving ... (IAEME Publication)
This document discusses a kernel-based mean shift algorithm for real-time object tracking. It presents the following:
1) The algorithm uses kernel density estimation to calculate the similarity between a target model and candidate windows, using the Bhattacharyya coefficient.
2) It can successfully track objects moving uniformly at slow speeds but struggles with fast or non-uniform motion, or with changes in scale.
3) The algorithm was tested on video streams and could track slowly moving objects but failed for fast or irregular motion; adaptive target windows are needed to handle changes in scale.
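The Bhattacharyya coefficient at the core of the similarity measure is easy to sketch; the toy histograms below are illustrative only.

```python
import numpy as np

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

# Toy colour histograms for a target model and two candidate windows
target     = np.array([10, 40, 30, 20], dtype=float)
candidate  = np.array([12, 38, 28, 22], dtype=float)
distractor = np.array([40, 5, 5, 50], dtype=float)

print("candidate similarity :", round(bhattacharyya(target, candidate), 3))
print("distractor similarity:", round(bhattacharyya(target, distractor), 3))
```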
IRJET- Bidirectional Graph Search Techniques for Finding Shortest Path in Ima... (IRJET Journal)
This document presents a study comparing different graph search algorithms for solving mazes represented as images. The paper implements bidirectional versions of breadth-first search (BFS) and A* search and compares their performance on 8x8 and 16x16 mazes to the traditional unidirectional algorithms. For smaller 8x8 mazes, BFS performed best but for larger 16x16 mazes, bidirectional BFS was most efficient at finding the shortest path. Bidirectional search improves results but uses more space. The key aspect is finding the meeting point where the two searches meet, guaranteeing a solution if one exists.
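A minimal sketch of bidirectional BFS on a grid maze is shown below; the maze, the smaller-frontier expansion heuristic, and the early return at the first meeting point are illustrative simplifications rather than the paper's exact implementation.

```python
from collections import deque

def bidirectional_bfs(grid, start, goal):
    """Bidirectional BFS on a 0/1 grid maze (0 = free, 1 = wall). Returns a path length or -1.
    For brevity this returns at the first meeting; a production version would finish the
    current level and take the minimum over all meeting nodes."""
    if start == goal:
        return 0
    h, w = len(grid), len(grid[0])
    dist = [{start: 0}, {goal: 0}]              # distances from each end
    frontiers = [deque([start]), deque([goal])]
    while frontiers[0] and frontiers[1]:
        side = 0 if len(frontiers[0]) <= len(frontiers[1]) else 1   # expand smaller frontier
        for _ in range(len(frontiers[side])):                       # one BFS level
            r, c = frontiers[side].popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] == 0 and (nr, nc) not in dist[side]:
                    dist[side][(nr, nc)] = dist[side][(r, c)] + 1
                    if (nr, nc) in dist[1 - side]:                   # the two searches meet
                        return dist[side][(nr, nc)] + dist[1 - side][(nr, nc)]
                    frontiers[side].append((nr, nc))
    return -1

maze = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print("shortest path length:", bidirectional_bfs(maze, (0, 0), (3, 3)))
```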
Enhancing the performance of cluster based text summarization using support v...eSAT Journals
Abstract Technology is evolving day by day, and this growth reflects the effort to reduce human work and to make systems as automatic as possible. The same is true of digital information: due to the enormous increase in the use of the internet, there has been a striking increase in digital information. This digital information is characterized by information in different forms, the same information in different forms, unrelated information, and also a lot of redundant information. Most of the time we require textual information, and to retrieve a small piece of information one has to go through thousands of documents and read all the retrieved documents, irrespective of whether they contain useful information or not. It becomes very difficult to read all the retrieved documents and prepare an exact summary of them in time. Besides this, retrieved information is often repeated across many documents. This has led to research in the area of text mining, and text summarization is one of the challenging tasks in this field. Keywords: Entropy, FCM, Purity, SVM
The document proposes improvements to the traditional K-means clustering algorithm to increase accuracy and efficiency. It discusses selecting initial centroids and assigning data points to clusters. The improved algorithm determines initial centroids by calculating distances from data points to the mean and partitioning into K clusters. It then assigns points to centroids based on minimum distance, only recalculating distances if they increase. Experimental results on standard datasets show the improved algorithm takes less time with higher accuracy compared to traditional K-means.
EDGE DETECTION IN SEGMENTED IMAGES THROUGH MEAN SHIFT ITERATIVE GRADIENT USIN...ijscmcj
In this paper, we propose a new method for edge detection in images obtained from the mean shift iterative algorithm. Comparable, proportional and symmetrical images are defined and the importance of Ring Theory is explained. A relation of equivalence among proportional images is defined so that image groups form equivalence classes. The length of the mean shift vector is used to quantify the homogeneity of the neighborhoods, which gives a measure of how uniform the regions that compose the image are. Edge detection is carried out using the mean shift gradient based on symmetrical images. The differences among gray-level values are accentuated or decreased to enhance the contours of the regions of interest. The images chosen for the experiments were standard images and real images (cerebral hemorrhage images). The obtained results were compared with the Canny detector, and our results showed good performance in terms of edge continuity.
ESTIMATING PROJECT DEVELOPMENT EFFORT USING CLUSTERED REGRESSION APPROACH (cscpconf)
Due to the intangible nature of “software”, accurate and reliable software effort estimation is a challenge in the software Industry. It is unlikely to expect very accurate estimates of software
development effort because of the inherent uncertainty in software development projects and the complex and dynamic interaction of factors that impact software development. Heterogeneity exists in the software engineering datasets because data is made available from diverse sources.
This can be reduced by defining certain relationship between the data values by classifying them into different clusters. This study focuses on how the combination of clustering and
regression techniques can reduce the potential problems in effectiveness of predictive efficiency due to heterogeneity of the data. Using a clustered approach creates the subsets of data having a degree of homogeneity that enhances prediction accuracy. It was also observed in this study that ridge regression performs better than other regression techniques used in the analysis.
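A minimal sketch of the cluster-then-regress idea, assuming scikit-learn is available: k-means reduces heterogeneity and a ridge model is then fitted per cluster (the two synthetic project populations are made up).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic heterogeneous effort data: two project populations with different cost behaviour
X1 = rng.uniform(0, 1, (60, 3)); y1 = 10 + 40 * X1[:, 0] + rng.normal(0, 2, 60)
X2 = rng.uniform(2, 3, (60, 3)); y2 = 80 + 5 * X2[:, 1] + rng.normal(0, 2, 60)
X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])

# Step 1: cluster the projects to reduce heterogeneity
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit a ridge regression per cluster
for c in np.unique(clusters):
    model = Ridge(alpha=1.0).fit(X[clusters == c], y[clusters == c])
    r2 = model.score(X[clusters == c], y[clusters == c])
    print(f"cluster {c}: n = {np.sum(clusters == c)}, in-sample R^2 = {r2:.3f}")
```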
Brain tumor segmentation using asymmetry based histogram thresholding and k m...eSAT Publishing House
This document presents a method for segmenting brain tumors from MRI images using asymmetry-based histogram thresholding and k-means clustering. The method involves 8 steps: 1) preprocessing the MRI image using sharpening and median filters, 2) computing histograms of the left and right halves of the image, 3) calculating a threshold value using the difference between left and right histograms, 4) applying thresholding and morphological operations to extract the tumor region, 5) applying k-means clustering and using the cluster centroids to refine the segmentation. The method is tested on 30 MRI images and results show the tumor region is accurately segmented. The segmented tumors can then be used for quantification, classification, and computer-assisted diagnosis of brain tumors.
Application of normalized cross correlation to image registrationeSAT Publishing House
Bio medical image segmentation using marker controlled watershed algorithm a ...eSAT Publishing House
Comparison of Cost Estimation Methods using Hybrid Artificial Intelligence on...IJERA Editor
Cost estimating at schematic design stage as the basis of project evaluation, engineering design, and cost
management, plays an important role in project decision under a limited definition of scope and constraints in
available information and time, and the presence of uncertainties. The purpose of this study is to compare the
performance of cost estimation models of two different hybrid artificial intelligence approaches: regression
analysis-adaptive neuro fuzzy inference system (RANFIS) and case based reasoning-genetic algorithm (CBRGA)
techniques. The models were developed based on the same 50 low-cost apartment project datasets in
Indonesia. Tested on another five data points, the models were proven to perform very well in terms of accuracy. The CBR-GA model was found to be the best performer but suffered from the disadvantage of needing 15 cost drivers, compared to only 4 cost drivers required by RANFIS for on-par performance.
Critical Paths Identification on Fuzzy Network Projectiosrjce
In this paper, a new approach for identifying the fuzzy critical path is presented, based on converting the fuzzy network project into a deterministic network project by transforming the parameter set of the fuzzy activities into the time probability density function (PDF) of each fuzzy activity time. A case study is considered as a numerical test problem to demonstrate our approach.
A novel scheme for reliable multipath routing through node independent direct...eSAT Journals
Abstract Multipath routing is essential for efficient data transmission in the wake of voice over IP and multimedia streaming. The growing usage of such network applications also demands fast recovery from network failures. Multipath routing is one of the promising routing schemes to accommodate the diverse requirements of the network, with provisions such as load balancing and improved bandwidth. Cho et al. introduced a resilient multipath routing scheme based on directed acyclic graphs. These graphs enable multipath routing over all possible edges while ensuring guaranteed recovery from single points of failure. We also built a prototype application that demonstrates the efficiency of the scheme. The simulation results revealed that the scheme is useful and can be used in real-world network applications.
Index Terms – Multipath routing, failure recovery, directed acyclic graphs
Segmentation of MR Images using Active Contours: Methods, Challenges and Appl...AM Publications
In recent years, Active contours have been widely studied and applied in medical image analysis. Active contours combine underlying information with high-level prior knowledge to achieve automatic segmentation for complex objects. Their applications include edge detection, segmentation of objects, shape modelling and object boundary tracking. This paper presents the development process of active contour models and describes the classical parametric active contour models, geometric active contour models, and new hybrid active contour models based on curve evolution and energy minimization techniques. It also discusses challenges and applications of active contour models in medical image segmentation.
Similar to Fault diagnosis using genetic algorithms and principal curves
Mechanical properties of hybrid fiber reinforced concrete for pavementseSAT Journals
Abstract
The effect of addition of mono fibers and hybrid fibers on the mechanical properties of concrete mixture is studied in the present
investigation. Steel fibers of 1% and polypropylene fibers 0.036% were added individually to the concrete mixture as mono fibers and
then they were added together to form a hybrid fiber reinforced concrete. Mechanical properties such as compressive, split tensile and
flexural strength were determined. The results show that hybrid fibers improve the compressive strength marginally as compared to
mono fibers. Whereas, hybridization improves split tensile strength and flexural strength noticeably.
Keywords:-Hybridization, mono fibers, steel fiber, polypropylene fiber, Improvement in mechanical properties.
Material management in construction – a case studyeSAT Journals
Abstract
The objective of the present study is to understand the problems that occur in a company because of improper material management. In construction project operations there is often a project cost variance in terms of materials, equipment, manpower, subcontractors, overhead cost and general conditions. Material is the main component in construction projects; therefore, if material management is not handled properly it will create a project cost variance. Project cost can be controlled by taking corrective actions against the cost variance. A methodology to diagnose and evaluate the procurement process involved in material management and to launch continuous improvement was therefore developed and applied. A thorough study was carried out, along with case studies, surveys and interviews with professionals involved in this area. As a result, a methodology for diagnosis and improvement was proposed and tested in selected projects. The results obtained show that the main problems of procurement are related to schedule delays and a lack of the specified quality for the project. To prevent this situation it is often necessary to dedicate important resources such as money, personnel and time to monitoring and controlling the process. A great potential for improvement was detected if state-of-the-art technologies such as electronic mail, electronic data interchange (EDI), and analysis were applied to the procurement process. These helped to eliminate the root causes of many types of problems that were detected.
Managing drought short term strategies in semi arid regions a case studyeSAT Journals
Abstract
Drought management needs multidisciplinary action. Interdisciplinary efforts among experts in various fields in the drought-prone areas help to achieve a tangible and permanent solution for this recurring problem. The Gulbarga district has a total area of around 16,240 sq. km and accounts for 8.45 per cent of the area of Karnataka state. The district is situated at latitude 17° 19' 60" North and longitude 76° 49' 60" East, entirely on the Deccan plateau, at a height of 300 to 750 m above MSL. With its sub-tropical, semi-arid climate, it is one of the drought-prone districts of Karnataka State, and drought management is very important for a district like Gulbarga. In this paper various short-term strategies are discussed to mitigate the drought condition in the district.
Keywords: Drought, South-West monsoon, Semi-Arid, Rainfall, Strategies etc.
Life cycle cost analysis of overlay for an urban road in Bangalore (eSAT Journals)
Abstract
Pavements are subjected to severe condition of stresses and weathering effects from the day they are constructed and opened to traffic
mainly due to its fatigue behavior and environmental effects. Therefore, pavement rehabilitation is one of the most important
components of entire road systems. This paper highlights the design of concrete pavement with added mono fibers like polypropylene,
steel and hybrid fibres for a widened portion of existing concrete pavement and various overlay alternatives for an existing
bituminous pavement in an urban road in Bangalore. Along with this, Life cycle cost analyses at these sections are done by Net
Present Value (NPV) method to identify the most feasible option. The results show that though the initial cost of construction of
concrete overlay is high, over a period of time it proves to be better than the bituminous overlay considering the whole life-cycle cost.
The economic analysis also indicates that, out of the three fibre options, hybrid reinforced concrete would be economical without
compromising the performance of the pavement.
Keywords: - Fatigue, Life cycle cost analysis, Net Present Value method, Overlay, Rehabilitation
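The NPV comparison can be sketched as below; the discount rate, analysis horizon and cost streams are illustrative placeholders, not the figures used in the study.

```python
def npv(rate, cashflows):
    """Net present value of (year, cash flow) pairs; costs are positive here,
    so the lower the NPV of life-cycle costs, the cheaper the alternative."""
    return sum(cf / (1 + rate) ** t for t, cf in cashflows)

# Illustrative life-cycle cost streams (not the paper's figures), 8% discount rate
bituminous = [(0, 10_000_000)] + [(t, 4_000_000) for t in (5, 10, 15)]   # cheaper initially, periodic overlays
concrete   = [(0, 15_000_000)] + [(t,   300_000) for t in (10, 20)]      # costly initially, little maintenance

rate = 0.08
print(f"bituminous overlay life-cycle cost (NPV): {npv(rate, bituminous):,.0f}")
print(f"concrete overlay life-cycle cost (NPV):   {npv(rate, concrete):,.0f}")
```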
Laboratory studies of dense bituminous mixes ii with reclaimed asphalt materialseSAT Journals
Abstract
The growing demand on our nation's roadways over the past couple of decades, decreasing budgetary funds, and the need to provide a safe, efficient and cost-effective roadway system have led to a dramatic increase in the need to rehabilitate our existing pavements and to build sustainable road infrastructure in India. These needs are today's burning issues and have become the purpose of this study.
In the present study, samples of the existing bituminous layer materials were collected from the NH-48 (Devahalli to Hassan) site. The mixtures were designed by the Marshall method as per the Asphalt Institute (MS-II) at 20% and 30% Reclaimed Asphalt Pavement (RAP). RAP material was blended with virgin aggregate such that all specimens were tested for the Dense Bituminous Macadam-II (DBM-II) gradation as per the Ministry of Roads, Transport and Highways (MoRT&H), and cost analysis was carried out to study the economics. Laboratory results and analysis showed that the use of recycled materials led to significant variability in Marshall stability, and the variability increased with the increase in RAP content. Savings can be realized from the utilization of recycled materials; as per the methodology, the reduction in total cost is 19% and 30% respectively compared with the virgin mixes.
Keywords: Reclaimed Asphalt Pavement, Marshall Stability, MS-II, Dense Bituminous Macadam-II
Laboratory investigation of expansive soil stabilized with natural inorganic ...eSAT Journals
This document summarizes a study on stabilizing expansive black cotton soil with the natural inorganic stabilizer RBI-81. Laboratory tests were conducted to evaluate the effect of RBI-81 on the soil's engineering properties. The tests showed that with 2% RBI-81 and 28 days of curing, the unconfined compressive strength increased by around 250% and the CBR value improved by approximately 400% compared to the untreated soil. Overall, the study found that RBI-81 effectively improved the strength properties of the black cotton soil and its suitability as a soil stabilizer was supported.
Influence of reinforcement on the behavior of hollow concrete block masonry p...eSAT Journals
Abstract
Reinforced masonry was developed to exploit the strength potential of masonry and to solve its lack of tensile strength. Experimental
and analytical studies have been carried out to investigate the effect of reinforcement on the behavior of hollow concrete block
masonry prisms under compression and to predict ultimate failure compressive strength. In the numerical program, three dimensional
non-linear finite elements (FE) model based on the micro-modeling approach is developed for both unreinforced and reinforced
masonry prisms using ANSYS (14.5). The proposed FE model uses multi-linear stress-strain relationships to model the non-linear
behavior of hollow concrete block, mortar, and grout. Willam-Warnke’s five parameter failure theory has been adopted to model the
failure of masonry materials. The comparison of the numerical and experimental results indicates that the FE models can successfully
capture the highly nonlinear behavior of the physical specimens and accurately predict their strength and failure mechanisms.
Keywords: Structural masonry, Hollow concrete block prism, grout, Compression failure, Finite element method,
Numerical modeling.
Influence of compaction energy on soil stabilized with chemical stabilizereSAT Journals
This document summarizes a study on the influence of compaction energy on soil stabilized with a chemical stabilizer. Laboratory tests were conducted on locally available loamy soil treated with a patented polymer liquid stabilizer and compacted at four different energy levels. The study found that increasing the compaction effort increased the density of both untreated and treated soil, but the rate of increase was lower for stabilized soil. Treating the soil with the stabilizer improved its unconfined compressive strength and resilient modulus, and reduced accumulated plastic strain, with these properties further improved by higher compaction efforts. The stabilized soil exhibited strength and performance benefits compared to the untreated soil.
Geographical information system (gis) for water resources managementeSAT Journals
This document describes a hydrological framework developed in the form of a Hydrologic Information System (HIS) to meet the information needs of various government departments related to water management in a state. The HIS consists of a hydrological database coupled with tools for collecting and analyzing spatial and non-spatial water resources data. It also incorporates a hydrological model to indirectly assess water balance components over space and time. A web-based GIS portal was created to allow users to access and visualize the hydrological data, as well as outputs from the SWAT hydrological model. The framework is intended to facilitate integrated water resources planning and management across different administrative levels.
Forest type mapping of bidar forest division, karnataka using geoinformatics ...eSAT Journals
Abstract
The study demonstrates the potential of satellite remote sensing techniques for the generation of baseline information on forest types, including tree plantation details, in the Bidar forest division, Karnataka, covering an area of 5814.60 sq. km. Analysis of the satellite data for the study area reveals that about 84% of the total area is covered by crop land, 1.778% of the area is covered by dry deciduous forest and 1.38% by mixed plantation, which is very threatening to the environmental stability of the forest; future plantation sites have been mapped. With the use of the latest geo-informatics technology, the proper and exact condition of the trees can be observed and necessary precautions can be taken for future plantation works in an appropriate manner.
Keywords:-RS, GIS, GPS, Forest Type, Tree Plantation
Factors influencing compressive strength of geopolymer concreteeSAT Journals
Abstract
The effects of several factors on the compressive strength of fly ash based geopolymer concrete are studied, along with a cost comparison with normal concrete. The test variables were the molarity of sodium hydroxide (NaOH) at 8 M, 14 M and 16 M; the ratio of NaOH to sodium silicate (Na2SiO3) at 1, 1.5, 2 and 2.5; the alkaline liquid to fly ash ratio at 0.35 and 0.40; and the replacement of water in the Na2SiO3 solution by 10%, 20% and 30%. The test results indicated that the highest compressive strength of 54 MPa was observed for 16 M NaOH, a NaOH to Na2SiO3 ratio of 2.5 and an alkaline liquid to fly ash ratio of 0.35. The lowest compressive strength of 27 MPa was observed for 8 M NaOH, a NaOH to Na2SiO3 ratio of 1 and an alkaline liquid to fly ash ratio of 0.40. With an alkaline liquid to fly ash ratio of 0.35, water replacement of 10% and 30% for 8 M and 16 M NaOH resulted in compressive strengths of 36 MPa and 20 MPa respectively. A superplasticiser dosage of 2% by weight of fly ash gave higher strength in all cases.
Keywords: compressive strength, alkaline liquid, fly ash
Experimental investigation on circular hollow steel columns in filled with li...eSAT Journals
Abstract
Composite circular hollow steel tubes with and without GFRP infill, for three different grades of lightweight concrete, are tested for ultimate load capacity and axial shortening under cyclic loading. Steel tubes are compared for different lengths, cross sections and thicknesses. Specimens were tested after adopting Taguchi's L9 (Latin squares) orthogonal array in order to save initial experimental cost in terms of the number of specimens and the experimental duration. Analysis was carried out using the ANN (Artificial Neural Network) technique with the assistance of Minitab, a statistical software tool. Comparison of predicted, experimental and ANN outputs is obtained from linear regression plots. From this research study it can be concluded that (i) the cross-sectional area of the steel tube has the most significant effect on the ultimate load-carrying capacity, (ii) as the length of the steel tube increases, the load-carrying capacity decreases, and (iii) the ANN model predicted acceptable results. Thus the ANN tool can be utilized for predicting the ultimate load-carrying capacity of composite columns.
Keywords: Light weight concrete, GFRP, Artificial Neural Network, Linear Regression, Back propagation, orthogonal
Array, Latin Squares
Experimental behavior of circular hsscfrc filled steel tubular columns under ...eSAT Journals
This document summarizes an experimental study that tested circular concrete-filled steel tube columns with varying parameters. 45 specimens were tested with different fiber percentages (0-2%), tube diameter-to-wall-thickness ratios (D/t from 15-25), and length-to-diameter (L/d) ratios (from 2.97-7.04). The results found that columns filled with fiber-reinforced concrete exhibited higher stiffness, equal ductility, and enhanced energy absorption compared to those filled with plain concrete. The load carrying capacity increased with fiber content up to 1.5% but not at 2.0%. The analytical predictions of failure load closely matched the experimental values.
Evaluation of punching shear in flat slabseSAT Journals
Abstract
Flat-slab construction has been widely used in construction today because of many advantages that it offers. The basic philosophy in
the design of flat slab is to consider only gravity forces; this method ignores the effect of punching shear due to unbalanced moments
at the slab column junction which is critical. An attempt has been made to generate generalized design sheets which accounts both
punching shear due to gravity loads and unbalanced moments for cases (a) interior column; (b) edge column (bending perpendicular
to shorter edge); (c) edge column (bending parallel to shorter edge); (d) corner column. These design sheets are prepared as per
codal provisions of IS 456-2000. These design sheets will be helpful in calculating the shear reinforcement to be provided at the
critical section which is ignored in many design offices. Apart from its usefulness in evaluating punching shear and the necessary
shear reinforcement, the design sheets developed will enable the designer to fix the depth of flat slab during the initial phase of the
design.
Keywords: Flat slabs, punching shear, unbalanced moment.
Evaluation of performance of intake tower dam for recent earthquake in indiaeSAT Journals
Abstract
Intake towers are typically tall, hollow, reinforced concrete structures and form the entrance to reservoir outlet works. A parametric study on the dynamic behavior of circular cylindrical towers is carried out to study the effect of depth of submergence, wall thickness and slenderness ratio. The effect on the tower is also considered through dynamic time-history analysis for different soil conditions, with the Goyal and Chopra approach used to account for the interaction effects of the added hydrodynamic mass of the surrounding and inside water on the intake tower of the dam.
Key words: Hydrodynamic mass, Depth of submergence, Reservoir, Time history analysis
Evaluation of operational efficiency of urban road network using travel time ... (eSAT Journals)
This document evaluates the operational efficiency of an urban road network in Tiruchirappalli, India using travel time reliability measures. Traffic volume and travel times were collected using video data from 8-10 AM on various roads. Average travel times, 95th percentile travel times, and buffer time indexes were calculated to assess reliability. Non-motorized vehicles were found to most impact reliability on one road. A relationship between buffer time index and traffic volume was developed. Finally, a travel time model was created and validated based on length, speed, and volume.
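The buffer time index used above is straightforward to compute; the travel-time samples in the sketch below are illustrative, not the collected data.

```python
import numpy as np

def buffer_time_index(travel_times):
    """BTI = (95th percentile travel time - mean travel time) / mean travel time."""
    tt = np.asarray(travel_times, dtype=float)
    return (np.percentile(tt, 95) - tt.mean()) / tt.mean()

# Illustrative travel-time samples (minutes) for two road segments
reliable_road   = [12, 13, 12, 14, 13, 12, 13, 14, 13, 12]
unreliable_road = [12, 25, 14, 30, 13, 12, 28, 15, 13, 35]   # e.g. affected by non-motorized traffic

print(f"BTI reliable road:   {buffer_time_index(reliable_road):.2f}")
print(f"BTI unreliable road: {buffer_time_index(unreliable_road):.2f}")
```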
Estimation of surface runoff in Nallur Amanikere watershed using SCS-CN method (eSAT Journals)
Abstract
The development of watershed aims at productive utilization of all the available natural resources in the entire area extending from
ridge line to stream outlet. The per capita availability of land for cultivation has been decreasing over the years. Therefore, water and
the related land resources must be developed, utilized and managed in an integrated and comprehensive manner. Remote sensing and
GIS techniques are being increasingly used for planning, management and development of natural resources. The study area, Nallur
Amanikere watershed, geographically lies between 11° 38' and 11° 52' N latitude and 76° 30' and 76° 50' E longitude, with an area of 415.68 sq. km. Thematic layers such as land use/land cover and soil maps were derived from remotely sensed data and overlaid in ArcGIS software to assign the curve number polygon-wise. The daily rainfall data of six rain gauge stations in and around
the watershed (2001-2011) was used to estimate the daily runoff from the watershed using Soil Conservation Service - Curve Number
(SCS-CN) method. The runoff estimated from the SCS-CN model was then used to know the variation of runoff potential with different
land use/land cover and with different soil conditions.
Keywords: Watershed, Nallur watershed, Surface runoff, Rainfall-Runoff, SCS-CN, Remote Sensing, GIS.
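The SCS-CN computation itself can be sketched as follows; the curve number and rainfall depths are illustrative, whereas in the study the CN values come from the land use/soil polygon overlay.

```python
def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
    """Daily runoff depth Q (mm) from rainfall P (mm) by the SCS Curve Number method.
    S = 25400/CN - 254 (mm); Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0."""
    s = 25400.0 / cn - 254.0
    ia = lambda_ia * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative daily rainfall depths and an assumed composite curve number
for p in (10, 30, 60, 100):
    print(f"P = {p:>3} mm -> Q = {scs_cn_runoff(p, cn=78):.1f} mm")
```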
Estimation of morphometric parameters and runoff using rs & gis techniqueseSAT Journals
This document summarizes a study that used remote sensing and GIS techniques to estimate morphometric parameters and runoff for the Yagachi catchment area in India over a 10-year period. Morphometric analysis was conducted to understand the hydrological response at the micro-watershed level. Daily runoff was estimated using the SCS curve number model. The results showed a positive correlation between rainfall and runoff. Land use/land cover changes between 2001-2010 were found to impact estimated runoff amounts. Remote sensing approaches provided an effective means to model runoff for this large, ungauged area.
Effect of variation of plastic hinge length on the results of non linear anal...eSAT Journals
Abstract The nonlinear static procedure, also well known as pushover analysis, is a method wherein monotonically increasing loads are applied to the structure until the structure is unable to resist any further load. It is a popular tool for seismic performance evaluation of existing and new structures. In the literature, a lot of research has been carried out on conventional pushover analysis, and after identifying its deficiencies, efforts have been made to improve it. However, actual test results to verify analytically obtained pushover results are rarely available, and it has been found that some amount of variation is always expected in the seismic demand prediction of pushover analysis. An initial study is carried out considering user-defined hinge properties and the default hinge length. An attempt is then made to assess the variation of pushover analysis results considering user-defined hinge properties and the various hinge length formulations available in the literature, and the results are compared with experimental results from a test carried out on a G+2 storied RCC framed structure. For the present study two geometric models, viz. a bare frame and a rigid frame model, are considered, and it is found that the results of pushover analysis are very sensitive to the geometric model and the hinge length adopted. Keywords: Pushover analysis, Base shear, Displacement, hinge length, moment curvature analysis
Effect of use of recycled materials on indirect tensile strength of asphalt c...eSAT Journals
Abstract
Depletion of natural resources and aggregate quarries for the road construction is a serious problem to procure materials. Hence
recycling or reuse of material is beneficial. On emphasizing development in sustainable construction in the present era, recycling of
asphalt pavements is one of the effective and proven rehabilitation processes. For the laboratory investigations reclaimed asphalt
pavement (RAP) from NH-4 and crumb rubber modified binder (CRMB-55) was used. Foundry waste was used as a replacement to
conventional filler. Laboratory tests were conducted on asphalt concrete mixes with 30, 40, 50, and 60 percent replacement with RAP.
These test results were compared with conventional mixes and with asphalt concrete mixes using fully binder-extracted RAP aggregates. Mix design was carried out by the Marshall method. The Marshall tests indicated the highest stability values for asphalt concrete (AC) mixes with 60% RAP. The optimum binder content (OBC) decreased with an increase in RAP content in the AC mixes. The indirect tensile strength (ITS) of AC mixes with RAP was also found to be higher than that of conventional AC mixes at 30°C.
Keywords: Reclaimed asphalt pavement, Foundry waste, Recycling, Marshall Stability, Indirect tensile strength.
Literature Review Basics and Understanding Reference Management.pptxDr Ramhari Poudyal
Three-day training on academic research focuses on analytical tools at United Technical College, supported by the University Grant Commission, Nepal. 24-26 May 2024
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...University of Maribor
Slides from talk presenting:
Aleš Zamuda: Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapter and Networking.
Presentation at IcETRAN 2024 session:
"Inter-Society Networking Panel GRSS/MTT-S/CIS
Panel Session: Promoting Connection and Cooperation"
IEEE Slovenia GRSS
IEEE Serbia and Montenegro MTT-S
IEEE Slovenia CIS
11TH INTERNATIONAL CONFERENCE ON ELECTRICAL, ELECTRONIC AND COMPUTING ENGINEERING
3-6 June 2024, Niš, Serbia
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
TIME DIVISION MULTIPLEXING TECHNIQUE FOR COMMUNICATION SYSTEM (HODECEDSIET)
Time Division Multiplexing (TDM) is a method of transmitting multiple signals over a single communication channel by dividing the signal into many segments, each having a very short duration of time. These time slots are then allocated to different data streams, allowing multiple signals to share the same transmission medium efficiently. TDM is widely used in telecommunications and data communication systems.
### How TDM Works
1. **Time Slots Allocation**: The core principle of TDM is to assign distinct time slots to each signal. During each time slot, the respective signal is transmitted, and then the process repeats cyclically. For example, if there are four signals to be transmitted, the TDM cycle will divide time into four slots, each assigned to one signal.
2. **Synchronization**: Synchronization is crucial in TDM systems to ensure that the signals are correctly aligned with their respective time slots. Both the transmitter and receiver must be synchronized to avoid any overlap or loss of data. This synchronization is typically maintained by a clock signal that ensures time slots are accurately aligned.
3. **Frame Structure**: TDM data is organized into frames, where each frame consists of a set of time slots. Each frame is repeated at regular intervals, ensuring continuous transmission of data streams. The frame structure helps in managing the data streams and maintaining the synchronization between the transmitter and receiver.
4. **Multiplexer and Demultiplexer**: At the transmitting end, a multiplexer combines multiple input signals into a single composite signal by assigning each signal to a specific time slot. At the receiving end, a demultiplexer separates the composite signal back into individual signals based on their respective time slots.
### Types of TDM
1. **Synchronous TDM**: In synchronous TDM, time slots are pre-assigned to each signal, regardless of whether the signal has data to transmit or not. This can lead to inefficiencies if some time slots remain empty due to the absence of data.
2. **Asynchronous TDM (or Statistical TDM)**: Asynchronous TDM addresses the inefficiencies of synchronous TDM by allocating time slots dynamically based on the presence of data. Time slots are assigned only when there is data to transmit, which optimizes the use of the communication channel.
### Applications of TDM
- **Telecommunications**: TDM is extensively used in telecommunication systems, such as in T1 and E1 lines, where multiple telephone calls are transmitted over a single line by assigning each call to a specific time slot.
- **Digital Audio and Video Broadcasting**: TDM is used in broadcasting systems to transmit multiple audio or video streams over a single channel, ensuring efficient use of bandwidth.
- **Computer Networks**: TDM is used in network protocols and systems to manage the transmission of data from multiple sources over a single network medium.
### Advantages of TDM
- **Efficient Use of Bandwidth**: TDM all
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
A review on techniques and modelling methodologies used for checking electrom...nooriasukmaningtyas
The proper function of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from disjunct devices to today’s integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI and sensors give misleading values which can prove fatal in case of automotives. In this paper, the authors have non exhaustively tried to review research work concerned with the investigation of EMI in ICs and prediction of this EMI using various modelling methodologies and measurement setups.
A review on techniques and modelling methodologies used for checking electrom...
IJRET: International Journal of Research in Engineering and Technology | eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 02, Issue: 12 | Dec-2013 | Available @ http://www.ijret.org
FAULT DIAGNOSIS USING GENETIC ALGORITHMS AND PRINCIPAL CURVES
Tawfik Najeh¹, Lotfi Nabli²
¹ PhD student, Laboratory of Automatic Control, Signal and Image Processing, National School of Engineers of Monastir, University of Monastir, 5019 Tunisia, najehtawfik@gmail.com
² Master of conference, Laboratory of Automatic Control, Signal and Image Processing, National School of Engineers of Monastir, University of Monastir, 5019 Tunisia, lotfi.nabli@enim.rnu.tn
Abstract
Several applications of nonlinear principal component analysis (NPCA) have appeared recently in process monitoring and
fault diagnosis. In this paper a new approach is proposed for fault detection based on principal curves and genetic algorithms.
The principal curve is a generation of linear principal component (PCA) introduced by Hastie as a parametric curve passes
satisfactorily through the middle of data. The existing principal curves algorithms employ the first component of the data as an
initial estimation of principal curve. However the dependence on initial line leads to a lack of flexibility and the final curve is only
satisfactory for specific problems. In this paper we extend this work in two ways. First, we propose a new method based on
genetic algorithms to find the principal curve. Here, lines are fitted and connected to form polygonal lines (PL). Second, potential
application of principal curves is discussed. An example is used to illustrate fault diagnosis of nonlinear process using the
proposed approach.
Index Terms: Principal curve, Genetic Algorithm, Nonlinear principal component analysis, Fault detection.
--------------------------------------------------------------------***----------------------------------------------------------------------
1. Introduction
Although principal component analysis (PCA) can be considered the oldest and simplest technique in multivariate analysis [2], it remains a topical subject in several research areas such as process monitoring and fault detection [2]. It belongs to the most effective techniques for linear data analysis [3]. PCA reduces the dimension of the representation space of the data with minimal loss of information. The transformation of the original variables gives new ones called principal components. The first principal components contain most of the variation of the initial variables, while the last ones, called primary residuals, contain the measurement noise. Despite the effectiveness of PCA in various fields, this method is no longer adequate for nonlinear problems, which characterize most real systems [4]. Using linear PCA to analyze nonlinear data gives a residual space that contains a significant part of the information rather than only the system noise, which leads to the loss of major data variations. In order to adapt the concept of PCA to nonlinear problems, an approach inspired by PCA, called principal curves or nonlinear principal component analysis (NPCA), was introduced by Hastie et al. [5]. This approach can be seen as a generalization and natural extension of linear PCA; it is defined by curves that pass through the middle of a cloud of points.
In spite of the progress reported in recent years on the main problem of principal curves, existing approaches have several limitations. Due to inadequate initialization of the algorithm or to a predefined strategy for constructing local models, these methods show weaknesses for data with self-intersecting characteristics, high curvature, and significant dispersion [6]. The contribution of this paper is a new practical concept that uses genetic algorithms (GAs) to estimate a curve passing through the middle of a probability distribution. To support complex data structures and obtain a flexible algorithm that can follow the important curvature of principal curves, a new approach that differs from the conventional approaches mentioned above is used. This serves to demonstrate the effectiveness of evolutionary techniques for solving the principal curves problem. Likewise, the use of such algorithms extends easily to complex data structures that present a great challenge for conventional approaches. The second section gives an overview of the different definitions of principal curves. The third section briefly outlines the genetic approach and how it differs from conventional techniques. The fourth section is the core of the paper; it describes the developed method based on genetic algorithms for constructing principal curves and the technical details of the proposed approach. The fifth section presents the results obtained on two examples and the perspectives for future research.
2. Overview of principal curves
In the literature, various definitions of principal curves have been proposed. The first one is based on the self-consistency property [5]. A different approach was introduced by Kégl et al. [7], which regards principal curves as curves of bounded length. Other approaches are based on mixture models [8]. The Hastie approach supports neither closed curves nor curves with intersections; however, many improvements have been added to it. The fixes developed in [9] to the original version of Hastie's algorithm allowed it to overcome the estimation bias. Later, Chang et al. [10] proposed a hybrid approach based on the Hastie and Banfield methods to improve the results obtained in finding principal curves.
A different approach, based on a semi-parametric model of the principal curve, was proposed by Tibshirani [8]. However, because it lacks flexibility in the case of curves with large curvature, this method shares the weaknesses of Hastie's theory. In [11], the authors introduced an algorithm that relies on polygonal lines to achieve a satisfactory principal curve, but at an important computational cost. All these algorithms are characterized by a top-down analysis: they start the search from a straight line, which by default is the first principal component of the data. Besides, another group of definitions, called bottom-up approaches, addresses the same topic from the opposite direction. Such approaches are based on local data analysis to construct the principal curve [6]; rather than starting with a line that represents the entire cloud of points, these algorithms consider only a set of neighbouring points at each step. The approach introduced by Delicado [12] for the construction of principal curves is directly inspired by this principle. The second group seems to be more effective for complex data structures such as closed or spiral-shaped curves [7]. It should be noted that considering the whole data cloud at once makes the result completely dependent on the initial line. This has a direct impact on the flexibility of the algorithm, and the final curve is only satisfactory for specific problems [13]. Principal curves can be seen as a nonlinear way to summarize multidimensional data by a one-dimensional curve. They are defined as curves that pass through the densest regions of the data cloud [14]. This one-dimensional representation captures a shape that depends on the data distribution (Figure 1).
Figure 1. Graphical representation of a curve that passes through the middle of a cloud of points.
Hastie and Stuetzle introduced the first approach, based on the self-consistency property of principal components.
Given a random vector $X \in \mathbb{R}^d$, let $f(\lambda) = (f_1(\lambda), \ldots, f_d(\lambda))$ be a curve in $\mathbb{R}^d$ parameterized by $\lambda$. For any $x \in \mathbb{R}^d$, $\lambda_f(x)$ denotes the projection index, and $f$ is a principal curve if it is self-consistent. Mathematically, the projection index is defined by:

$\lambda_f(x) = \sup \{ \lambda : \| x - f(\lambda) \| = \inf_{\mu} \| x - f(\mu) \| \}$   (1)

For $x \in \mathbb{R}^d$, the projection index $\lambda_f(x)$ is the largest value of $\lambda$ for which the distance $\| x - f(\lambda) \|$ attains its minimum, as shown in Figure 2.
Figure 2. Projecting points on a curve.
To keep the paper self-contained, the basic notions of the distance from a point to a curve and of a point's position along a line segment in $\mathbb{R}^2$ are reviewed.
Let $C = (C_x, C_y)$ be a point and $AB$ the line segment from $A = (A_x, A_y)$ to $B = (B_x, B_y)$. Let $P$ be the point of perpendicular projection of $C$ onto $AB$. The parameter $r$, which indicates $P$'s position along $AB$, is computed as the dot product of $\vec{AC}$ and $\vec{AB}$ divided by the square of the length of $AB$:

$r = \dfrac{\vec{AC} \cdot \vec{AB}}{\| \vec{AB} \|^2}$   (2)

The parameter $r$ has the following meaning:
- if $r = 0$ then $P = A$;
- if $r = 1$ then $P = B$;
- if $r < 0$ then $P$ is on the backward extension of $AB$;
- if $r > 1$ then $P$ is on the forward extension of $AB$;
- if $0 < r < 1$ then $P$ is interior to $AB$.

The point $P$ can then be found as:

$P_x = A_x + r\,(B_x - A_x), \qquad P_y = A_y + r\,(B_y - A_y)$   (3)

and the distance from $A$ to $P$ is:

$d = r\,\| \vec{AB} \|$   (4)
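To make the geometry above concrete, the following short Python sketch implements equations (2)-(4) for a point C and a segment AB. The function name, the NumPy dependency, and the additional orthogonal distance returned at the end are our own illustrative choices, not part of the paper.

```python
import numpy as np

def project_point_on_segment(c, a, b):
    """Project point C onto segment AB following equations (2)-(4).

    Returns the parameter r, the projection point P, the distance from A
    to P along the segment, and the orthogonal distance from C to P.
    """
    c, a, b = np.asarray(c, float), np.asarray(a, float), np.asarray(b, float)
    ab = b - a
    ac = c - a
    # Equation (2): r = (AC . AB) / |AB|^2
    r = np.dot(ac, ab) / np.dot(ab, ab)
    # Equation (3): P = A + r (B - A)
    p = a + r * ab
    # Equation (4): distance from A to P along the segment
    d_ap = r * np.linalg.norm(ab)
    # Orthogonal distance from C to its projection P (the d_i used later)
    d_cp = np.linalg.norm(c - p)
    return r, p, d_ap, d_cp

# Example: r in (0, 1) means P lies in the interior of AB
r, p, d_ap, d_cp = project_point_on_segment((1.0, 1.0), (0.0, 0.0), (2.0, 0.0))
print(r, p, d_ap, d_cp)   # 0.5, [1. 0.], 1.0, 1.0
```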
3. Genetic algorithms-some preliminaries
GAs are classified as evolutionary optimization methods [15]. They allow finding an optimum of a function defined on a data space and have been applied to a wide variety of problems [16]. Simplicity and efficiency are the two main advantages of GAs [17]. After fixing the expression of the objective function to be optimized, probabilistic steps are involved to create an initial population of individuals [18]. The initial population then undergoes genetic operations (such as evaluation, selection, crossover, and mutation) in order to converge to the optimal solution of the problem. The optimization steps with GAs are as follows:
- Initialization: it is usually random, and it is often advantageous to include as much knowledge about the problem as possible.
- Evaluation: this step computes the quality of the individuals by allocating to each one a positive value called "ability" or "fitness". The highest value is assigned to the individual that minimizes (or maximizes) the objective function.
- Selection: this step selects a definite number of individuals from the current population. The selection is probabilistic and is characterized by a parameter called the selection pressure (Ps), which fixes the number of selected individuals.
- Crossover: the genetic crossover operator creates new individuals. From two randomly selected parents, crossover produces two descendants. This step affects only a limited number of individuals, set by the crossover rate (Pc).
- Mutation: mutation consists in applying a small perturbation to a number (Pm) of individuals. The effect of this operator is to counteract the attraction exerted by the best individuals, which allows other areas of the search space to be explored.

This has been a brief overview of GAs; a minimal sketch of such a loop is given below. For further details on the processing power and the convergence properties of GAs, reference should be made to [15].
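As a rough illustration of the loop just described (initialization, evaluation, selection, crossover, mutation), the sketch below minimizes a simple one-dimensional function. The population size, rates, tournament selection, and the toy objective are illustrative assumptions and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Illustrative objective: maximize -(x - 3)^2, i.e. minimize (x - 3)^2.
    return -(x - 3.0) ** 2

def run_ga(pop_size=40, generations=100, pc=0.25, pm=0.01):
    # Initialization: random real-valued individuals on [-10, 10].
    pop = rng.uniform(-10.0, 10.0, size=pop_size)
    for _ in range(generations):
        fit = fitness(pop)
        # Selection: binary tournament between random pairs of individuals.
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Crossover: blend consecutive pairs of parents with probability pc.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                alpha = rng.random()
                children[i] = alpha * parents[i] + (1.0 - alpha) * parents[i + 1]
                children[i + 1] = (1.0 - alpha) * parents[i] + alpha * parents[i + 1]
        # Mutation: small Gaussian perturbation with probability pm per individual.
        mask = rng.random(pop_size) < pm
        children[mask] += rng.normal(0.0, 0.5, size=mask.sum())
        pop = children
    return pop[np.argmax(fitness(pop))]

print(run_ga())   # converges close to 3.0
```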
4. Principal curves by genetic algorithms
It was shown in Section 2 that obtaining the principal curve leads back to an optimization problem: the principal curve minimizes the orthogonal deviations between the data and the curve subject to smoothness constraints. To solve this computation, several methods have been established, all of them deterministic. The use of these methods can lead to a local optimum, providing principal curves that are less faithful to the data distribution. Solving such problems with GAs helps escape local optima and converge toward the global optimum of the problem. The proposed approach considers the principal curve as an ensemble of connected line segments. In each step a new segment is inserted to form a polygonal line. GAs are used to find the optimal segments one by one; in each step, only the data in a local neighbourhood of the segment are considered. The use of GAs to find the principal curve requires the development of an objective function. This function determines how to evaluate the fitness of any segment passing through the data cloud.
Let $X_n$ be a two-dimensional random vector and $\{ x_i \in \mathbb{R}^2,\ i = 1, \ldots, n \}$ a local neighbourhood of a fixed point $x_0$. For any segment $s = [x_0, b]$ passing through the neighbourhood points, let $d_i$ denote the distance between the considered point $x_i$ and its orthogonal projection on the segment, given by equation (4). The objective function must take into account the quadratic sum of the distances $d_i$:

$\sum_{i=1}^{n} d^2(x_i, s) = \sum_{i=1}^{n} \left( \min_{\lambda} \| x_i - f(\lambda) \| \right)^2$   (5)

The objective is to find the segment that minimizes the total squared distance over all neighbouring points. Any data point in the neighbourhood cloud can be projected orthogonally onto the segment, and the deviations between the data and their projections on the considered segment are computed by equation (5). Hence, for any segment $s$, an associated fitness is calculated.
The size of the neighbourhood is an important parameter, given as the fraction of all data points that are considered to be in the neighbourhood. Increasing this parameter tends to smooth the curve, whereas decreasing it tends to interpolate the data cloud. The neighbourhood therefore controls the trade-off between smoothness and fidelity to the data. To determine the size of the neighbourhood, cross-validation can be used.
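A minimal sketch of the fitness computation of equation (5), assuming the neighbourhood is given as a NumPy array of 2-D points; the function name and the clipping of the projection parameter to the segment are our own choices.

```python
import numpy as np

def segment_fitness(points, x0, b):
    """Sum of squared orthogonal distances from the points to segment [x0, b].

    Implements equation (5); a smaller value means a better segment.
    """
    points = np.asarray(points, float)
    x0, b = np.asarray(x0, float), np.asarray(b, float)
    ab = b - x0
    # Projection parameters of all points, clipped to [0, 1] so that distances
    # are measured to the segment rather than to the infinite line.
    r = np.clip((points - x0) @ ab / np.dot(ab, ab), 0.0, 1.0)
    proj = x0 + r[:, None] * ab
    d = np.linalg.norm(points - proj, axis=1)
    return np.sum(d ** 2)

# Example: points scattered around the segment from (0, 0) to (1, 0)
pts = np.array([[0.1, 0.05], [0.5, -0.1], [0.9, 0.02]])
print(segment_fitness(pts, (0.0, 0.0), (1.0, 0.0)))  # 0.05^2 + 0.1^2 + 0.02^2
```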
The new genetic curve algorithm is constructed following the strategy outlined as follows:
Algorithm 1
01: Start with a random point $x_0$ from $X_n$.
02: Ln = the local neighbourhood around $x_0$.
03: Repeat until the number of generations (NB) = Gmax.
04: Generate at random a population of K segments.
05: for every segment $s_k$ do
06: compute the total squared Euclidean distance $\sum_{i=1}^{n} d^2(x_i, s_k)$ for the segment.
07: end for
08: Apply crossover and mutation.
09: Save the best line segment and return to step 3.
10: NB = NB + 1.
11: Delete the used local neighbourhood points from the original distribution.
12: The end point of the previous segment serves as the new starting point.
13: Connect with the previous segment (except for the first one).
14: Return to step 02 and repeat until all data points have been used.
The series of best saved segments gives the global principal curve for the data distribution. In what follows, these steps are explained in detail:
(1) Selection of the starting point: the starting point can be imposed by hand or, if this is not possible, chosen at random.
(2) Search for the local neighbourhood: for each segment in the current population, the algorithm computes the distances from all points to their orthogonal projections on the segment, and a fixed number of the nearest neighbouring points is chosen.
(3) Stop after Gmax iterations: after a specified number Gmax of generations, the genetic algorithm converges to the global optimum. The result is a satisfactory segment that passes through the middle of the neighbourhood of the starting point.
(4) Generate an initial population: the point $x_0$ is taken as one end of a set of K segments, all having the same length. These randomly generated segments constitute the initial population of individuals. Each segment represents a possible solution for the local neighbourhood (Figure 3).
Figure 3. Initial Population
(5, 6, 7) Assignment of the fitness value: equation (5) is taken as the objective function. After the initial population has been generated, the fitness of the individuals is evaluated by this function. Individuals with the highest fitness are selected to be subjected to the different genetic operators (crossover, mutation and selection).
(8) Apply crossover and mutation: the fitness value assigned in the previous step is used to direct the application of two operations, crossover and mutation, which produce the new population of segments. This new generation will contain better solutions. Crossover provides a means for segments to mix and match their desirable qualities through a random process. The second operation, mutation, enhances the ability of the GA to find near-optimal segments. Mutation is the occasional and partial modification of an end point of a given segment.
(10) Save the best line segments: after saving the best segments, return to step 3 for a fixed number of generations (Gmax).
(11) Delete the used local neighbourhood: before moving to the calculation of the next segment, all previously used points are removed from the distribution. This prevents the algorithm from oscillating between two points without ever escaping.
(12, 13, 14) Find the global curve: the end point of the previous segment serves as a new starting point for a new run. Steps (2) to (14) are repeated until no point of the distribution can be considered to be in a neighbourhood. The algorithm then returns the global principal curve that summarizes the data cloud.
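Putting the pieces together, the following is a simplified, non-optimized sketch of Algorithm 1 under several assumptions of our own: a fixed segment length, a fixed neighbourhood size, elitism, an averaging crossover of segment end points, and a Gaussian mutation. It is meant only to illustrate the flow of steps 01-14, not to reproduce the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def segment_fitness(points, x0, b):
    # Equation (5): sum of squared distances from the points to segment [x0, b].
    ab = b - x0
    r = np.clip((points - x0) @ ab / np.dot(ab, ab), 0.0, 1.0)
    proj = x0 + r[:, None] * ab
    return np.sum(np.linalg.norm(points - proj, axis=1) ** 2)

def genetic_polygonal_line(data, n_neighbors=20, seg_len=0.5,
                           pop_size=80, g_max=100, pc=0.25, pm=0.01):
    """Simplified sketch of Algorithm 1: grow a polygonal line segment by segment."""
    remaining = data.copy()
    x0 = remaining[rng.integers(len(remaining))]          # step 01: random start
    vertices = [x0]
    while len(remaining) >= n_neighbors:
        # step 02: local neighbourhood of x0 (nearest points)
        idx = np.argsort(np.linalg.norm(remaining - x0, axis=1))[:n_neighbors]
        nbhd = remaining[idx]
        # step 04: initial population of candidate end points around x0
        angles = rng.uniform(0.0, 2.0 * np.pi, pop_size)
        ends = x0 + seg_len * np.c_[np.cos(angles), np.sin(angles)]
        for _ in range(g_max):                            # step 03: GA generations
            fit = np.array([segment_fitness(nbhd, x0, b) for b in ends])
            ends = ends[np.argsort(fit)]                  # smaller fitness = better
            children = ends.copy()                        # children[0] keeps the elite
            for i in range(1, pop_size):
                if rng.random() < pc:                     # step 08: crossover
                    mate = ends[rng.integers(pop_size // 2)]
                    children[i] = 0.5 * (ends[i] + mate)
                if rng.random() < pm:                     # step 08: mutation
                    children[i] += rng.normal(0.0, 0.1 * seg_len, size=2)
            ends = children
        best = ends[0]                                    # step 09: best saved segment
        vertices.append(best)
        remaining = np.delete(remaining, idx, axis=0)     # step 11: remove used points
        x0 = best                                         # steps 12-13: chain segments
    return np.array(vertices)

# Example on a noisy circle similar to the synthetic dataset of Section 5.2.
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
circle = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0.0, 0.03, (100, 2))
print(genetic_polygonal_line(circle).shape)
```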
5. Experimental results and analyses
In this section, two examples are used to demonstrate the performance of the proposed approach for finding the principal curve, on a synthetic dataset and for fault detection. Before that, we first discuss the GA parameters.
5.1 Setting parameters
For all runs of the GAs, the population size is 80 and the maximum number of generations is Gmax = 100. Two genetic operations are used, crossover and mutation, with probabilities 0.25 and 0.01, respectively. The maximum fraction of permissible segments inserted in the new generation is 90%. The method of selection of segments for crossover is tournament selection.
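For reference, the settings listed above can be gathered in one place; the dictionary below simply restates them, and the key names are our own.

```python
ga_params = {
    "population_size": 80,
    "max_generations": 100,             # Gmax
    "crossover_probability": 0.25,      # Pc
    "mutation_probability": 0.01,       # Pm
    "max_new_segments_fraction": 0.90,  # segments admitted into the new generation
    "selection_method": "tournament",
}
```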
5.2 Synthetic dataset
The new algorithm has been tested on a synthetic dataset in the case of a circle of radius 1. Data points are distributed along the circle with Gaussian noise independently imposed on the different dimensions of the given curve. We start with 100 data points contaminated with small noise (0.03). The results obtained by applying the algorithm are shown in Figure 4. One can see that the circle is reconstructed quite well.
Figure 4. Principal curve: synthetic datasets
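A possible way to generate such a dataset, assuming the value 0.03 is the standard deviation of the Gaussian noise (the paper does not state this explicitly):

```python
import numpy as np

rng = np.random.default_rng(42)
n_points, radius, noise_std = 100, 1.0, 0.03

# Points on a circle of radius 1, perturbed by independent Gaussian noise
# on each dimension, as described in Section 5.2.
theta = rng.uniform(0.0, 2.0 * np.pi, n_points)
data = np.c_[radius * np.cos(theta), radius * np.sin(theta)]
data += rng.normal(0.0, noise_std, size=data.shape)
```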
5.3 Tennessee Eastman
The Tennessee Eastman Process is a test problem published
by the Eastman Company for use in evaluating process
control methods [19]. This process produces two products G
and H from four reactants A, C, D, and E.
Figure 5. Tennessee Eastman reactor
The process also produces an inert B and a byproduct F. The major units of the process are a reactor, a product condenser, a vapour/liquid separator, a recycle compressor, and a product stripper. The process provides 41 measurements and 12 manipulated variables [20]. All gaseous reactants are fed to the reactor to produce liquid products; the gas-phase reactions are catalyzed by a catalyst dissolved in the liquid phase, which remains in the reactor while the products leave the reactor as vapours. The process has a total of six operating modes, including a base operating mode with a 50/50 mass ratio between the product components G and H.
In this paper we consider only the reactor unit with two variables: the percentage of component C (mol %) and the reactor coolant temperature (°C). The principal curve was trained with a data set of 300 samples. Detection was performed with data containing one fault at a time on these two variables. The test was designed for the safe and the failed operating modes.
The objective of this section is to demonstrate the application of the principal curves technique to the problem of fault detection in the Tennessee Eastman Process. Suppose the TEP has two operating modes and that the principal curve of the first, normal mode has already been defined: the predefined operating mode is called M0 and the failed operating mode is denoted M1. To identify a change of the system's operating mode with the proposed method, we compute the principal curve, denoted C0, corresponding to the normal operating mode in the absence of faults. This curve (Figure 6) is obtained with the principal curve algorithm detailed in the previous sections.
Figure 6. Principal curve
After having defined the principal curve from the data, we now propose its use for detecting a change of operating mode.
One technique for process monitoring uses the Squared Prediction Error (SPE) [21]. The idea consists in constructing the principal curve of the safe operating mode and then checking the SPE between the estimated curve and the data coming from the system at the present time. An index above its detection threshold indicates the failed operating mode.
Given a set of data points, for each data point $x_i$ let $p(x_i)$ be its projection point on the principal curve $f$. The Euclidean squared distance between $x_i$ and $p(x_i)$ is calculated for the whole data set. Then the SPE can be defined as:

$SPE = \sum_{i=1}^{n} d^2(x_i, f)$   (6)

Usually, the process is considered to be operating abnormally if:

$SPE > \delta$   (7)

where $\delta$ is the detection threshold.
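A minimal monitoring sketch based on equation (6): the squared distance of each sample to its projection on the polygonal curve is accumulated and compared with the threshold δ. The curve, the sample window, and the value of δ below are illustrative; the paper does not specify how δ is chosen.

```python
import numpy as np

def squared_distance_to_curve(x, vertices):
    """Squared Euclidean distance from point x to its projection p(x)
    on a polygonal curve given by an ordered array of vertices."""
    best = np.inf
    for a, b in zip(vertices[:-1], vertices[1:]):
        ab = b - a
        r = np.clip(np.dot(x - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        best = min(best, np.sum((x - (a + r * ab)) ** 2))
    return best

def spe(samples, vertices):
    """Equation (6): sum of squared distances of a set of samples to the curve."""
    return sum(squared_distance_to_curve(x, vertices) for x in samples)

# Illustrative use: flag a window of samples when the SPE exceeds delta.
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5]])   # curve C0 (illustrative)
window = np.array([[0.5, 0.05], [1.5, 0.30], [1.9, 0.60]])  # recent samples
delta = 0.05                                                # assumed threshold
print("fault detected" if spe(window, vertices) > delta else "normal operation")
```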
From the constructed curve and the cloud of points, we can thus compute the change indicator SPE through equation (6). The process is simulated for 600 samples as follows: the first 300 samples correspond to mode M0 and the last 300 samples correspond to mode M1.
Figure 7. Variation of the SPE indicator value
The evolution of the SPE indicator and its threshold during this simulation is shown in Figure 7. We note the presence of several false alarms; these are mainly due to measurement noise affecting the variables. In the interval [0, 300], the SPE remains below the threshold, corresponding to mode M0, and it is above the threshold in the interval [301, 600], corresponding to the failed operating mode M1.
6. Conclusion
In this paper a genetic algorithm approach was applied to find principal curves. These curves are a nonlinear generalization of principal components, and many methods inspired by this concept employ the first principal component of the data to initialize the principal curve. However, in many cases of high dispersion or noisy data, those conventional methods may be ineffective. To solve this problem, and in connection with the review of existing approaches for computing principal curves, a new concept based on GAs was proposed. GAs are search algorithms that are able to find principal curves nearly indistinguishable from the true curves. The approach is flexible enough to be effective for a wide range of complex data with self-intersecting characteristics, high curvature, and significant dispersion.
By using a simple GA made up of crossover and mutation, a new approach was used to construct the principal curves. The algorithm was applied to a synthetic dataset and to the problem of fault detection on the Tennessee Eastman Process. The results demonstrate the improved performance of the proposed method on real problems. This leads us to conclude that GAs are a very powerful method for the construction of principal curves in higher dimensions, or even of principal surfaces. This will be our main goal in future work.
REFERENCES
[1] Anderson T.W. Asymptotic theory for principal component analysis. Annals of Mathematical Statistics, 2009; 45:1593-1600.
[2] Gertler J, Formiguera R, Vicenç I. Leak detection and isolation in water distribution networks using principal component analysis and structured residuals. Conference on Control and Fault Tolerant Systems, 2010; 191-196.
[3] Mina J, Verde C, Sanchez-Parra M, Ortega F. Fault isolation with principal components structured models for a gas turbine. In: 2008 American Control Conference; 4268-4273.
[4] Tharrault Y. Diagnostic de fonctionnement par analyse en composantes principales : application à une station de traitement des eaux usées. Institut National Polytechnique de Lorraine, Nancy, France, 2008.
[5] Hastie T, Stuetzle W. Principal curves. Journal of the American Statistical Association 1989; 84:502-512.
[6] Delicado P, Huerta M. Principal curves of oriented points: theoretical and computational improvements. Computational Statistics 2003; 18:293-315.
[7] Kégl B, Krzyzak A. Piecewise linear skeletonization using principal curves. IEEE Transactions on Pattern Analysis and Machine Intelligence 2002; 24:59-74.
[8] Tibshirani R. Principal curves revisited. Statistics and Computing 1992; 2:183-190.
[9] Banfield J.D, Raftery A.E. Ice floe identification in satellite images using mathematical morphology and clustering about principal curves. Journal of the American Statistical Association 1992; 87:7-16.
[10] Chang K, Ghosh J. A unified model for probabilistic principal surfaces. IEEE Transactions on Pattern Analysis and Machine Intelligence 2001; 23(1).
[11] Kégl B, Krzyzak A, Linder T. Learning and design of principal curves. IEEE Transactions on Pattern Analysis and Machine Intelligence 2000; 22:281-297.
[12] Delicado P. Another look at principal curves and surfaces. Journal of Multivariate Analysis 2001; 7:84-116.
[13] Einbeck J, Tutz G, Evers L. Local principal curves. Statistics and Computing 2005; 15:301-313.
[14] Verbeek J.J, Vlassis N, Krose B. A k-segments algorithm for finding principal curves. Pattern Recognition Letters 2002; 23:1009-1017.
[15] Goldberg D, Korb B, Deb K. Messy genetic algorithms: motivation, analysis, and first results. Journal of Complex Systems 1989; 3:493-530.
[16] Cha S.H, Tappert C. A genetic algorithm for constructing compact binary decision trees. Journal of Pattern Recognition Research 2009; 4:1-13.
[17] Akbari Z. A multilevel evolutionary algorithm for optimizing numerical functions. International Journal of Industrial Engineering Computations 2011; 2:419-430.
[18] Zhang J, Chung H, Lo W. Clustering-based adaptive crossover and mutation probabilities for genetic algorithms. IEEE Transactions on Evolutionary Computation 2007; 11:326-335.
[19] Downs J.J, Vogel E. A plant-wide industrial process control problem. Computers and Chemical Engineering 1993; 17:245-255.
[20] Downs J.J, Vogel E. A plant-wide industrial process control problem. Computers and Chemical Engineering 1993; 17:245-255.
[21] Kresta J.V, MacGregor J.F, Marlin T.E. Multivariate statistical monitoring of process operating performance. Canadian Journal of Chemical Engineering 1991; 69:34-47.