Advanced Computational Intelligence: An International Journal (ACII), Vol.3, No.2, April 2016
DOI: 10.5121/acii.2016.3204
AUTOMATIC UNSUPERVISED DATA CLASSIFICATION
USING JAYA EVOLUTIONARY ALGORITHM
Ramachandra Rao Kurada¹ and Dr. Karteeka Pavan Kanadam²
¹ Asst. Prof., Department of Computer Science & Engineering, Shri Vishnu Engineering College for Women, Bhimavaram
² Professor, Department of Information Technology, RVR & JC College of Engineering, Guntur
ABSTRACT
In this paper we solve an automatic clustering problem by optimizing multiple objectives, namely automatic k-determination and a set of cluster validity indices (CVIs), concurrently. The proposed automatic clustering technique uses the recently introduced Jaya algorithm as its underlying optimization strategy. This evolutionary technique aims to reach the global best solution rather than a local best solution, even on larger datasets. The exploration and exploitation built into the proposed method detect the number of clusters automatically, recover the appropriate partitioning present in the data sets, and drive the CVIs towards near-optimal values. Twelve datasets of varying complexity are used to validate the performance of the proposed algorithm. The experiments show that the theoretical advantages of multi-objective clustering optimized with evolutionary approaches translate into realistic and scalable performance gains.
KEYWORDS
Multi-objective optimization, evolutionary clustering, automatic clustering, cluster validity indices, Jaya evolutionary algorithm.
1. INTRODUCTION
For the past three decades, most optimization problems have involved multiple objectives, and evolutionary computation methodologies have attracted attention for such problems because of the simplicity of their evolutionary calculations. The advantages of these evolutionary approaches are their flexibility to add, remove, or modify any prerequisite of the problem formulation, their generation of a comparative Pareto set, and their ability to tackle higher complexities than mainstream methods. These robust and powerful search procedures generally maintain a set of candidate solutions, a selection procedure for mating, and operators that segment and re-assemble sets of solutions to produce new ones. This is reflected in the rapidly increasing interest in evolutionary clustering with multi-objective optimization [1].
Data clustering is recognized as the most prominent unsupervised technique in machine learning. It partitions a given dataset into homogeneous groups according to some similarity/dissimilarity metric. Conventional clustering algorithms usually make prior assumptions about the cluster structure and adopt a suitable objective function that can be optimized with classical or metaheuristic techniques. These approaches perform poorly when the clustering assumptions do not hold in the data [2].
The natural paradigm of fitting the data distribution over the entire feature space and discovering the exact number of partitions is violated in single-objective clustering when distinct regions of the feature space contain clusters of diverging shape. Estimating a combined solution that is stable, confident, and less sensitive to noise is unattainable by any single-objective clustering algorithm. Multi-objective clustering can be viewed as a special case of multi-objective optimization, targeting to concurrently optimize several trade-offs among numerous objectives under specific constraints. The main aim of multi-objective clustering is to decompose a dataset into comparable groups by exploiting the multiple objectives simultaneously [3-4].
In this paper, we provide a clustering algorithm built on the Jaya evolutionary algorithm [15] to solve a large set of objectives: achieving true automatic k-determination, inducing a suitable partitioning of the data sets, and optimizing a set of cluster validity indices (CVIs) simultaneously so as to encourage the most favourable convergence of the final solutions. To achieve high intra-cluster similarity and low inter-cluster similarity, the algorithm uses CVIs as objective functions, as suggested in [5]. The internal and external validity indices used as fitness functions in this paper are the Rand, Adjusted Rand, Silhouette, CS, Davies–Bouldin, and Xie–Beni indices [6].
The remainder of this paper is organized as follows. Section II presents a review of recent automatic clustering algorithms. Section III describes the original Jaya evolutionary algorithm and the proposed AutoJAYA algorithm. The effectiveness of our scheme is discussed in Section IV. Finally, Section V concludes this paper.
2. LITERATURE REVIEW
The survey published by Mukhopadhyay, Maulik and Bandyopadhyay in 2015 argues the importance of using multi-objective clustering in domains such as image segmentation, bioinformatics, and web mining with real-time applications. The survey stresses the value of multi-objective clustering for optimizing multiple objective functions simultaneously. The authors highlight techniques for encoding, the selection of objective functions, evolutionary operators, schemes for maintaining non-dominated solutions, and the selection of a final solution [7].
In order to improve search capability, in 2015 Abadi and Rezaei proposed a strategy combining continuous ant colony optimization and particle swarm optimization with a genetic algorithm; the reported results demonstrated high capacity and robustness [8].
In 2015, Ozturk, Hancer and Karaboga used an improved binary artificial bee colony algorithm for dynamic (automatic) clustering, with the Jaccard coefficient as the similarity measure between binary vectors [9]. In 2014, Kumar and Chhabra applied the gravitational search algorithm to real-life problems where prior information about the number of clusters is not known, attaining automatic segmentation of both grayscale and colour images [10].
In 2014, Kuo, Huang, Lin, Wu and Zulvia determined the appropriate number of clusters and assigned data points to the correct clusters, using a kernel function to increase clustering capability; in this study they employed bee colony optimization to attain stable and accurate results [11]. In 2014, Wikaisuksakul presented a multi-objective genetic algorithm for data clustering that handles overlapping clusters with multiple objectives using the fuzzy c-means method. Real-coded values are encoded in a string to represent cluster centres, and Pareto solutions corresponding to a trade-off between the two objectives are finally produced [12].
In 2014, Mukhopadhyay, Maulik, Bandyopadhyay and Coello Coello published a two-part survey on multi-objective evolutionary algorithms for data mining in IEEE Transactions on Evolutionary Computation [13-14]. Part I covers the basic concepts of multi-objective optimization in data mining and evolutionary approaches for feature selection and classification. In Part II the authors review multi-objective evolutionary algorithms for association rules, clustering, and other data mining tasks.
3. AUTOMATIC CLUSTERING ALGORITHM - AUTOJAYA
This paper attempts to discover the exact number of proper partitions in datasets automatically, without any human intervention during algorithm execution. The selection of a final solution is posed as a multi-objective optimization problem in which a set of cluster validity indices is optimized concurrently. The proposed multi-objective clustering technique uses the recently developed Jaya evolutionary algorithm [15], adapted as a multi-objective optimization method, as the underlying optimization strategy. Points are assigned to the selected cluster centres based on Euclidean distance. The Rand, Adjusted Rand, Silhouette, CS, Davies–Bouldin and Xie–Beni CVIs are optimized simultaneously to endorse the validity of the proposed algorithm. Ultimately, the algorithm is able to identify both the best possible number of clusters and the proper partitioning of the dataset. The efficiency of the proposed algorithm is shown on twelve real-time data sets of varying complexity. The results of this multi-objective clustering technique are presented in Table 1 and Table 2.
3.1. INITIALIZATION
To initialize the candidate solutions, the cluster centres are encoded as chromosomes. The population α of candidate solutions is initialized randomly with n rows and m columns. The set of solutions is represented as

$\alpha_{i,j}(0) = \alpha_j^{\min} + \mathrm{rand}(1) \cdot \left(\alpha_j^{\max} - \alpha_j^{\min}\right)$

and each solution contains $\mathrm{Max}_k$ selected cluster centres, each paired with a randomly chosen activation threshold in [0, 1].
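The initialization above can be sketched in a few lines of NumPy. This is an illustrative reading, not the authors' code; the names `init_population`, `pop_size`, `n_dims` and `max_k` are our own.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_population(pop_size, n_dims, lower, upper, max_k):
    """Random candidates alpha_{i,j}(0) = alpha_min + rand(1) * (alpha_max - alpha_min).

    Each candidate carries max_k cluster centres plus max_k activation
    thresholds in [0, 1]; a threshold above 0.5 marks a centre as active.
    """
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    # pop_size candidates, each holding max_k centres in the n_dims feature space
    centres = lower + rng.random((pop_size, max_k, n_dims)) * (upper - lower)
    thresholds = rng.random((pop_size, max_k))
    return centres, thresholds
```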
3.2. OBJECTIVE / FITNESS FUNCTIONS
A straightforward way to pose clustering as an optimization problem is to optimize some CVIs
that reflect the goodness of the clustering solutions. The correctness or accuracy of any
optimization method depends on its objective or fitness function being used in the algorithm [2-
3]. In this manner, it is regular to instantaneously advance with numerous of such measures for
optimizing distinctive attributes of data. To compute the distance between the centroid and
candidate solutions Euclidean distance measure is used, along with it the other objective functions
optimized simultaneously are the RI, ARI, DB, CS, XI, SIL CVIs [6].
.
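As an illustration of using CVIs as fitness functions, the sketch below evaluates a candidate partitioning with scikit-learn implementations of four of the listed indices (RI, ARI, SIL, DB). The paper does not name any library, so this pairing is an assumption, and the CS and Xie–Beni indices would need custom implementations.

```python
from sklearn.metrics import (adjusted_rand_score, rand_score,
                             silhouette_score, davies_bouldin_score)

def evaluate_candidate(X, labels, true_labels=None):
    """Compute a dictionary of CVI values for one candidate partitioning."""
    scores = {
        "SIL": silhouette_score(X, labels),     # internal index, higher is better
        "DB": davies_bouldin_score(X, labels),  # internal index, lower is better
    }
    if true_labels is not None:                 # external indices need ground truth
        scores["RI"] = rand_score(true_labels, labels)
        scores["ARI"] = adjusted_rand_score(true_labels, labels)
    return scores
```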
3.3. JAYA EVOLUTIONARY ALGORITHM
Jaya is a simple, powerful optimization algorithm proposed by R. Venkata Rao in 2015 for solving constrained and unconstrained optimization problems [15]. The algorithm is predicated on the idea that the solution obtained for a given problem should move towards the best solution and away from the worst solution. This evolutionary approach does not require any algorithm-specific control parameters; it needs only the common control parameters. The working procedure of this evolutionary method is as follows:
Let $f(\alpha)$ be the objective function to be minimized or maximized. At any iteration $i$, assume that there are $m$ design variables ($j = 1, 2, \ldots, m$) and $n$ candidate solutions (i.e. the population size, $k = 1, 2, \ldots, n$). Let the best candidate obtain the best value of $f(\alpha)$ (i.e. $f(\alpha)_{\text{best}}$) among all candidate solutions, and the worst candidate obtain the worst value of $f(\alpha)$ (i.e. $f(\alpha)_{\text{worst}}$). If $\alpha_{j,k,i}$ is the value of the $j$-th variable for the $k$-th candidate during the $i$-th iteration, then this value is modified as per the following equation:

$\alpha'_{j,k,i} = \alpha_{j,k,i} + r_{1,j,i}\left(\alpha_{j,\text{best},i} - \left|\alpha_{j,k,i}\right|\right) - r_{2,j,i}\left(\alpha_{j,\text{worst},i} - \left|\alpha_{j,k,i}\right|\right) \qquad (1)$

where $\alpha_{j,\text{best},i}$ is the value of variable $j$ for the best candidate, $\alpha_{j,\text{worst},i}$ is the value of variable $j$ for the worst candidate, $\alpha'_{j,k,i}$ is the updated value of $\alpha_{j,k,i}$, and $r_{1,j,i}$ and $r_{2,j,i}$ are two random numbers in the range [0, 1] drawn for the $j$-th variable during the $i$-th iteration. The term $r_{1,j,i}(\alpha_{j,\text{best},i} - |\alpha_{j,k,i}|)$ indicates the tendency of the solution to move closer to the best solution, and the term $r_{2,j,i}(\alpha_{j,\text{worst},i} - |\alpha_{j,k,i}|)$ indicates the tendency of the solution to avoid the worst solution. $\alpha'_{j,k,i}$ is accepted if it gives a better function value. At the end of each iteration all accepted function values are retained and fed as inputs to the next iteration. The algorithm thus moves towards the best solution while avoiding the worst.
The steps in the Jaya algorithm are as follows:
1. Initialize the population size, the number of design variables, and the termination condition.
2. Identify the best and worst solutions in the population.
3. Modify the solutions based on the best and worst solutions using (1).
4. If the solution corresponding to $\alpha'_{j,k,i}$ is better than the solution corresponding to $\alpha_{j,k,i}$, accept it and replace the previous solution.
5. Else, keep the previous solution.
6. If the termination criterion is satisfied, report the current best as the optimum solution.
7. Else, go to Step 2.
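A minimal single-objective reading of these steps, assuming a real-valued search space with greedy acceptance and bound clipping, is sketched below; the update line implements Eq. (1) directly. The function and parameter names are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def jaya(f, lower, upper, pop_size=20, n_iter=100):
    """Minimize f with the Jaya update of Eq. (1):
    alpha' = alpha + r1*(alpha_best - |alpha|) - r2*(alpha_worst - |alpha|)."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = lower + rng.random((pop_size, lower.size)) * (upper - lower)
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(n_iter):
        best, worst = pop[fit.argmin()], pop[fit.argmax()]   # step 2
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))  # step 3, Eq. (1)
        cand = np.clip(cand, lower, upper)
        cand_fit = np.apply_along_axis(f, 1, cand)
        improved = cand_fit < fit                             # steps 4-5: greedy acceptance
        pop[improved], fit[improved] = cand[improved], cand_fit[improved]
    return pop[fit.argmin()], fit.min()

# Example usage: minimize the sphere function in 5 dimensions.
best_x, best_f = jaya(lambda x: np.sum(x**2), [-5] * 5, [5] * 5)
```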
3.4. PROPOSED AUTOJAYA ALGORITHM
The working procedure of the proposed AutoJAYA algorithm is as follows:
1. Initialize the population of candidate solutions α randomly, with n rows and m columns.
2. Represent the set of solutions as $\alpha_{i,j}(0) = \alpha_j^{\min} + \mathrm{rand}(1)\cdot(\alpha_j^{\max} - \alpha_j^{\min})$, where each solution contains $\mathrm{Max}_k$ selected cluster centres paired with randomly chosen activation thresholds in [0, 1].
3. The fitness function to be maximized by default is the Rand Index; each solution of the current generation is represented by its design variables.
4. Mark the cluster centres whose activation threshold is greater than 0.5 as active (best candidate solutions) and those below 0.5 as inactive (worst candidate solutions).
5. For each generation, do:
   a. For each data vector, calculate its distance from all active cluster centres using the Euclidean distance.
   b. Assign the data vector to the closest cluster.
   c. Evaluate the quality of each candidate solution using the fitness functions and identify the best and worst solutions.
   d. Modify the solutions based on the best and worst solutions using (1).
6. If the modified solution is better than the previous solution, accept it and replace the previous solution; else keep the previous solution.
A sketch of the decoding and assignment in steps 4 and 5 is given below.
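This sketch of steps 4 and 5(a-b) assumes each candidate carries one activation threshold per centre, as in Section 3.1. The guard that keeps at least two active centres is our assumption, since the paper does not say how degenerate candidates are handled, and the name `decode_and_assign` is illustrative.

```python
import numpy as np

def decode_and_assign(X, centres, thresholds):
    """X: (n, d) data; centres: (max_k, d); thresholds: (max_k,) in [0, 1]."""
    active = centres[thresholds > 0.5]            # step 4: active cluster centres
    if len(active) < 2:                           # assumed guard: keep >= 2 clusters
        active = centres[np.argsort(thresholds)[-2:]]
    # steps 5a-b: Euclidean distance to every active centre, nearest centre wins
    dists = np.linalg.norm(X[:, None, :] - active[None, :, :], axis=2)
    return dists.argmin(axis=1), active
```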
4. EXPERIMENTAL ANALYSIS AND DISCUSSION
In this section, we report experiments that use multi-objective clustering to identify partitions in a diverse set of datasets. The performance of the proposed algorithm is evident from the results obtained under the chosen criteria: automatic k-detection, minimal consumption of CPU time, a low error rate, and near-ideal CVI values. The number of independent runs is fixed at 30 for all datasets. Tables 1 and 2 report the results of the AutoJAYA algorithm on real-time datasets extracted from the UCI Machine Learning Repository [18]. The best results are shown in bold face.
Table 1. Results of Automatic Clustering algorithms Real-time datasets
| Datasets (size*dim, k) | No. of auto clusters | CPU time (sec) | % of error rate | Mean ARI | Mean RI | Mean HIM | Mean SIL | Mean CS | Mean DB |
|---|---|---|---|---|---|---|---|---|---|
| Iris (150*4, 3) | 3.01 | 19.45 | 10.11 | 0.9815 | 0.9987 | 0.0922 | 0.9214 | 0.8416 | 0.7152 |
| Wine (178*13, 3) | 3.00 | 105.32 | 40.64 | 0.8414 | 0.7912 | 0.4910 | 0.6048 | 0.6417 | 0.8915 |
| Glass (214*9, 6) | 6.00 | 114.23 | 30.78 | 0.8000 | 0.9000 | 0.6018 | 0.6980 | 0.5297 | 1.0050 |
| Ionosphere (351*3, 4) | 2.00 | 30.12 | 8.01 | 0.9580 | 0.9877 | 0.9632 | 1.2580 | 1.0470 | 0.9784 |
| Ecoil (336*7, 8) | 8.00 | 45.12 | 11.48 | 0.9587 | 1.0145 | 0.9964 | 1.0258 | 0.9478 | 0.7859 |
| Rocks (208*60, 2) | 2.01 | 74.20 | 12.45 | 0.9971 | 0.9999 | 1.0000 | 1.0001 | 0.9478 | 0.9481 |
| Parkinson (195*22, 2) | 2.10 | 42.14 | 12.08 | 0.9240 | 0.9920 | 0.9974 | 0.9608 | 0.9814 | 1.0040 |
| Diabetic (768*9, 2) | 2.00 | 74.25 | 10.45 | 0.9871 | 0.9997 | 1.0024 | 0.9999 | 0.8478 | 0.8481 |
| Segment (1500*20, 2) | 3.01 | 1041.02 | 14.12 | 0.9631 | 0.9941 | 0.0786 | 0.9740 | 0.8994 | 0.4448 |
| Weighting (500*8, 2) | 5.99 | 110.29 | 15.23 | 1.0019 | 0.9663 | 0.0222 | 1.0025 | 0.9648 | 0.2560 |
| Sonar (208*60, 2) | 1.98 | 67.20 | 24.23 | 0.9988 | 1.0205 | 0.9635 | 0.9845 | 0.8458 | 0.8932 |
| Rippleset (250*3, 2) | 2.00 | 10.23 | 5.48 | 1.2800 | 1.0200 | 0.9874 | 1.0000 | 0.9990 | 0.9011 |
AutoJAYA recovers the exact number of clusters for Wine, Glass, Ionosphere, Ecoil, Diabetic, Sonar and Rippleset when compared with the actual number of clusters (k) shown in column 1 of Table 1. Rippleset is the dataset on which AutoJAYA consumes the least CPU time among all the datasets of varying size and complexity. In general, the CPU time consumed across the datasets ranges from 5.48 sec to 1041.02 sec and depends purely on the volume and complexity of the dataset. Likewise, the lowest error rates are logged for the Rippleset (5.48%) and Ionosphere (8.01%) datasets, and across all datasets the error rate lies between 5.48% and 40.64%.
Optimal mean values are registered for RI on Iris, DB on Wine, DB on Glass, CS on Ionosphere, RI on Ecoil, and HIM on the Rocks dataset, endorsing the validity of the algorithm. The CVIs DB on Parkinson, RI on Diabetic, RI on Segment, ARI on Weighting, RI on Sonar and SIL on Rippleset follow the same tendency, approaching the optimal frontier of each CVI. All these observations support the strength of the proposed algorithm in obtaining favourable results. The automatic clusters generated by the AutoJAYA algorithm are shown in Figure 1.
Figure 1. Automatic clusters produced by AutoJAYA in Glass and Weighting datasets
Table 2. Values of F-measure, ROC and SSE in real-time datasets using AutoJAYA algorithm
| Datasets | F-Measure | ROC | SSE |
|---|---|---|---|
| Iris | 0.940 | 0.955 | 7.81 |
| Wine | 1.115 | 1.132 | 9.25 |
| Glass | 0.186 | 0.472 | 52.18 |
| Ionosphere | 0.501 | 0.485 | 726.10 |
| Ecoil | 0.479 | 0.464 | 695.10 |
| Rocks | 0.171 | 0.420 | 49.812 |
| Parkinson | 1.180 | 0.459 | 50.22 |
| Diabetic | 0.513 | 0.497 | 149.51 |
| Segment | 0.043 | 0.496 | 532.78 |
| Weighting | 0.893 | 0.941 | 520.9 |
| Sonar | 0.153 | 0.464 | 21.71 |
| Rippleset | 0.441 | 0.421 | 44.81 |
The F-measure, ROC area, and Sum of Squared Error (SSE) of the proposed algorithm on each real-time dataset are given in Table 2, and the variation of these values across the datasets is shown in Figure 2. It is observed from Table 1 and Figure 2 that the proposed algorithm obtains better results in most cases across the real-time datasets. A notable remark on Table 2 is that all datasets yield good values for F-measure and ROC area. The SSE value is very small for the Iris and Wine datasets and near-optimal for the remaining datasets.
The concluding remark after examining the applicability of the AutoJAYA algorithm on real-time datasets is that the proposed algorithm performs well on most datasets in identifying the exact number of automatic partitions, with minimal CPU consumption and a relatively low error rate. Hence these experiments support the claim that the advantages of multi-objective clustering optimized with evolutionary approaches translate into realistic and scalable performance gains.
Figure 2. Values of F-Measure, ROC and SSE rendered by AutoJAYA in real-time datasets
5. CONCLUSIONS
In this article, a novel multi-objective clustering technique, AutoJAYA, based on the newly developed Jaya evolutionary algorithm, is proposed. The exploration and exploitation enforced by the technique automatically determine the proper number of clusters and the proper partitioning of a given dataset, and drive the CVIs towards near-optimal values by optimizing the fitness functions simultaneously.
Furthermore, the proposed algorithm exhibits better performance on most of the considered real-time datasets and is able to recover appropriate partitions. Further work is needed to investigate the proposed algorithm with different and additional objectives, to compare it with well-established automatic clustering algorithms, and to test the approach more extensively over diverse engineering domains.
REFERENCES
[1] Zitzler, E., Laumanns, M., & Bleuler, S., "A tutorial on evolutionary multiobjective optimization", Metaheuristics for Multiobjective Optimisation, Springer Berlin Heidelberg, 2004, pp. 3-37.
[2] Saha, S., & Bandyopadhyay, S., "A new point symmetry based fuzzy genetic clustering technique for automatic evolution of clusters", Information Sciences, 179, 2009, pp. 3230-3246, doi:10.1016/j.ins.2009.06.013.
[3] Saha, S., & Bandyopadhyay, S., "A symmetry based multiobjective clustering technique for automatic evolution of clusters", Pattern Recognition, 43, 2010, pp. 738-751, doi:10.1016/j.patcog.2009.07.004.
[4] Hruschka, E. R., Campello, R. J. G. B., Freitas, A. A., & de Carvalho, A. C. P. L. F., "A survey of evolutionary algorithms for clustering", IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 39(2), 2009, pp. 133-155.
[5] Matake, N., Hiroyasu, T., Miki, M., & Senda, T., "Multiobjective clustering with automatic k-determination for large-scale data", GECCO'07, July 7-11, 2007, London, England, United Kingdom, ACM 978-1-59593-697-4/07/0007.
[6] Rendón, E., Abundez, I., Arizmendi, A., & Quiroz, E. M., "Internal versus external cluster validation indexes", International Journal of Computers and Communications, 1(5), 2011.
[7] Mukhopadhyay, A., Maulik, U., & Bandyopadhyay, S. (2015). A survey of multiobjective evolutionary clustering. ACM Computing Surveys (CSUR), 47(4), 61.
[8] Abadi, M. F. H., & Rezaei, H. (2015). Data clustering using hybridization strategies of continuous ant colony optimization, particle swarm optimization and genetic algorithm. British Journal of Mathematics & Computer Science, 6(4), 336.
[9] Ozturk, C., Hancer, E., & Karaboga, D. (2015). Dynamic clustering with improved binary artificial bee colony algorithm. Applied Soft Computing, 28, 69-80.
[10] Kumar, V., Chhabra, J. K., & Kumar, D. (2014). Automatic cluster evolution using gravitational search algorithm and its application on image segmentation. Engineering Applications of Artificial Intelligence, 29, 93-103.
[11] Kuo, R. J., Huang, Y. D., Lin, C. C., Wu, Y. H., & Zulvia, F. E. (2014). Automatic kernel clustering with bee colony optimization algorithm. Information Sciences, 283, 107-122.
[12] Wikaisuksakul, S. (2014). A multi-objective genetic algorithm with fuzzy c-means for automatic data clustering. Applied Soft Computing, 24, 679-691.
[13] Mukhopadhyay, A., Maulik, U., Bandyopadhyay, S., & Coello Coello, C. (2014). A survey of multiobjective evolutionary algorithms for data mining: Part I. IEEE Transactions on Evolutionary Computation, 18(1), 4-19.
[14] Mukhopadhyay, A., Maulik, U., Bandyopadhyay, S., & Coello Coello, C. (2014). Survey of multiobjective evolutionary algorithms for data mining: Part II. IEEE Transactions on Evolutionary Computation, 18(1), 20-35.
[15] Rao, R. V., "Jaya: A simple and new optimization algorithm for solving constrained and unconstrained optimization problems", International Journal of Industrial Engineering Computations, 7, 2016, doi:10.5267/j.ijiec.2015.8.004.
[16] Kurada, R. R., Kanadam, K. P., & Appa Rao, A., "Automatic Teaching-Learning-Based Optimization: A novel clustering method for gene functional enrichments", Computational Intelligence Techniques for Comparative Genomics, SpringerBriefs in Applied Sciences and Technology, 2015, doi:10.1007/978-981-287-338-5.
[17] Kurada, R. R., & Kanadam, K. P., "A generalized automatic clustering algorithm using improved TLBO framework", International Journal of Applied Sciences and Engineering Research, 4(4), 2015, ISSN 2277-9442.
[18] Lichman, M. (2013). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.