This research aims to develop kernel GNG, a kernelized version of the growing neural gas (GNG)
algorithm, and to investigate the features of the networks generated by the kernel GNG. The GNG is an
unsupervised artificial neural network that can transform a dataset into an undirected graph, thereby
extracting the features of the dataset as a graph. The GNG is widely used in vector quantization,
clustering, and 3D graphics. Kernel methods are often used to map a dataset to feature space, with support
vector machines being the most prominent application. This paper introduces the kernel GNG approach
and explores the characteristics of the networks it generates. Five kernels are examined: the Gaussian, Laplacian, Cauchy, inverse multiquadric (IMQ), and log kernels. The results show that the average degree and the average clustering coefficient decrease as the kernel parameter increases for the Gaussian, Laplacian, Cauchy, and IMQ kernels. Consequently, if fewer edges and a lower clustering coefficient (i.e., fewer triangles) are desired, a kernel GNG with a larger parameter value is the more appropriate choice.
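The paper's exact kernel definitions are not reproduced in this abstract, but the five kernels it names have common textbook forms. The sketch below uses those standard forms; the precise parameterizations (and the meaning of "the kernel parameter") are assumptions, not the paper's definitions.

```python
import numpy as np

# Common textbook forms of the five kernels named above; the exact
# parameterizations used in the paper may differ (assumption).
def gaussian(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def laplacian(x, y, gamma=1.0):
    return np.exp(-gamma * np.linalg.norm(x - y))

def cauchy(x, y, gamma=1.0):
    return 1.0 / (1.0 + gamma * np.sum((x - y) ** 2))

def inverse_multiquadric(x, y, c=1.0):
    return 1.0 / np.sqrt(np.sum((x - y) ** 2) + c ** 2)

def log_kernel(x, y, d=2.0):
    # Conditionally positive-definite "log" kernel
    return -np.log(np.linalg.norm(x - y) ** d + 1.0)

x = np.array([0.0, 0.0])
y = np.array([1.0, 0.0])
for k in (gaussian, laplacian, cauchy, inverse_multiquadric, log_kernel):
    print(k.__name__, k(x, y))
```

All five are decreasing functions of the distance between x and y, which is what makes them usable as similarity measures in place of the Euclidean distance the standard GNG uses.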
Half Gaussian-based wavelet transform for pooling layer for convolution neura... (TELKOMNIKA JOURNAL)
Pooling methods select the most significant features of a small region for aggregation. This paper proposes a new pooling method based on a probability function. Building on the observation that most of a signal's information is concentrated between its mean and its maximum values, the upper half of a Gaussian function is used to determine the weights of basic signal statistics, which transform the original signal into a more concise form that still represents its features; the method is named the half Gaussian transform (HGT). Based on the transform-computation strategy, three methods are proposed: the first (HGT1) normalizes basic statistics and uses them as weights multiplied by the original signal; the second (HGT2) uses the computed statistics as features of the original signal and multiplies them by constant weights based on the half Gaussian; the third (HGT3) works like HGT1 except that it depends on the entire signal. The proposed methods are applied to three databases: MNIST, CIFAR-10, and MIT-BIH ECG. The experimental results show that the methods achieve a clear improvement, outperforming standard pooling methods such as max pooling and average pooling.
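The abstract does not give the HGT formulas, so the following is only one plausible reading of the core idea, with all names and details assumed: sample weights from the upper half of a Gaussian, normalize them, and apply the heaviest weights to the largest order statistics of the pooling window, yielding a result between average pooling and max pooling.

```python
import numpy as np

def half_gaussian_weights(n, sigma=1.0):
    # Sample the upper half of a Gaussian (z >= 0) and normalize to sum to 1.
    # This is an illustrative assumption, not the paper's exact weighting.
    z = np.linspace(0.0, 2.0, n)
    w = np.exp(-0.5 * (z / sigma) ** 2)
    return w / w.sum()

def half_gaussian_pool(window, sigma=1.0):
    # Sort window values ascending and give the heaviest weight to the
    # largest value, so the result leans toward the maximum.
    stats = np.sort(np.ravel(window))
    w = half_gaussian_weights(stats.size, sigma)[::-1]
    return float(np.dot(w, stats))

window = np.array([[1.0, 2.0], [3.0, 4.0]])
print(half_gaussian_pool(window))
```

By construction the pooled value lies between the window's mean and its maximum, which matches the stated motivation of emphasizing the mean-to-maximum range.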
Node classification with graph neural network based centrality measures and f... (IJECEIAES)
Graph neural networks (GNNs) are a new research topic in data science in which graph-structured data are used as key components for developing and training neural networks. A GNN learns the importance weight of each neighbor to perform message aggregation, in which the feature vectors of all neighbors are aggregated without considering whether the features are useful. Using more informative features positively affects the performance of the GNN model. Therefore, in this paper, i) after selecting a subset of features to define important node features, we present new graph-feature explanation methods based on graph centrality measures to capture rich information and determine the most important node in a network. Our experiments show that selecting certain subsets of these features, and adding other features based on centrality measures, can lead to better performance across a variety of datasets; and ii) we introduce a major design strategy for graph neural networks: specifically, we suggest batch renormalization as the normalization over GNN layers. Combining these techniques, with centrality-based feature representations passed to a multilayer perceptron (MLP) layer and then to an adjusted GNN layer, the proposed model achieves greater accuracy than modern GNN models.
Black-box modeling of nonlinear system using evolutionary neural NARX model (IJECEIAES)
Nonlinear systems with uncertainty and disturbance are very difficult to model with a mathematical approach, so a black-box modeling approach requiring no prior knowledge is necessary. Several approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network combining a neural network with a modified differential evolution algorithm is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling are tested on a piezoelectric-actuator SISO system and an experimental quadruple-tank MIMO system.
COUPLED FPGA/ASIC IMPLEMENTATION OF ELLIPTIC CURVE CRYPTO-PROCESSOR (IJNSA Journal)
In this paper, we propose an elliptic curve key generation processor over GF(2^163) based on the Montgomery scalar multiplication algorithm. The new architecture uses a polynomial basis. The finite field operations use a cellular automata multiplier and Fermat's algorithm for inversion. For real-time implementation, the architecture has been tested with ISE 9.1 software on a Xilinx Virtex II Pro FPGA as well as on ASIC CMOS 45 nm technology. The proposed implementation achieves a time of 2.07 ms and uses 38 percent of the slices of the Xilinx Virtex II Pro FPGA. These figures demonstrate the high efficiency of this design.
MEDIAN BASED PARALLEL STEERING KERNEL REGRESSION FOR IMAGE RECONSTRUCTION (cscpconf)
Image reconstruction is the process of obtaining the original image from corrupted data. Applications of image reconstruction include computed tomography, radar imaging, and weather forecasting. Recently, the steering kernel regression method has been applied to image reconstruction [1]. This technique has two major drawbacks: first, it is computationally intensive; second, its output suffers from spurious edges (especially in the case of denoising). We propose a modified version of steering kernel regression called the Median Based Parallel Steering Kernel Regression Technique. In the proposed algorithm, the first problem is overcome by implementing it on GPUs and multi-cores, and the second is addressed by a gradient-based suppression that uses a median filter. Our algorithm gives better output than steering kernel regression, with results compared using the root mean square error (RMSE). It also achieves a speedup of 21x on GPUs and 6x on multi-cores.
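The RMSE used to compare the reconstructions is a standard metric; a minimal sketch (the toy arrays are purely illustrative):

```python
import numpy as np

def rmse(reference, reconstructed):
    """Root mean square error between a reference image and a reconstruction."""
    reference = np.asarray(reference, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return np.sqrt(np.mean((reference - reconstructed) ** 2))

clean = np.zeros((4, 4))
noisy = clean + 0.5          # a constant error of 0.5 at every pixel
print(rmse(clean, noisy))    # -> 0.5
```

A lower RMSE against the clean reference indicates a better reconstruction, which is how the speedup-preserving quality claim above would be quantified.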
This paper proposes a clustering algorithm based on the Self-Organizing Map (SOM) method. To find the optimal number of clusters, our algorithm uses the Davies-Bouldin index, which has not previously been used in the multi-SOM. The proposed algorithm is compared with three clustering methods on five databases. The results show that our algorithm performs as well as the competing methods.
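The Davies-Bouldin index mentioned above has a standard definition: for each cluster, take the worst ratio of summed within-cluster scatters to between-centroid distance, then average. A plain-NumPy sketch follows (scikit-learn also ships this metric as sklearn.metrics.davies_bouldin_score); the toy data are illustrative only.

```python
import numpy as np

def davies_bouldin(X, labels):
    # Davies-Bouldin index: lower values indicate better-separated,
    # more compact clusters.
    clusters = np.unique(labels)
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    # s[i]: mean distance of cluster i's members to its centroid
    s = np.array([np.mean(np.linalg.norm(X[labels == c] - centroids[i], axis=1))
                  for i, c in enumerate(clusters)])
    k = len(clusters)
    worst = np.zeros(k)
    for i in range(k):
        ratios = [(s[i] + s[j]) / np.linalg.norm(centroids[i] - centroids[j])
                  for j in range(k) if j != i]
        worst[i] = max(ratios)
    return worst.mean()

X = np.array([[0, 0], [0, 1], [10, 10], [10, 11]], dtype=float)
labels = np.array([0, 0, 1, 1])
print(davies_bouldin(X, labels))   # small value: two tight, distant clusters
```

Scanning this index over candidate cluster counts and keeping the minimum is the usual way it selects the number of clusters.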
COMPARATIVE PERFORMANCE ANALYSIS OF RNSC AND MCL ALGORITHMS ON POWER-LAW DIST... (acijjournal)
Cluster analysis of graph-related problems is an important issue nowadays. Different graph clustering techniques have appeared in the field, but most are weak in terms of effectiveness and fragmentation of output in real-world applications across diverse systems. In this paper, we provide a comparative behavioural analysis of the RNSC (Restricted Neighbourhood Search Clustering) and MCL (Markov Clustering) algorithms on power-law distribution graphs. RNSC is a graph clustering technique using stochastic local search; it tries to achieve an optimal-cost clustering by assigning cost functions to the set of clusterings of a graph. This algorithm was implemented by A. D. King only for undirected and unweighted random graphs. Another popular graph clustering algorithm, MCL, is based on a stochastic flow simulation model for weighted graphs. Power-law, or scale-free, graphs have plentiful applications in nature and society. Scale-free topology is stochastic, i.e., nodes are connected in a random manner. Complex network topologies such as the World Wide Web, the web of human sexual contacts, and the chemical network of a cell basically follow a power-law distribution in representing different real-life systems. This paper uses real large-scale power-law distribution graphs to analyze the performance of RNSC compared with Markov clustering (MCL). Extensive experimental results on several synthetic and real power-law distribution datasets demonstrate our approach to comparing these algorithms on the basis of clustering cost, cluster size, modularity index of the clustering results, and normalized mutual information (NMI).
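The MCL algorithm compared above has a compact core: alternate "expansion" (a matrix power, spreading flow along edges) with "inflation" (an elementwise power followed by column normalization, strengthening strong flows). A minimal dense-matrix sketch, with illustrative toy data (real MCL implementations use sparse matrices and pruning):

```python
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iterations=50):
    """Minimal Markov Clustering sketch: expansion then inflation, repeated."""
    A = np.asarray(adjacency, dtype=float)
    A = A + np.eye(len(A))               # self-loops, the standard MCL trick
    M = A / A.sum(axis=0)                # column-stochastic transition matrix
    for _ in range(iterations):
        M = np.linalg.matrix_power(M, expansion)  # expansion: flow spreads
        M = M ** inflation                        # inflation: strong flows win
        M = M / M.sum(axis=0)                     # renormalize columns
    return M

# Two triangles joined by a single bridge edge (2-3): MCL's flow should
# concentrate within each triangle rather than across the bridge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
M = mcl(A)
```

After convergence, the nonzero pattern of the columns encodes the clustering: each column's mass sits on the attractor node(s) of its cluster.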
A Novel Graph Representation for Skeleton-based Action Recognition (sipij)
Graph convolutional networks (GCNs) have proven effective for processing structured data, as they can capture the features of related nodes and improve model performance. Increasing attention is being paid to employing GCNs in skeleton-based action recognition, but existing GCN-based methods face several challenges.
New Approach of Preprocessing For Numeral Recognition (IJERA Editor)
This paper proposes a new preprocessing approach for handwritten, printed, and isolated numeral characters. The approach reduces the size of each numeral's input image by discarding redundant information, and it also reduces the number of features in the attribute vector produced by the feature extraction method. Numeral recognition is carried out using k-nearest neighbors and multilayer perceptron techniques. The simulations achieve a good recognition rate in less running time.
A NOVEL GRAPH REPRESENTATION FOR SKELETON-BASED ACTION RECOGNITION (sipij)
Graph convolutional networks (GCNs) have proven effective for processing structured data, as they can capture the features of related nodes and improve model performance. Increasing attention is being paid to employing GCNs in skeleton-based action recognition, but existing GCN-based methods face several challenges. First, the consistency of temporal and spatial features is ignored because features are extracted node by node and frame by frame. We design a generic representation of skeleton sequences for action recognition and propose a novel model called Temporal Graph Networks (TGN), which obtains spatiotemporal features simultaneously. Second, the adjacency matrix describing the relations between joints mostly depends on the physical connections between them. We propose a multi-scale graph strategy to describe these relations appropriately, adopting a full-scale graph, a part-scale graph, and a core-scale graph to capture the local features of each joint and the contour features of important joints. Extensive experiments are conducted on two large datasets, NTU RGB+D and Kinetics Skeleton, and the results show that TGN with our graph strategy outperforms other state-of-the-art methods.
Field Programmable Gate Array for Data Processing in Medical Systems (IOSR Journals)
Abstract: Two-dimensional and three-dimensional (3-D) image segmentation is one of the most demanding tasks in image processing. We propose the perfectly parallelizable 3-D Gray-Value Structure Code (3-D-GSC) for image segmentation on a new FPGA custom machine. It has been proven that only the 14-neighborship of a rhombic dodecahedron can satisfy the aforementioned requirements. The 3-D-GSC process is executed in three phases: a coding phase, a linking phase, and a splitting phase. An FPGA-based digital signal processing board optimized for applications needing large memory with high bandwidth has been developed and successfully used to parallelize a modern image segmentation algorithm for medical and industrial real-time applications. The use of this 128-bit coprocessor board is not limited to image segmentation. The board features an up-to-date Virtex-II Pro architecture, two large independent DDR-SDRAM channels, two fast independent ZBT-SRAM channels, and PCI-X bus and CameraLink interfaces. Key words: Field Programmable Gate Array, Segmentation, Vogel bruch, Gray-value structure code, Homogeneous, Linking phase, Coding phase, Splitting phase, SDRAM.
Parallel knn on gpu architecture using opencl (eSAT Journals)
Abstract: In data mining applications, one useful classification algorithm is the kNN algorithm. kNN search is widely used in many research and industrial domains, such as 3-dimensional object rendering, content-based image retrieval, statistics, and biology (gene classification). Despite improvements over the last decades, the computation time required by kNN search remains the bottleneck for kNN classification, especially in high-dimensional spaces. This bottleneck has created the need for parallel kNN on commodity hardware, and GPUs with the OpenCL architecture are a low-cost, high-performance platform for parallelizing the kNN classifier. Accordingly, we have designed and implemented a parallel kNN model to address this performance bottleneck. In this paper, we propose a parallel kNN algorithm on the GPU using the OpenCL framework. In our approach, the distance computations for the data points are distributed among all GPU cores, with multiple threads invoked per GPU core. We have implemented and tested our parallel kNN implementation on UCI datasets. The experimental results show that the speedup of the kNN algorithm improves over the serial implementation.
Keywords: kNN, GPU, CPU, Parallel Computing, Data Mining, Clustering Algorithm.
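The distance computations that the paper distributes across GPU cores are independent of one another, which is exactly what a vectorized formulation exposes. The CPU-side NumPy sketch below (illustrative data, not the paper's OpenCL kernel) computes the full query-to-train distance matrix in one shot; on a GPU, each matrix entry would be one thread's work.

```python
import numpy as np

def knn_predict(train_X, train_y, query_X, k=3):
    # All pairwise squared distances at once via broadcasting; each entry is
    # independent, so a GPU can assign it to a separate thread.
    d2 = ((query_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    idx = np.argpartition(d2, k, axis=1)[:, :k]   # k nearest per query row
    votes = train_y[idx]
    # Majority vote among the k neighbours' labels
    return np.array([np.bincount(v).argmax() for v in votes])

train_X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
train_y = np.array([0, 0, 1, 1])
queries = np.array([[0.0, 0.2], [5.0, 5.5]])
print(knn_predict(train_X, train_y, queries, k=3))   # -> [0 1]
```

The serial bottleneck the abstract mentions is precisely this distance matrix, which grows with (number of queries) x (number of training points).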
Improving The Performance of Viterbi Decoder using Window System (IJECEIAES)
An efficient Viterbi decoder, called the Viterbi decoder with window system, is introduced in this paper. Simulation results over Gaussian channels are obtained for rates 1/2, 1/3, and 2/3 joined to a TCM encoder with memory of order 2 or 3. These results show that the proposed scheme outperforms the classical Viterbi decoder by a gain of 1 dB. We also propose a function called RSCPOLY2TRELLIS for recursive systematic convolutional (RSC) encoders, which creates the trellis structure of an RSC encoder from the matrix "H". Moreover, we present a comparison between decoding algorithms for the TCM encoder, such as soft and hard Viterbi decoding, and the variants of the MAP decoder known as the BCJR or forward-backward algorithm, which performs very well in decoding TCM but depends on the size of the code, the memory, and the CPU requirements of the application.
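For orientation, the dynamic-programming core that all the decoders above share is the generic Viterbi recursion over a trellis: keep one best-scoring survivor path per state per step, then backtrack. The sketch below is a textbook max-log Viterbi on a tiny hand-made trellis, not the paper's windowed TCM decoder; the transition/emission values are invented for illustration.

```python
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """Generic max-log Viterbi: per step and state, keep the best survivor
    path score and a backpointer, then backtrack from the best final state."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans        # prev-state x next-state
        back[t] = cand.argmax(axis=0)            # survivor per next-state
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Two states; observations clearly favor state 0, 0, then 1.
log_emit = np.log(np.array([[0.9, 0.1], [0.9, 0.1], [0.1, 0.9]]))
log_trans = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
log_init = np.log(np.array([0.5, 0.5]))
print(viterbi(log_emit, log_trans, log_init))   # -> [0, 0, 1]
```

In a convolutional-code trellis the "states" are encoder memory contents and the "emission" scores are branch metrics; a windowed decoder bounds how far back the backtracking runs.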
Web spam classification using supervised artificial neural network algorithms (aciijournal)
Due to the rapid growth in the technology employed by spammers, there is a need for classifiers that are more efficient, generic, and highly adaptive. Neural-network-based technologies have a high ability to adapt as well as generalize, yet to our knowledge very little work has been done in this field using neural networks; we present this paper to fill that gap. It evaluates the performance of three supervised learning algorithms for artificial neural networks by creating classifiers for the complex problem of current web spam pattern classification: the conjugate gradient algorithm, resilient backpropagation learning, and the Levenberg-Marquardt algorithm.
CONTRAST OF RESNET AND DENSENET BASED ON THE RECOGNITION OF SIMPLE FRUIT DATA... (rinzindorjej)
In this paper, a fruit image dataset is used to compare the efficiency and accuracy of two widely used convolutional neural networks, ResNet and DenseNet, for the recognition of 50 different kinds of fruit. In the experiments, the ResNet-34 and DenseNet-BC-121 (with bottleneck layers) structures are used. The mathematical principles, experimental details, and experimental results are explained through comparison.
APPLICATION OF GENETIC ALGORITHM IN DESIGNING A SECURITY MODEL FOR MOBILE ADH... (cscpconf)
In recent years, the static shortest path (SP) problem has been well addressed using intelligent optimization techniques, e.g., artificial neural networks, genetic algorithms (GAs), and particle swarm optimization. However, with the advancement of wireless communications, more and more mobile wireless networks have appeared, e.g., mobile ad hoc networks (MANETs) and wireless sensor networks. One of the most important characteristics of mobile wireless networks is topology dynamics, i.e., the network topology changes over time due to energy conservation or node mobility. Therefore, the SP routing problem in MANETs turns out to be a dynamic optimization problem. GAs are able to find, if not the shortest, then at least an optimal path between source and destination among mobile ad-hoc network nodes, and we obtain an alternative or backup path to avoid repeated route discovery in the case of link failure or nodeex
Cosmetic shop management system project report.pdf (Kamal Acharya)
Buying new cosmetic products is difficult, and can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it is tough to interpret those ingredient lists unless you have a background in chemistry. Instead of buying and hoping for the best, we can use data science to help predict which products may be good fits for us. The system includes various function programs to carry out the tasks mentioned above, and data file handling is used effectively in the program.
The automated cosmetic shop management system should handle the automation of the shop's general workflow and administration processes. The main processes of the system focus on customer requests, for which the system can search for the most appropriate products and deliver them to the customers. It should help employees quickly identify the cosmetic products that have reached their minimum quantity, keep track of the expiry date of each product, and find the rack number in which a product is placed. It is also a faster and more efficient way of working.
Sachpazis: Terzaghi Bearing Capacity Estimation in simple terms with Calculati... (Dr. Costas Sachpazis)
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. It provides a method for calculating the ultimate bearing capacity of soil, i.e., the maximum load per unit area that the soil can support without undergoing shear failure. The calculation HTML code is included.
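The classic strip-footing form of this calculation is q_ult = c*Nc + q*Nq + 0.5*gamma*B*Ngamma, where q = gamma*D is the surcharge at the footing base. The sketch below uses the widely tabulated closed-form factors of Reissner/Prandtl with Vesic's Ngamma; note this is an assumption, since Terzaghi's original charts give slightly different factor values, and the input numbers are purely illustrative.

```python
import math

def bearing_factors(phi_deg):
    """Bearing-capacity factors Nc, Nq, Ngamma (Reissner/Prandtl forms,
    Vesic's Ngamma); Terzaghi's original charted factors differ slightly."""
    phi = math.radians(phi_deg)
    if phi_deg == 0:
        return math.pi + 2.0, 1.0, 0.0   # undrained (phi = 0) limits
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.radians(45.0) + phi / 2.0) ** 2
    nc = (nq - 1.0) / math.tan(phi)
    ng = 2.0 * (nq + 1.0) * math.tan(phi)
    return nc, nq, ng

def q_ultimate(c, gamma, depth, width, phi_deg):
    """Ultimate bearing capacity of a strip footing: c*Nc + q*Nq + 0.5*g*B*Ng."""
    nc, nq, ng = bearing_factors(phi_deg)
    surcharge = gamma * depth            # q = gamma * D at the footing base
    return c * nc + surcharge * nq + 0.5 * gamma * width * ng

# Illustrative inputs: c = 10 kPa, gamma = 18 kN/m3, D = 1 m, B = 2 m, phi = 30 deg
print(round(q_ultimate(10.0, 18.0, 1.0, 2.0, 30.0)), "kPa")
```

Each of the three terms maps to one physical contribution: cohesion, surcharge from the soil above the footing base, and the self-weight of the soil in the failure wedge.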
Similar to CHARACTERISTICS OF NETWORKS GENERATED BY KERNEL GROWING NEURAL GAS
of each joint and the contour features of important joints. Extensive experiments are conducted on two
large datasets including NTU RGB+D and Kinetics Skeleton. And the experiments results show that TGN
with our graph strategy outperforms other state-of-the-art methods.
Field Programmable Gate Array for Data Processing in Medical SystemsIOSR Journals
Abstract: Two- dimensional &Three–dimensional (3-D) image segmentation is on of the most demanding tasks in image processing. It has been proven that only the 14-neighborship of a rhombic dodecahedron can satisfy the aforementioned requirements. The 3-D-GSC process is executed in the following three phases ,coding phases,linking phases,splitting phases. An FPGA-based digital signal processing board optimized for applications needing large memory with high bandwidth has been developed and successfully used for the parallelization of a modern image segmentation algorithm for medical and industrial real-time applications.The Use of this 128-bit coprocessor board is not limited to image segmentation.We propose the perfectly parallelizable 3-D Gray-Value Structure Code (3-D-GSC) for image segmentation on a new FPGA custom machine. This 128-Bit FPGA coprocessing board features an up-to-date Virtex-II Pro architecture, two large independent DDR-SDRAM channels, two fast independent ZBT-SRAM channels, and PCI-X bus and CameraLink interfaces. Key words: Field Programmable Gate Array, Segme-ntation, Vogel bruch, Gray-valve structure code ,Homogeneous, Linking phase, Coding phase, Splitting phase, SDRAM.
Parallel knn on gpu architecture using opencleSAT Journals
Abstract In data mining applications, one of the useful algorithms for classification is the kNN algorithm. The kNN search has a wide usage in many research and industrial domains like 3-dimensional object rendering, content-based image retrieval, statistics, biology (gene classification), etc. In spite of some improvements in the last decades, the computation time required by the kNN search remains the bottleneck for kNN classification, especially in high dimensional spaces. This bottleneck has created the necessity of the parallel kNN on commodity hardware. GPU and OpenCL architecture are the low cost high performance solutions for parallelising the kNN classifier. In regard to this, we have designed, implemented our proposed parallel kNN model to improve upon performance bottleneck issue of kNN algorithm. In this paper, we have proposed parallel kNN algorithm on GPU and OpenCL framework. In our approach, we distributed the distance computations of the data points among all GPU cores. Multiple threads invoked for each GPU core. We have implemented and tested our parallel kNN implementation on UCI datasets. The experimental results show that the speedup of the KNN algorithm is improved over the serial performance.
Keywords: kNN, GPU, CPU, Parallel Computing, Data Mining, Clustering Algorithm.
Improving The Performance of Viterbi Decoder using Window System IJECEIAES
An efficient Viterbi decoder is introduced in this paper; it is called Viterbi decoder with window system. The simulation results, over Gaussian channels, are performed from rate 1/2, 1/3 and 2/3 joined to TCM encoder with memory in order of 2, 3. These results show that the proposed scheme outperforms the classical Viterbi by a gain of 1 dB. On the other hand, we propose a function called RSCPOLY2TRELLIS, for recursive systematic convolutional (RSC) encoder which creates the trellis structure of a recursive systematic convolutional encoder from the matrix “H”. Moreover, we present a comparison between the decoding algorithms of the TCM encoder like Viterbi soft and hard, and the variants of the MAP decoder known as BCJR or forward-backward algorithm which is very performant in decoding TCM, but depends on the size of the code, the memory, and the CPU requirements of the application.
Web spam classification using supervised artificial neural network algorithmsaciijournal
Due to the rapid growth in technology employed by the spammers, there is a need of classifiers that are more efficient, generic and highly adaptive. Neural Network based technologies have high ability of adaption as well as generalization. As per our knowledge, very little work has been done in this field using neural network. We present this paper to fill this gap. This paper evaluates performance of three supervised learning algorithms of artificial neural network by creating classifiers for the complex problem of latest web spam pattern classification. These algorithms are Conjugate Gradient algorithm, Resilient Backpropagation learning, and Levenberg-Marquardt algorithm.
CONTRAST OF RESNET AND DENSENET BASED ON THE RECOGNITION OF SIMPLE FRUIT DATA...rinzindorjej
In this paper, a fruit image data set is used to compare the efficiency and accuracy of two widely used
Convolutional Neural Network, namely the ResNet and the DenseNet, for the recognition of 50 different
kinds of fruits. In the experiment, the structure of ResNet-34 and DenseNet_BC-121 (with bottleneck layer)
are used. The mathematic principle, experiment detail and the experiment result will be explained through
comparison.
In this paper, a fruit image data set is used to compare the efficiency and accuracy of two widely used Convolutional Neural Network, namely the ResNet and the DenseNet, for the recognition of 50 different kinds of fruits. In the experiment, the structure of ResNet-34 and DenseNet_BC-121 (with bottleneck layer) are used. The mathematic principle, experiment detail and the experiment result will be explained through comparison.
CONTRAST OF RESNET AND DENSENET BASED ON THE RECOGNITION OF SIMPLE FRUIT DATA...rinzindorjej
In this paper, a fruit image data set is used to compare the efficiency and accuracy of two widely used Convolutional Neural Network, namely the ResNet and the DenseNet, for the recognition of 50 different kinds of fruits. In the experiment, the structure of ResNet-34 and DenseNet_BC-121 (with bottleneck layer)
are used. The mathematic principle, experiment detail and the experiment result will be explained through comparison.
APPLICATION OF GENETIC ALGORITHM IN DESIGNING A SECURITY MODEL FOR MOBILE ADH...cscpconf
In recent years, the static shortest path (SP) problem has been well addressed using intelligent
optimization techniques, e.g., artificial neural networks, genetic algorithms (GAs), particle
swarm optimization, etc. However, with the advancement in wireless communications, more and
more mobile wireless networks appear, e.g., mobile networks [mobile ad hoc networks
(MANETs)], wireless sensor networks, etc. One of the most important characteristics in mobile
wireless networks is the topology dynamics, i.e., the network topology changes over time due to
energy conservation or node mobility. Therefore, the SP routing problem in MANETs turns out to
be a dynamic optimization problem. GA's are able to find, if not the shortest, at least an optimal
path between source and destination in mobile ad-hoc network nodes. And we obtain the alternative path or backup path to avoid reroute discovery in the case of link failure or nodeex
Similar to CHARACTERISTICS OF NETWORKS GENERATED BY KERNEL GROWING NEURAL GAS (20)
Cosmetic shop management system project report.pdfKamal Acharya
Buying new cosmetic products is difficult. It can even be scary for those who have sensitive skin and are prone to skin trouble. The information needed to alleviate this problem is on the back of each product, but it's thought to interpret those ingredient lists unless you have a background in chemistry.
Instead of buying and hoping for the best, we can use data science to help us predict which products may be good fits for us. It includes various function programs to do the above mentioned tasks.
Data file handling has been effectively used in the program.
The automated cosmetic shop management system should deal with the automation of general workflow and administration process of the shop. The main processes of the system focus on customer's request where the system is able to search the most appropriate products and deliver it to the customers. It should help the employees to quickly identify the list of cosmetic product that have reached the minimum quantity and also keep a track of expired date for each cosmetic product. It should help the employees to find the rack number in which the product is placed.It is also Faster and more efficient way.
Sachpazis:Terzaghi Bearing Capacity Estimation in simple terms with Calculati...Dr.Costas Sachpazis
Terzaghi's soil bearing capacity theory, developed by Karl Terzaghi, is a fundamental principle in geotechnical engineering used to determine the bearing capacity of shallow foundations. This theory provides a method to calculate the ultimate bearing capacity of soil, which is the maximum load per unit area that the soil can support without undergoing shear failure. The Calculation HTML Code included.
Industrial Training at Shahjalal Fertilizer Company Limited (SFCL)MdTanvirMahtab2
This presentation is about the working procedure of Shahjalal Fertilizer Company Limited (SFCL). A Govt. owned Company of Bangladesh Chemical Industries Corporation under Ministry of Industries.
Saudi Arabia stands as a titan in the global energy landscape, renowned for its abundant oil and gas resources. It's the largest exporter of petroleum and holds some of the world's most significant reserves. Let's delve into the top 10 oil and gas projects shaping Saudi Arabia's energy future in 2024.
Explore the innovative world of trenchless pipe repair with our comprehensive guide, "The Benefits and Techniques of Trenchless Pipe Repair." This document delves into the modern methods of repairing underground pipes without the need for extensive excavation, highlighting the numerous advantages and the latest techniques used in the industry.
Learn about the cost savings, reduced environmental impact, and minimal disruption associated with trenchless technology. Discover detailed explanations of popular techniques such as pipe bursting, cured-in-place pipe (CIPP) lining, and directional drilling. Understand how these methods can be applied to various types of infrastructure, from residential plumbing to large-scale municipal systems.
Ideal for homeowners, contractors, engineers, and anyone interested in modern plumbing solutions, this guide provides valuable insights into why trenchless pipe repair is becoming the preferred choice for pipe rehabilitation. Stay informed about the latest advancements and best practices in the field.
Welcome to WIPAC Monthly the magazine brought to you by the LinkedIn Group Water Industry Process Automation & Control.
In this month's edition, along with this month's industry news to celebrate the 13 years since the group was created we have articles including
A case study of the used of Advanced Process Control at the Wastewater Treatment works at Lleida in Spain
A look back on an article on smart wastewater networks in order to see how the industry has measured up in the interim around the adoption of Digital Transformation in the Water Industry.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Immunizing Image Classifiers Against Localized Adversary Attacksgerogepatton
This paper addresses the vulnerability of deep learning models, particularly convolutional neural networks
(CNN)s, to adversarial attacks and presents a proactive training technique designed to counter them. We
introduce a novel volumization algorithm, which transforms 2D images into 3D volumetric representations.
When combined with 3D convolution and deep curriculum learning optimization (CLO), itsignificantly improves
the immunity of models against localized universal attacks by up to 40%. We evaluate our proposed approach
using contemporary CNN architectures and the modified Canadian Institute for Advanced Research (CIFAR-10
and CIFAR-100) and ImageNet Large Scale Visual Recognition Challenge (ILSVRC12) datasets, showcasing
accuracy improvements over previous techniques. The results indicate that the combination of the volumetric
input and curriculum learning holds significant promise for mitigating adversarial attacks without necessitating
adversary training.
Immunizing Image Classifiers Against Localized Adversary Attacks
CHARACTERISTICS OF NETWORKS GENERATED BY KERNEL GROWING NEURAL GAS
International Journal of Artificial Intelligence and Applications (IJAIA), Vol.14, No.5, September 2023
DOI: 10.5121/ijaia.2023.14503
CHARACTERISTICS OF NETWORKS GENERATED BY
KERNEL GROWING NEURAL GAS
Kazuhisa Fujita
Komatsu University, Komatsu, Ishikawa, Japan
ABSTRACT
This research aims to develop kernel GNG, a kernelized version of the growing neural gas (GNG)
algorithm, and to investigate the features of the networks generated by the kernel GNG. The GNG is an
unsupervised artificial neural network that can transform a dataset into an undirected graph, thereby
extracting the features of the dataset as a graph. The GNG is widely used in vector quantization,
clustering, and 3D graphics. Kernel methods are often used to map a dataset to feature space, with support
vector machines being the most prominent application. This paper introduces the kernel GNG approach
and explores the characteristics of the networks generated by kernel GNG. Five kernels, including
Gaussian, Laplacian, Cauchy, inverse multiquadric, and log kernels, are used in this study. The results of
this study show that the average degree and the average clustering coefficient decrease as the kernel
parameter increases for the Gaussian, Laplacian, Cauchy, and IMQ kernels. If fewer edges and a lower
clustering coefficient (i.e., fewer triangles) are preferred, a kernel GNG with a larger parameter value is
the more appropriate choice.
KEYWORDS
Growing neural gas, kernel method, self-organizing map
1. INTRODUCTION
Today, the amount of data has grown enormously [1, 2]. To efficiently process such massive
datasets, vector quantization methods are often used to reduce the number of data points. Thus,
vector quantization methods have become increasingly important for handling large datasets.
Self-organizing maps (SOMs) and their alternatives are widely used vector quantization methods
commonly applied in various fields such as data visualization, feature extraction, and data
classification. These approaches encode a dataset into a set of interconnected units. Kohonen’s
SOM is the most popular and widely used of these methods. Kohonen’s SOM creates a network
with fixed topology, such as a d-dimensional lattice.
The growing neural gas (GNG) proposed by Fritzke [3] is an alternative to the SOM. GNG can
flexibly change the network topology during training, adapting not only the reference
vectors but also the network topology to an input dataset. It can gradually increase the
number of neurons and reconstruct the network topology according to the input data, making it a
useful method for extracting the topology of the input data. Thus, GNG can not only quantize a
dataset but also preserve the topology of the dataset as the topology of the network.
The kernel method is useful for projecting data into a high-dimensional feature space. The
support vector machine [4], which applies the kernel method, achieves good performance on non-linear data.
The kernel Kohonen’s SOM can perform better than Kohonen’s network [5]. However, a kernel
version of GNG has not yet been developed, and the characteristics of networks generated by
kernel GNG remain unknown.
This study aims to develop the kernel GNG and investigate the characteristics of networks
generated by the kernel GNGs with Gaussian, Laplacian, Cauchy, inverse multiquadric (IMQ),
and log kernels. First, the kernel GNG method is derived in Sec. 3. Second, the
paper presents the features of the networks generated by the kernel GNGs with these kernels in Sec. 6.
This paper shows that the kernel GNGs with these kernels can generate networks that effectively
represent input datasets, similar to the original GNG algorithm.
2. RELATED WORK
The best-known and most widely used SOM is Kohonen's SOM [11]. The SOM can project
multidimensional data onto a low-dimensional map [6] and is used in various applications
such as color quantization [7, 8], data visualization [9], and skeletonization [10]. However,
the network structure generated by Kohonen's SOM is static (generally a d-dimensional
lattice) [12]. Thus, Kohonen's SOM cannot flexibly change the network topology depending
on the input dataset.
The growing neural gas (GNG) [3] is a type of SOM [13] and can find the topology of an input
distribution [14]. The network of the GNG is flexible, and its structure represents the data
structure. GNG has been widely applied to topology learning, such as the extraction of the two-
dimensional outline of an image [15, 16, 17], the reconstruction of 3D models [18], landmark
extraction [19], object tracking [20], anomaly detection [12], and cluster analysis [21, 22, 23].
The kernel method is often used for nonlinear separations. The most famous application of the
kernel method is the support vector machine [4, 24]. Many researchers have used the kernel
method to improve the performance of various methods. The kernel k-means [25] maps the
data points into a higher-dimensional feature space and can thus partition a dataset non-linearly.
Kohonen's SOM has also been kernelized [26, 27, 28], and the kernel Kohonen's SOM shows better
performance than the original. However, a kernel GNG has not yet been proposed.
3. KERNEL GROWING NEURAL GAS
The kernel growing neural gas (kernel GNG) is a modified version of the GNG that uses a kernel
function. The kernel GNG projects the dataset into a higher dimensional feature space using a
non-linear function and converts it into a network. The kernel trick allows the kernel GNG to
learn the topology of the input without the need for direct projection of the data into feature
space.
The kernel GNG consists of a set of units connected by a set of unweighted and undirected edges.
Each unit $i$ has a weight $\boldsymbol{w}_i \in \mathbb{R}^d$ corresponding to a reference vector in the input space and a
cumulative error $E_i$. Given a data point from the dataset $\boldsymbol{X} = \{\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n, \ldots, \boldsymbol{x}_N\}$, where $\boldsymbol{x}_n \in \mathbb{R}^d$,
at each iteration, the kernel GNG updates the units and the network.

Consider a data point $\boldsymbol{x}_n$, a unit weight $\boldsymbol{w}_i$, and a nonlinear mapping function $\boldsymbol{\phi}(\cdot)$ that maps $\boldsymbol{x}_n$
and $\boldsymbol{w}_i$ to $\boldsymbol{\phi}(\boldsymbol{x}_n)$ and $\boldsymbol{\phi}(\boldsymbol{w}_i)$ in feature space. The dot product of the two mapped points is
denoted as $K(\boldsymbol{x}_n, \boldsymbol{w}_i) = \boldsymbol{\phi}(\boldsymbol{x}_n)^{\mathsf{T}} \boldsymbol{\phi}(\boldsymbol{w}_i)$, where $K(\cdot,\cdot)$ is the kernel function.
The winning unit $s_1$ of the kernel GNG is the one closest to an input data point $\boldsymbol{x}_n$ in the feature
space. The criterion to identify $s_1$ is the squared distance between $\boldsymbol{x}_n$ and the weight $\boldsymbol{w}_i$ of unit $i$
in the feature space. The squared distance $D^2(\boldsymbol{x}_n, \boldsymbol{w}_i)$ in the feature space is defined as follows:

$$D^2(\boldsymbol{x}_n, \boldsymbol{w}_i) = \|\boldsymbol{\phi}(\boldsymbol{x}_n) - \boldsymbol{\phi}(\boldsymbol{w}_i)\|^2 = K(\boldsymbol{x}_n, \boldsymbol{x}_n) - 2K(\boldsymbol{x}_n, \boldsymbol{w}_i) + K(\boldsymbol{w}_i, \boldsymbol{w}_i).$$

The kernel GNG identifies a winning unit by minimizing the squared distance between the
mapped point and the mapped weight using the above equation. The winning unit $s_1$ with respect
to an input $\boldsymbol{x}_n$ is thus obtained by

$$s_1 = \operatorname{argmin}_i D^2(\boldsymbol{x}_n, \boldsymbol{w}_i).$$
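The winner search via the kernel trick can be sketched in a few lines of Python. This is a minimal illustration, not the author's released code; `gaussian_kernel` and the helper names are our own:

```python
import numpy as np

def gaussian_kernel(x, w, gamma=1.8):
    # K(x, w) = exp(-||x - w||^2 / (2 * gamma^2))
    return float(np.exp(-np.sum((x - w) ** 2) / (2 * gamma ** 2)))

def squared_feature_distance(x, w, kernel):
    # D^2(x, w) = K(x, x) - 2 K(x, w) + K(w, w): the squared distance between
    # phi(x) and phi(w) without ever computing phi explicitly.
    return kernel(x, x) - 2.0 * kernel(x, w) + kernel(w, w)

def find_winner(x, weights, kernel):
    # s1 = argmin_i D^2(x, w_i)
    d2 = [squared_feature_distance(x, w, kernel) for w in weights]
    return int(np.argmin(d2))

x = np.array([0.0, 0.0])
weights = [np.array([0.1, 0.0]), np.array([2.0, 2.0])]
print(find_winner(x, weights, gaussian_kernel))  # -> 0 (the nearest unit wins)
```

Note that for any kernel with $K(\boldsymbol{a}, \boldsymbol{a})$ constant, the winner in feature space coincides with the unit maximizing $K(\boldsymbol{x}_n, \boldsymbol{w}_i)$.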
After that, the weight of the winning unit $\boldsymbol{w}_{s_1}$ is updated according to the following rule:

$$\boldsymbol{w}_{s_1}(t+1) = \boldsymbol{w}_{s_1}(t) - \varepsilon_{s_1} \frac{1}{2} \frac{\partial}{\partial \boldsymbol{w}_{s_1}} D^2(\boldsymbol{x}_n, \boldsymbol{w}_{s_1}),$$

where $t$ is the iteration index and $\varepsilon_{s_1}$ is the learning rate of the winning unit $s_1$. This equation is
based on gradient descent to minimize the squared distance $D^2(\boldsymbol{x}_n, \boldsymbol{w}_{s_1})$. Therefore, the update
equation for the weight $\boldsymbol{w}_{s_1}$ in the kernel GNG is as follows:

$$\boldsymbol{w}_{s_1}(t+1) = \boldsymbol{w}_{s_1}(t) - \varepsilon_{s_1} \frac{1}{2} \left( \frac{\partial}{\partial \boldsymbol{w}_{s_1}} K(\boldsymbol{w}_{s_1}, \boldsymbol{w}_{s_1}) - 2 \frac{\partial}{\partial \boldsymbol{w}_{s_1}} K(\boldsymbol{w}_{s_1}, \boldsymbol{x}_n) \right).$$
This equation is consistent with that of the kernel SOM update rule proposed by [26]. This
method eliminates the need to maintain the transformed weights and the transformed data points,
allowing direct updating of the weights in the input space without updating the high-dimensional
weights in the feature space.
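For the Gaussian kernel this update has a particularly simple form, since $K(\boldsymbol{w}, \boldsymbol{w}) = 1$ makes the first derivative term vanish. A sketch (our own illustration; parameter values are assumed):

```python
import numpy as np

def grad_d2_gaussian(x, w, gamma=1.8):
    # Derivative of D^2 w.r.t. w for the Gaussian kernel (see Table 1):
    # dD^2/dw = -2 (x - w) / gamma^2 * exp(-||x - w||^2 / (2 gamma^2))
    k = np.exp(-np.sum((x - w) ** 2) / (2 * gamma ** 2))
    return -2.0 * (x - w) / gamma ** 2 * k

def update_weight(w, x, eps, gamma=1.8):
    # w(t+1) = w(t) - eps * (1/2) * dD^2/dw
    #        = w(t) + eps * (x - w) / gamma^2 * K(x, w)
    # i.e., the ordinary GNG move toward x, rescaled by the kernel value,
    # so distant inputs pull the unit less strongly.
    return w - eps * 0.5 * grad_d2_gaussian(x, w, gamma)

w = np.array([0.0, 0.0])
x = np.array([1.0, 1.0])
w_new = update_weight(w, x, eps=0.2)
print(np.linalg.norm(x - w_new) < np.linalg.norm(x - w))  # True: the unit moved toward x
```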
Five kernel functions are used in this study: the Gaussian, Laplacian, Cauchy, inverse
multiquadric (IMQ), and log kernels. Table 1 shows the kernelized $D^2(\boldsymbol{x}, \boldsymbol{w})$ and $\frac{\partial}{\partial \boldsymbol{w}} D^2(\boldsymbol{x}, \boldsymbol{w})$.
The code for the kernel GNG is openly available on GitHub
(https://github.com/KazuhisaFujita/KernelGNG).
Table 1: $K(\boldsymbol{x}, \boldsymbol{w})$, $D^2(\boldsymbol{x}, \boldsymbol{w})$, and the derivative of $D^2(\boldsymbol{x}, \boldsymbol{w})$ for each kernel

Gaussian:
$K(\boldsymbol{x}, \boldsymbol{w}) = \exp\!\left(-\frac{\|\boldsymbol{x}-\boldsymbol{w}\|^2}{2\gamma^2}\right)$, $\quad D^2 = 2\left(1 - \exp\!\left(-\frac{\|\boldsymbol{x}-\boldsymbol{w}\|^2}{2\gamma^2}\right)\right)$, $\quad \frac{\partial}{\partial \boldsymbol{w}} D^2 = -2\,\frac{\boldsymbol{x}-\boldsymbol{w}}{\gamma^2} \exp\!\left(-\frac{\|\boldsymbol{x}-\boldsymbol{w}\|^2}{2\gamma^2}\right)$

Laplacian:
$K = \exp\!\left(-\frac{\|\boldsymbol{x}-\boldsymbol{w}\|}{\gamma}\right)$, $\quad D^2 = 2\left(1 - \exp\!\left(-\frac{\|\boldsymbol{x}-\boldsymbol{w}\|}{\gamma}\right)\right)$, $\quad \frac{\partial}{\partial \boldsymbol{w}} D^2 = -\frac{2}{\gamma}\,\frac{\boldsymbol{x}-\boldsymbol{w}}{\|\boldsymbol{x}-\boldsymbol{w}\|} \exp\!\left(-\frac{\|\boldsymbol{x}-\boldsymbol{w}\|}{\gamma}\right)$

Cauchy:
$K = \frac{1}{1 + \|\boldsymbol{x}-\boldsymbol{w}\|^2/\gamma^2}$, $\quad D^2 = 2\left(1 - \frac{1}{1 + \|\boldsymbol{x}-\boldsymbol{w}\|^2/\gamma^2}\right)$, $\quad \frac{\partial}{\partial \boldsymbol{w}} D^2 = -\frac{4}{\gamma^2}\,\frac{\boldsymbol{x}-\boldsymbol{w}}{\left(1 + \|\boldsymbol{x}-\boldsymbol{w}\|^2/\gamma^2\right)^2}$

IMQ:
$K = \frac{1}{\sqrt{\|\boldsymbol{x}-\boldsymbol{w}\|^2 + \gamma^2}}$, $\quad D^2 = 2\left(\frac{1}{\gamma} - \frac{1}{\sqrt{\|\boldsymbol{x}-\boldsymbol{w}\|^2 + \gamma^2}}\right)$, $\quad \frac{\partial}{\partial \boldsymbol{w}} D^2 = -2\,\frac{\boldsymbol{x}-\boldsymbol{w}}{\left(\|\boldsymbol{x}-\boldsymbol{w}\|^2 + \gamma^2\right)^{3/2}}$

log:
$K = -\log\!\left(\|\boldsymbol{x}-\boldsymbol{w}\|^\gamma + 1\right)$, $\quad D^2 = 2\log\!\left(\|\boldsymbol{x}-\boldsymbol{w}\|^\gamma + 1\right)$, $\quad \frac{\partial}{\partial \boldsymbol{w}} D^2 = -\frac{2\gamma\,(\boldsymbol{x}-\boldsymbol{w})\,\|\boldsymbol{x}-\boldsymbol{w}\|^{\gamma-2}}{\|\boldsymbol{x}-\boldsymbol{w}\|^\gamma + 1}$
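The table entries can be checked numerically. As an illustration (our own sketch, not taken from the paper's repository), the Laplacian row agrees with a central finite-difference gradient of its $D^2$:

```python
import numpy as np

def d2_laplacian(x, w, gamma=1.8):
    # D^2(x, w) = 2 * (1 - exp(-||x - w|| / gamma))
    return 2.0 * (1.0 - np.exp(-np.linalg.norm(x - w) / gamma))

def grad_d2_laplacian(x, w, gamma=1.8):
    # Table 1: -(2/gamma) * (x - w)/||x - w|| * exp(-||x - w|| / gamma)
    r = np.linalg.norm(x - w)
    return -(2.0 / gamma) * (x - w) / r * np.exp(-r / gamma)

x = np.array([1.0, 0.5])
w = np.array([0.2, -0.3])
h = 1e-6
# central finite differences along each coordinate axis
numeric = np.array([(d2_laplacian(x, w + h * e) - d2_laplacian(x, w - h * e)) / (2 * h)
                    for e in np.eye(2)])
print(np.allclose(numeric, grad_d2_laplacian(x, w), atol=1e-5))  # True
```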
4. International Journal of Artificial Intelligence and Applications (IJAIA), Vol.14, No.5, September 2023
28
3.1. Algorithm of the Kernel GNG
The kernel GNG, based on the same principles as the original GNG algorithm, extracts the
network structure from the input data, but uses kernelized equations. The algorithm is formulated
as follows:
1. Initialize the network with two connected units whose weights are two randomly selected
data points.
2. Randomly select an input data point $\boldsymbol{x}_n$ from the dataset.
3. Identify the winning unit $s_1$, the one closest to $\boldsymbol{x}_n$, as defined by
$s_1 = \operatorname{argmin}_i D^2(\boldsymbol{x}_n, \boldsymbol{w}_i)$,
where $D^2$ is shown in Table 1. At the same time, find the second nearest unit, $s_2$.
4. Increment the ages of all edges connected to the winning unit $s_1$.
5. Increase the cumulative error $E_{s_1}(t)$ by the squared distance between the input data point $\boldsymbol{x}_n$
and the weight of the winning unit $\boldsymbol{w}_{s_1}$:
$E_{s_1}(t+1) = E_{s_1}(t) + D^2(\boldsymbol{x}_n, \boldsymbol{w}_{s_1})$.
6. Adapt the winning unit $s_1$ and its neighbors $j$ to better reflect the input data point $\boldsymbol{x}_n$ by
updating their weights:
$\boldsymbol{w}_{s_1}(t+1) = \boldsymbol{w}_{s_1}(t) - \varepsilon_{s_1} \frac{1}{2} \frac{\partial}{\partial \boldsymbol{w}_{s_1}} D^2(\boldsymbol{x}_n, \boldsymbol{w}_{s_1})$,
$\boldsymbol{w}_j(t+1) = \boldsymbol{w}_j(t) - \varepsilon_n \frac{1}{2} \frac{\partial}{\partial \boldsymbol{w}_j} D^2(\boldsymbol{x}_n, \boldsymbol{w}_j)$,
where $\frac{\partial}{\partial \boldsymbol{w}} D^2(\boldsymbol{x}, \boldsymbol{w})$ is given in Table 1.
7. If the units $s_1$ and $s_2$ are connected, reset the age of their connecting edge to zero. Otherwise,
create an edge between them.
8. Discard any edges whose ages exceed the maximum age $a_{\max}$. If this leaves any units isolated,
remove them.
9. Insert a new unit after every $\lambda$ iterations:
• Identify the unit $q$ with the largest cumulative error $E_q$.
• Among the neighbors of $q$, find the unit $f$ with the largest error.
• Insert a new unit $r$ between $q$ and $f$ with $\boldsymbol{w}_r = (\boldsymbol{w}_q + \boldsymbol{w}_f)/2$.
• Create edges between units $r$ and $q$, and between $r$ and $f$, while removing the edge
between $q$ and $f$.
• Decrease the cumulative errors of $q$ and $f$ by multiplying them by a constant $\alpha$, and
initialize the cumulative error of $r$ to the updated error of $q$.
10. Multiply all cumulative errors by a constant $\beta$ to reduce them.
11. Repeat from step 2 until the number of iterations reaches $T$.
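Steps 3–8 above can be condensed into a single adaptation function. The following is a minimal sketch under our own data structures (a dict of edge ages keyed by unit pairs); unit insertion (step 9), error decay (step 10), and isolated-unit removal are omitted for brevity:

```python
import numpy as np

def d2_gaussian(x, w, gamma=1.8):
    k = lambda a, b: float(np.exp(-np.sum((a - b) ** 2) / (2 * gamma ** 2)))
    return k(x, x) - 2.0 * k(x, w) + k(w, w)

def grad_d2_gaussian(x, w, gamma=1.8):
    return -2.0 * (x - w) / gamma ** 2 * np.exp(-np.sum((x - w) ** 2) / (2 * gamma ** 2))

def kernel_gng_step(edges, weights, errors, x,
                    eps_s1=0.2, eps_n=0.006, a_max=50, gamma=1.8):
    """One pass of steps 3-8 for a single input x.
    edges: dict mapping frozenset({i, j}) -> edge age."""
    d2 = {i: d2_gaussian(x, weights[i], gamma) for i in weights}
    s1, s2 = sorted(d2, key=d2.get)[:2]              # step 3: nearest two units
    for e in edges:                                  # step 4: age edges of s1
        if s1 in e:
            edges[e] += 1
    errors[s1] += d2[s1]                             # step 5: accumulate error
    # step 6: adapt the winner and its current neighbors
    for e in list(edges):
        if s1 in e:
            (j,) = e - {s1}
            weights[j] = weights[j] - eps_n * 0.5 * grad_d2_gaussian(x, weights[j], gamma)
    weights[s1] = weights[s1] - eps_s1 * 0.5 * grad_d2_gaussian(x, weights[s1], gamma)
    edges[frozenset({s1, s2})] = 0                   # step 7: connect / reset age
    for e in [e for e, age in edges.items() if age > a_max]:
        del edges[e]                                 # step 8: prune old edges
    return s1

weights = {0: np.array([0.0, 0.0]), 1: np.array([1.0, 1.0]), 2: np.array([3.0, 3.0])}
errors = {0: 0.0, 1: 0.0, 2: 0.0}
edges = {}
print(kernel_gng_step(edges, weights, errors, np.array([0.1, 0.0])))  # -> 0
```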
In this study we used $N_{\max} = 100$, $a_{\max} = 50$, $\lambda = 100$, $\alpha = 0.5$, $\beta = 0.995$, $\varepsilon_{s_1} = 0.2$, and
$\varepsilon_n = 0.006$. These parameter settings are based on [3].
4. EVALUATION METRICS FOR KERNEL GNG PERFORMANCE AND
NETWORK TOPOLOGY
4.1. Evaluation metrics for kernel GNG
The effectiveness of the kernel GNG is evaluated using two different metrics.
The first metric, the mean square error (MSE), evaluates the average of the squared distances
between each input data point and its nearest unit in the input space. This metric is expressed as

$$\mathrm{MSE} = \frac{1}{N} \sum_{n=1}^{N} \min_i \|\boldsymbol{x}_n - \boldsymbol{w}_i\|^2,$$

where $\boldsymbol{x}_n$ is an input data point, $\boldsymbol{w}_i$ is the weight of unit $i$, and $\|\cdot\|$ is the Euclidean norm.
The second metric, the kernel mean square error (kMSE), extends the MSE by measuring the squared
distance between the data points and their corresponding weight vectors in the transformed
feature space defined by the kernel function. The kMSE is expressed as

$$\mathrm{kMSE} = \frac{1}{N} \sum_{n=1}^{N} \min_i D^2(\boldsymbol{x}_n, \boldsymbol{w}_i).$$

In the above equation, $D^2(\boldsymbol{x}_n, \boldsymbol{w}_i)$ is the distance metric computed in the feature space between
the input data point $\boldsymbol{x}_n$ and the weight vector $\boldsymbol{w}_i$. For more information about the kernel distance
metric $D^2(\boldsymbol{x}_n, \boldsymbol{w}_i)$, see Table 1.
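Both metrics are straightforward to compute. A vectorized sketch (the helper names are our own; `kernel` is any function K(a, b)):

```python
import numpy as np

def mse(X, W):
    # MSE = (1/N) * sum_n min_i ||x_n - w_i||^2, evaluated in the input space
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # N x M distance matrix
    return d2.min(axis=1).mean()

def kmse(X, W, kernel):
    # kMSE = (1/N) * sum_n min_i D^2(x_n, w_i), with D^2 evaluated via the kernel
    def d2(x, w):
        return kernel(x, x) - 2.0 * kernel(x, w) + kernel(w, w)
    return float(np.mean([min(d2(x, w) for w in W) for x in X]))

X = np.array([[0.0, 0.0], [1.0, 1.0]])
W = np.array([[0.0, 0.0], [1.0, 1.0]])
gauss = lambda a, b, g=1.8: float(np.exp(-np.sum((a - b) ** 2) / (2 * g ** 2)))
print(mse(X, W), kmse(X, W, gauss))  # both 0 when every point sits exactly on a unit
```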
4.2. Network Analysis Metrics for Topology Evaluation
In the field of complex network research, measures are used to study the structure of networks. In
this study, two measures are used to examine the generated networks: the average degree and the
average clustering coefficient.
The average degree, denoted as $k$, quantifies the average number of edges per node. It is
calculated using the following formula:

$$k = \frac{1}{N} \sum_{i=1}^{N} k_i,$$

where $k_i$ is the degree (the number of edges) of node $i$.
On the other hand, the average clustering coefficient $C$ indicates how much nodes in the network
tend to form connected triangles on average. It is given by

$$C = \frac{1}{N} \sum_{i=1}^{N} c_i,$$

where $c_i$ is the clustering coefficient of node $i$. The clustering coefficient $c_i$ for node $i$ [29] is
given by

$$c_i = \frac{2 t_i}{k_i (k_i - 1)},$$

where $t_i$ is the number of triangles around node $i$ and $k_i$ is the degree of $i$. If $k_i < 2$, $c_i$ is set to zero.
These metrics are derived using NetworkX, a comprehensive Python library tailored for network
analysis.
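These two measures correspond to NetworkX's degree and `average_clustering` utilities; spelled out directly from the formulas, over a plain adjacency dict (our own sketch):

```python
import itertools

def average_degree(adj):
    # k = (1/N) * sum_i k_i; adj maps each node to its set of neighbours
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def average_clustering(adj):
    # C = (1/N) * sum_i c_i with c_i = 2 t_i / (k_i (k_i - 1)), and c_i = 0 if k_i < 2
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # c_i = 0 by convention
        # t_i = number of edges among the neighbours of i (triangles through i)
        t = sum(1 for u, v in itertools.combinations(nbrs, 2) if v in adj[u])
        total += 2.0 * t / (k * (k - 1))
    return total / len(adj)

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(average_degree(triangle), average_clustering(triangle))  # 2.0 1.0
```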
5. EXPERIMENTAL SETTING
For this research, we used several Python libraries, namely NumPy for calculations related to
linear algebra, NetworkX for handling network operations and computing coefficients, and scikit-
learn for generating synthetic data.
Synthetic and real-world data sets are used to evaluate the characteristics of the network
generated by the kernel GNG. The synthetic datasets include Square, Blobs, Circles, Moons,
Swiss_roll, and S_curve. The Square dataset is constructed using NumPy's random.rand function,
which generates two-dimensional data points uniformly distributed between 0 and 1. The Blobs
dataset is constructed using scikit-learn's datasets.make_blobs function, which uses a Gaussian
mixture model of three isotropic Gaussian distributions with default parameters. The Circles dataset,
created using the datasets.make_circles function with noise and scale parameters of 0.05 and 0.5,
respectively, contains two concentric circles of data points. The Moons dataset, a distribution
mimicking the shape of crescents, was created using the datasets.make_moons function with a
noise parameter of 0.05. The Swiss_roll and S_curve datasets are generated using
datasets.make_swiss_roll and datasets.make_s_curve, respectively. Each synthetic dataset
contains 1000 data points. In addition to these synthetic datasets, we also use two-dimensional
datasets such as Aggregation [30], Compound [31], Pathbased [32], Spiral [32], D31 [33], R15
[33], Jain [34], Flame [35], and t4.8k [36]. The real-world datasets used are Iris, Wine, Ecoli,
Glass, Yeast, Spam, CNAE-9, and Digits from the UCI Machine Learning Repository.
In all experiments performed, the pre-processing includes normalizing each data point
$\boldsymbol{x}_n = (x_{n1}, \ldots, x_{nd}, \ldots, x_{nD})$ in a dataset $\boldsymbol{X} = \{\boldsymbol{x}_1, \ldots, \boldsymbol{x}_n, \ldots, \boldsymbol{x}_N\}$ using the following formula:

$$\boldsymbol{x}_n = \left( \frac{x_{n1} - \bar{x}_1}{\sigma_1}, \ldots, \frac{x_{nd} - \bar{x}_d}{\sigma_d}, \ldots, \frac{x_{nD} - \bar{x}_D}{\sigma_D} \right),$$

where $\bar{x}_d = \frac{1}{N} \sum_{n=1}^{N} x_{nd}$ and $\sigma_d = \sqrt{\frac{1}{N} \sum_{n=1}^{N} (x_{nd} - \bar{x}_d)^2}$.
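This per-dimension standardization is one line with NumPy broadcasting (a sketch; note that `np.std` defaults to the population standard deviation, 1/N, matching the formula):

```python
import numpy as np

def standardize(X):
    # (x_nd - mean_d) / sigma_d for every feature dimension d
    return (X - X.mean(axis=0)) / X.std(axis=0)

X = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
Z = standardize(X)
# each column now has zero mean and unit standard deviation
print(np.allclose(Z.mean(axis=0), 0.0), np.allclose(Z.std(axis=0), 1.0))  # True True
```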
6. RESULTS
In this section, we present four experimental results that demonstrate the effectiveness of kernel
GNGs. First, we provide a 2-dimensional visualization of the networks generated by kernel
GNGs, illustrating their structure and connectivity. Second, we present the evolution of MSE and
kernel MSE over iterations 𝑡. Third, we explore the dependence of MSE and network structure on
the kernel parameter 𝛾, revealing the role of this parameter in shaping the network. Finally, we
describe the characteristics of the network generated by kernel GNGs.
6.1. Visualization of Networks Generated by GNG and Kernel GNGs
Figure 1 shows the networks generated from synthetic datasets by the GNG and kernel GNGs.
The kernels used for the kernel GNGs include the Gaussian kernel with 𝛾 = 1.8, the Laplacian
kernel with 𝛾 = 1.8, the IMQ kernel with 𝛾 = 1.8, the Cauchy kernel with 𝛾 = 1.8, and the log
kernel with 𝛾 = 3. All networks are derived with END = 2 × 10⁴, and the random seed is set to 1. In all cases, the networks generated by kernel GNGs accurately reflect the input topology. The Laplacian kernel produces a noticeably more complex network structure, yet it still spreads the units over the input topology. This result demonstrates the ability of kernel GNGs to effectively extract the topology of the dataset.
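For reference, the five kernels can be sketched in code. The paper's exact definitions and its parameterization of 𝛾 appear in its method section; the forms below are standard textbook ones and should be treated as assumptions:

```python
import numpy as np

# Common textbook forms of the five kernels used in this paper.
# The paper's exact parameterization of gamma is an assumption here.
def gaussian(x, y, gamma):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def laplacian(x, y, gamma):
    return np.exp(-gamma * np.linalg.norm(x - y))

def cauchy(x, y, gamma):
    return 1.0 / (1.0 + gamma * np.sum((x - y) ** 2))

def imq(x, y, gamma):
    # Inverse multiquadric (IMQ) kernel.
    return 1.0 / np.sqrt(np.sum((x - y) ** 2) + gamma ** 2)

def log_kernel(x, y, gamma):
    # Conditionally positive definite log kernel.
    return -np.log(np.linalg.norm(x - y) ** gamma + 1.0)

x, y = np.array([0.0, 0.0]), np.array([1.0, 1.0])
print(gaussian(x, y, 1.8))  # ≈ 0.0273
```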
Figure 1. This figure shows networks generated from synthetic data by the GNG and the kernel GNGs
with Gaussian, Laplacian, IMQ, Cauchy, and log kernels. Data points are shown as gray dots, while the
network units are shown as black dots, with their positions indicating the reference vectors. Black lines
mark the edges of the networks.
6.2. Convergence of MSE and kMSE for Kernel GNGs
Figure 2 shows the convergence of the MSE over iterations for both the kernel GNGs and the GNG. For Blobs and Iris, the MSEs of all kernel GNGs begin to converge at about 10⁴ iterations. In contrast, for Wine, the MSE of the kernel GNG with the Laplacian and log kernels starts to converge at about 2 × 10⁴ iterations, while the others continue to decrease slowly even after 2 × 10⁴ iterations. For Wine, the MSEs of the kernel GNGs are larger than that of the GNG, except when the log kernel is used. For Ecoli, the MSEs of the kernel GNGs with the Laplacian, Cauchy, and IMQ kernels converge at 4 × 10⁴ iterations, while those with the Gaussian and log kernels converge at 2 × 10⁵ and 10⁵ iterations, respectively. For Ecoli, the convergence values of the kernel GNGs with all kernels are larger than those of the GNG.
Figure 3 shows the convergence behavior of the kMSE over iterations for the kernel GNGs. For Blobs and Iris, the kMSE for all kernel GNGs starts to converge at about 10⁴ iterations. In contrast, for Wine, the kMSE approaches a low value at about 2 × 10⁴ iterations. For Ecoli, while the kMSE reaches low values at 10⁴ iterations, it exhibits a slow and continuous decline afterwards. These observations suggest that the kernel GNGs reach appropriately low MSE and kMSE values around 2 × 10⁴ iterations.
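The two error measures tracked above can be sketched as follows: the MSE is the mean squared input-space distance from each point to its nearest reference vector, and the kMSE measures the same quantity in feature space via the kernel trick, ‖𝜙(𝒙) − 𝜙(𝒘)‖² = k(𝒙,𝒙) − 2k(𝒙,𝒘) + k(𝒘,𝒘). The helper names are ours, not the paper's:

```python
import numpy as np

def mse(X, W):
    """Mean squared input-space distance from each point to its nearest unit."""
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # shape (N, M)
    return d2.min(axis=1).mean()

def kmse(X, W, kernel):
    """Feature-space analogue via the kernel trick:
    ||phi(x) - phi(w)||^2 = k(x,x) - 2 k(x,w) + k(w,w)."""
    N, M = len(X), len(W)
    d2 = np.empty((N, M))
    for i in range(N):
        for j in range(M):
            d2[i, j] = (kernel(X[i], X[i]) - 2 * kernel(X[i], W[j])
                        + kernel(W[j], W[j]))
    return d2.min(axis=1).mean()

gauss = lambda a, b: np.exp(-1.8 * np.sum((a - b) ** 2))
X = np.array([[0.0, 0.0], [1.0, 1.0]])
W = np.array([[0.0, 0.0], [1.0, 1.0]])
print(mse(X, W))          # 0.0 (each point coincides with a unit)
print(kmse(X, W, gauss))  # 0.0
```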
Figure 2. Evolution of the MSE over iterations. Subfigures (A) to (D) show the MSE for different datasets: (A) Blobs, (B) Iris, (C) Wine, and (D) Ecoli. Each data point represents the average MSE obtained from 10 independent runs, all initialized with random values.
Figure 3. Evolution of the kernel mean square error (kMSE) over iterations. Subfigures (A) to (D) show the kMSE for different datasets: (A) Blobs, (B) Iris, (C) Wine, and (D) Ecoli. The values shown are averages over 10 independent runs, each initialized with random values.
6.3. Influence of Kernel Parameters on Network Characteristics in Kernel GNGs
Figure 4 shows the dependence of the MSE, the average degree, and the average clustering coefficient on the kernel parameter for the networks generated by the kernel GNG with Gaussian, Laplacian, Cauchy, and IMQ kernels. The MSE, average degree, and average clustering coefficient are computed from the network at 4 × 10⁵ iterations. Interestingly, as the kernel parameter increases, the MSE, the average degree, and the average clustering coefficient all decrease. The kernel GNG with the Laplacian kernel tends to generate a network characterized by a higher degree and a higher clustering coefficient than the others.
Figure 5 shows the effect of the kernel parameter on the MSE, the average degree, and the average clustering coefficient of the networks generated by the kernel GNG with the log kernel, with the computations ending at 4 × 10⁵ iterations. A noteworthy observation is the absence of any discernible dependence of the network features on the kernel parameter for the kernel GNG with the log kernel.
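The two network metrics used throughout this section can be computed directly from the edge list of a generated network; a minimal sketch (libraries such as NetworkX provide the same quantities via `nx.Graph.degree` and `nx.average_clustering`):

```python
def average_degree(edges, n_nodes):
    """Average degree of an undirected graph: 2|E| / |V|."""
    return 2.0 * len(edges) / n_nodes

def average_clustering(edges, n_nodes):
    """Mean local clustering coefficient over all nodes.
    A node's coefficient is the fraction of its neighbor pairs
    that are themselves connected (triangles through the node)."""
    neighbors = [set() for _ in range(n_nodes)]
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    total = 0.0
    for u in range(n_nodes):
        k = len(neighbors[u])
        if k < 2:
            continue  # coefficient is 0 for degree < 2
        links = sum(1 for v in neighbors[u] for w in neighbors[u]
                    if v < w and w in neighbors[v])
        total += 2.0 * links / (k * (k - 1))
    return total / n_nodes

# Toy network: a triangle 0-1-2 plus a pendant node 3 attached to 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(average_degree(edges, 4))      # 2.0
print(average_clustering(edges, 4))  # (1 + 1 + 1/3 + 0) / 4 ≈ 0.583
```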
Figure 4. Dependence of various network metrics on kernel parameters for networks generated by kernel
GNG with Gaussian, Laplacian, Cauchy, and IMQ kernels for Blobs, Iris, Wine, and Ecoli datasets.
Subfigures (A), (D), (G), and (J) show the dependence of the MSE on the kernel parameter. Subfigures (B),
(E), (H), and (K) show the average degree, while subfigures (C), (F), (I), and (L) show the average
clustering coefficient. Each value shown represents an average derived from 10 independent runs, each
initialized with random values.
Figure 5. Dependence of various network metrics on kernel parameters in networks generated by kernel
GNG with log kernel and GNG for Blobs, Iris, Wine, and Ecoli datasets. Subfigures (A), (D), (G), and (J)
show the influence of kernel parameters on the MSE. Subfigures (B), (E), (H), and (K) show the average
degree, while subfigures (C), (F), (I), and (L) show the average clustering coefficient. Each displayed value
represents the average derived from 10 independent runs, each initialized with random values.
6.4. Comparison of Network Characteristics in Kernel GNGs
Table 2 gives a comprehensive overview of the average degree 𝑁𝑑 of the networks generated by the GNG and the kernel GNGs with the Gaussian, Laplacian, Cauchy, IMQ, and log kernels. The Gaussian, Laplacian, Cauchy, and IMQ kernels use a 𝛾 value of 1.8, while the log kernel uses a 𝛾 value of 3. Each row represents a dataset, and the corresponding data dimension is given in the second column, denoted by 𝐷. For networks generated by the kernel GNG with the Gaussian, Cauchy, and IMQ kernels, 𝑁𝑑 is less than or equal to that of the GNG. While the Laplacian kernel often produces a larger 𝑁𝑑 than the GNG, especially for two-dimensional datasets, it produces a smaller 𝑁𝑑 for datasets such as CNAE and Digits. In many cases, the log kernel's 𝑁𝑑 is equal to or smaller than the GNG's, although it is larger for datasets such as Wine, Spam, Glass, Yeast, and Digits.
In parallel, Table 3 shows the average clustering coefficient 𝐶 of the networks generated by the GNG and the kernel GNGs. The values of 𝐶 for the kernel GNGs with Gaussian, Cauchy, and IMQ kernels are often less than or equal to those of the GNG. The Laplacian kernel often produces a 𝐶 larger than the GNG's for lower-dimensional data, while the log kernel's 𝐶 tends to be higher than the other kernels' for higher-dimensional data.
In summary, the Gaussian and Cauchy kernels typically produce networks of equal or reduced complexity compared to the GNG. The IMQ kernel tends to produce simpler networks, the Laplacian kernel produces more complex networks, and the log kernel produces more complex networks especially for high-dimensional data.
7. CONCLUSIONS
This paper describes the kernel growing neural gas (GNG) and investigates the characteristics of
the networks it generates. Several kernel functions are tested, including Gaussian, Laplacian,
IMQ, Cauchy, and log kernels. The results show that the reference vectors produced by the
kernel GNG match the input dataset, which is confirmed by the sufficiently small mean square
error (MSE) values. However, the topology of the network generated by kernel GNG depends on
the kernel function parameter 𝛾. The average degree and the average clustering coefficient
decrease as 𝛾 increases for Gaussian, Laplacian, Cauchy, and IMQ kernels.
The choice between kernel GNG and GNG is not straightforward, mainly because the only discernible difference in our results lies in the metrics of the network topology. However, if we wish to avoid more edges and a higher clustering coefficient (or more triangles), a kernel GNG with a larger value of 𝛾 will be more appropriate. Such a feature would be particularly valuable for 3D graphics applications, where the kernel GNG may be able to simplify the mesh structure of polygons for more efficient rendering.
Table 2: The average degree 𝑁𝑑 of the networks generated by the GNG and the kernel GNGs
dataset D GNG Gaussian Laplacian Cauchy IMQ Log
Square 2 4.24 3.97 6.58 4.09 3.92 4.18
Blobs 2 4.09 3.85 10.89 3.91 3.70 3.94
Circles 2 2.78 2.63 4.89 2.70 2.59 2.61
Moons 2 3.06 2.83 6.76 3.02 2.79 2.93
Swiss_roll 3 4.14 3.90 4.84 4.01 3.80 4.25
S_curve 3 4.19 3.98 4.92 4.07 3.82 4.16
Aggregation 2 3.99 3.73 7.29 3.87 3.67 3.83
Compound 2 3.18 2.77 6.59 3.06 2.51 2.82
t4.8k 2 4.09 4.05 6.55 3.98 4.08 4.06
Iris 4 1.97 1.83 3.11 1.88 2.07 1.79
Wine 13 2.91 2.55 3.05 2.63 2.51 3.16
Spam 57 11.52 10.46 11.76 11.94 11.86 12.02
CNAE 857 8.36 2.13 4.70 2.04 2.09 7.24
Ecoli 7 4.28 3.62 4.76 3.86 3.37 4.11
Glass 9 2.49 2.19 2.94 2.36 2.33 2.67
Yeast 8 9.03 8.95 9.33 9.19 8.63 11.20
Digits 64 5.07 4.24 4.61 5.31 5.05 6.45
𝐷 indicates the dimension of a data point. The Gaussian, Laplacian, Cauchy, and IMQ kernels use a 𝛾 parameter of 1.8, while the log kernel uses a 𝛾 parameter of 3. The values are the means of 10 runs with random initial values. The largest and smallest values are bold and italic, respectively.
Table 3: Average clustering coefficient of a network generated by GNG and kernel GNGs
dataset D GNG Gaussian Laplacian Cauchy IMQ log
Square 2 0.32 0.28 0.47 0.31 0.26 0.32
Blobs 2 0.38 0.30 0.55 0.34 0.28 0.33
Circles 2 0.34 0.29 0.59 0.31 0.26 0.27
Moons 2 0.37 0.28 0.65 0.34 0.27 0.31
Swiss_roll 3 0.33 0.27 0.41 0.30 0.26 0.34
S_curve 3 0.34 0.30 0.42 0.31 0.28 0.32
Aggregation 2 0.46 0.37 0.62 0.41 0.38 0.42
Compound 2 0.28 0.22 0.55 0.27 0.18 0.23
t4.8k 2 0.33 0.30 0.50 0.30 0.31 0.32
Iris 4 0.08 0.05 0.30 0.08 0.08 0.04
Wine 13 0.16 0.12 0.18 0.13 0.11 0.18
Spam 57 0.28 0.25 0.27 0.27 0.26 0.28
CNAE 857 0.26 0.04 0.21 0.02 0.03 0.38
Ecoli 7 0.25 0.19 0.29 0.24 0.19 0.24
Glass 9 0.14 0.10 0.30 0.13 0.12 0.14
Yeast 8 0.37 0.34 0.34 0.34 0.32 0.38
Digits 64 0.30 0.25 0.23 0.29 0.27 0.39
𝐷 indicates the dimension of a data point. The values are the means of 10 runs with random initial values. The largest and smallest values are bold and italic, respectively.
REFERENCES
[1] Cabanes, G., Bennani, Y. & Fresneau, D. (2012) Enriched topological learning for cluster detection
and visualization. Neural Networks 32, 186–195.
[2] Rossi, F. (2014) How many dissimilarity/kernel self-organizing map variants do we need? In
Villmann, T., Schleif, F.-M., Kaden, M. & Lange, M. (eds.) Advances in Self-Organizing Maps and
Learning Vector Quantization, 3–23 (Springer International Publishing, Cham, 2014).
[3] Fritzke, B. (1994). A growing neural gas network learns topologies. In Proceedings of the 7th
International Conference on Neural Information Processing Systems, NIPS’94, 625–632 (MIT
Press, Cambridge, MA, USA).
[4] Cortes, C. & Vapnik, V. (1995) Support-vector networks. Machine Learning 20, 273–297.
[5] Chen, N. & Zhang, H. (2009) Extended kernel self-organizing map clustering algorithm. In 5th
International Conference on Natural Computation, ICNC 2009, 454–458.
[6] Vesanto, J. & Alhoniemi, E. (2000) Clustering of the self-organizing map. IEEE Transactions on
Neural Networks 11, 586–600.
[7] Chang, C.-H., Xu, P., Xiao, R. & Srikanthan, T. (2005) New adaptive color quantization method
based on self-organizing maps. IEEE Transactions on Neural Networks 16, 237–249.
[8] Rasti, J., Monadjemi, A. & Vafaei, A. (2011) Color reduction using a multi-stage Kohonen
self-organizing map with redundant features. Expert Systems with Applications 38, 13188–13197.
[9] Heskes, T. (2001) Self-organizing maps, vector quantization, and mixture modeling. IEEE
Transactions on Neural Networks 12, 12–1299.
[10] Singh, R., Cherkassky, V. & Papanikolopoulos, N. (2000) Self-organizing maps for the
skeletonization of sparse shapes. IEEE Transactions on Neural Networks and Learning Systems 11,
241–248.
[11] Kohonen, T. (1982) Self-organized formation of topologically correct feature maps. Biological
Cybernetics 43, 59–69.
[12] Sun, Q., Liu, H. & Harada, T. (2017) Online growing neural gas for anomaly detection in changing
surveillance scenes. Pattern Recognition 64, 187–201.
[13] Fiser, D., Faigl, J. & Kulich, M. (2013) Growing neural gas efficiently. Neurocomputing 104, 72–
82.
[14] García-RodríGuez, J. et al. (2012) Autonomous growing neural gas for applications with time
constraint: Optimal parameter estimation. Neural Networks 32, 196–208.
[15] Angelopoulou, A., Psarrou, A. & García-Rodríguez, J. (2011) A growing neural gas algorithm with
applications in hand modelling and tracking. In Cabestany, J., Rojas, I. & Joya, G. (eds.) Advances
in Computational Intelligence, 236–243 (Springer Berlin Heidelberg, Berlin, Heidelberg).
[16] Angelopoulou, A., García-Rodríguez, J., Orts-Escolano, S., Gupta, G. & Psarrou, A. (2018) Fast
2d/3d object representation with growing neural gas. Neural Computing and Applications 29, 903–919.
[17] Fujita, K. (2013) Extract an essential skeleton of a character as a graph from a character image.
International Journal of Computer Science Issues 10, 35–39.
[18] Holdstein, Y. & Fischer, A. (2008) Three-dimensional surface reconstruction using meshing
growing neural gas (MGNG). The Visual Computer 24, 295–302.
[19] Fatemizadeh, E., Lucas, C. & Soltanian-Zadeh, H. (2003) Automatic landmark extraction from
image data using modified growing neural gas network. IEEE Transactions on Information
Technology in Biomedicine 7, 77–85.
[20] Frezza-Buet, H. (2008) Following non-stationary distributions by controlling the vector
quantization accuracy of a growing neural gas network. Neurocomputing 71, 1191–1202.
[21] Canales, F. & Chacón, M. (2007) Modification of the growing neural gas algorithm for cluster
analysis. In Rueda, L., Mery, D. & Kittler, J. (eds.) Progress in Pattern Recognition, Image
Analysis and Applications, 684–693.
[22] Costa, J. A. F. & Oliveira, R. S. (2007) Cluster analysis using growing neural gas and graph
partitioning. In Proceedings of 2007 International Joint Conference on Neural Networks, 3051–
3056 (2007).
[23] Fujita, K. (2021) Approximate spectral clustering using both reference vectors and topology of the
network generated by growing neural gas. PeerJ Comput. Sci. 7, e679.
[24] Lee, M.-C. & Chang, T. (2010) Comparison of support vector machine and back propagation neural
network in evaluating the enterprise financial distress. International Journal of Artificial Intelligence
& Applications 1, 31–43.
[25] Girolami, M. (2002) Mercer kernel-based clustering in feature space. IEEE Transactions on Neural
Networks 13, 780–784.
[26] Aiolli, F., Martino, G. D. S., Sperduti, A. & Hagenbuchner, M. (2007) "Kernelized" self-organizing
maps for structured data. In ESANN 2007, 15th European Symposium on Artificial Neural
Networks, Bruges, Belgium, April 25-27, 2007, Proceedings, 19–24.
[27] András, P. (2002) Kernel-kohonen networks. International Journal of Neural Systems 12, 117–135.
[28] Lau, K. W., Yin, H. & Hubbard, S. (2006) Kernel self-organising maps for classification.
Neurocomputing 69, 2033–2040.
[29] Saramäki, J., Kivelä, M., Onnela, J.-P., Kaski, K. & Kertész, J. (2007) Generalizations of the
clustering coefficient to weighted complex networks. Physical Review E 75, 027105.
[30] Gionis, A., Mannila, H. & Tsaparas, P. (2007) Clustering aggregation. ACM Trans. Knowl. Discov.
Data 1, 1–30.
[31] Zahn, C. T. (1971) Graph-theoretical methods for detecting and describing gestalt clusters. IEEE
Trans. Computers 20, 68–86.
[32] Chang, H. & Yeung, D.-Y. (2008) Robust path-based spectral clustering. Pattern Recognition 41,
191–203.
[33] Veenman, C., Reinders, M. & Backer, E. (2002) A maximum variance cluster algorithm. IEEE
Transactions on Pattern Analysis and Machine Intelligence 24, 1273–1280.
[34] Jain, A. K. & Law, M. H. C. (2005) Data clustering: A user’s dilemma. In Pal, S. K.,
Bandyopadhyay, S. & Biswas, S. (eds.) Pattern Recognition and Machine Intelligence, 1–10
(Springer Berlin Heidelberg, Berlin, Heidelberg).
[35] Fu, L. & Medico, E. (2007) FLAME, a novel fuzzy clustering method for the analysis of DNA
microarray data. BMC Bioinformatics 8, 3.
[36] Karypis, G., Han, E. & Kumar, V. (1999) Chameleon: Hierarchical clustering using dynamic
modeling.
AUTHOR
Kazuhisa Fujita received a Ph.D. degree from the University of Electro-
Communications in 2007. He was a Research Associate at the Department of Computer
and Information Engineering, National Institute of Technology, Tsuyama College, in
2007-2013. He was an Assistant Professor at National Institute of Technology,
Tsuyama College, in 2013-2018. He was also a Visiting Fellow at the University of
Electro-Communications in 2011-2022. From 2018, he worked as an Associate
Professor at Komatsu University. His current research interests include game-playing
artificial intelligence, neural networks, clustering methods, and the free energy principle.