Kohonen Self-Organizing Map (KSOM)
 A Self-Organizing Map (SOM) differs from typical artificial neural networks (ANNs) in both its architecture and its algorithmic properties.
 Its structure consists of a single layer: a linear 2D grid (lattice) of neurons, rather than a series of stacked layers.
 All the nodes on this lattice are connected directly to the input vector, but not to each other. This means the nodes do not know the values of their neighbours and update the weights of their connections only as a function of the given input.
 The grid itself is the map, and it organizes itself at each iteration as a function of the input data. As a result, after clustering, each node has its own coordinate (i, j), which makes it possible to calculate the Euclidean distance between two nodes using the Pythagorean theorem (a worked example follows this list).
 A Self-Organizing Map uses competitive learning instead of error-correction learning to modify its weights.
 This implies that only a single node is activated at each cycle in which the features of an input vector are presented to the network, because all nodes compete for the right to respond to the input.
 The selected node, the Best Matching Unit (BMU), is chosen according to the similarity between the current input values and the weight vectors of the nodes in the network.
 The Kohonen Self-Organizing feature map (SOM) therefore refers to a neural network that is trained using competitive learning.
 Basic competitive learning implies that the competition process takes place before each learning cycle: the winning node is determined first, and only then are its weights updated.
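As a worked example of the distance calculation mentioned above (the coordinates are purely illustrative): two nodes at grid positions (1, 2) and (4, 6) lie sqrt((4 - 1)^2 + (6 - 2)^2) = sqrt(9 + 16) = 5 grid units apart.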
The architecture of a Self-Organizing Map with two clusters and n input features per sample is shown below:
How does a SOM work?
Suppose the input data has size (m, n), where m is the number of training examples and n is the number of features in each example. First, the algorithm initializes the weights with size (n, C), where C is the number of clusters. Then, iterating over the input data, it updates for each training example the winning vector (the weight vector with the shortest distance, e.g. Euclidean distance, from the training example). The weight update rule is given by:
w_ij(new) = w_ij(old) + alpha(t) * (x_i^k - w_ij(old))
where alpha(t) is the learning rate at time t, j denotes the winning vector, i denotes the i-th feature of the training example, and k denotes the k-th training example in the input data. After the SOM network has been trained, the trained weights are used for clustering new examples: a new example falls into the cluster of its winning vector.
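As an illustrative numerical check of this rule (all values chosen only for the example): with alpha(t) = 0.5, w_ij(old) = 0.2 and x_i^k = 0.8, the update gives w_ij(new) = 0.2 + 0.5 * (0.8 - 0.2) = 0.5, i.e. the winning weight moves halfway toward the corresponding input feature.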
Algorithm
The steps involved are:
1. Initialize the weights
2. For 1 to N epochs
3. Select a training example
4. Compute the winning vector
5. Update the winning vector
6. Repeat steps 3, 4 and 5 for all training examples
7. Cluster the test samples
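Below is a minimal NumPy sketch of these steps. The toy data set, the choice of C = 2 clusters, the decaying learning-rate schedule and the epoch count are illustrative assumptions, not values prescribed by the text above.

import numpy as np

def train_som(X, C=2, epochs=100, alpha0=0.5, seed=0):
    # X has shape (m, n): m training examples with n features each.
    m, n = X.shape
    rng = np.random.default_rng(seed)
    # Step 1: weight initialization, shape (n, C) as described above.
    W = rng.random((n, C))
    for t in range(epochs):                      # Step 2: for 1 to N epochs
        alpha = alpha0 * (1.0 - t / epochs)      # decaying learning rate alpha(t) (an assumed schedule)
        for k in range(m):                       # Steps 3 and 6: iterate over all training examples
            x = X[k]
            # Step 4: winning vector = column of W closest to x (Euclidean distance).
            j = np.argmin(np.linalg.norm(W - x[:, None], axis=0))
            # Step 5: move the winning vector toward the example.
            W[:, j] += alpha * (x - W[:, j])
    return W

def cluster(X, W):
    # Step 7: each example falls into the cluster of its winning vector.
    return np.array([np.argmin(np.linalg.norm(W - x[:, None], axis=0)) for x in X])

# Usage with a made-up data set of 4 examples and 4 binary features.
X = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.1]])
W = train_som(X, C=2, epochs=100)
print(cluster(X, W))   # cluster index assigned to each example

Note that this sketch updates only the single winning vector, exactly as in the steps above; a full 2D Kohonen map would additionally update the winner's grid neighbours using a neighbourhood function.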