Introduction to Machine Learning
BECC0305
Prof Santanu Chowdhury
GLA University, Mathura
Types of Learning
1. Supervised Learning
2. Unsupervised Learning
UNSUPERVISED CLASSIFICATION
[Figure: 2D feature space (x, y), both axes running 0–6, showing the eight example points]
Examples (used to find the tentative clusters):

#   x   y
1   0   4
2   0   5
3   1   5
4   4   0
5   5   0
6   5   1
7   4   5
8   5   5
UNSUPERVISED CLASSIFICATION
1. Cluster Seeking Algorithm
2. Cluster Refinement Algorithms
To find Tentative Clusters — Pass 1
1. Initiate with an arbitrary C0 = (x0, y0). Let C0 = (0,5), the first point found by row-wise scanning.
2. Find the next tentative centre C1 = (x1, y1): C1 is the point Pi(x,y) for which D(Pi, C0) is maximum.
The distances D below are squared Euclidean distances (the square root is omitted, since it does not change the argmax).

No   Pi(x,y)   D(Pi, C0)
1    (0,4)     1
2    (0,5)     0
3    (1,5)     1
4    (4,0)     41
5    (5,0)     50
6    (5,1)     41
7    (4,5)     16
8    (5,5)     25

The argmax of D(Pi, C0) is P5, so C1 = (5,0).
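The Pass 1 column can be reproduced in a few lines of Python (a sketch; squared Euclidean distance, as in the table):

```python
# Squared Euclidean distances from C0 = (0, 5) to each example point.
points = [(0, 4), (0, 5), (1, 5), (4, 0), (5, 0), (5, 1), (4, 5), (5, 5)]
c0 = (0, 5)

def sq_dist(p, q):
    """Squared Euclidean distance (sqrt omitted; the argmax is unchanged)."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

dists = [sq_dist(p, c0) for p in points]
c1 = points[max(range(len(points)), key=lambda i: dists[i])]
print(dists)  # [1, 0, 1, 41, 50, 41, 16, 25]
print(c1)     # (5, 0)
```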
To find Tentative Clusters — Pass 2
3. Find the next tentative centre C2 = (x2, y2):
   C2 = argmax_i { min( D(Pi, C0), D(Pi, C1) ) }
again using squared Euclidean distance, with C0 = (0,5) and C1 = (5,0).

No   Pi(x,y)   D(Pi, C0)   D(Pi, C1)   Dmin = min(D(Pi,C0), D(Pi,C1))
1    (0,4)     1           41          1
2    (0,5)     0           50          0
3    (1,5)     1           41          1
4    (4,0)     41          1           1
5    (5,0)     50          0           0
6    (5,1)     41          1           1
7    (4,5)     16          26          16
8    (5,5)     25          25          25

The argmax of Dmin is P8, so C2 = (5,5).
Summary — Tentative Clusters (squared Euclidean distance)
C0 = (0,5)   (row-wise scanning)
C1 = (5,0)   (Pass 1)
C2 = (5,5)   (Pass 2)

[Figure: feature space with the tentative centres C0, C1 and C2 marked]
1. Cluster Seeking Algorithm
2. Cluster Refinement Algorithms
UNSUPERVISED CLASSIFICATION
Cluster Seeking by the Maximin Algorithm:
1. Choose any vector Xl as the first cluster centre.
2. The vector Xm that maximises the distance to the first cluster centre becomes the second cluster centre.
3. Given k cluster centres, the vector Xn that maximises its minimum distance to those k centres becomes the (k+1)th cluster centre.
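The Maximin steps can be sketched in plain Python (a minimal sketch using squared Euclidean distance, as in the worked example; the function name is illustrative):

```python
def maximin_centres(points, k, first=0):
    """Maximin cluster seeking: start from an arbitrary point, then repeatedly
    add the point whose minimum distance to the existing centres is largest."""
    def sq_dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    centres = [points[first]]
    while len(centres) < k:
        # For each point, its distance to the nearest existing centre ...
        d_min = [min(sq_dist(p, c) for c in centres) for p in points]
        # ... and the point maximising that minimum becomes the next centre.
        centres.append(points[max(range(len(points)), key=lambda i: d_min[i])])
    return centres

points = [(0, 4), (0, 5), (1, 5), (4, 0), (5, 0), (5, 1), (4, 5), (5, 5)]
print(maximin_centres(points, 3, first=1))  # [(0, 5), (5, 0), (5, 5)]
```

Seeded with (0,5), this reproduces the tentative centres found in Passes 1 and 2.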
Refinement of Cluster Centres using the K-Means Algorithm

Examples:
#   x   y
1   0   4
2   0   5
3   1   5
4   4   0
5   5   0
6   5   1
7   4   5
8   5   5

Initial cluster centres (from the Maximin algorithm):
C0i = (0,5), C1i = (5,0), C2i = (5,5)
Refinement of Cluster Centres using the K-Means Algorithm
2D feature space, 3-cluster problem
K-Means, first iteration — distances and assignments, with initial centres C0i = (0,5), C1i = (5,0), C2i = (5,5) (squared Euclidean distances):

No   Pi(x,y)   D(Pi,C0)   D(Pi,C1)   D(Pi,C2)   min(D)   k = Argmin over k=0,1,2
1    (0,4)     1          41         26         1        0
2    (0,5)     0          50         25         0        0
3    (1,5)     1          41         16         1        0
4    (4,0)     41         1          26         1        1
5    (5,0)     50         0          25         0        1
6    (5,1)     41         1          16         1        1
7    (4,5)     16         26         1          1        2
8    (5,5)     25         25         0          0        2
K-Means, first iteration — recomputing the centres
Each new centre Ckf is the mean of the Nk points assigned to cluster k:

k   Assigned points           Sum       Nk   Ckf
0   (0,4), (0,5), (1,5)       (1, 14)   3    (0.33, 4.67)
1   (4,0), (5,0), (5,1)       (14, 1)   3    (4.67, 0.33)
2   (4,5), (5,5)              (9, 10)   2    (4.5, 5)

Refined centres: C0f = (0.33, 4.67), C1f = (4.67, 0.33), C2f = (4.5, 5)
K-Means — next assignment pass
Recomputing the distances D(Pi, Ckf) from the refined centres C0f = (0.33, 4.67), C1f = (4.67, 0.33), C2f = (4.5, 5) assigns every point to the same cluster as before: points 1–3 to cluster 0, points 4–6 to cluster 1, and points 7–8 to cluster 2. The centres therefore do not move, and the algorithm has converged.
Cluster Refinement by the K-Means Algorithm
1. Determine the initial K cluster centres by the Maximin algorithm.
2. Assign each object to the group with the closest centroid.
3. When all objects have been assigned, recalculate the positions of the K centroids.
4. Repeat steps 2 and 3 until the centroids no longer move.
This produces a separation of the objects into groups from which the metric to be minimised can be calculated.
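The four steps can be sketched in plain Python (squared Euclidean distance; a sketch that assumes no cluster ever goes empty, and reproduces the worked example when seeded with the Maximin centres):

```python
def kmeans(points, centres, max_iter=100):
    """Plain K-means: assign each point to the nearest centre, then move each
    centre to the mean of its assigned points; stop when the centres are stable."""
    for _ in range(max_iter):
        clusters = [[] for _ in centres]
        for p in points:
            k = min(range(len(centres)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(p, centres[k])))
            clusters[k].append(p)
        # Recompute each centre as the mean of its assigned points.
        new = [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in clusters]
        if new == centres:
            break
        centres = new
    return centres

points = [(0, 4), (0, 5), (1, 5), (4, 0), (5, 0), (5, 1), (4, 5), (5, 5)]
final = kmeans(points, [(0, 5), (5, 0), (5, 5)])
print([tuple(round(v, 2) for v in c) for c in final])
# [(0.33, 4.67), (4.67, 0.33), (4.5, 5.0)]
```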
Classification:
1. Supervised Classification
• Training samples have known labels, enabling the characteristics of each class to be estimated as a set of parameters representing that class.
• The goal is to assign each pixel vector a label by computing its distance to each class and choosing the class at minimum distance, using Euclidean, City Block, Mahalanobis, or other distance measures.
2. Unsupervised Classification
• Initially the samples have no labels.
• A clustering technique partitions the n samples of the dataset into k clusters, with each sample belonging to exactly one cluster.
• K-Means is a clustering algorithm whose initial cluster centres can be obtained randomly, by a cluster-seeking algorithm such as Maximin, or by a neural network such as the Self-Organizing Map.
ISODATA Algorithm – Extension of the K-Means Algorithm
ISODATA: Iterative Self-Organising Data Analysis Technique

[Figure: 2D feature space (x1, x2) showing clusters C1–C7]

Cluster   #Examples   Mean Vector   Variance Vector   Principal Eigenvector v1T   Principal Eigenvalue
C1        100         (2, 8)        (-0.25, 0.25)
C2        100         (2, 7.5)      (-0.25, 0.25)
C3        100         (2.5, 8)      (-0.25, 0.25)
C4        1000        (5, 5)        (7, 7)            (0.707, 0.707)              10
C5        100         (6, 3)        (0.25, 0.25)
C6        300         (8, 3)        (1, 1)            (1, 0)
C7        20          (9, 6)        (0.12, 0.12)

This situation shows the need for:
a. Splitting
b. Merging
c. Rejecting
d. Refining (K-Means)
ISODATA Algorithm – Extension of the K-Means Algorithm
Rejecting a Cluster
Reject a cluster if its population ≤ 50.

Before (7 clusters):
Cluster   #Examples   Mean Vector
C1        100         (2, 8)
C2        100         (2, 7.5)
C3        100         (2.5, 8)
C4        1000        (5, 5)
C5        100         (6, 3)
C6        300         (8, 3)
C7        20          (9, 6)

Rejecting cluster C7 (population 20 ≤ 50): its examples are redistributed to the other clusters.

After (6 clusters):
Cluster   #Examples   Mean Vector
C1        100         (2, 8)
C2        100         (2, 7.5)
C3        100         (2.5, 8)
C4        1000        (5, 5)
C5        100         (6, 3)
C6        300         (8, 3)
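The rejection step is a simple population filter; a sketch with illustrative data structures (the dict layout is an assumption, and in a full ISODATA pass the rejected cluster's examples would then be reassigned to the nearest surviving mean):

```python
# (population, mean vector) per cluster, from the table above.
clusters = {"C1": (100, (2, 8)), "C2": (100, (2, 7.5)), "C3": (100, (2.5, 8)),
            "C4": (1000, (5, 5)), "C5": (100, (6, 3)), "C6": (300, (8, 3)),
            "C7": (20, (9, 6))}

# Reject clusters whose population is <= 50.
survivors = {name: v for name, v in clusters.items() if v[0] > 50}
print(sorted(survivors))  # ['C1', 'C2', 'C3', 'C4', 'C5', 'C6']
```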
ISODATA Algorithm – Extension of the K-Means Algorithm
Merging of Clusters
Cluster means: C1 (2,8), C2 (2,7.5), C3 (2.5,8), C4 (5,5), C5 (6,3), C6 (8,3).
Inter-cluster (Euclidean) distances between the cluster means:

      C1    C2     C3      C4     C5     C6
C1    0     0.5    0.5     4.24   6.4    7.8
C2          0      0.707   3.9    6.02   7.5
C3                 0       3.9    6.1    7.43
C4                         0      2.24   3.6
C5                                0      2
C6                                       0

Merging criterion: merge if the inter-cluster distance ≤ 1.
D(C1,C2), D(C1,C3), D(C2,C3) < 1, so merge clusters C1, C2 and C3.
ISODATA Algorithm – Extension of the K-Means Algorithm
Merging of Clusters: since D(C1,C2), D(C1,C3), D(C2,C3) < 1, merge clusters C1, C2 and C3 into CN1.

Cluster   #Examples   Mean Vector
C1        100         (2, 8)
C2        100         (2, 7.5)
C3        100         (2.5, 8)

Mean of merged cluster CN1 = [100×(2,8) + 100×(2,7.5) + 100×(2.5,8)] / 300 = (2.16, 7.83)
Population of merged cluster CN1 = 100 + 100 + 100 = 300

After merging (6 clusters → 4 clusters):
Cluster   #Examples   Mean Vector
CN1       300         (2.16, 7.83)
C4        1000        (5, 5)
C5        100         (6, 3)
C6        300         (8, 3)
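The population-weighted mean of the merged cluster can be checked in a few lines (note the slides truncate 2.166… to 2.16; rounding gives 2.17):

```python
# Merge C1, C2, C3: the new mean is weighted by each cluster's population.
members = [(100, (2, 8)), (100, (2, 7.5)), (100, (2.5, 8))]

n_total = sum(n for n, _ in members)
mean = tuple(sum(n * m[i] for n, m in members) / n_total for i in range(2))
print(n_total, tuple(round(v, 2) for v in mean))  # 300 (2.17, 7.83)
```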
ISODATA Algorithm – Extension of the K-Means Algorithm
Splitting of Clusters
Split a cluster if its principal eigenvalue λ1 > 7 units.

Cluster   #Examples   Mean Vector     Variance Vector   Principal Eigenvector v1T   Principal Eigenvalue λ1
CN1       300         (2.16, 7.83)
C4        1000        (5, 5)          (7, 7)            (0.707, 0.707)              10
C5        100         (6, 3)          (0.25, 0.25)
C6        300         (8, 3)          (1, 1)            (1, 0)

λ1 of C4 = 10 > 7, so split C4 into two new clusters C4a and C4b:
Mean of C4a = (5,5) − √λ1 × v1T = (5,5) − 3.16 × (0.707, 0.707) = (2.76, 2.76)
Mean of C4b = (5,5) + √λ1 × v1T = (5,5) + 3.16 × (0.707, 0.707) = (7.23, 7.23)
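The split of C4 can be checked numerically; the step size is √λ1 ≈ 3.16 along the principal eigenvector (the slides truncate the resulting coordinates to two decimals):

```python
import math

mean = (5, 5)
eigval = 10              # principal eigenvalue λ1 of C4
eigvec = (0.707, 0.707)  # principal eigenvector v1 (approximately unit length)

step = math.sqrt(eigval)  # √λ1 ≈ 3.16
c4a = tuple(m - step * v for m, v in zip(mean, eigvec))
c4b = tuple(m + step * v for m, v in zip(mean, eigvec))
print(c4a, c4b)  # both coordinates of c4a ≈ 2.76, of c4b ≈ 7.24
```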
ISODATA Algorithm – Extension of the K-Means Algorithm
Splitting of Clusters (split if the principal eigenvalue > 7 units)

Before splitting (4 clusters):
Cluster   Mean Vector
CN1       (2.16, 7.83)
C4        (5, 5)
C5        (6, 3)
C6        (8, 3)

After splitting C4 (4 clusters → 5 clusters):
Cluster   Mean Vector
CN1       (2.16, 7.83)
C4a       (2.76, 2.76)
C4b       (7.23, 7.23)
C5        (6, 3)
C6        (8, 3)
ISODATA Algorithm – Extension of the K-Means Algorithm
K-Means: the number of clusters is fixed.
ISODATA: the number of clusters varies from iteration to iteration, through rejection, merging and splitting.
UNSUPERVISED CLASSIFICATION
Competitive Learning
Competitive & Co-operative Learning: Self-Organizing Map

Algorithm
Data x_i with features x_0, x_1, x_2, …, x_(N−1)
Neurons with weights w_c0, w_c1, …, w_c(M−1)
Examples are sequentially presented to the input layer and the neuron weights are updated.

1. Initialization: initialize the neuron weights (output layer) to random numbers.
2. Competition — select the winner node: using a distance measure (Euclidean, Manhattan, etc.), the minimum-distance node is the winner, with weights w_win(t).
3. Adaptation — adapt the weights of the winning node:
   w_win(t+1) = w_win(t) + α · (x_i − w_win(t))
4. Continue steps 2–3 for each x_i (the presentation of all x_i constitutes one epoch).
5. Continue epochs (steps 2–4) until convergence.
6. Result: the converged neuron weights w_k.
Competitive Learning — Example
Data vectors x_i = (x_0i, x_1i): (1,2), (2,1), (8,9), (10,9), (9,9)

Step 1: Initialize the neuron weights, or take the last epoch's weights.
At the start of the epoch: Neuron 0 w_c0 = (3,4); Neuron 1 w_c1 = (6,5).

[Figure: 2D feature space (x0, x1) showing the data vectors and the two current neuron weight vectors; network diagram with input nodes x0i, x1i connected to the two output neurons through weights (w0c0, w1c0) and (w0c1, w1c1)]
Competitive Learning — executing one epoch
Step 2: Scramble the example set.
Scrambled presentation order for this epoch: (8,9), (1,2), (10,9), (2,1), (9,9)
Competitive Learning — Step 3: Learning, executing one epoch
Learning rate α = 0.25; distance metric: Manhattan.
At the start of the epoch: Neuron 0 w_c0 = (3,4); Neuron 1 w_c1 = (6,5).
Adaptation rule for the winning node: w_win(t+1) = w_win(t) + α · (x_i − w_win(t))

x_i      w_c0             w_c1             d_c0    d_c1     Winner w_win(t)   α·(x_i − w_win(t))   New w_win(t+1)
(8,9)    (3,4)            (6,5)            10      6        (6,5)             (0.5, 1.0)           (6.5, 6.0)
(1,2)    (3,4)            (6.5, 6.0)       4       9.5      (3,4)             (−0.5, −0.5)         (2.5, 3.5)
(10,9)   (2.5, 3.5)       (6.5, 6.0)       13      6.5      (6.5, 6.0)        (0.875, 0.75)        (7.375, 6.75)
(2,1)    (2.5, 3.5)       (7.375, 6.75)    3       11.125   (2.5, 3.5)        (−0.125, −0.625)     (2.375, 2.875)
(9,9)    (2.375, 2.875)   (7.375, 6.75)    12.75   3.875    (7.375, 6.75)     (0.406, 0.563)       (7.781, 7.313)

At the end of the epoch:
Neuron 0 w_c0 = (2.375, 2.875)
Neuron 1 w_c1 = (7.781, 7.313)
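The epoch can be reproduced directly in Python (a sketch; Manhattan distance, learning rate α = 0.25):

```python
alpha = 0.25
weights = [[3.0, 4.0], [6.0, 5.0]]                 # neuron 0 and neuron 1
epoch = [(8, 9), (1, 2), (10, 9), (2, 1), (9, 9)]  # scrambled presentation order

def manhattan(p, w):
    return sum(abs(a - b) for a, b in zip(p, w))

for x in epoch:
    # Competition: the minimum-distance neuron wins.
    win = min(range(2), key=lambda k: manhattan(x, weights[k]))
    # Adaptation: only the winner moves toward the example.
    weights[win] = [w + alpha * (a - w) for w, a in zip(weights[win], x)]

print(weights)  # [[2.375, 2.875], [7.78125, 7.3125]]
```

The exact end-of-epoch weights are (2.375, 2.875) and (7.78125, 7.3125), which round to the values in the table.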
Competitive Learning — after the epoch

[Figure: two 2D feature-space plots (x0, x1): the data vectors with the neuron weights at the start of the epoch, and again at the end, each weight vector having moved toward one of the two natural clusters]

At the end of the epoch:
Neuron 0 w_c0 = (2.375, 2.875)
Neuron 1 w_c1 = (7.781, 7.313)
Competitive & Co-operative Learning: Self-Organizing Map

Algorithm
Data x_i; neurons with weights w_k
1. Initialization: initialize the neuron weights (output layer) to random numbers.
2. Competition — select the winner node: using a distance measure (Euclidean, Manhattan, etc.), the minimum-distance node is the winner, with weights w_win(t). Identify the neighbours of the winning node: w_neigh1(t), w_neigh2(t), w_neigh3(t), etc.
3. Adaptation — adapt the weights of the winning node:
   w_win(t+1) = w_win(t) + α · (x_i − w_win(t))
4. Cooperation — adapt the weights of the neighbouring nodes k:
   w_neigh-k(t+1) = w_neigh-k(t) + α1 · (x_i − w_neigh-k(t)) for all k, with α1 < α
5. Continue steps 2–4 for each x_i (the presentation of all x_i constitutes one epoch).
6. Continue epochs (steps 2–5) until convergence.
7. Result: the converged neuron weights w_k.
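A minimal sketch of one SOM update for a 1-D chain of three neurons; the initial weights, learning rates and neighbourhood (the immediate left/right neighbours on the chain) are illustrative assumptions, not taken from the slides:

```python
alpha, alpha1 = 0.25, 0.1  # winner rate and (smaller) neighbour rate, α1 < α
weights = [[1.0, 1.0], [4.0, 4.0], [8.0, 8.0]]  # 1-D chain of 3 neurons (assumed)

def update(x):
    # Competition: squared Euclidean distance, minimum wins.
    win = min(range(len(weights)),
              key=lambda k: sum((a - w) ** 2 for a, w in zip(x, weights[k])))
    # Adaptation: the winner moves toward the example ...
    weights[win] = [w + alpha * (a - w) for w, a in zip(weights[win], x)]
    # ... and cooperation: its chain neighbours move too, but by less (α1 < α).
    for n in (win - 1, win + 1):
        if 0 <= n < len(weights):
            weights[n] = [w + alpha1 * (a - w) for w, a in zip(weights[n], x)]
    return win

win = update((9, 9))
print(win, weights)  # neuron 2 wins and moves most; neuron 1 moves slightly
```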
Thank You

More Related Content

Similar to unsupervised classification.pdf

Matrix Methods of Structural Analysis
Matrix Methods of Structural AnalysisMatrix Methods of Structural Analysis
Matrix Methods of Structural AnalysisDrASSayyad
 
ML basic &amp; clustering
ML basic &amp; clusteringML basic &amp; clustering
ML basic &amp; clusteringmonalisa Das
 
11-2-Clustering.pptx
11-2-Clustering.pptx11-2-Clustering.pptx
11-2-Clustering.pptxpaktari1
 
09-FL Defuzzyfication I.pptx
09-FL Defuzzyfication I.pptx09-FL Defuzzyfication I.pptx
09-FL Defuzzyfication I.pptxssusercae49e
 
PREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial Approach
PREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial ApproachPREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial Approach
PREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial Approachahmet furkan emrehan
 
Kernal based speaker specific feature extraction and its applications in iTau...
Kernal based speaker specific feature extraction and its applications in iTau...Kernal based speaker specific feature extraction and its applications in iTau...
Kernal based speaker specific feature extraction and its applications in iTau...TELKOMNIKA JOURNAL
 
Introduction to PyTorch
Introduction to PyTorchIntroduction to PyTorch
Introduction to PyTorchJun Young Park
 
Event classification & prediction using support vector machine
Event classification & prediction using support vector machineEvent classification & prediction using support vector machine
Event classification & prediction using support vector machineRuta Kambli
 
Efficient anomaly detection via matrix sketching
Efficient anomaly detection via matrix sketchingEfficient anomaly detection via matrix sketching
Efficient anomaly detection via matrix sketchingHsing-chuan Hsieh
 
Pert 05 aplikasi clustering
Pert 05 aplikasi clusteringPert 05 aplikasi clustering
Pert 05 aplikasi clusteringaiiniR
 
surveying and levelling
surveying and levellingsurveying and levelling
surveying and levellingSelf-employed
 
Maximum Likelihood Estimation of Beetle
Maximum Likelihood Estimation of BeetleMaximum Likelihood Estimation of Beetle
Maximum Likelihood Estimation of BeetleLiang Kai Hu
 

Similar to unsupervised classification.pdf (20)

Clustering
ClusteringClustering
Clustering
 
WEEK-1.pdf
WEEK-1.pdfWEEK-1.pdf
WEEK-1.pdf
 
Unidad 3 tarea 3 grupo208046_379
Unidad 3 tarea 3 grupo208046_379Unidad 3 tarea 3 grupo208046_379
Unidad 3 tarea 3 grupo208046_379
 
Cs36565569
Cs36565569Cs36565569
Cs36565569
 
Cryptographic system in polynomial residue classes for channels with noise an...
Cryptographic system in polynomial residue classes for channels with noise an...Cryptographic system in polynomial residue classes for channels with noise an...
Cryptographic system in polynomial residue classes for channels with noise an...
 
I046850
I046850I046850
I046850
 
Matrix Methods of Structural Analysis
Matrix Methods of Structural AnalysisMatrix Methods of Structural Analysis
Matrix Methods of Structural Analysis
 
ML basic &amp; clustering
ML basic &amp; clusteringML basic &amp; clustering
ML basic &amp; clustering
 
11-2-Clustering.pptx
11-2-Clustering.pptx11-2-Clustering.pptx
11-2-Clustering.pptx
 
09-FL Defuzzyfication I.pptx
09-FL Defuzzyfication I.pptx09-FL Defuzzyfication I.pptx
09-FL Defuzzyfication I.pptx
 
PREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial Approach
PREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial ApproachPREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial Approach
PREDICTION MODELS BASED ON MAX-STEMS Episode Two: Combinatorial Approach
 
Kernal based speaker specific feature extraction and its applications in iTau...
Kernal based speaker specific feature extraction and its applications in iTau...Kernal based speaker specific feature extraction and its applications in iTau...
Kernal based speaker specific feature extraction and its applications in iTau...
 
Introduction to PyTorch
Introduction to PyTorchIntroduction to PyTorch
Introduction to PyTorch
 
Fuzzy c means manual work
Fuzzy c means manual workFuzzy c means manual work
Fuzzy c means manual work
 
Event classification & prediction using support vector machine
Event classification & prediction using support vector machineEvent classification & prediction using support vector machine
Event classification & prediction using support vector machine
 
Rough K Means - Numerical Example
Rough K Means - Numerical ExampleRough K Means - Numerical Example
Rough K Means - Numerical Example
 
Efficient anomaly detection via matrix sketching
Efficient anomaly detection via matrix sketchingEfficient anomaly detection via matrix sketching
Efficient anomaly detection via matrix sketching
 
Pert 05 aplikasi clustering
Pert 05 aplikasi clusteringPert 05 aplikasi clustering
Pert 05 aplikasi clustering
 
surveying and levelling
surveying and levellingsurveying and levelling
surveying and levelling
 
Maximum Likelihood Estimation of Beetle
Maximum Likelihood Estimation of BeetleMaximum Likelihood Estimation of Beetle
Maximum Likelihood Estimation of Beetle
 

Recently uploaded

(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxhumanexperienceaaa
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...ranjana rawat
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 

Recently uploaded (20)

(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 

unsupervised classification.pdf

  • 1. Introduction to Machine Learning BECC0305 Prof Santanu Chowdhury GLA University, Mathura
  • 4. 0 1 2 3 4 5 6 0 1 2 3 4 5 6 y x Feature Space # x y 1 0 4 2 0 5 3 1 5 4 4 0 5 5 0 6 5 1 7 4 5 8 5 5 To find Tentative Clusters Examples
  • 5. UNSUPERVISED CLASSIFICATION 1. Cluster Seeking Algorithm 2. Cluster Refinement Algorithms
  • 6. 0 1 2 3 4 5 6 0 1 2 3 4 5 6 y x Feature Space No 𝑷𝒊(x,y) 𝑪𝟎 D(𝑷𝒊, 𝑪𝟎) Argmax D(𝑷𝒊, 𝑪𝟎) 𝑪𝟏 1 (0,4) (0,5) 2 (0,5) 3 (1,5) 4 (4,0) 5 (5,0) 6 (5,1) 7 (4,5) 8 (5,5) To find Tentative Clusters 1. Initiate with arbitrary 𝐶0 (𝑥0, 𝑦0) 2. Find next Tentative 𝐶1 (𝑥1, 𝑦1) 𝐶1= 𝑃𝑖(x,y) for which D(𝑃𝑖, 𝐶0) is maximum Use Euclidean Distance Let 𝐶0 = (0,5) Row wise Scanning Pass 1
  • 7. 0 1 2 3 4 5 6 0 1 2 3 4 5 6 y x Feature Space No 𝑷𝒊(x,y) 𝑪𝟎 D(𝑷𝒊, 𝑪𝟎) Argmax D(𝑷𝒊, 𝑪𝟎) 𝑪𝟏 1 (0,4) (0,5) 1 5 (5,0) 2 (0,5) 0 3 (1,5) 1 4 (4,0) 41 5 (5,0) 50 6 (5,1) 41 7 (4,5) 16 8 (5,5) 25 To find Tentative Clusters 1. Initiate with arbitrary 𝐶0 (𝑥0, 𝑦0) 2. Find next Tentative 𝐶1 (𝑥1, 𝑦1) 𝐶1= 𝑃𝑖(x,y) for which D(𝑃𝑖, 𝐶0) is maximum Use Euclidean Distance Let 𝐶0 = (0,5) Row wise Scanning Pass 1
  • 8. 0 1 2 3 4 5 6 0 1 2 3 4 5 6 y x Feature Space No 𝑷𝒊(x,y) 𝑪𝟎 D(𝑷𝒊, 𝑪𝟎) Argmax D(𝑷𝒊, 𝑪𝟎) 𝑪𝟏 D(𝑷𝒊, 𝑪𝟏) 𝑫𝒎𝒊𝒏 =min (D(𝑷𝒊, 𝑪𝟎), D(𝑷𝒊, 𝑪𝟏) ) Argmax (𝑫𝒎𝒊𝒏) 𝑪𝟐 1 (0,4) (0,5) 1 5 (5,0) 2 (0,5) 0 3 (1,5) 1 4 (4,0) 41 5 (5,0) 50 6 (5,1) 41 7 (4,5) 16 8 (5,5) 25 To find Tentative Clusters 3. Find next Tentative 𝐶2 (𝑥2, 𝑦2) 𝐂𝟐=argmax { min (D(𝐏𝐢, 𝐂𝟎), D(𝐏𝐢, 𝐂𝟏)) Use Euclidean Distance Pass 2
  • 9. 0 1 2 3 4 5 6 0 1 2 3 4 5 6 y x Feature Space No 𝑷𝒊(x,y) 𝑪𝟎 D(𝑷𝒊, 𝑪𝟎) Argmax D(𝑷𝒊, 𝑪𝟎) 𝑪𝟏 D(𝑷𝒊, 𝑪𝟏) 𝑫𝒎𝒊𝒏 =min (D(𝑷𝒊, 𝑪𝟎), D(𝑷𝒊, 𝑪𝟎) ) Argmax (𝑫𝒎𝒊𝒏) 𝑪𝟐 1 (0,4) (0,5) 1 5 (5,0) 41 1 8 (5,5) 2 (0,5) 0 50 0 3 (1,5) 1 41 1 4 (4,0) 41 1 1 5 (5,0) 50 0 0 6 (5,1) 41 1 1 7 (4,5) 16 26 16 8 (5,5) 25 25 25 To find Tentative Clusters 3. Find next Tentative 𝐶2 (𝑥2, 𝑦2) 𝐂𝟐=argmax { min (D(𝐏𝐢, 𝐂𝟎), D(𝐏𝐢, 𝐂𝟏)) Use Euclidean Distance Pass 2
  • 10. To find Tentative Clusters Use Euclidean Distance 𝐶0 = (0,5) Row wise Scanning 𝐶1 = (5,0) Pass 1 𝐶2 = (5,5) Pass 2 Summary 0 1 2 3 4 5 6 0 1 2 3 4 5 6 y x Feature Space 𝐶0 𝐶1 𝐶2 1. Cluster Seeking Algorithm 2. Cluster Refinement Algorithms UNSUPERVISED CLASSIFICATION
  • 11. Unsupervised Classification – Cluster seeking by the Maximin Algorithm:
       1. Choose any Xl as the first cluster center.
       2. The vector Xm that maximizes the distance to the first cluster center becomes the second cluster center.
       3. The vector Xn that maximizes the minimum vector-to-cluster distance among the k existing clusters becomes the (k+1)th cluster center.
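The three steps above can be sketched in a few lines of Python. This is a minimal illustration on the eight example points from the earlier slides, using squared Euclidean distance (the quantity tabulated in the passes); the function and variable names are my own.

```python
# Maximin cluster seeking: pick k tentative cluster centres so that each
# new centre is the point farthest from its nearest existing centre.
points = [(0, 4), (0, 5), (1, 5), (4, 0), (5, 0), (5, 1), (4, 5), (5, 5)]

def sqdist(p, q):
    """Squared Euclidean distance between two 2-D points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def maximin(points, k, first=0):
    """Return k tentative centres; `first` indexes the arbitrary C0."""
    centres = [points[first]]
    while len(centres) < k:
        # next centre = point whose distance to its NEAREST centre is largest
        nxt = max(points, key=lambda p: min(sqdist(p, c) for c in centres))
        centres.append(nxt)
    return centres

print(maximin(points, 3, first=1))  # C0 = (0,5) → [(0, 5), (5, 0), (5, 5)]
```

With C0 = (0,5) this reproduces the two passes on the slides: C1 = (5,0) (Pass 1) and C2 = (5,5) (Pass 2).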
  • 12. Refinement of Cluster Centres using the K-Means Algorithm [feature-space plot with C0, C1, C2]
       Examples: #1 (0,4), #2 (0,5), #3 (1,5), #4 (4,0), #5 (5,0), #6 (5,1), #7 (4,5), #8 (5,5)
       Initial Cluster Centres: C0i = (0,5), C1i = (5,0), C2i = (5,5)
  • 13. Refinement of Cluster Centres using the K-Means Algorithm – 2-D feature space, 3-cluster problem
       K-Means Algorithm, first iteration. Initial centres: C0i = (0,5), C1i = (5,0), C2i = (5,5).
       Worksheet table: No | Pi(x,y) | D(Pi,C0) | D(Pi,C1) | D(Pi,C2) | min(D) | k = argmin(min(D)) for k = 0,1,2 | Nk | Ckf (filled on the next slides).
       Refined centres: C0f = (0.33, 4.67), C1f = (4.67, 0.33), C2f = (4.5, 5)
  • 14. Refinement of Cluster Centres using the K-Means Algorithm – first iteration, distances and assignments filled (squared Euclidean distances; C0i = (0,5), C1i = (5,0), C2i = (5,5)):
       No | Pi(x,y) | D(Pi,C0) | D(Pi,C1) | D(Pi,C2) | min(D) | k = argmin
       1  | (0,4)  | 1  | 41 | 26 | 1 | 0
       2  | (0,5)  | 0  | 50 | 25 | 0 | 0
       3  | (1,5)  | 1  | 41 | 16 | 1 | 0
       4  | (4,0)  | 41 | 1  | 26 | 1 | 1
       5  | (5,0)  | 50 | 0  | 25 | 0 | 1
       6  | (5,1)  | 41 | 1  | 16 | 1 | 1
       7  | (4,5)  | 16 | 26 | 1  | 1 | 2
       8  | (5,5)  | 25 | 25 | 0  | 0 | 2
  • 15. Refinement of Cluster Centres using the K-Means Algorithm – first iteration, with cluster populations Nk and refined centres Ckf (C0i = (0,5), C1i = (5,0), C2i = (5,5)):
       No | Pi(x,y) | D(Pi,C0) | D(Pi,C1) | D(Pi,C2) | min(D) | k = argmin
       1  | (0,4)  | 1  | 41 | 26 | 1 | 0
       2  | (0,5)  | 0  | 50 | 25 | 0 | 0
       3  | (1,5)  | 1  | 41 | 16 | 1 | 0
       4  | (4,0)  | 41 | 1  | 26 | 1 | 1
       5  | (5,0)  | 50 | 0  | 25 | 0 | 1
       6  | (5,1)  | 41 | 1  | 16 | 1 | 1
       7  | (4,5)  | 16 | 26 | 1  | 1 | 2
       8  | (5,5)  | 25 | 25 | 0  | 0 | 2
       k = 0: coordinate sums (1, 14), N0 = 3 → C0f = (0.33, 4.67)
       k = 1: coordinate sums (14, 1), N1 = 3 → C1f = (4.67, 0.33)
       k = 2: coordinate sums (9, 10), N2 = 2 → C2f = (4.5, 5)
  • 16. Refinement of Cluster Centres using the K-Means Algorithm – second iteration with the refined centres C0f = (0.33, 4.67), C1f = (4.67, 0.33), C2f = (4.5, 5) (squared Euclidean distances, rounded):
       No | Pi(x,y) | D(Pi,C0f) | D(Pi,C1f) | D(Pi,C2f) | k = argmin
       1  | (0,4)  | 0.56 | 35.2 | 21.25 | 0
       2  | (0,5)  | 0.22 | 43.6 | 20.25 | 0
       3  | (1,5)  | 0.56 | 35.2 | 12.25 | 0
       4  | (4,0)  | 35.2 | 0.56 | 25.25 | 1
       5  | (5,0)  | 43.6 | 0.22 | 25.25 | 1
       6  | (5,1)  | 35.2 | 0.56 | 16.25 | 1
       7  | (4,5)  | 13.6 | 22.2 | 0.25  | 2
       8  | (5,5)  | 21.9 | 21.9 | 0.25  | 2
       The assignments are unchanged, so the recomputed centroids again equal C0f, C1f, C2f: the algorithm has converged.
  • 17. Cluster Refinement by the K-Means Algorithm
       1. Determine the initial K cluster centers by the maximin algorithm.
       2. Assign each object to the group that has the closest centroid.
       3. When all objects have been assigned, recalculate the positions of the K centroids.
       4. Repeat steps 2 and 3 until the centroids no longer move.
       This produces a separation of the objects into groups from which the metric to be minimised can be calculated.
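Steps 2–4 above can be sketched as a small Python loop. This is a minimal illustration (my own function names) run on the eight example points with the maximin centres from the earlier slides:

```python
# K-means refinement: assign each point to the nearest centroid, then
# recompute centroids, until the centroids no longer move.
def kmeans(points, centres, max_iter=100):
    for _ in range(max_iter):
        clusters = [[] for _ in centres]
        for p in points:
            # step 2: nearest centroid (squared Euclidean distance)
            k = min(range(len(centres)),
                    key=lambda j: (p[0] - centres[j][0]) ** 2
                                + (p[1] - centres[j][1]) ** 2)
            clusters[k].append(p)
        # step 3: recompute each centroid as the mean of its cluster
        new = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
               for c in clusters]
        if new == centres:      # step 4: centroids no longer move
            return new
        centres = new
    return centres

points = [(0, 4), (0, 5), (1, 5), (4, 0), (5, 0), (5, 1), (4, 5), (5, 5)]
final = kmeans(points, [(0, 5), (5, 0), (5, 5)])
print(final)  # ≈ [(0.33, 4.67), (4.67, 0.33), (4.5, 5.0)]
```

One iteration already reproduces the refined centres from the worked table, and the second iteration confirms convergence.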
  • 18. Classification:
       1. Supervised Classification
       • Training samples have known labels, enabling estimation of the characteristics of each class by a set of parameters representing the class.
       • The goal is to assign each pixel vector a label by computing its distance to each class and choosing the class at minimum distance, using a Euclidean, City Block, Mahalanobis or other distance measure.
       2. Unsupervised Classification
       • Initially the samples have no labels.
       • Employ a clustering technique to partition the n samples of the dataset into k clusters, where each sample belongs to exactly one cluster.
       • K-means is a clustering algorithm whose initial cluster centres can be obtained randomly, by a cluster-seeking algorithm such as Maximin, or by a neural network such as the Self-Organizing Map.
  • 19. ISODATA Algorithms – Extension of K-Means Algorithm. ISODATA: Iterative Self-Organising Data Analysis Techniques. [2-D feature-space plot of clusters C1–C7]
       Cluster | #Examples | Mean Vector | Variance Vector | Principal Eigenvector v1T | Principal Eigenvalue
       C1 | 100  | (2, 8)   | (0.25, 0.25) |                | 
       C2 | 100  | (2, 7.5) | (0.25, 0.25) |                | 
       C3 | 100  | (2.5, 8) | (0.25, 0.25) |                | 
       C4 | 1000 | (5, 5)   | (7, 7)       | (0.707, 0.707) | 10
       C5 | 100  | (6, 3)   | (0.25, 0.25) |                | 
       C6 | 300  | (8, 3)   | (1, 1)       | (1, 0)         | 
       C7 | 20   | (9, 6)   | (0.12, 0.12) |                | 
       Need for: a. Splitting  b. Merging  c. Rejecting  d. Refining (K-Means)
  • 20. ISODATA Algorithms – Extension of K-Means Algorithm: Rejecting a Cluster
       Reject a cluster if its population ≤ 50.
       Before: C1 100 (2,8); C2 100 (2,7.5); C3 100 (2.5,8); C4 1000 (5,5); C5 100 (6,3); C6 300 (8,3); C7 20 (9,6)
       Rejecting Cluster 7 (population 20): its examples are redistributed to the other clusters. 7 clusters → 6 clusters.
       After: C1 100 (2,8); C2 100 (2,7.5); C3 100 (2.5,8); C4 1000 (5,5); C5 100 (6,3); C6 300 (8,3)
  • 21. ISODATA Algorithms – Extension of K-Means Algorithm: Merging of Clusters
       Mean vectors: C1 (2,8), C2 (2,7.5), C3 (2.5,8), C4 (5,5), C5 (6,3), C6 (8,3)
       Inter-cluster distances (Euclidean; diagonal entries 0) are tabulated on the next slide.
       Merging criterion: merge if inter-cluster distance ≤ 1.
       Dc1c2, Dc1c3, Dc2c3 < 1: merge clusters C1, C2 & C3.
  • 22. ISODATA Algorithms – Extension of K-Means Algorithm: Merging of Clusters
       Mean vectors: C1 (2,8), C2 (2,7.5), C3 (2.5,8), C4 (5,5), C5 (6,3), C6 (8,3)
       Inter-Cluster Distances (Euclidean):
            C1    C2    C3    C4    C5    C6
       C1   0    0.5   0.5   4.24  6.4   7.8
       C2         0    0.707 3.9   6.02  7.5
       C3               0    3.9   6.1   7.43
       C4                     0    2.24  3.6
       C5                           0    2
       C6                                 0
       Merging criterion: merge if inter-cluster distance ≤ 1. Dc1c2, Dc1c3, Dc2c3 < 1: merge clusters C1, C2 & C3.
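The distance table and the merging criterion can be reproduced with a short Python sketch (dictionary layout and names are my own illustration):

```python
import math

# Cluster means from the slide, keyed by name.
means = {'C1': (2, 8), 'C2': (2, 7.5), 'C3': (2.5, 8),
         'C4': (5, 5), 'C5': (6, 3), 'C6': (8, 3)}

names = list(means)
to_merge = []
for i, a in enumerate(names):
    for b in names[i + 1:]:
        d = math.dist(means[a], means[b])   # Euclidean inter-cluster distance
        if d <= 1:                          # merging criterion from the slide
            to_merge.append((a, b))

print(to_merge)  # [('C1', 'C2'), ('C1', 'C3'), ('C2', 'C3')]
```

The pairs below the threshold are exactly Dc1c2 = 0.5, Dc1c3 = 0.5 and Dc2c3 = 0.707, so C1, C2 and C3 merge.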
  • 23. ISODATA Algorithms – Extension of K-Means Algorithm: Merging of Clusters
       Dc1c2, Dc1c3, Dc2c3 < 1: merge clusters C1, C2 & C3 into CN1.
       C1: 100 examples, mean (2,8); C2: 100 examples, mean (2,7.5); C3: 100 examples, mean (2.5,8)
       Mean of merged cluster CN1 = [100×(2,8) + 100×(2,7.5) + 100×(2.5,8)] / 300 = (to be computed)
       Population of merged cluster CN1 = 100 + 100 + 100 = (to be computed)
       Result: CN1 300 (2.16, 7.83); C4 1000 (5,5); C5 100 (6,3); C6 300 (8,3). 6 clusters → 4 clusters.
  • 24. ISODATA Algorithms – Extension of K-Means Algorithm: Merging of Clusters
       Dc1c2, Dc1c3, Dc2c3 < 1: merge clusters C1, C2 & C3 into CN1.
       C1: 100 examples, mean (2,8); C2: 100 examples, mean (2,7.5); C3: 100 examples, mean (2.5,8)
       Mean of merged cluster CN1 = [100×(2,8) + 100×(2,7.5) + 100×(2.5,8)] / 300 = (2.16, 7.83)
       Population of merged cluster CN1 = 100 + 100 + 100 = 300
       Updated clusters: CN1 300 (2.16, 7.83); C4 1000 (5,5); C5 100 (6,3); C6 300 (8,3). 6 clusters → 4 clusters.
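The merged mean above is a population-weighted average; a tiny Python check of that computation (variable names are my own):

```python
# (population, mean) for C1, C2, C3, from the slide.
clusters = [(100, (2, 8)), (100, (2, 7.5)), (100, (2.5, 8))]

n = sum(p for p, _ in clusters)               # merged population: 300
mx = sum(p * m[0] for p, m in clusters) / n   # weighted mean x ≈ 2.1667
my = sum(p * m[1] for p, m in clusters) / n   # weighted mean y ≈ 7.8333

print(n, (round(mx, 2), round(my, 2)))
```

The slide's (2.16, 7.83) is this result truncated to two decimals; with unequal populations the weighting would matter, which is why the formula multiplies each mean by its cluster size.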
  • 25. ISODATA Algorithms – Extension of K-Means Algorithm: Splitting of Clusters
       Split a cluster if its principal eigenvalue > 7 units.
       Cluster | #Examples | Mean Vector | Variance Vector | Principal Eigenvector v1T | Principal Eigenvalue λ1
       CN1 | 300  | (2.16, 7.83) |              |                | 
       C4  | 1000 | (5, 5)       | (7, 7)       | (0.707, 0.707) | 10
       C5  | 100  | (6, 3)       | (0.25, 0.25) |                | 
       C6  | 300  | (8, 3)       | (1, 1)       | (1, 0)         | 
       λ1(C4) = 10 > 7: split C4 into two new clusters C4a & C4b.
       Mean of C4a = (5,5) − √λ1 × v1T = (to be computed)
       Mean of C4b = (5,5) + √λ1 × v1T = (to be computed)
  • 26. ISODATA Algorithms – Extension of K-Means Algorithm: Splitting of Clusters
       Split a cluster if its principal eigenvalue > 7 units.
       Cluster | #Examples | Mean Vector | Variance Vector | Principal Eigenvector v1T | Principal Eigenvalue λ1
       CN1 | 300  | (2.16, 7.83) |              |                | 
       C4  | 1000 | (5, 5)       | (7, 7)       | (0.707, 0.707) | 10
       C5  | 100  | (6, 3)       | (0.25, 0.25) |                | 
       C6  | 300  | (8, 3)       | (1, 1)       | (1, 0)         | 
       λ1(C4) = 10 > 7: split C4 into two new clusters C4a & C4b.
       Mean of C4a = (5,5) − √λ1 × v1T = (5,5) − 3.16 × (0.707, 0.707) = (2.76, 2.76)
       Mean of C4b = (5,5) + √λ1 × v1T = (5,5) + 3.16 × (0.707, 0.707) = (7.23, 7.23)
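The split places the two new means one principal standard deviation (√λ1) either side of the old mean, along the principal eigenvector. A minimal Python check of the worked numbers:

```python
import math

# Cluster C4 from the slide: mean, principal eigenvalue, principal eigenvector.
mean, lam, v = (5, 5), 10, (0.707, 0.707)

step = math.sqrt(lam)                                 # sqrt(10) ≈ 3.16
c4a = (mean[0] - step * v[0], mean[1] - step * v[1])  # ≈ (2.76, 2.76)
c4b = (mean[0] + step * v[0], mean[1] + step * v[1])  # ≈ (7.24, 7.24)

print(c4a, c4b)
```

The slide's (2.76, 2.76) and (7.23, 7.23) are these values with intermediate rounding (3.16 × 0.707); in practice λ1 and v1 would come from an eigen-decomposition of the cluster's covariance matrix.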
  • 27. ISODATA Algorithms – Extension of K-Means Algorithm: Splitting of Clusters
       Split a cluster if its principal eigenvalue > 7 units.
       Before: CN1 (2.16, 7.83); C4 (5,5); C5 (6,3); C6 (8,3)
       After:  CN1 (2.16, 7.83); C4a (2.76, 2.76); C4b (7.23, 7.23); C5 (6,3); C6 (8,3)
       4 clusters → 5 clusters
  • 28. ISODATA Algorithms – Extension of K-Means Algorithm
       K-Means Algorithm: fixed number of clusters.
       ISODATA Algorithms: the number of clusters varies from iteration to iteration.
  • 30. Competitive & Co-operative Learning: Self-Organizing Map
       Algorithm:
       1. Initialization: initialize the neuron weights (output layer) to random numbers.
       2. Competition: select the winner node using a distance measure (Euclidean, Manhattan, etc.); the minimum-distance node is the winner w_win(t).
       3. Adaptation: adapt the weights of the winning node: w_win(t+1) = w_win(t) + α·(xi − w_win(t))
       4. Continue steps 2–3 for each xi (the presentation of all xi constitutes one epoch).
       5. Continue epochs (steps 2–4) till convergence.
       6. Result: the converged neuron weights wk.
       Data xi; neurons wk. Features: x0, x1, x2, …, xN−1. Neurons: wc0, wc1, …, wcM−1.
       Examples are sequentially presented to the input layer and the neuron weights updated.
  • 31. Competitive Learning [2-D feature-space plot; network diagram: input nodes x0i, x1i feeding neurons (w0c0, w1c0) and (w0c1, w1c1)]
       Example data vectors xi = (x0i, x1i): (1,2), (2,1), (8,9), (10,9), (9,9)
       Execute for one epoch. Step 1: initialize the neuron weights, or take the last epoch's weights.
       At the start of the epoch, let Neuron 0 wc0 = (3,4) and Neuron 1 wc1 = (6,5).
  • 32. Competitive Learning – Executing for an Epoch. Step 2: scrambling of the example set.
       Original order: (1,2), (2,1), (8,9), (10,9), (9,9)
       Scrambled order: (8,9), (1,2), (10,9), (2,1), (9,9)
       Current weights: Neuron 0 wc0 = (3,4), Neuron 1 wc1 = (6,5).
  • 33. Competitive Learning – Step 3: Learning. Learning rate α = 0.25; distance metric: Manhattan. Executing for one epoch.
       At the start of the epoch: Neuron 0 wc0 = (3,4), Neuron 1 wc1 = (6,5).
       Adaptation: adapt the weights of the winning node: w_win(t+1) = w_win(t) + α·(xi − w_win(t))
       xi | wc0 | wc1 | dc0 | dc1 | winner w_win(t) | α·(xi − w_win(t)) | w_win(t+1)
       (8,9) | (3,4) | (6,5) | 10 | 6 | (6,5) | (0.5, 1.0) | (6.5, 6.0)
       (1,2) | (3,4) | (6.5, 6.0) | (remaining rows filled on the next slides)
  • 34. Competitive Learning – Step 3: Learning. Learning rate α = 0.25; distance metric: Manhattan. Executing for one epoch.
       At the start of the epoch: Neuron 0 wc0 = (3,4), Neuron 1 wc1 = (6,5).
       Adaptation: w_win(t+1) = w_win(t) + α·(xi − w_win(t))
       xi | wc0 | wc1 | dc0 | dc1 | winner w_win(t) | α·(xi − w_win(t)) | w_win(t+1)
       (8,9)  | (3,4)     | (6,5)      | 10 | 6   | (6,5) | (0.5, 1.0)   | (6.5, 6.0)
       (1,2)  | (3,4)     | (6.5, 6.0) | 4  | 9.5 | (3,4) | (−0.5, −0.5) | (2.5, 3.5)
       (10,9) | (2.5,3.5) | (6.5, 6.0) | (remaining rows filled on the next slide)
  • 35. Competitive Learning – Step 3: Learning. Learning rate α = 0.25; distance metric: Manhattan. Executing for one epoch.
       At the start of the epoch: Neuron 0 wc0 = (3,4), Neuron 1 wc1 = (6,5).
       Adaptation: w_win(t+1) = w_win(t) + α·(xi − w_win(t))
       xi | wc0 | wc1 | dc0 | dc1 | winner w_win(t) | α·(xi − w_win(t)) | w_win(t+1)
       (8,9)  | (3,4)          | (6,5)          | 10    | 6     | (6,5)        | (0.5, 1.0)       | (6.5, 6.0)
       (1,2)  | (3,4)          | (6.5, 6.0)     | 4     | 9.5   | (3,4)        | (−0.5, −0.5)     | (2.5, 3.5)
       (10,9) | (2.5, 3.5)     | (6.5, 6.0)     | 13    | 6.5   | (6.5, 6.0)   | (0.875, 0.75)    | (7.375, 6.75)
       (2,1)  | (2.5, 3.5)     | (7.375, 6.75)  | 3     | 11.1  | (2.5, 3.5)   | (−0.125, −0.625) | (2.375, 2.875)
       (9,9)  | (2.375, 2.875) | (7.375, 6.75)  | 12.75 | 3.88  | (7.375, 6.75)| (0.406, 0.563)   | (7.781, 7.313)
       At the end of the epoch: Neuron 0 wc0 = (2.375, 2.875), Neuron 1 wc1 = (7.781, 7.313).
  • 36. Competitive Learning – Executing for an Epoch [before/after 2-D feature-space plots]
       Presentation order: (8,9), (1,2), (10,9), (2,1), (9,9); initial weights Neuron 0 wc0 = (3,4), Neuron 1 wc1 = (6,5).
       At the end of the epoch: Neuron 0 wc0 = (2.375, 2.875), Neuron 1 wc1 = (7.781, 7.313).
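The whole epoch traced in the table can be reproduced with a few lines of Python (a minimal sketch; the function and variable names are my own):

```python
# One epoch of competitive learning: Manhattan distance, learning
# rate 0.25, two neurons, examples presented in the scrambled order.
alpha = 0.25
w = [[3.0, 4.0], [6.0, 5.0]]                       # initial neuron weights
data = [(8, 9), (1, 2), (10, 9), (2, 1), (9, 9)]   # presentation order

def manhattan(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

for x in data:
    # competition: the minimum-distance neuron wins
    win = min(range(len(w)), key=lambda k: manhattan(x, w[k]))
    # adaptation: w_win <- w_win + alpha * (x - w_win)
    w[win] = [w[win][0] + alpha * (x[0] - w[win][0]),
              w[win][1] + alpha * (x[1] - w[win][1])]

print(w)  # [[2.375, 2.875], [7.78125, 7.3125]]
```

The final weights (2.375, 2.875) and (7.78125, 7.3125) match the end-of-epoch values in the trace; one neuron has moved toward the lower-left group of points and the other toward the upper-right group.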
  • 37. Competitive & Co-operative Learning: Self-Organizing Map
       Algorithm:
       1. Initialization: initialize the neuron weights (output layer) to random numbers.
       2. Competition: select the winner node using a distance measure (Euclidean, Manhattan, etc.); the minimum-distance node is the winner w_win(t), with neighbouring nodes w_neigh1(t), w_neigh2(t), w_neigh3(t), etc.
       3. Adaptation: adapt the weights of the winning node: w_win(t+1) = w_win(t) + α·(xi − w_win(t))
       4. Cooperation: adapt the weights of each neighbouring node k: w_neigh-k(t+1) = w_neigh-k(t) + α1·(xi − w_neigh-k(t)) for all k, with α1 < α.
       5. Continue steps 2–4 for each xi (the presentation of all xi constitutes one epoch).
       6. Continue epochs (steps 2–5) till convergence.
       7. Result: the converged neuron weights wk.
       Data xi; neurons wk.
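The cooperative step (step 4) can be sketched as follows. This is an illustrative assumption, not the slide's worked example: three neurons on a 1-D grid, where a node counts as a neighbour if it sits immediately beside the winner, α = 0.25 and α1 = 0.1, Manhattan distance as in the earlier trace.

```python
# SOM-style update: the winner moves by alpha, its grid neighbours by the
# smaller rate alpha1, other neurons stay put. Grid layout and rates are
# illustrative assumptions.
alpha, alpha1 = 0.25, 0.1
w = [[1.0, 1.0], [4.0, 4.0], [8.0, 8.0]]    # 3 neurons on a 1-D grid

def update(x):
    # competition: minimum Manhattan distance wins
    win = min(range(len(w)),
              key=lambda k: sum(abs(a - b) for a, b in zip(x, w[k])))
    for k in range(len(w)):
        # adaptation for the winner, cooperation for its grid neighbours
        rate = alpha if k == win else (alpha1 if abs(k - win) == 1 else 0.0)
        w[k] = [wk + rate * (xj - wk) for wk, xj in zip(w[k], x)]
    return win

update((9, 9))
print(w)  # [[1.0, 1.0], [4.5, 4.5], [8.25, 8.25]]
```

Presenting (9,9) makes neuron 2 win and move by α, drags its neighbour neuron 1 part-way by α1, and leaves neuron 0 unchanged; it is this neighbourhood pull that makes the map self-organize topologically.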