Unsupervised Learning
Contents
• Introduction
• Competitive Learning networks
• Kohonen self-organizing networks
• Learning vector quantization
• Hebbian learning
Introduction
The main property of a neural network is an ability to learn from its environment, and to improve its performance through learning. So far we have considered supervised or active learning: learning with an external "teacher", or a supervisor, who presents a training set to the network. But another type of learning also exists: unsupervised learning.
• In contrast to supervised learning, unsupervised or self-organised learning does not require an external teacher. During the training session, the neural network receives a number of different input patterns, discovers significant features in these patterns and learns how to classify input data into appropriate categories. Unsupervised learning tends to follow the neuro-biological organisation of the brain.
• Unsupervised learning algorithms aim to learn rapidly and can be used in real time.
Hebbian learning
In 1949, Donald Hebb proposed one of the key ideas in biological learning, commonly known as Hebb's Law. Hebb's Law states that if neuron i is near enough to excite neuron j and repeatedly participates in its activation, the synaptic connection between these two neurons is strengthened and neuron j becomes more sensitive to stimuli from neuron i.
Hebb’s Law can be represented in the form of two
rules:
1. If two neurons on either side of a connection
are activated synchronously, then the weight of
that connection is increased.
2. If two neurons on either side of a connection
are activated asynchronously, then the weight
of that connection is decreased.
Hebb’s Law provides the basis for learning
without a teacher. Learning here is a local
phenomenon occurring without feedback from
the environment.
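For illustration (this snippet is not part of the original slides), the two rules can be captured by the update Δwij = α·yj·xi with bipolar activations, so that synchronous activity increases a weight and asynchronous activity decreases it. A minimal NumPy sketch, with names chosen here:

    import numpy as np

    def hebbian_update(W, x, y, alpha=0.1):
        """One Hebbian step: delta_W[j, i] = alpha * y[j] * x[i].
        With activations in {-1, +1}, synchronous activity strengthens a
        connection and asynchronous activity weakens it."""
        return W + alpha * np.outer(y, x)

    W = np.zeros((1, 1))
    W = hebbian_update(W, x=np.array([+1.0]), y=np.array([+1.0]))  # weight increases
    W = hebbian_update(W, x=np.array([-1.0]), y=np.array([+1.0]))  # weight decreases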
Competitive learning
• In competitive learning, neurons compete among themselves to be activated.
• While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time.
• The output neuron that wins the "competition" is called the winner-takes-all neuron.
Competitive learning neural networks
• The basic idea of competitive learning was introduced in the early 1970s.
• In the late 1980s, Teuvo Kohonen introduced a special class of artificial neural networks called self-organizing feature maps. These maps are based on competitive learning.
What is a self-organizing feature map?
Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, somatosensory, etc.) and associated with different sensory inputs. We can say that each sensory input is mapped into a corresponding area of the cerebral cortex. The cortex is a self-organizing computational map in the human brain.
Kohonen Self-Organizing Feature Maps
• Feature mapping converts a wide pattern space into a typical feature space
• Apart from reducing the higher dimensionality, it has to preserve the neighborhood relations of the input patterns
Feature-mapping Kohonen model
[Figures (a) and (b): an input layer fully connected to a Kohonen layer.]
The Kohonen network
• The Kohonen model provides a topological mapping. It places a fixed number of input patterns from the input layer into a higher-dimensional output or Kohonen layer.
• Training in the Kohonen network begins with the winner's neighborhood of a fairly large size. Then, as training proceeds, the neighborhood size gradually decreases.
Model: Horizontal & Vertical lines
Rumelhart & Zipser, 1985
• Problem – identify vertical or horizontal
signals
• Inputs are 6 x 6 arrays
• Intermediate layer with 8 units
• Output layer with 2 units
• Cannot work with one layer
Rumelhart & Zipser, Cntd
H V
Geometrical Interpretation
• So far the ordering of the output units
themselves was not necessarily informative
• The location of the winning unit can give us
information regarding similarities in the data
• We are looking for an input-output mapping that conserves the topological properties of the inputs → feature mapping
• Given any two spaces, it is not guaranteed that such a mapping exists!
• In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighborhood are allowed to learn. If a neuron does not respond to a given input pattern, then learning cannot occur in that particular neuron.
• The competitive learning rule defines the change Δwij applied to synaptic weight wij as
Δwij = α(xi − wij), if neuron j wins the competition
Δwij = 0, if neuron j loses the competition
where xi is the input signal and α is the learning rate parameter.
• The overall effect of the competitive learning rule resides in moving the synaptic weight vector Wj of the winning neuron j towards the input pattern X. The matching criterion is equivalent to the minimum Euclidean distance between vectors.
• The Euclidean distance between a pair of n-by-1 vectors X and Wj is defined by
d = ||X − Wj|| = [ Σ i=1..n (xi − wij)² ]^(1/2)
where xi and wij are the ith elements of the vectors X and Wj, respectively.
• To identify the winning neuron jX that best matches the input vector X, we may apply the following condition:
||X − WjX|| = min j ||X − Wj||,   j = 1, 2, ..., m
where m is the number of neurons in the Kohonen layer.
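As an aside (not part of the original slides), the winner selection and the competitive update can be sketched in a few lines of Python; the function name and the use of NumPy are assumptions made here for illustration:

    import numpy as np

    def competitive_step(X, W, alpha=0.1):
        """One competitive-learning step. W holds one weight vector per column,
        so column j is Wj; X is the input vector."""
        distances = np.linalg.norm(X[:, None] - W, axis=0)  # Euclidean distance to each Wj
        j_win = int(np.argmin(distances))                   # the winner-takes-all neuron
        W = W.copy()
        W[:, j_win] += alpha * (X - W[:, j_win])            # move only the winner towards X
        return j_win, W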
• Suppose, for instance, that the 2-dimensional input vector X is presented to the three-neuron Kohonen network, where
X = [0.52, 0.12]
• The initial weight vectors Wj are given by
W1 = [0.27, 0.81],  W2 = [0.42, 0.70],  W3 = [0.43, 0.21]
• We find the winning (best-matching) neuron jX using the minimum-distance Euclidean criterion:
d1 = [(x1 − w11)² + (x2 − w21)²]^(1/2) = [(0.52 − 0.27)² + (0.12 − 0.81)²]^(1/2) = 0.73
d2 = [(x1 − w12)² + (x2 − w22)²]^(1/2) = [(0.52 − 0.42)² + (0.12 − 0.70)²]^(1/2) = 0.59
d3 = [(x1 − w13)² + (x2 − w23)²]^(1/2) = [(0.52 − 0.43)² + (0.12 − 0.21)²]^(1/2) = 0.13
• Neuron 3 is the winner and its weight vector W3 is updated according to the competitive learning rule (with learning rate 0.1):
Δw13 = α(x1 − w13) = 0.1(0.52 − 0.43) = 0.01
Δw23 = α(x2 − w23) = 0.1(0.12 − 0.21) = −0.01
• The updated weight vector W3 at iteration (p + 1) is determined as:
W3(p + 1) = W3(p) + ΔW3(p) = [0.43, 0.21] + [0.01, −0.01] = [0.44, 0.20]
• The weight vector W3 of the winning neuron 3 becomes closer to the input vector X with each iteration.
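The numbers in this example can be verified with a short script (illustrative only; it simply re-runs the calculations above with NumPy):

    import numpy as np

    X = np.array([0.52, 0.12])
    W = np.array([[0.27, 0.42, 0.43],   # first components w1j of W1, W2, W3
                  [0.81, 0.70, 0.21]])  # second components w2j
    alpha = 0.1

    d = np.linalg.norm(X[:, None] - W, axis=0)
    print(np.round(d, 2))               # [0.73 0.59 0.13] -> neuron 3 wins

    j = int(np.argmin(d))
    W[:, j] += alpha * (X - W[:, j])
    print(np.round(W[:, j], 2))         # [0.44 0.2]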
Measures of similarity
• Distance
• Normalized scalar product
Kohonen Self-Organizing Networks
Neighbourhood Shapes
[Figure: square and hexagonal neighbourhoods around the winning neuron (#), for radii r = 0, 1, 2.]
• Also known as Kohonen feature maps or topology-preserving maps
• The learning procedure of Kohonen feature maps is similar to that of competitive learning networks.
• A similarity (dissimilarity) measure is selected, and the winning unit is considered to be the one with the largest (smallest) activation.
• The weights of the winning neuron as well as of the units in the neighborhood around the winner are adjusted.
• The neighborhood size decreases slowly with every iteration.
Training of a Kohonen self-organizing network
1. Select the winning output unit as the one with the largest similarity measure between the weight vectors wi and the input x. The winning unit c satisfies
||x − wc|| = min i ||x − wi||
where the index c refers to the winning unit (Euclidean distance).
2. Let NBc denote the set of indices corresponding to a neighborhood around winner c. The weights of the winner and its neighboring units are updated by
Δwi = η(x − wi),  i ∈ NBc
Training flowchart (summarised):
1. Start: initialize the weights, the learning rate, and the topological neighborhood parameters.
2. For each input vector x: for j = 1 to m, compute D(j) = Σi (xi − wij)².
3. Find the winning unit index J such that D(J) is minimum.
4. Update the weights of the winning unit.
5. After all inputs are presented, reduce the learning rate and reduce the radius of the neighborhood.
6. Test the stopping condition for epoch (t + 1); if it is not satisfied, continue; otherwise stop.
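A compact Python sketch of this training loop is given below (illustrative only: it assumes a one-dimensional Kohonen layer, squared-distance matching, and simple halving/shrinking schedules; none of these specifics are prescribed by the slides):

    import numpy as np

    def train_som(data, n_units, epochs=100, eta=0.5, radius=1):
        """Minimal 1-D Kohonen SOM; W holds one weight vector per output unit (row)."""
        rng = np.random.default_rng(0)
        W = rng.random((n_units, data.shape[1]))
        for _ in range(epochs):
            for x in data:
                D = ((x - W) ** 2).sum(axis=1)        # D(j) = sum_i (x_i - w_ij)^2
                J = int(np.argmin(D))                 # winning unit
                lo, hi = max(0, J - radius), min(n_units, J + radius + 1)
                W[lo:hi] += eta * (x - W[lo:hi])      # update winner and its neighbours
            eta *= 0.5                                # reduce the learning rate
            radius = max(0, radius - 1)               # reduce the neighbourhood radius
        return W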
Problem
• Construct a Kohonen self-organizing map to cluster the four given vectors [0 0 1 1], [1 0 0 0], [0 1 1 0], [0 0 0 1]. The number of clusters to be formed is two. The initial learning rate is 0.5.
• Initial weights (rows correspond to the four input components, columns to the two clusters):
W =
  0.2  0.9
  0.4  0.7
  0.6  0.5
  0.8  0.3
Solution to the problem
Input vector | Winner | Updated weights
[0 0 1 1] | D(1) | [0.1 0.2 0.8 0.9]
[1 0 0 0] | D(2) | [0.95 0.35 0.25 0.15]
[0 1 1 0] | D(1) | [0.05 0.6 0.9 0.95]
[0 0 0 1] | D(1) | [0.025 0.3 0.45 0.975]
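The table can be re-derived with a few lines of code (an illustrative check; the weights are stored here with one row per cluster, i.e. the transpose of the matrix above):

    import numpy as np

    X = np.array([[0, 0, 1, 1],
                  [1, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    W = np.array([[0.2, 0.4, 0.6, 0.8],    # cluster 1
                  [0.9, 0.7, 0.5, 0.3]])   # cluster 2
    eta = 0.5

    for x in X:
        D = ((x - W) ** 2).sum(axis=1)      # D(j) = sum_i (x_i - w_ij)^2
        j = int(np.argmin(D))               # winning cluster
        W[j] += eta * (x - W[j])            # update only the winner
        print(f"winner D({j + 1}):", W[j])  # matches the table row by row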
Some Observations
• Ordering phase (initial period of adaptation) :
learning rate should be close to unity
• Learning rate should be decreased linearly,
exponentially or inversely with iteration over the first
1000 epochs while maintaining its value above 0.1
• Convergence phase: learning rate should be
maintained at around 0.01 for a large number of
epochs
– may typically run into many tens of thousands of
epochs
• During the ordering phase, the neighborhood N_IJ(k) of the winning neuron IJ shrinks linearly with iteration k to finally include only a few neurons
• During the convergence phase, N_IJ(k) may comprise only one neighbour, or none
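One schedule consistent with these observations is sketched below (the exact decay law is an illustrative assumption, not specified on the slides):

    def learning_rate(epoch):
        """Ordering phase: decay linearly from ~1.0, staying above 0.1 for the
        first 1000 epochs; convergence phase: hold the rate near 0.01."""
        if epoch < 1000:
            return max(0.1, 1.0 - 0.9 * epoch / 1000.0)
        return 0.01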
Simulation Example
The data employed in the
experiment comprised
500 points distributed
uniformly over the bipolar
square [−1, 1] × [−1, 1]
The points thus describe
a geometrically square
topology
SOFM Simulation
[Figure slides: snapshots of the map during training; see the network plots below.]
Simulation Notes
• Initial value of the neighbourhood radius r = 6
– Neighbourhood is initially a square of width 12
centered around the winning neuron IJ
• Neighbourhood width contracts by 1 every 200
epochs
• After 1000 epochs, neighbourhood radius
maintained at 1
– Means that the winning neuron and its four adjacent
neurons are designated to update their weights on all
subsequent iterations
– Can also let this value go to zero which means that
eventually, during the learning phase only the winning
neuron updates its weights
Inference
Initial random weights
Network after 100 iterations
[Plot of the weight lattice, W(1,j) versus W(2,j), over the square [−1, 1] × [−1, 1].]
Network after 1000 iterations
[Plot of the weight lattice, W(1,j) versus W(2,j), over the square [−1, 1] × [−1, 1].]
Network after 10,000 iterations
[Plot of the weight lattice, W(1,j) versus W(2,j), over the square [−1, 1] × [−1, 1].]
Cluster Visualisation with ANNs
• Another demo (from http://www.ai-junkie.com/ann/som/som5.html): self-
organisation of small coloured blocks on the basis of their RGB colour
values.
• It can be used for practical purposes in mapping world poverty, for example,
when measured by a complex series of variables (e.g. health, nutrition,
education, water supply etc.)
• All of these are forms of dimensionality reduction – take complex
multivariate data and reduce it to two (or N) dimensions.
Limitations of competitive learning
• Weights are initialized to random values, which might be far from any input vector, and so may never get updated
– This can be prevented by initializing the weights to samples from the input data itself, thereby ensuring that all weights get updated when all the input patterns are presented
– Alternatively, the weights of the winning as well as the losing neurons can be updated, using a significantly smaller learning rate for the losers. This is called leaky learning
– Note: changing η over time is generally desirable. A large initial value of η explores the data space widely; later on, progressively smaller values refine the weights.
Limitations of competitive learning
• Lacks the capability to add new clusters when deemed necessary
• If η is constant, the clusters are not stable
• If η decreases with time, it may become too small to update the cluster centers
• This is called the stability–plasticity dilemma (solved using adaptive resonance theory, ART)
• If the output units are arranged in the form of a vector or matrix, then the weights of the winners as well as of the neighbouring losers can be updated (Kohonen feature maps)
• After learning, the input space is divided into a number of disjoint clusters. These cluster centers are known as templates or a code book
• For any input pattern presented, we can use the appropriate code book vector (vector quantization)
• This vector quantization is used for data compression in IP and communication systems.
Learning Vector Quantization (LVQ)
• Recall that a Kohonen SOM is a clustering technique, which can be used to provide insight into the nature of data. We can transform this unsupervised neural network into a supervised LVQ neural network.
• The network architecture is just like a SOM, but without a topological structure.
• Each output neuron represents a known category (e.g. apple, pear, orange).
• Input vector x = (x1, x2, ..., xn)
• Weight vector for the jth output neuron wj = (w1j, w2j, ..., wnj)
• Cj = category represented by the jth neuron. This is pre-assigned.
• T = correct category for the input
• Define the (squared) Euclidean distance between the input vector and the weight vector of the jth neuron as D(j) = Σi (xi − wij)²
• It is an adaptive data classification method
based on training data with desired class
information
• It is actually a supervised training method
but employs unsupervised data-clustering
techniques to preprocess the data set and
obtain cluster centers
• Resembles a competitive learning network
except that each output unit is associated
with a class.
Network representation of LVQ
Possible data distributions and
decision boundaries
LVQ learning algorithm
• Step 1: Initialize the cluster centers by a clustering method
• Step 2: Label each cluster by the voting method
• Step 3: Randomly select a training input vector x and find k such that ||x − wk|| is a minimum
• Step 4: If x and wk belong to the same class, update wk by
Δwk = η(x − wk)
otherwise
Δwk = −η(x − wk)
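Steps 3 and 4 can be sketched as follows (illustrative code; the function and variable names are chosen here, and NumPy is assumed):

    import numpy as np

    def lvq_step(x, target_class, W, classes, eta=0.1):
        """W holds one reference vector per row; classes[k] is the class of W[k]."""
        k = int(np.argmin(np.linalg.norm(x - W, axis=1)))  # nearest reference vector
        if classes[k] == target_class:
            W[k] += eta * (x - W[k])     # same class: move the winner towards x
        else:
            W[k] -= eta * (x - W[k])     # different class: move it away from x
        return W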
• The parameters used for the training process of an LVQ network include the following
– x = training vector (x1, x2, ..., xn)
– T = category or class of the training vector x
– wj = weight vector for the jth output unit (w1j, ..., wij, ..., wnj)
– cj = cluster or class or category associated with the jth output unit
– The Euclidean distance of the jth output unit is D(j) = Σi (xi − wij)²
LVQ training flowchart (summarised):
1. Start: initialize the weights and the learning rate.
2. For each input vector x with target category T: calculate D(j) for every output unit and select the winner J with minimum D(J).
3. If T = CJ, set wJ(new) = wJ(old) + η[x − wJ(old)]; otherwise set wJ(new) = wJ(old) − η[x − wJ(old)].
4. Reduce the learning rate, e.g. η(t + 1) = 0.5 η(t).
5. If η has reduced to a negligible value, stop; otherwise continue with the next epoch.
Problem
• Construct and test an LVQ net with five vectors assigned to two classes. The given vectors, along with their classes, are shown in the table below
Vector Class
[0 0 1 1] 1
[1 0 0 0] 2
[0 0 0 1] 2
[1 1 0 0] 1
[0 1 1 0] 1
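A sketch of how this problem could be set up in code follows; note that the slide does not say how the initial reference vectors are chosen, so the common convention of using the first training vector of each class as that class's initial weight vector is assumed here purely for illustration:

    import numpy as np

    vectors = np.array([[0, 0, 1, 1],
                        [1, 0, 0, 0],
                        [0, 0, 0, 1],
                        [1, 1, 0, 0],
                        [0, 1, 1, 0]], dtype=float)
    labels = [1, 2, 2, 1, 1]

    # Assumed initialization: first vector of each class becomes its reference vector.
    W = np.array([vectors[0], vectors[1]])
    classes = [1, 2]
    eta = 0.1

    # Train on the remaining three vectors with the LVQ update rule.
    for x, t in zip(vectors[2:], labels[2:]):
        k = int(np.argmin(np.linalg.norm(x - W, axis=1)))   # nearest reference vector
        sign = 1.0 if classes[k] == t else -1.0             # attract if same class, repel otherwise
        W[k] += sign * eta * (x - W[k])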