1
CS407 Neural Computation
Lecture 7:
Neural Networks Based on
Competition.
Lecturer: A/Prof. M. Bennamoun
2
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading
3
Introduction…Competitive nets
Fausett
The most extreme form of competition among a group of
neurons is called “Winner Take All”.
As the name suggests, only one neuron in the competing
group will have a nonzero output signal when the
competition is completed.
A specific competitive net that performs Winner Take All
(WTA) competition is the Maxnet.
A more general form of competition, the “Mexican Hat”, will
also be described in this lecture (instead of a non-zero o/p for the winner
and zeros for all other competing nodes, we have a bubble of activity around the winner).
All of the other nets we discuss in this lecture use WTA
competition as part of their operation.
With the exception of the fixed-weight competitive nets
(namely Maxnet, Mexican Hat, and Hamming net), all of the
other nets combine competition with some form of learning
to adjust the weights of the net (i.e. the weights that are
not part of the interconnections in the competitive layer).
4
Introduction…Competitive nets
Fausett
The form of learning depends on the purpose for which the
net is being trained:
– LVQ and counterpropagation nets are trained to
perform mappings. The learning in this case is
supervised.
– SOM (used for clustering of input data): a common
use of unsupervised learning.
– ART nets are also clustering nets: also unsupervised.
Several of the nets discussed use the same learning
algorithm known as “Kohonen learning”: where the units
that update their weights do so by forming a new weight
vector that is a linear combination of the old weight vector
and the current input vector.
Typically, the unit whose weight vector was closest to the
input vector is allowed to learn.
5
Introduction…Competitive nets
Fausett
The weight update for output (or cluster) unit j is given as:

$$\mathbf{w}_{\cdot j}(\text{new}) = \mathbf{w}_{\cdot j}(\text{old}) + \alpha\,\big[\mathbf{x} - \mathbf{w}_{\cdot j}(\text{old})\big] = (1 - \alpha)\,\mathbf{w}_{\cdot j}(\text{old}) + \alpha\,\mathbf{x}$$

where x is the input vector, w·j is the weight vector for unit j,
and α, the learning rate, decreases as learning proceeds.
Two methods of determining the closest weight vector to a
pattern vector are commonly used for self-organizing nets.
Both are based on the assumption that the weight vector
for each cluster (output) unit serves as an exemplar for the
input vectors that have been assigned to that unit during
learning.
– The first method of determining the winner uses the
squared Euclidean distance between the I/P vector
6
Introduction…Competitive nets
Fausett
and the weight vector and chooses the unit whose
weight vector has the smallest Euclidean distance
from the I/P vector.
– The second method uses the dot product of the I/P
vector and the weight vector.
The dot product can be interpreted as giving the
correlation between the I/P and weight vector.
In general and for consistency, we will use Euclidean
distance squared.
Many NNs use the idea of competition among neurons to
enhance the contrast in activations of the neurons
In the most extreme situation, the case of the Winner-Take-
All, only the neuron with the largest activation is allowed to
remain “on”.
7
8
Introduction…Competitive nets
1. Euclidean distance between two vectors:
$$\|\mathbf{x} - \mathbf{x}_i\| = \sqrt{(\mathbf{x} - \mathbf{x}_i)^t\,(\mathbf{x} - \mathbf{x}_i)}$$
2. Cosine of the angle between two vectors:
$$\cos\psi = \frac{\mathbf{x}^t\,\mathbf{x}_i}{\|\mathbf{x}\|\,\|\mathbf{x}_i\|}$$
(the smaller the angle ψ between x and x_i, the stronger the match)
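To make the two criteria concrete, here is a minimal Matlab sketch that computes both measures over a set of cluster units and then applies one step of Kohonen learning to the winner; the input and weight values are illustrative (the weight matrix is borrowed from the SOM example later in these notes):

% Two matching criteria, then one Kohonen learning step (illustrative values)
x = [1; 1; 0; 0];                            % input vector
W = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3];    % one weight column per cluster unit
D = sum((W - x).^2);                         % squared Euclidean distance per column
c = (x' * W) ./ (norm(x) * sqrt(sum(W.^2))); % cosine of the angle, per column
[~, J] = min(D);                             % winner = smallest squared distance
alpha = 0.6;                                 % learning rate
W(:, J) = W(:, J) + alpha * (x - W(:, J));   % Kohonen update of the winner only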
9
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading
10
MAXNET
The MAXNET update is
$$\mathbf{y}^{k+1} = \Gamma\big[\mathbf{W}_M\,\mathbf{y}^k\big]$$
where Γ applies f componentwise and, for p competing nodes,
$$\mathbf{W}_M = \begin{bmatrix} 1 & -\varepsilon & \cdots & -\varepsilon \\ -\varepsilon & 1 & \cdots & -\varepsilon \\ \vdots & \vdots & \ddots & \vdots \\ -\varepsilon & -\varepsilon & \cdots & 1 \end{bmatrix}, \qquad f(net) = \begin{cases} 0, & net < 0 \\ net, & net \ge 0 \end{cases}, \qquad 0 < \varepsilon < 1$$
11
Hamming Network and MAXNET
MAXNET
– A recurrent network involving both excitatory
and inhibitory connections
• Positive self-feedbacks and negative
cross-feedbacks
– After a number of recurrences, the only non-
zero node will be the one with the largest
initializing entry from the i/p vector
12
Maxnet Fixed-weight competitive net
Fausett
• Maxnet: lateral inhibition between competitors
Activation function for the Maxnet:
$$f(x) = \begin{cases} x, & x > 0 \\ 0, & \text{otherwise} \end{cases}$$
Competition:
• iterative process until the net stabilizes (at most one
node with positive activation)
– Step 0:
• Initialize activations and weights (set $0 < \varepsilon < 1/m$),
where m is the # of nodes (competitors)
13
Algorithm
Initial activations and weights (from step 0):
$$a_j(0) = \text{input to node } A_j, \qquad w_{ij} = \begin{cases} 1, & i = j \\ -\varepsilon, & \text{otherwise} \end{cases}$$
Step 1: While stopping condition is false, do steps 2-4.
– Step 2: Update the activation of each node (the argument of f is the total i/p to node Aj from all nodes, including itself):
$$a_j(\text{new}) = f\Big[a_j(\text{old}) - \varepsilon \sum_{k \neq j} a_k(\text{old})\Big], \qquad j = 1, \ldots, m$$
– Step 3: Save activations for use in the next iteration:
$$a_j(\text{old}) = a_j(\text{new}), \qquad j = 1, \ldots, m$$
– Step 4: Test stopping condition:
If more than one node has a nonzero activation,
continue; otherwise, stop.
14
Example
Note that in step 2, the i/p to the function f is the total
i/p to node Aj from all nodes, including itself.
Some precautions should be incorporated to handle
the situation in which two or more units have the
same, maximal, input.
Example:
ε = .2 and the initial activations (input signals) are:
a1(0) = .2, a2(0) = .4, a3(0) = .6, a4(0) = .8
As the net iterates, the activations are:
a1(1) = f{a1(0) − .2 [a2(0) + a3(0) + a4(0)]} = f(−.16) = 0
a1(1) = .0, a2(1) = .08, a3(1) = .32, a4(1) = .56
a1(2) = .0, a2(2) = .0, a3(2) = .192, a4(2) = .48
a1(3) = .0, a2(3) = .0, a3(3) = .096, a4(3) = .442
a1(4) = .0, a2(4) = .0, a3(4) = .008, a4(4) = .422
a1(5) = .0, a2(5) = .0, a3(5) = .0, a4(5) = .421
(a4 is the only node to remain on)
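The whole competition can be reproduced with a few lines of Matlab; this sketch uses the ε and initial activations of the example above:

% Maxnet competition for the example above
eps = 0.2;
a = [0.2 0.4 0.6 0.8];                  % initial activations a_j(0)
while sum(a > 0) > 1                    % stop when at most one node is nonzero
    a = max(a - eps * (sum(a) - a), 0); % a_j(new) = f(a_j - eps * sum_{k~=j} a_k)
end
disp(a)                                 % only a4 survives, at roughly 0.42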
15
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading
16
Mexican Hat Fixed-weight competitive net
Architecture
Each neuron is connected with excitatory links (positively
weighted) to a number of “cooperative neighbors”, neurons
that are in close proximity.
Each neuron is also connected with inhibitory links (with
negative weights) to a number of “competitive neighbors”,
neurons that are somewhat further away.
There may also be a number of neurons, further away still, to
which the neuron is not connected.
All of these connections are within a particular layer of a
neural net.
The neurons receive an external signal in addition to these
interconnection signals (just like Maxnet).
This pattern of interconnections is repeated for each neuron
in the layer.
The interconnection pattern for unit Xi is as follows:
17
Mexican Hat Fixed-weight competitive net
The size of the region of cooperation (positive connections) and
the region of competition (negative connections) may vary, as
may the relative magnitudes of the +ve and –ve weights
and the topology of the regions (linear, rectangular, hexagonal,
etc.)
si = external activation
w1 = excitatory connection; w2 = inhibitory connection
18
Mexican Hat Fixed-weight competitive net
The contrast enhancement of the signal si received by unit Xi
is accomplished by iteration for several time steps.
The activation of unit Xi at time t is given by:
$$x_i(t) = f\Big[s_i(t) + \sum_k w_k\, x_{i+k}(t-1)\Big]$$
where the terms in the summation are the weighted signals
from other units (cooperative and competitive neighbors) at
the previous time step.
19
Mexican Hat Fixed-weight competitive net
Nomenclature
R2=Radius of region of interconnections; Xi is connected to units Xi+k
and Xi-k for k=1, …R2
R1=Radius of region with +ve reinforcement;
R1< R2
wk = weights on interconnections between Xi and units Xi+k and Xi-k;
wk is positive for 0 ≤ k ≤ R1
wk is negative for R1 < k ≤ R2
x = vector of activations
x_old = vector of activations at previous time step
t_max = total number of iterations of contrast enhancement
s = external signal
20
Mexican Hat Fixed-weight competitive net
Algorithm
As presented, the algorithm corresponds to the external signal
being given only for the first iteration (step 1).
Step 0 Initialize parameters t_max, R1, R2 as desired
Initialize weights:
$$w_k = \begin{cases} C_1 > 0, & k = 0, \ldots, R_1 \\ C_2 < 0, & k = R_1 + 1, \ldots, R_2 \end{cases}$$
Initialize x_old to 0.
Step 1 Present external signal s: x = s
Save activations in array x_old (for i = 1, …, n): x_old_i = x_i
Set iteration counter: t = 1
21
Mexican Hat Fixed-weight competitive net
Algorithm
Step 2 While t is less than t_max, do steps 3-7
Step 3 Compute net input (i = 1, …n)
Step 4 Apply activation function f (ramp from 0 to x_max,
slope 1):
$$x_i = C_1 \sum_{k=-R_1}^{R_1} x^{\text{old}}_{i+k} \;+\; C_2 \sum_{k=-R_2}^{-R_1-1} x^{\text{old}}_{i+k} \;+\; C_2 \sum_{k=R_1+1}^{R_2} x^{\text{old}}_{i+k}$$
$$f(x) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x \le x_{\max} \\ x_{\max}, & x > x_{\max} \end{cases}$$
i.e.
$$x_i = \min\big(x_{\max},\ \max(0,\ x_i)\big), \qquad i = 1, \ldots, n$$
22
Mexican Hat Fixed-weight competitive net
Step 5 Save current activations in x_old: x_old_i = x_i, i = 1, …, n
Step 6 Increment iteration counter: t = t + 1
Step 7 Test stopping condition:
If t < t_max, continue; otherwise, stop.
Note:
The +ve reinforcement from nearby units and –ve reinforcement
from units that are further away have the effect of
– increasing the activation of units with larger initial activations, and
– reducing the activations of those that had a smaller external
signal.
23
Mexican Hat Example
Let’s consider a simple net with 7 units. The activation
function for this net is:
$$f(x) = \begin{cases} 0, & x < 0 \\ x, & 0 \le x \le 2 \\ 2, & x > 2 \end{cases}$$
Step 0 Initialize parameters: R1 = 1, R2 = 2, C1 = .6, C2 = −.4
Step 1 t = 0. The external signal is s = (.0, .5, .8, 1, .8, .5, .0), so
x(0) = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0)
Save in x_old: x_old = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0)
(the center node, which receives the largest signal, is the node to be enhanced)
Step 2 t = 1; the update formulas used in step 3 are listed as follows for reference:
24
Mexican Hat Example
x1 = 0.6 x_old1 + 0.6 x_old2 − 0.4 x_old3
x2 = 0.6 x_old1 + 0.6 x_old2 + 0.6 x_old3 − 0.4 x_old4
x3 = −0.4 x_old1 + 0.6 x_old2 + 0.6 x_old3 + 0.6 x_old4 − 0.4 x_old5
x4 = −0.4 x_old2 + 0.6 x_old3 + 0.6 x_old4 + 0.6 x_old5 − 0.4 x_old6
x5 = −0.4 x_old3 + 0.6 x_old4 + 0.6 x_old5 + 0.6 x_old6 − 0.4 x_old7
x6 = −0.4 x_old4 + 0.6 x_old5 + 0.6 x_old6 + 0.6 x_old7
x7 = −0.4 x_old5 + 0.6 x_old6 + 0.6 x_old7
Step 3 t = 1:
x1 = 0.6(0.0) + 0.6(0.5) − 0.4(0.8) = −0.02
x2 = 0.6(0.0) + 0.6(0.5) + 0.6(0.8) − 0.4(1.0) = 0.38
x3 = −0.4(0.0) + 0.6(0.5) + 0.6(0.8) + 0.6(1.0) − 0.4(0.8) = 1.06
x4 = −0.4(0.5) + 0.6(0.8) + 0.6(1.0) + 0.6(0.8) − 0.4(0.5) = 1.16
x5 = −0.4(0.8) + 0.6(1.0) + 0.6(0.8) + 0.6(0.5) − 0.4(0.0) = 1.06
x6 = −0.4(1.0) + 0.6(0.8) + 0.6(0.5) + 0.6(0.0) = 0.38
x7 = −0.4(0.8) + 0.6(0.5) + 0.6(0.0) = −0.02
25
Mexican Hat Example
Step 4 Apply the activation function:
x = (0.0, 0.38, 1.06, 1.16, 1.06, 0.38, 0.0)
Steps 5-7 Bookkeeping for the next iteration.
Step 3 t = 2:
x1 = 0.6(0.0) + 0.6(0.38) − 0.4(1.06) = −0.196
x2 = 0.6(0.0) + 0.6(0.38) + 0.6(1.06) − 0.4(1.16) = 0.39
x3 = −0.4(0.0) + 0.6(0.38) + 0.6(1.06) + 0.6(1.16) − 0.4(1.06) = 1.14
x4 = −0.4(0.38) + 0.6(1.06) + 0.6(1.16) + 0.6(1.06) − 0.4(0.38) = 1.66
x5 = −0.4(1.06) + 0.6(1.16) + 0.6(1.06) + 0.6(0.38) − 0.4(0.0) = 1.14
x6 = −0.4(1.16) + 0.6(1.06) + 0.6(0.38) + 0.6(0.0) = 0.39
x7 = −0.4(1.06) + 0.6(0.38) + 0.6(0.0) = −0.196
Step 4 Apply the activation function:
x = (0.0, 0.39, 1.14, 1.66, 1.14, 0.39, 0.0)
(the center node is the one whose activation has been enhanced)
Steps 5-7 Bookkeeping for the next iteration.
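The iteration above is easy to reproduce; the following Matlab sketch zero-pads the activation vector so that edge units simply ignore their missing neighbours (note that exact arithmetic gives x2 = 0.40 at t = 2, which the slides round to 0.39):

% Mexican Hat contrast enhancement for the 7-unit example
C = [-0.4 0.6 0.6 0.6 -0.4];     % weights for offsets k = -2,...,2 (C2,C1,C1,C1,C2)
x = [0 0.5 0.8 1 0.8 0.5 0];     % x(0) = external signal s (applied at t = 1 only)
for t = 1:2                      % t_max = 2
    x_old = [0 0 x 0 0];         % zero-pad: edge units ignore missing neighbours
    for i = 1:7
        x(i) = sum(C .* x_old(i:i+4));   % net input from offsets -2..2
    end
    x = min(max(x, 0), 2);       % ramp activation with x_max = 2
end
disp(x)                          % approx (0, 0.40, 1.14, 1.66, 1.14, 0.40, 0)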
26
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading
27
Hamming Network Fixed-weight competitive net
The Hamming distance between two vectors is the number of
components in which the vectors differ.
The Hamming net uses Maxnet as a subnet to find the
unit with the largest input (see figure on next slide).
For 2 bipolar vectors x and y of dimension n:
$$\mathbf{x} \cdot \mathbf{y} = a - d$$
where a is the number of bits in agreement in x and y, and
d is the number of bits different in x and y.
Since d = n − a (the Hamming distance HD(x, y) = d):
$$\mathbf{x} \cdot \mathbf{y} = a - (n - a) = 2a - n \qquad \Longrightarrow \qquad a = 0.5\,(\mathbf{x} \cdot \mathbf{y}) + 0.5\,n$$
So a larger dot product x·y means a larger a, i.e. a shorter Hamming distance.
(For example, the vectors (0, 1, 0) and (0, 1, 1) differ in one component, so
their Hamming distance is 1.)
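These identities are easy to check numerically; here is a small sketch using two of the bipolar 4-tuples from the example that follows:

% Dot product vs. Hamming distance for bipolar vectors
x = [1 1 -1 -1];
y = [1 -1 -1 -1];
n = length(x);
a = sum(x == y);          % bits in agreement: 3
d = sum(x ~= y);          % Hamming distance d = n - a: 1
dp = x * y';              % dot product = a - d = 2a - n: 2
a2 = 0.5 * dp + 0.5 * n;  % recovers a from the dot product: 3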
28
Hamming Network Fixed-weight competitive net
Architecture
The sample architecture shown assumes input vectors are 4-
tuples, to be categorized as belonging to one of 2 classes.
29
Hamming Network Fixed-weight competitive net
Algorithm
Given a set of m bipolar exemplar vectors e(1), e(2), …, e(m), the
Hamming net can be used to find the exemplar that is closest to the
bipolar i/p vector x.
The net i/p y_in_j to unit Yj gives the number of components in which
the i/p vector and the exemplar vector e(j) for unit Yj agree (n minus
the Hamming distance between these vectors).
Nomenclature:
n: number of input nodes, i.e. number of components of any i/p vector
m: number of o/p nodes, i.e. number of exemplar vectors
e(j): the jth exemplar vector:
$$\mathbf{e}(j) = \big(e_1(j), \ldots, e_i(j), \ldots, e_n(j)\big)$$
30
Hamming Network Fixed-weight competitive net
Algorithm
Step 0 To store the m exemplar vectors, initialise the weights and
biases:
$$w_{ij} = 0.5\,e_i(j), \qquad b_j = 0.5\,n$$
Step 1 For each vector x, do steps 2-4.
Step 2 Compute the net input to each unit Yj:
$$y\_in_j = b_j + \sum_i x_i w_{ij}, \qquad j = 1, \ldots, m$$
Step 3 Initialise activations for Maxnet:
$$y_j(0) = y\_in_j, \qquad j = 1, \ldots, m$$
Step 4 Maxnet iterates to find the best-match exemplar.
31
Hamming Network Fixed-weight competitive net
Upper layer: Maxnet
– it takes the y_in as its initial value, then
iterates toward stable state (the best match
exemplar)
– the one output node Yj with the highest y_in will be
the winner, because its weight vector is
closest to the input vector
32
Hamming Network Fixed-weight competitive net
Example (see the architecture above, with n = 4)
Given the exemplar vectors:
e(1) = (1, −1, −1, −1)
e(2) = (−1, −1, −1, 1)    (m = 2 exemplar patterns in this case)
The Hamming net can be used to find the exemplar that is closest to each of the
bipolar i/p patterns (1, 1, −1, −1), (1, −1, −1, −1), (−1, −1, −1, 1), and (−1, −1, 1, 1).
Step 0: Store the m exemplar vectors in the weights:
$$\mathbf{W} = \begin{bmatrix} .5 & -.5 \\ -.5 & -.5 \\ -.5 & -.5 \\ -.5 & .5 \end{bmatrix}, \qquad w_{ij} = \frac{e_i(j)}{2}$$
Initialize the biases:
$$b_1 = b_2 = \frac{n}{2} = 2 \qquad (n = \text{number of input nodes} = 4 \text{ in this case})$$
33
Hamming Network Fixed-weight competitive net
Example
For the vector x=(1,1, -1, -1), let’s do the following steps:
$$y\_in_1 = b_1 + \sum_i x_i w_{i1} = 2 + 1 = 3, \qquad y\_in_2 = b_2 + \sum_i x_i w_{i2} = 2 - 1 = 1$$
$$y_1(0) = 3, \qquad y_2(0) = 1$$
These values represent the Hamming similarity: (1, 1, −1, −1) agrees
with e(1) = (1, −1, −1, −1) in the 1st, 3rd, and 4th components, and
agrees with e(2) = (−1, −1, −1, 1) in only the 3rd component.
Since y1(0) > y2(0), Maxnet will find that unit Y1 has the
best-match exemplar for input vector x = (1, 1, −1, −1).
34
Hamming Network Fixed-weight competitive net
Example
For the vector x=(1,-1, -1, -1), let’s do the following steps:
$$y\_in_1 = 2 + 2 = 4, \qquad y\_in_2 = 2 + 0 = 2, \qquad y_1(0) = 4, \quad y_2(0) = 2$$
Note that the input agrees with e(1) in all 4 components
and agrees with e(2) in the 2nd and 3rd components.
Since y1(0) > y2(0), Maxnet will find that unit Y1 has the
best-match exemplar for input vector x = (1, −1, −1, −1).
35
Hamming Network Fixed-weight competitive net
Example
For the vector x=(-1,-1, -1, 1), let’s do the following steps:
$$y\_in_1 = 2 + 0 = 2, \qquad y\_in_2 = 2 + 2 = 4, \qquad y_1(0) = 2, \quad y_2(0) = 4$$
Note that the input agrees with e(1) in the 2nd and 3rd
components and agrees with e(2) in all four components.
Since y2(0) > y1(0), Maxnet will find that unit Y2 has the
best-match exemplar for input vector x = (−1, −1, −1, 1).
36
Hamming Network Fixed-weight competitive net
Example
For the vector x=(-1,-1, 1, 1), let’s do the following steps:
$$y\_in_1 = 2 - 1 = 1, \qquad y\_in_2 = 2 + 1 = 3, \qquad y_1(0) = 1, \quad y_2(0) = 3$$
Note that the input agrees with e(1) in the 2nd component only
and agrees with e(2) in the 1st, 2nd, and 4th components.
Since y2(0) > y1(0), Maxnet will find that unit Y2 has the
best-match exemplar for input vector x = (−1, −1, 1, 1).
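All four test cases can be computed at once; here is a Matlab sketch of the Hamming net's lower (feedforward) layer for this example:

% Hamming net lower layer: y_in(j) = b_j + sum_i x_i * w_ij
e = [ 1 -1 -1 -1 ;          % e(1)
     -1 -1 -1  1 ];         % e(2)
W = 0.5 * e';               % w_ij = 0.5 * e_i(j), a 4x2 matrix
b = 0.5 * 4;                % b_1 = b_2 = n/2 = 2
X = [ 1  1 -1 -1 ;          % the four test vectors, one per row
      1 -1 -1 -1 ;
     -1 -1 -1  1 ;
     -1 -1  1  1 ];
y_in = X * W + b            % rows: (3 1), (4 2), (2 4), (1 3)
% Maxnet (the upper layer) would then select the larger entry in each row.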
37
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading
38
Kohonen Self-Organizing Maps (SOM)
Fausett
Teuvo Kohonen, born: 11-07-
34, Finland
Author of the book: “Self-
organizing maps”, Springer
(1997)
SOM = “Topology preserving
maps”
SOMs assume a topological
structure among the cluster
units
This property is observed in the
brain, but not found in other
ANNs
39
Kohonen Self-Organizing Maps (SOM)
Fausett
40
Fausett
Kohonen Self-Organizing Maps (SOM)
Architecture:
41
Fausett
Kohonen Self-Organizing Maps (SOM)
Architecture:
* * {* (* [#] *) *} * *
{ } R=2 ( ) R=1 [ ] R=0
Linear array of cluster units
This figure represents the neighborhood of the unit designated by # in a 1D topology
(with 10 cluster units).
*******
*******
*******
***#***
*******
*******
*******
R=2
R=1
R=0
Neighbourhoods for rectangular grid (Each unit has 8 neighbours)
42
Fausett
Kohonen Self-Organizing Maps (SOM)
Architecture:
******
******
******
**#***
******
******
****** R=2
R=1
R=0
Neighbourhoods for hexagonal grid (Each unit has 6 neighbours).
In each of these illustrations, the “winning unit” is indicated by the symbol # and the
other units are denoted by *
NOTE: What if the winning unit is on the edge of the grid?
• Winning units that are close to the edge of the grid will have some neighbourhoods
that have fewer units than those shown in the respective figures
• Neighbourhoods do not “wrap around” from one side of the grid to the other; “missing”
units are simply ignored.
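The edge behaviour is easy to express in code; a sketch that lists the units inside the square neighbourhood of radius R on a rectangular grid, simply clipping indices at the border:

% Neighbourhood of radius R around winner (r0, c0) on an nr-by-nc grid
nr = 7; nc = 7; r0 = 4; c0 = 4; R = 1;
rows = max(1, r0 - R) : min(nr, r0 + R);   % clip at the edges ("missing" units ignored)
cols = max(1, c0 - R) : min(nc, c0 + R);
[RR, CC] = meshgrid(rows, cols);
neighbours = [RR(:) CC(:)];                % 9 units for R = 1 in the grid interior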
43
Training SOM Networks
Which nodes are trained?
How are they trained?
– Training functions
– Are all nodes trained the same amount?
Two basic types of training sessions?
– Initial formation
– Final convergence
How do you know when to stop?
– Small number of changes in input mappings
– Max # epochs reached
44
Fausett
Kohonen Self-Organizing Maps (SOM)
Algorithm:
Step 0. Initialise weights wij (randomly)
Set topological neighbourhood parameters
Set learning rate parameters
Step 1 While stopping condition is false, do steps 2-8.
Step 2 For each i/p vector x, do steps 3-5
Step 3 For each j, compute:
$$D(j) = \sum_i \big(w_{ij} - x_i\big)^2$$
Step 4 Find index J such that D(J) is a minimum.
Step 5 For all units j within a specified neighbourhood of J, and
for all i:
$$w_{ij}(\text{new}) = w_{ij}(\text{old}) + \alpha\,\big[x_i - w_{ij}(\text{old})\big]$$
Step 6 Update learning rate.
Step 7 Reduce radius of topological neighbourhood at specified times.
Step 8 Test stopping condition.
45
Fausett
Kohonen Self-Organizing Maps (SOM)
Alternative structure for reducing R and α:
The learning rate α is a slowly decreasing function of time (or training
epochs). Kohonen indicates that a linearly decreasing function is satisfactory
for practical computations; a geometric decrease would produce similar
results.
The radius of the neighbourhood around a cluster unit also decreases as the
clustering process progresses.
The formation of a map occurs in 2 phases:
The initial formation of the correct order and
the final convergence
The second phase takes much longer than the first and requires a small
value for the learning rate.
Many iterations through the training set may be necessary, at least in
some applications.
46
Fausett
Kohonen Self-Organizing Maps (SOM)
Applications and Examples:
SOMs have had many applications:
Computer generated music (Kohonen 1989)
Traveling salesman problem
Examples: A Kohonen SOM to cluster 4 vectors:
– Let the 4 vectors to be clustered be:
s(1) = (1, 1, 0, 0) s(2) = (0, 0, 0, 1)
s(3) = (1, 0, 0, 0) s(4) = (0, 0, 1, 1)
- The maximum number of clusters to be formed is m=2 (4 input nodes,
2 output nodes)
Suppose the learning rate is α(0) = 0.6 and α(t+1) = 0.5 α(t).
With only 2 clusters available, the neighborhood of node J (step 4) is set so
that only one cluster updates its weights at each step (i.e. R = 0).
47
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 0 Initial weight matrix:
$$\mathbf{W} = \begin{bmatrix} .2 & .8 \\ .6 & .4 \\ .5 & .7 \\ .9 & .3 \end{bmatrix}$$
Initial radius: R = 0
Initial learning rate: α(0) = 0.6
Step 1 Begin training.
Step 2 For s(1) = (1, 1, 0, 0), do steps 3-5.
Step 3
$$D(1) = (.2 - 1)^2 + (.6 - 1)^2 + (.5 - 0)^2 + (.9 - 0)^2 = 1.86$$
$$D(2) = (.8 - 1)^2 + (.4 - 1)^2 + (.7 - 0)^2 + (.3 - 0)^2 = 0.98$$
48
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 4 The i/p vector is closest to o/p node 2, so J = 2.
Step 5 The weights on the winning unit are updated:
$$w_{i2}(\text{new}) = w_{i2}(\text{old}) + 0.6\,\big[x_i - w_{i2}(\text{old})\big] = 0.4\,w_{i2}(\text{old}) + 0.6\,x_i$$
This gives the weight matrix:
$$\mathbf{W} = \begin{bmatrix} .2 & .92 \\ .6 & .76 \\ .5 & .28 \\ .9 & .12 \end{bmatrix}$$
(The first column of weights has not been modified.)
49
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 2 For s(2) = (0, 0, 0, 1), do steps 3-5.
Step 3
$$D(1) = (.2 - 0)^2 + (.6 - 0)^2 + (.5 - 0)^2 + (.9 - 1)^2 = 0.66$$
$$D(2) = (.92 - 0)^2 + (.76 - 0)^2 + (.28 - 0)^2 + (.12 - 1)^2 = 2.2768$$
Step 4 The i/p vector is closest to o/p node 1, so J = 1.
Step 5 The weights on the winning unit are updated.
This gives the weight matrix:
$$\mathbf{W} = \begin{bmatrix} .08 & .92 \\ .24 & .76 \\ .20 & .28 \\ .96 & .12 \end{bmatrix}$$
(The second column of weights has not been modified.)
50
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 2 For s(3) = (1, 0, 0, 0), do steps 3-5.
Step 3
$$D(1) = (.08 - 1)^2 + (.24 - 0)^2 + (.20 - 0)^2 + (.96 - 0)^2 = 1.8656$$
$$D(2) = (.92 - 1)^2 + (.76 - 0)^2 + (.28 - 0)^2 + (.12 - 0)^2 = 0.6768$$
Step 4 The i/p vector is closest to o/p node 2, so J = 2.
Step 5 The weights on the winning unit are updated.
This gives the weight matrix:
$$\mathbf{W} = \begin{bmatrix} .08 & .968 \\ .24 & .304 \\ .20 & .112 \\ .96 & .048 \end{bmatrix}$$
(The first column of weights has not been modified.)
51
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 2 For s(4) = (0, 0, 1, 1), do steps 3-5.
Step 3
$$D(1) = (.08 - 0)^2 + (.24 - 0)^2 + (.20 - 1)^2 + (.96 - 1)^2 = 0.7056$$
$$D(2) = (.968 - 0)^2 + (.304 - 0)^2 + (.112 - 1)^2 + (.048 - 1)^2 = 2.724$$
Step 4 The i/p vector is closest to o/p node 1, so J = 1.
Step 5 The weights on the winning unit are updated.
This gives the weight matrix:
$$\mathbf{W} = \begin{bmatrix} .032 & .968 \\ .096 & .304 \\ .680 & .112 \\ .984 & .048 \end{bmatrix}$$
(The second column of weights has not been modified.)
Step 6 Reduce the learning rate: α = 0.5 (0.6) = 0.3
52
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
The weight update equations are now:
$$w_{iJ}(\text{new}) = w_{iJ}(\text{old}) + 0.3\,\big[x_i - w_{iJ}(\text{old})\big] = 0.7\,w_{iJ}(\text{old}) + 0.3\,x_i$$
The weight matrix after the 2nd epoch of training is:
$$\mathbf{W} = \begin{bmatrix} .016 & .980 \\ .047 & .360 \\ .630 & .055 \\ .999 & .024 \end{bmatrix}$$
Modifying the learning rate in this way, after 100 iterations (epochs) the weight
matrix appears to be converging to:
$$\mathbf{W} = \begin{bmatrix} 0.0 & 1.0 \\ 0.0 & .5 \\ .5 & 0.0 \\ 1.0 & 0.0 \end{bmatrix}$$
The 1st column is the average of the 2 vectors in cluster 1, and
the 2nd column is the average of the 2 vectors in cluster 2.
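A short Matlab script reproduces the whole example. It is a sketch of steps 1-8 with R = 0 (only the winner learns); the slides do not spell out the learning-rate schedule used for the long run, so here α is decayed slowly toward a small floor (an assumption), which drives each weight column toward the average of the vectors in its cluster:

% Kohonen SOM clustering of the four training vectors
S = [1 1 0 0; 0 0 0 1; 1 0 0 0; 0 0 1 1];    % s(1)..s(4), one per row
W = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3];    % initial weights, one column per cluster
alpha = 0.6;
for epoch = 1:500
    for p = 1:4
        x = S(p, :)';
        D = sum((W - x).^2);                 % step 3: squared distance per unit
        [~, J] = min(D);                     % step 4: winning unit
        W(:, J) = W(:, J) + alpha * (x - W(:, J));  % step 5 (R = 0: winner only)
    end
    alpha = max(0.95 * alpha, 0.01);         % step 6: slowly reduce learning rate
end
disp(W)   % approaches [0 1; 0 .5; .5 0; 1 0], the cluster averages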
53
Example of a SOM that uses a 2-D neighborhood:
Character Recognition – Fausett, pp. 176-178
54
Traveling Salesman Problem (TSP) by SOM
The aim of the TSP is to find a tour of a given set
of cities that is of minimum length.
A tour consists of visiting each city exactly once
and returning to the starting city.
The net uses the city coordinates as input (n=2);
there are as many cluster units as there are cities
to be visited. The net has a linear topology (with
the first and last unit also connected).
Fig. shows the Initial
position of cluster
units and location of
cities
55
Traveling Salesman Problem (TSP) by SOM
Fig. shows the result after 100 epochs of
training with R = 1 (learning rate decreasing
from 0.5 to 0.4).
Fig. shows the final tour after 100 epochs of
training with R = 0.
Two candidate solutions:
ADEFGHIJBC
ADEFGHIJCB
56
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading
57
Matlab SOM Example
58
Architecture
59
Creating a SOM
newsom -- Create a self-organizing map
Syntax
net = newsom
net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND)
Description
net = newsom creates a new network with a dialog box.
net = newsom (PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND)
takes,
PR - R x 2 matrix of min and max values for R input elements.
Di - Size of ith layer dimension, defaults = [5 8].
TFCN - Topology function, default ='hextop'.
DFCN - Distance function, default ='linkdist'.
OLR - Ordering phase learning rate, default = 0.9.
OSTEPS - Ordering phase steps, default = 1000.
TLR - Tuning phase learning rate, default = 0.02;
TND - Tuning phase neighborhood distance, default = 1.
and returns a new self-organizing map.
The topology function TFCN can be hextop, gridtop, or randtop. The
distance function can be linkdist, dist, or mandist.
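As a usage sketch (mirroring the toolbox documentation of that era; note that newsom has since been superseded by selforgmap in newer releases):

% Create and train a SOM on 400 random 2-D points (a sketch)
P = [rand(1,400)*2; rand(1,400)];        % inputs in [0,2] x [0,1]
net = newsom([0 2; 0 1], [3 5]);         % 3-by-5 map, hextop/linkdist defaults
net.trainParam.epochs = 25;
net = train(net, P);                     % train with the toolbox defaults
plotsom(net.iw{1,1}, net.layers{1}.distances)   % visualize the trained map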
60
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps (SOM)
SOM in Matlab
References and suggested reading
61
Suggested Reading.
L. Fausett, “Fundamentals of Neural Networks”, Prentice-Hall, 1994, Chapter 4.
J. Zurada, “Artificial Neural Systems”, West Publishing, 1992, Chapter 7.
62
References:
These lecture notes were based on the references of the
previous slide, and the following references
1. Berlin Chen, lecture notes, Normal University, Taipei, Taiwan, ROC. http://140.122.185.120
2. Dan St. Clair, University of Missouri-Rolla, http://web.umr.edu/~stclair/class/classfiles/cs404_fs02/Misc/CS404_fall2001/Lectures/, Lecture 14.
More Related Content

What's hot

Mc Culloch Pitts Neuron
Mc Culloch Pitts NeuronMc Culloch Pitts Neuron
Mc Culloch Pitts NeuronShajun Nisha
 
Counter propagation Network
Counter propagation NetworkCounter propagation Network
Counter propagation NetworkAkshay Dhole
 
Hebbian Learning
Hebbian LearningHebbian Learning
Hebbian LearningESCOM
 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)EdutechLearners
 
Introduction to Recurrent Neural Network
Introduction to Recurrent Neural NetworkIntroduction to Recurrent Neural Network
Introduction to Recurrent Neural NetworkKnoldus Inc.
 
Design and Analysis of Algorithms
Design and Analysis of AlgorithmsDesign and Analysis of Algorithms
Design and Analysis of AlgorithmsSwapnil Agrawal
 
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...Mohammed Bennamoun
 
Statistical learning
Statistical learningStatistical learning
Statistical learningSlideshare
 
Backpropagation And Gradient Descent In Neural Networks | Neural Network Tuto...
Backpropagation And Gradient Descent In Neural Networks | Neural Network Tuto...Backpropagation And Gradient Descent In Neural Networks | Neural Network Tuto...
Backpropagation And Gradient Descent In Neural Networks | Neural Network Tuto...Simplilearn
 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseMohaiminur Rahman
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkPrakash K
 
I. Mini-Max Algorithm in AI
I. Mini-Max Algorithm in AII. Mini-Max Algorithm in AI
I. Mini-Max Algorithm in AIvikas dhakane
 
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...Simplilearn
 
NAIVE BAYES CLASSIFIER
NAIVE BAYES CLASSIFIERNAIVE BAYES CLASSIFIER
NAIVE BAYES CLASSIFIERKnoldus Inc.
 

What's hot (20)

Mc Culloch Pitts Neuron
Mc Culloch Pitts NeuronMc Culloch Pitts Neuron
Mc Culloch Pitts Neuron
 
Counter propagation Network
Counter propagation NetworkCounter propagation Network
Counter propagation Network
 
Adaptive Resonance Theory (ART)
Adaptive Resonance Theory (ART)Adaptive Resonance Theory (ART)
Adaptive Resonance Theory (ART)
 
Concept learning
Concept learningConcept learning
Concept learning
 
Hebbian Learning
Hebbian LearningHebbian Learning
Hebbian Learning
 
Hebb network
Hebb networkHebb network
Hebb network
 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)
 
Introduction to Recurrent Neural Network
Introduction to Recurrent Neural NetworkIntroduction to Recurrent Neural Network
Introduction to Recurrent Neural Network
 
Design and Analysis of Algorithms
Design and Analysis of AlgorithmsDesign and Analysis of Algorithms
Design and Analysis of Algorithms
 
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
Artificial Neural Network Lecture 6- Associative Memories & Discrete Hopfield...
 
Statistical learning
Statistical learningStatistical learning
Statistical learning
 
Backpropagation And Gradient Descent In Neural Networks | Neural Network Tuto...
Backpropagation And Gradient Descent In Neural Networks | Neural Network Tuto...Backpropagation And Gradient Descent In Neural Networks | Neural Network Tuto...
Backpropagation And Gradient Descent In Neural Networks | Neural Network Tuto...
 
Deep learning
Deep learningDeep learning
Deep learning
 
Neural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics CourseNeural network final NWU 4.3 Graphics Course
Neural network final NWU 4.3 Graphics Course
 
Hopfield Networks
Hopfield NetworksHopfield Networks
Hopfield Networks
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
I. Mini-Max Algorithm in AI
I. Mini-Max Algorithm in AII. Mini-Max Algorithm in AI
I. Mini-Max Algorithm in AI
 
Mc culloch pitts neuron
Mc culloch pitts neuronMc culloch pitts neuron
Mc culloch pitts neuron
 
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
 
NAIVE BAYES CLASSIFIER
NAIVE BAYES CLASSIFIERNAIVE BAYES CLASSIFIER
NAIVE BAYES CLASSIFIER
 

Viewers also liked

Neural networks...
Neural networks...Neural networks...
Neural networks...Molly Chugh
 
Artificial Neural Networks Lect8: Neural networks for constrained optimization
Artificial Neural Networks Lect8: Neural networks for constrained optimizationArtificial Neural Networks Lect8: Neural networks for constrained optimization
Artificial Neural Networks Lect8: Neural networks for constrained optimizationMohammed Bennamoun
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSMohammed Bennamoun
 
Artificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
Artificial Neural Networks Lect5: Multi-Layer Perceptron & BackpropagationArtificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
Artificial Neural Networks Lect5: Multi-Layer Perceptron & BackpropagationMohammed Bennamoun
 
Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesMohammed Bennamoun
 
Artificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Artificial Neural Network Lect4 : Single Layer Perceptron ClassifiersArtificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Artificial Neural Network Lect4 : Single Layer Perceptron ClassifiersMohammed Bennamoun
 
IA conexionista-RNA -- Prueba y entrenamiento con modelos de RNA
IA conexionista-RNA -- Prueba y entrenamiento con modelos de RNAIA conexionista-RNA -- Prueba y entrenamiento con modelos de RNA
IA conexionista-RNA -- Prueba y entrenamiento con modelos de RNAPriscill Orue Esquivel
 
MATLAB Environment for Neural Network Deployment
MATLAB Environment for Neural Network DeploymentMATLAB Environment for Neural Network Deployment
MATLAB Environment for Neural Network DeploymentAngelo Cardellicchio
 
Introduction to toolbox under matlab environment
Introduction to toolbox under matlab environmentIntroduction to toolbox under matlab environment
Introduction to toolbox under matlab environmentParamjeet Singh Jamwal
 
Artificial neural network - Architectures
Artificial neural network - ArchitecturesArtificial neural network - Architectures
Artificial neural network - ArchitecturesErin Brunston
 
مقدمة في برمجة الشبكات network programming
مقدمة في برمجة الشبكات network programmingمقدمة في برمجة الشبكات network programming
مقدمة في برمجة الشبكات network programmingEhab Saad Ahmad
 
الملتقي الدولي الثامن Colloque 2017 urnop ria
الملتقي الدولي الثامن Colloque 2017               urnop  riaالملتقي الدولي الثامن Colloque 2017               urnop  ria
الملتقي الدولي الثامن Colloque 2017 urnop riaboukhrissa Naila
 
neural network nntool box matlab start
neural network nntool box matlab startneural network nntool box matlab start
neural network nntool box matlab startnabeelasd
 
Classical relations and fuzzy relations
Classical relations and fuzzy relationsClassical relations and fuzzy relations
Classical relations and fuzzy relationsBaran Kaynak
 
Multidimensional Perceptual Map for Project Prioritization and Selection - 20...
Multidimensional Perceptual Map for Project Prioritization and Selection - 20...Multidimensional Perceptual Map for Project Prioritization and Selection - 20...
Multidimensional Perceptual Map for Project Prioritization and Selection - 20...Jack Zheng
 
Neural tool box
Neural tool boxNeural tool box
Neural tool boxMohan Raj
 
Dissertation character recognition - Report
Dissertation character recognition - ReportDissertation character recognition - Report
Dissertation character recognition - Reportsachinkumar Bharadva
 
Dynamic analysis in Software Testing
Dynamic analysis in Software TestingDynamic analysis in Software Testing
Dynamic analysis in Software TestingSagar Pednekar
 
التعرف الآني علي الحروف العربية المنعزلة
التعرف الآني علي الحروف العربية المنعزلة التعرف الآني علي الحروف العربية المنعزلة
التعرف الآني علي الحروف العربية المنعزلة Ayman Amin
 

Viewers also liked (20)

Neural networks...
Neural networks...Neural networks...
Neural networks...
 
Artificial Neural Networks Lect8: Neural networks for constrained optimization
Artificial Neural Networks Lect8: Neural networks for constrained optimizationArtificial Neural Networks Lect8: Neural networks for constrained optimization
Artificial Neural Networks Lect8: Neural networks for constrained optimization
 
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNSArtificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
Artificial Neural Networks Lect2: Neurobiology & Architectures of ANNS
 
Artificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
Artificial Neural Networks Lect5: Multi-Layer Perceptron & BackpropagationArtificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
Artificial Neural Networks Lect5: Multi-Layer Perceptron & Backpropagation
 
Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rules
 
Artificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Artificial Neural Network Lect4 : Single Layer Perceptron ClassifiersArtificial Neural Network Lect4 : Single Layer Perceptron Classifiers
Artificial Neural Network Lect4 : Single Layer Perceptron Classifiers
 
IA conexionista-RNA -- Prueba y entrenamiento con modelos de RNA
IA conexionista-RNA -- Prueba y entrenamiento con modelos de RNAIA conexionista-RNA -- Prueba y entrenamiento con modelos de RNA
IA conexionista-RNA -- Prueba y entrenamiento con modelos de RNA
 
MATLAB Environment for Neural Network Deployment
MATLAB Environment for Neural Network DeploymentMATLAB Environment for Neural Network Deployment
MATLAB Environment for Neural Network Deployment
 
Introduction to toolbox under matlab environment
Introduction to toolbox under matlab environmentIntroduction to toolbox under matlab environment
Introduction to toolbox under matlab environment
 
Artificial neural network - Architectures
Artificial neural network - ArchitecturesArtificial neural network - Architectures
Artificial neural network - Architectures
 
مقدمة في برمجة الشبكات network programming
مقدمة في برمجة الشبكات network programmingمقدمة في برمجة الشبكات network programming
مقدمة في برمجة الشبكات network programming
 
الملتقي الدولي الثامن Colloque 2017 urnop ria
الملتقي الدولي الثامن Colloque 2017               urnop  riaالملتقي الدولي الثامن Colloque 2017               urnop  ria
الملتقي الدولي الثامن Colloque 2017 urnop ria
 
neural network nntool box matlab start
neural network nntool box matlab startneural network nntool box matlab start
neural network nntool box matlab start
 
Classical relations and fuzzy relations
Classical relations and fuzzy relationsClassical relations and fuzzy relations
Classical relations and fuzzy relations
 
Multidimensional Perceptual Map for Project Prioritization and Selection - 20...
Multidimensional Perceptual Map for Project Prioritization and Selection - 20...Multidimensional Perceptual Map for Project Prioritization and Selection - 20...
Multidimensional Perceptual Map for Project Prioritization and Selection - 20...
 
Neural tool box
Neural tool boxNeural tool box
Neural tool box
 
Fuzzy and nn
Fuzzy and nnFuzzy and nn
Fuzzy and nn
 
Dissertation character recognition - Report
Dissertation character recognition - ReportDissertation character recognition - Report
Dissertation character recognition - Report
 
Dynamic analysis in Software Testing
Dynamic analysis in Software TestingDynamic analysis in Software Testing
Dynamic analysis in Software Testing
 
التعرف الآني علي الحروف العربية المنعزلة
التعرف الآني علي الحروف العربية المنعزلة التعرف الآني علي الحروف العربية المنعزلة
التعرف الآني علي الحروف العربية المنعزلة
 

Similar to Artificial Neural Networks Lect7: Neural networks based on competition

ANNs have been widely used in various domains for: Pattern recognition Funct...
ANNs have been widely used in various domains for: Pattern recognition  Funct...ANNs have been widely used in various domains for: Pattern recognition  Funct...
ANNs have been widely used in various domains for: Pattern recognition Funct...vijaym148
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Two algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networksTwo algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networksESCOM
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGRevanth Kumar
 
Classification using perceptron.pptx
Classification using perceptron.pptxClassification using perceptron.pptx
Classification using perceptron.pptxsomeyamohsen3
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...IJECEIAES
 
Echo state networks and locomotion patterns
Echo state networks and locomotion patternsEcho state networks and locomotion patterns
Echo state networks and locomotion patternsVito Strano
 
Counterpropagation NETWORK
Counterpropagation NETWORKCounterpropagation NETWORK
Counterpropagation NETWORKESCOM
 
latest TYPES OF NEURAL NETWORKS (2).pptx
latest TYPES OF NEURAL NETWORKS (2).pptxlatest TYPES OF NEURAL NETWORKS (2).pptx
latest TYPES OF NEURAL NETWORKS (2).pptxMdMahfoozAlam5
 
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxgnans Kgnanshek
 
Back propagation
Back propagationBack propagation
Back propagationNagarajan
 

Similar to Artificial Neural Networks Lect7: Neural networks based on competition (20)

Max net
Max netMax net
Max net
 
ANNs have been widely used in various domains for: Pattern recognition Funct...
ANNs have been widely used in various domains for: Pattern recognition  Funct...ANNs have been widely used in various domains for: Pattern recognition  Funct...
ANNs have been widely used in various domains for: Pattern recognition Funct...
 
Multi Layer Network
Multi Layer NetworkMulti Layer Network
Multi Layer Network
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
ai7.ppt
ai7.pptai7.ppt
ai7.ppt
 
ai7.ppt
ai7.pptai7.ppt
ai7.ppt
 
MNN
MNNMNN
MNN
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
20120140503023
2012014050302320120140503023
20120140503023
 
Two algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networksTwo algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networks
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNING
 
Classification using perceptron.pptx
Classification using perceptron.pptxClassification using perceptron.pptx
Classification using perceptron.pptx
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
 
Echo state networks and locomotion patterns
Echo state networks and locomotion patternsEcho state networks and locomotion patterns
Echo state networks and locomotion patterns
 
Counterpropagation NETWORK
Counterpropagation NETWORKCounterpropagation NETWORK
Counterpropagation NETWORK
 
Lec 6-bp
Lec 6-bpLec 6-bp
Lec 6-bp
 
latest TYPES OF NEURAL NETWORKS (2).pptx
latest TYPES OF NEURAL NETWORKS (2).pptxlatest TYPES OF NEURAL NETWORKS (2).pptx
latest TYPES OF NEURAL NETWORKS (2).pptx
 
UofT_ML_lecture.pptx
UofT_ML_lecture.pptxUofT_ML_lecture.pptx
UofT_ML_lecture.pptx
 
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
 
Back propagation
Back propagationBack propagation
Back propagation
 

Recently uploaded

(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...ranjana rawat
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSRajkumarAkumalla
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxhumanexperienceaaa
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAbhinavSharma374939
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
 

Recently uploaded (20)

(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
 
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICSHARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
HARDNESS, FRACTURE TOUGHNESS AND STRENGTH OF CERAMICS
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writing
 
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptxthe ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
the ladakh protest in leh ladakh 2024 sonam wangchuk.pptx
 
Analog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog ConverterAnalog to Digital and Digital to Analog Converter
Analog to Digital and Digital to Analog Converter
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 

Artificial Neural Networks Lect7: Neural networks based on competition

  • 1. 1 CS407 Neural Computation Lecture 7: Neural Networks Based on Competition. Lecturer: A/Prof. M. Bennamoun
  • 2. 2 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 3. 3 Introduction…Competitive nets Fausett The most extreme form of competition among a group of neurons is called “Winner Take All”. As the name suggests, only one neuron in the competing group will have a nonzero output signal when the competition is completed. A specific competitive net that performs Winner Take All (WTA) competition is the Maxnet. A more general form of competition, the “Mexican Hat” will also be described in this lecture (instead of a non-zero o/p for the winner and zeros for all other competing nodes, we have a bubble around the winner ). All of the other nets we discuss in this lecture use WTA competition as part of their operation. With the exception of the fixed-weight competitive nets (namely Maxnet, Mexican Hat, and Hamming net) all of the other nets combine competition with some form of learning to adjusts the weights of the net (i.e. the weights that are not part of any interconnections in the competitive layer).
  • 4. 4 Introduction…Competitive nets Fausett The form of learning depends on the purpose for which the net is being trained: – LVQ and counterpropagation net are trained to perform mappings. The learning in this case is supervised. – SOM (used for clustering of input data): a common use of unsupervised learning. – ART are also clustering nets: also unsupervised. Several of the nets discussed use the same learning algorithm known as “Kohonen learning”: where the units that update their weights do so by forming a new weight vector that is a linear combination of the old weight vector and the current input vector. Typically, the unit whose weight vector was closest to the input vector is allowed to learn.
  • 5. 5 Introduction…Competitive nets Fausett The weight update for output (or cluster) unit j is given as: where x is the input vector, w.j is the weight vector for unit j, and α the learning rate, decreases as learning proceeds. Two methods of determining the closest weight vector to a pattern vector are commonly used for self-organizing nets. Both are based on the assumption that the weight vector for each cluster (output) unit serves as an exemplar for the input vectors that have been assigned to that unit during learning. – The first method of determining the winner uses the squared Euclidean distance between the I/P vector [ ] old)()1( old)()old()new( . ... j jjj wx wxww αα α −+ −+=
  • 6. 6 Introduction…Competitive nets Fausett and the weight vector and chooses the unit whose weight vector has the smallest Euclidean distance from the I/P vector. – The second method uses the dot product of the I/P vector and the weight vector. The dot product can be interpreted as giving the correlation between the I/P and weight vector. In general and for consistency, we will use Euclidean distance squared. Many NNs use the idea of competition among neurons to enhance the contrast in activations of the neurons In the most extreme situation, the case of the Winner-Take- All, only the neuron with the largest activation is allowed to remain “on”.
  • 8. 8 Introduction…Competitive nets 1. Euclidean distance between two vectors: ||x − x_i|| = sqrt( (x − x_i)^T (x − x_i) ) 2. Cosine of the angle ψ between two vectors: cos ψ = (x^T x_i) / (||x|| ||x_i||); the smaller the angle (equivalently, the larger the cosine), the closer the two vectors.
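  As a quick illustration (the two vectors below are arbitrary), both measures take one line each in MATLAB:

```matlab
% Illustrative only: the two matching measures for vectors x and xi.
x  = [1; 2; 2];
xi = [2; 4; 4];
d = sqrt((x - xi)' * (x - xi));         % Euclidean distance (= 3 here)
c = (x' * xi) / (norm(x) * norm(xi));   % cosine of the angle (= 1: parallel)
```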
  • 9. 9 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 11. 11 Hamming Network and MAXNET MAXNET – A recurrent network involving both excitatory and inhibitory connections • Positive self-feedbacks and negative cross-feedbacks – After a number of recurrences, the only non-zero node will be the one with the largest initializing entry from the i/p vector
  • 12. 12 • Maxnet Lateral inhibition between competitors Maxnet Fixed-weight competitive net Fausett Activation function for the Maxnet: f(x) = x if x > 0; 0 otherwise. Competition: • iterative process until the net stabilizes (at most one node with positive activation) – Step 0: • Initialize activations and weights (set 0 < ε < 1/m, where m is the # of nodes, i.e. competitors)
  • 13. 13 Algorithm Weights: w_ij = 1 if i = j, −ε otherwise; initial activations: a_j(0) = input to node A_j. Step 1: While stopping condition is false, do steps 2-4 – Step 2: Update the activation of each node: a_j(new) = f[ a_j(old) − ε Σ_{k≠j} a_k(old) ], for j = 1, …, m (the argument of f is the total i/p to node A_j from all nodes, including itself) – Step 3: Save activations for use in next iteration: a_j(old) = a_j(new), for j = 1, …, m – Step 4: Test stopping condition: If more than one node has a nonzero activation, continue; otherwise, stop.
  • 14. 14 Example Note that in step 2, the i/p to the function f is the total i/p to node A_j from all nodes, including itself. Some precautions should be incorporated to handle the situation in which two or more units have the same, maximal, input. Example: ε = .2 and the initial activations (input signals) are: a1(0) = .2, a2(0) = .4, a3(0) = .6, a4(0) = .8 As the net iterates, the activations are: a1(1) = f{a1(0) − .2 [a2(0) + a3(0) + a4(0)]} = f(−.16) = 0 a1(1) = .0, a2(1) = .08, a3(1) = .32, a4(1) = .56 a1(2) = .0, a2(2) = .0, a3(2) = .192, a4(2) = .48 a1(3) = .0, a2(3) = .0, a3(3) = .096, a4(3) = .442 a1(4) = .0, a2(4) = .0, a3(4) = .008, a4(4) = .422 a1(5) = .0, a2(5) = .0, a3(5) = .0, a4(5) = .421 (node 4 is the only node to remain on)
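  The example is small enough to run directly. A minimal MATLAB sketch of the Maxnet iteration, using the example's ε and initial activations, follows:

```matlab
% Maxnet sketch for the example: epsilon = 0.2, a(0) = (.2, .4, .6, .8),
% activation f(x) = x for x > 0 and 0 otherwise.
epsilon = 0.2;
a = [0.2 0.4 0.6 0.8];
m = numel(a);
W = (1 + epsilon) * eye(m) - epsilon * ones(m);  % self-weights 1, cross -eps
while sum(a > 0) > 1                 % stop when at most one node is on
    a = max(a * W, 0);               % a_j <- f(a_j - eps*sum_{k~=j} a_k)
end
disp(a)   % approximately (0, 0, 0, 0.421): node 4 wins
```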
  • 15. 15 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 16. 16 Mexican Hat Fixed-weight competitive net Architecture Each neuron is connected with excitatory links (positively weighted) to a number of “cooperative neighbors” (neurons that are in close proximity). Each neuron is also connected with inhibitory links (with negative weights) to a number of “competitive neighbors” (neurons that are somewhat further away). There may also be a number of neurons, further away still, to which the neuron is not connected. All of these connections are within a particular layer of a neural net. The neurons receive an external signal in addition to these interconnection signals (just like Maxnet). This pattern of interconnections is repeated for each neuron in the layer. The interconnection pattern for unit Xi is as follows:
  • 17. 17 Mexican Hat Fixed-weight competitive net The size of the region of cooperation (positive connections) and the region of competition (negative connections) may vary, as may the relative magnitudes of the +ve and −ve weights and the topology of the regions (linear, rectangular, hexagonal, etc.). si = external activation; w1 = excitatory connection; w2 = inhibitory connection
  • 18. 18 Mexican Hat Fixed-weight competitive net The contrast enhancement of the signal si received by unit Xi is accomplished by iteration for several time steps. The activation of unit Xi at time t is given by: x_i(t) = f[ s_i(t) + Σ_k w_k x_{i+k}(t − 1) ] where the terms in the summation are the weighted signals from other units (cooperative and competitive neighbors) at the previous time step.
  • 19. 19 Mexican Hat Fixed-weight competitive net Nomenclature R2 = radius of region of interconnections; Xi is connected to units Xi+k and Xi−k for k = 1, …, R2. R1 = radius of region with +ve reinforcement; R1 < R2. wk = weight on interconnections between Xi and units Xi+k and Xi−k; wk is positive for 0 ≤ k ≤ R1 and negative for R1 < k ≤ R2. x = vector of activations. x_old = vector of activations at previous time step. t_max = total number of iterations of contrast enhancement. s = external signal.
  • 20. 20 Mexican Hat Fixed-weight competitive net Algorithm As presented, the algorithm corresponds to the external signal being given only for the first iteration (step 1). Step 0 Initialize parameters t_max, R1, R2 as desired. Initialize weights: w_k = C1 for k = 0, …, R1 (C1 > 0); w_k = C2 for k = R1 + 1, …, R2 (C2 < 0). Initialize x_old to 0. Step 1 Present external signal s: x = s. Save activations in array x_old (for i = 1, …, n): x_old_i = x_i. Set iteration counter: t = 1.
  • 21. 21 Mexican Hat Fixed-weight competitive net Algorithm Step 2 While t is less than t_max, do steps 3-7. Step 3 Compute net input (i = 1, …, n): x_i = C1 Σ_{k=−R1}^{R1} x_old_{i+k} + C2 Σ_{k=−R2}^{−R1−1} x_old_{i+k} + C2 Σ_{k=R1+1}^{R2} x_old_{i+k} Step 4 Apply activation function f (ramp from 0 to x_max, slope 1): f(x) = 0 if x < 0; x if 0 ≤ x ≤ x_max; x_max if x > x_max, i.e. x_i = min(x_max, max(0, x_i)), i = 1, …, n.
  • 22. 22 Mexican Hat Fixed-weight competitive net Step 5 Save current activations in x_old: x_old_i = x_i, i = 1, …, n. Step 6 Increment iteration counter: t = t + 1. Step 7 Test stopping condition: If t < t_max, continue; otherwise, stop. Note: The +ve reinforcement from nearby units and −ve reinforcement from units that are further away have the effect of – increasing the activation of units with larger initial activations, – reducing the activations of those that had a smaller external signal.
  • 23. 23 Mexican Hat Example Let's consider a simple net with 7 units. The activation function for this net is: f(x) = 0 if x < 0; x if 0 ≤ x ≤ 2; 2 if x > 2. Step 0 Initialize parameters: R1 = 1, R2 = 2, C1 = .6, C2 = −.4. Step 1 (t = 0) The external signal s = (.0, .5, .8, 1, .8, .5, .0), so x(0) = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0); save in x_old: x_old = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0). The middle node (signal 1.0) is the node to be enhanced. Step 2 t = 1; the update formulas of step 3 are applied.
  • 25. 25 Mexican Hat Example Step 4 x(1) = (0.0, 0.38, 1.06, 1.16, 1.06, 0.38, 0.0) Steps 5-7 Bookkeeping for next iteration. Step 3 (t = 2): x1 = 0.6(0.0) + 0.6(0.38) − 0.4(1.06) = −0.196 x2 = 0.6(0.0) + 0.6(0.38) + 0.6(1.06) − 0.4(1.16) = 0.40 x3 = −0.4(0.0) + 0.6(0.38) + 0.6(1.06) + 0.6(1.16) − 0.4(1.06) = 1.14 x4 = −0.4(0.38) + 0.6(1.06) + 0.6(1.16) + 0.6(1.06) − 0.4(0.38) = 1.66 x5 = −0.4(1.06) + 0.6(1.16) + 0.6(1.06) + 0.6(0.38) − 0.4(0.0) = 1.14 x6 = −0.4(1.16) + 0.6(1.06) + 0.6(0.38) + 0.6(0.0) = 0.40 x7 = −0.4(1.06) + 0.6(0.38) + 0.6(0.0) = −0.196 Step 4: x(2) = (0.0, 0.40, 1.14, 1.66, 1.14, 0.40, 0.0); the middle node is the one which has been enhanced. Steps 5-7 Bookkeeping for next iteration.
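  A MATLAB sketch of the Mexican Hat iteration for this example (the loop structure is my own; the parameters and signal are the slide's):

```matlab
% Mexican Hat example: 7 units, R1 = 1, R2 = 2, C1 = 0.6, C2 = -0.4,
% ramp activation clipped to [0, 2], two iterations.
s = [0 .5 .8 1 .8 .5 0];
C1 = 0.6; C2 = -0.4; R1 = 1; R2 = 2; x_max = 2; t_max = 2;
n = numel(s);

w = zeros(1, R2 + 1);            % lateral weights indexed by distance |k|
w(1:R1+1) = C1;                  % |k| <= R1: excitatory
w(R1+2:R2+1) = C2;               % R1 < |k| <= R2: inhibitory

x_old = s;                       % external signal only at t = 0
for t = 1:t_max
    x = zeros(1, n);
    for i = 1:n
        for k = -R2:R2
            if i + k >= 1 && i + k <= n   % "missing" units are ignored
                x(i) = x(i) + w(abs(k) + 1) * x_old(i + k);
            end
        end
    end
    x = min(max(x, 0), x_max);   % ramp activation
    x_old = x;
end
disp(x)   % approximately (0, 0.40, 1.14, 1.66, 1.14, 0.40, 0)
```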
  • 26. 26 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 27. 27 Hamming Network Fixed-weight competitive net The Hamming distance of two vectors is the number of components in which the vectors differ; e.g. (0, 1, 0) and (0, 1, 1) differ in one component. For 2 bipolar vectors x and y of dimension n: x · y = a − d, where a is the number of bits in agreement in x and y, and d is the number of bits in which x and y differ, i.e. d = HD(x, y), the Hamming distance. Since d = n − a, x · y = 2a − n, so a = 0.5 (x · y) + 0.5 n: larger x · y ⇒ larger a ⇒ shorter Hamming distance. The Hamming net uses Maxnet as a subnet to find the unit with the largest input (see figure on next slide).
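  A two-line check of the identity a = 0.5(x · y) + 0.5n in MATLAB (the vectors are illustrative):

```matlab
% Quick check of a = 0.5*(x'*y) + 0.5*n for bipolar vectors.
x = [ 1  1 -1 -1]';
y = [ 1 -1 -1 -1]';
n = numel(x);
a = 0.5 * (x' * y) + 0.5 * n;   % = 3 components in agreement
d = n - a;                      % = 1 = Hamming distance HD(x, y)
```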
  • 28. 28 Hamming Network Fixed-weight competitive net Architecture The sample architecture shown assumes input vectors are 4-tuples, to be categorized as belonging to one of 2 classes.
  • 29. 29 Hamming Network Fixed-weight competitive net Algorithm Given a set of m bipolar exemplar vectors e(1), e(2), …, e(m), the Hamming net can be used to find the exemplar that is closest to the bipolar i/p vector x. The net i/p y_in_j to unit Yj gives the number of components in which the i/p vector and the exemplar vector for unit Yj, e(j), agree (n minus the Hamming distance between these vectors). Nomenclature: n = number of input nodes = number of components of any i/p vector; m = number of o/p nodes = number of exemplar vectors; e(j) = the jth exemplar vector: e(j) = (e_1(j), …, e_i(j), …, e_n(j)).
  • 30. 30 Hamming Network Fixed-weight competitive net Algorithm Step 0 To store the m exemplar vectors, initialise the weights and biases: w_ij = 0.5 e_i(j), b_j = 0.5 n Step 1 For each vector x, do steps 2-4 Step 2 Compute the net input to each unit Yj: y_in_j = b_j + Σ_i x_i w_ij, for j = 1, …, m Step 3 Initialise activations for Maxnet: y_j(0) = y_in_j, for j = 1, …, m Step 4 Maxnet iterates to find the best-match exemplar.
  • 31. 31 Hamming Network Fixed-weight competitive net Upper layer: Maxnet – it takes y_in_j as the initial value of node Yj, then iterates toward a stable state (the best-match exemplar) – the one output node with the highest y_in will be the winner, because its weight vector is closest to the input vector
  • 32. 32 Hamming Network Fixed-weight competitive net Example (see architecture, n = 4) Given the exemplar vectors: e(1) = (1, −1, −1, −1) and e(2) = (−1, −1, −1, 1) (m = 2 exemplar patterns in this case). The Hamming net can be used to find the exemplar that is closest to each of the bipolar i/p patterns, (1, 1, -1, -1), (1, -1, -1, -1), (-1, -1, -1, 1), and (-1, -1, 1, 1). Step 1: Store the m exemplar vectors in the weights: W = [ .5 −.5 ; −.5 −.5 ; −.5 −.5 ; −.5 .5 ], where w_ij = e_i(j)/2. Initialize the biases: b_1 = b_2 = n/2 = 2, where n = number of input nodes = 4 (in this case).
  • 33. 33 Hamming Network Fixed-weight competitive net Example For the vector x = (1, 1, −1, −1), let's do the following steps: y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 1 = 3 y_in_2 = b_2 + Σ_i x_i w_i2 = 2 − 1 = 1, so y_1(0) = 3, y_2(0) = 1. These values represent the Hamming similarity because (1, 1, −1, −1) agrees with e(1) = (1, −1, −1, −1) in the 1st, 3rd, and 4th components and because (1, 1, −1, −1) agrees with e(2) = (−1, −1, −1, 1) in only the 3rd component. Since y_1(0) > y_2(0), Maxnet will find that unit Y1 has the best-match exemplar for input vector x = (1, 1, −1, −1).
  • 34. 34 Hamming Network Fixed-weight competitive net Example For the vector x = (1, −1, −1, −1), let's do the following steps: y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 2 = 4 y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 0 = 2, so y_1(0) = 4, y_2(0) = 2. Note that the input agrees with e(1) in all 4 components and agrees with e(2) in the 2nd and 3rd components. Since y_1(0) > y_2(0), Maxnet will find that unit Y1 has the best-match exemplar for input vector x = (1, −1, −1, −1).
  • 35. 35 Hamming Network Fixed-weight competitive net Example For the vector x = (−1, −1, −1, 1), let's do the following steps: y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 0 = 2 y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 2 = 4, so y_1(0) = 2, y_2(0) = 4. Note that the input agrees with e(1) in the 2nd and 3rd components and agrees with e(2) in all four components. Since y_2(0) > y_1(0), Maxnet will find that unit Y2 has the best-match exemplar for input vector x = (−1, −1, −1, 1).
  • 36. 36 Hamming Network Fixed-weight competitive net Example For the vector x = (−1, −1, 1, 1), let's do the following steps: y_in_1 = b_1 + Σ_i x_i w_i1 = 2 − 1 = 1 y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 1 = 3, so y_1(0) = 1, y_2(0) = 3. Note that the input agrees with e(1) in the 2nd component and agrees with e(2) in the 1st, 2nd, and 4th components. Since y_2(0) > y_1(0), Maxnet will find that unit Y2 has the best-match exemplar for input vector x = (−1, −1, 1, 1).
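  A compact MATLAB sketch of the lower (feedforward) layer of this example; Maxnet would then be run on each row of y(0), as shown earlier:

```matlab
% Hamming net (lower layer) for the example: n = 4 inputs, m = 2 exemplars.
e = [ 1 -1 -1 -1 ;    % e(1)
     -1 -1 -1  1 ];   % e(2), one exemplar per row
[m, n] = size(e);
W = 0.5 * e';                   % w_ij = 0.5*e_i(j), an n x m matrix
b = 0.5 * n * ones(1, m);       % b_j = n/2

X = [ 1  1 -1 -1 ;              % the four test patterns, one per row
      1 -1 -1 -1 ;
     -1 -1 -1  1 ;
     -1 -1  1  1 ];
Y = X * W + b;                  % y_in_j = number of agreements with e(j)
[~, J] = max(Y, [], 2);         % winners: J = (1; 1; 2; 2), as in the slides
disp(Y); disp(J')
```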
  • 37. 37 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 38. 38 Kohonen Self-Organizing Maps (SOM) Fausett Teuvo Kohonen, born 11-07-34, Finland. Author of the book “Self-Organizing Maps”, Springer (1997). SOM = “topology-preserving maps”. SOMs assume a topological structure among the cluster units. This property is observed in the brain, but not found in other ANNs.
  • 41. 41 Fausett Kohonen Self-Organizing Maps (SOM) Architecture: Linear array of cluster units: * * {* (* [#] *) *} * * where { } marks R = 2, ( ) marks R = 1, and [ ] marks R = 0. This figure represents the neighborhood of the unit designated by # in a 1D topology (with 10 cluster units). Neighbourhoods for a rectangular grid (each unit has 8 neighbours) are concentric squares around the winning unit # (R = 0, R = 1, R = 2):
*******
*******
*******
***#***
*******
*******
*******
  • 42. 42 Fausett Kohonen Self-Organizing Maps (SOM) Architecture: Neighbourhoods for a hexagonal grid (each unit has 6 neighbours), with rings R = 0, R = 1, R = 2 around the winner:
******
******
******
**#***
******
******
******
In each of these illustrations, the “winning unit” is indicated by the symbol # and the other units are denoted by *. NOTE: What if the winning unit is on the edge of the grid? • Winning units that are close to the edge of the grid will have some neighbourhoods that have fewer units than those shown in the respective figures • Neighbourhoods do not “wrap around” from one side of the grid to the other; “missing” units are simply ignored.
  • 43. 43 Training SOM Networks Which nodes are trained? How are they trained? – Training functions – Are all nodes trained the same amount? Two basic types of training sessions? – Initial formation – Final convergence How do you know when to stop? – Small number of changes in input mappings – Max # epochs reached
  • 44. 44 Fausett Kohonen Self-Organizing Maps (SOM) Algorithm: Step 0. Initialise weights w_ij (randomly). Set topological neighbourhood parameters. Set learning rate parameters. Step 1 While stopping condition is false, do steps 2-8. Step 2 For each i/p vector x, do steps 3-5. Step 3 For each j, compute: D(j) = Σ_i (w_ij − x_i)^2 Step 4 Find index J such that D(J) is a minimum. Step 5 For all units j within a specified neighbourhood of J, and for all i: w_ij(new) = w_ij(old) + α [x_i − w_ij(old)] Step 6 Update learning rate. Step 7 Reduce radius of topological neighbourhood at specified times. Step 8 Test stopping condition.
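  For a radius R > 0 on a linear topology, step 5 updates every unit within R positions of the winner (neighbourhoods do not wrap around). A small self-contained MATLAB sketch, with illustrative values:

```matlab
% Steps 3-5 sketch for a linear topology with neighbourhood radius R = 1.
x = [0; 1];  alpha = 0.5;  R = 1;        % illustrative input and parameters
W = rand(2, 10);                         % 10 cluster units in a line
[~, J] = min(sum((W - x).^2, 1));        % steps 3-4: find the winner
for j = max(1, J - R) : min(10, J + R)   % step 5: winner and its neighbours
    W(:, j) = W(:, j) + alpha * (x - W(:, j));
end
```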
  • 45. 45 Fausett Kohonen Self-Organizing Maps (SOM) Alternative structure for reducing R and α: The learning rate α is a slowly decreasing function of time (or training epochs). Kohonen indicates that a linearly decreasing function is satisfactory for practical computations; a geometric decrease would produce similar results. The radius of the neighbourhood around a cluster unit also decreases as the clustering process progresses. The formation of a map occurs in 2 phases: The initial formation of the correct order and the final convergence The second phase takes much longer than the first and requires a small value for the learning rate. Many iterations through the training set may be necessary, at least in some applications.
  • 46. 46 Fausett Kohonen Self-Organizing Maps (SOM) Applications and Examples: The SOM has had many applications: computer-generated music (Kohonen 1989), the traveling salesman problem. Examples: A Kohonen SOM to cluster 4 vectors: – Let the 4 vectors to be clustered be: s(1) = (1, 1, 0, 0) s(2) = (0, 0, 0, 1) s(3) = (1, 0, 0, 0) s(4) = (0, 0, 1, 1) – The maximum number of clusters to be formed is m = 2 (4 input nodes, 2 output nodes). Suppose the learning rate is α(0) = 0.6 and α(t+1) = 0.5 α(t). With only 2 clusters available, the neighborhood of node J (step 4) is set so that only one cluster updates its weights at each step (i.e. R = 0).
  • 47. 47 Fausett Kohonen Self-Organizing Maps (SOM) Examples: Step 0 Initial weight matrix: W = [ .2 .8 ; .6 .4 ; .5 .7 ; .9 .3 ] Initial radius: R = 0. Initial learning rate: α(0) = 0.6. Step 1 Begin training. Step 2 For s(1) = (1, 1, 0, 0), do steps 3-5. Step 3 D(1) = (.2 − 1)^2 + (.6 − 1)^2 + (.5 − 0)^2 + (.9 − 0)^2 = 1.86 D(2) = (.8 − 1)^2 + (.4 − 1)^2 + (.7 − 0)^2 + (.3 − 0)^2 = 0.98
  • 48. 48 Fausett Kohonen Self-Organizing Maps (SOM) Examples: Step 4 The i/p vector is closest to o/p node 2, so J = 2. Step 5 The weights on the winning unit are updated: w_i2(new) = w_i2(old) + 0.6 [x_i − w_i2(old)] = 0.4 w_i2(old) + 0.6 x_i This gives the weight matrix: W = [ .2 .92 ; .6 .76 ; .5 .28 ; .9 .12 ] (The first column of weights has not been modified.)
  • 49. 49 Fausett Kohonen Self-Organizing Maps (SOM) Examples: Step 2 For s(2) = (0, 0, 0, 1), do steps 3-5. Step 3 D(1) = (.2 − 0)^2 + (.6 − 0)^2 + (.5 − 0)^2 + (.9 − 1)^2 = 0.66 D(2) = (.92 − 0)^2 + (.76 − 0)^2 + (.28 − 0)^2 + (.12 − 1)^2 = 2.2768 Step 4 The i/p vector is closest to o/p node 1, so J = 1. Step 5 The weights on the winning unit are updated. This gives the weight matrix: W = [ .08 .92 ; .24 .76 ; .20 .28 ; .96 .12 ] (The second column of weights has not been modified.)
  • 50. 50 Fausett Kohonen Self-Organizing Maps (SOM) Examples: Step 2 For s(3) = (1, 0, 0, 0), do steps 3-5. Step 3 D(1) = (.08 − 1)^2 + (.24 − 0)^2 + (.20 − 0)^2 + (.96 − 0)^2 = 1.8656 D(2) = (.92 − 1)^2 + (.76 − 0)^2 + (.28 − 0)^2 + (.12 − 0)^2 = 0.6768 Step 4 The i/p vector is closest to o/p node 2, so J = 2. Step 5 The weights on the winning unit are updated. This gives the weight matrix: W = [ .08 .968 ; .24 .304 ; .20 .112 ; .96 .048 ] (The first column of weights has not been modified.)
  • 51. 51 Fausett Kohonen Self-Organizing Maps (SOM) Examples: Step 2 For s(4) = (0, 0, 1, 1), do steps 3-5. Step 3 D(1) = (.08 − 0)^2 + (.24 − 0)^2 + (.20 − 1)^2 + (.96 − 1)^2 = 0.7056 D(2) = (.968 − 0)^2 + (.304 − 0)^2 + (.112 − 1)^2 + (.048 − 1)^2 = 2.724 Step 4 The i/p vector is closest to o/p node 1, so J = 1. Step 5 The weights on the winning unit are updated. This gives the weight matrix: W = [ .032 .968 ; .096 .304 ; .680 .112 ; .984 .048 ] (The second column of weights has not been modified.) Step 6 Reduce the learning rate: α = 0.5(0.6) = 0.3
  • 52. 52 Fausett Kohonen Self-Organizing Maps (SOM) Examples: The weight update equations are now: w_ij(new) = w_ij(old) + 0.3 [x_i − w_ij(old)] = 0.7 w_ij(old) + 0.3 x_i The weight matrix after the 2nd epoch of training is: W = [ .016 .980 ; .047 .360 ; .630 .055 ; .999 .024 ] Modifying the learning rate, and after 100 iterations (epochs), the weight matrix appears to be converging to: W = [ 0.0 1.0 ; 0.0 0.5 ; 0.5 0.0 ; 1.0 0.0 ] The 1st column is the average of the 2 vectors in cluster 1, and the 2nd column is the average of the 2 vectors in cluster 2.
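  The whole example fits in a few lines of MATLAB. This sketch uses the slide's initial weights but keeps the simple halving schedule for α throughout (the slides modify the schedule after the 2nd epoch, so final values may differ slightly):

```matlab
% SOM sketch for the 4-vector example: m = 2 clusters, R = 0,
% alpha(0) = 0.6 with alpha halved after each epoch (simplified schedule).
s = [ 1 1 0 0 ;
      0 0 0 1 ;
      1 0 0 0 ;
      0 0 1 1 ];                        % one training vector per row
W = [ .2 .8 ; .6 .4 ; .5 .7 ; .9 .3 ];  % initial 4 x 2 weight matrix
alpha = 0.6;
for epoch = 1:100
    for p = 1:size(s, 1)
        x = s(p, :)';                   % current input as a column
        D = sum((W - x).^2, 1);         % squared Euclidean distances
        [~, J] = min(D);                % winning cluster unit
        W(:, J) = W(:, J) + alpha * (x - W(:, J));  % update winner (R = 0)
    end
    alpha = 0.5 * alpha;                % decrease the learning rate
end
disp(W)  % columns tend toward the cluster averages (0,0,.5,1) and (1,.5,0,0)
```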
  • 53. 53 Example of a SOM That Uses a 2-D Neighborhood Character Recognition – Fausett, pp. 176-178
  • 54. 54 Traveling Salesman Problem (TSP) by SOM The aim of the TSP is to find a tour of a given set of cities that is of minimum length. A tour consists of visiting each city exactly once and returning to the starting city. The net uses the city coordinates as input (n = 2); there are as many cluster units as there are cities to be visited. The net has a linear topology (with the first and last unit also connected). The figure shows the initial position of the cluster units and the location of the cities.
  • 55. 55 Traveling Salesman Problem (TSP) by SOM One figure shows the result after 100 epochs of training with R = 1 (learning rate decreasing from 0.5 to 0.4). The other shows the final tour after 100 epochs of training with R = 0. Two candidate solutions: ADEFGHIJBC or ADEFGHIJCB.
  • 56. 56 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 59. 59 Creating a SOM newsom -- Create a self-organizing map Syntax net = newsom net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND) Description net = newsom creates a new network with a dialog box. net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND) takes: PR - R x 2 matrix of min and max values for R input elements. Di - Size of ith layer dimension, defaults = [5 8]. TFCN - Topology function, default = 'hextop'. DFCN - Distance function, default = 'linkdist'. OLR - Ordering phase learning rate, default = 0.9. OSTEPS - Ordering phase steps, default = 1000. TLR - Tuning phase learning rate, default = 0.02. TND - Tuning phase neighborhood distance, default = 1. and returns a new self-organizing map. The topology function TFCN can be hextop, gridtop, or randtop. The distance function can be linkdist, dist, or mandist.
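  A minimal usage sketch based on the syntax above (older Neural Network Toolbox API; the data, map size, and epoch count are illustrative choices, not from the slides):

```matlab
% Illustrative use of newsom (older Neural Network Toolbox).
P = rand(2, 400);                 % 400 random 2-D input vectors
net = newsom(minmax(P), [5 6]);   % 5 x 6 map; minmax(P) gives the R x 2 ranges
net.trainParam.epochs = 100;      % shorten training for the demo
net = train(net, P);              % unsupervised training on the inputs
a = sim(net, P);                  % winning unit (1-of-30 coding) per input
```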
  • 60. 60 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 61. 61 Suggested Reading. L. Fausett, “Fundamentals of Neural Networks”, Prentice-Hall, 1994, Chapter 4. J. Zurada, “Artificial Neural Systems”, West Publishing, 1992, Chapter 7.
  • 62. 62 References: These lecture notes were based on the references of the previous slide and the following references: 1. Berlin Chen, lecture notes, National Taiwan Normal University, Taipei, Taiwan, ROC. http://140.122.185.120 2. Dan St. Clair, University of Missouri-Rolla, http://web.umr.edu/~stclair/class/classfiles/cs404_fs02/Misc/CS404_fall2001/Lectures/, Lecture 14.