2. A neural network is a processing device whose design is inspired by the structure and functioning of the human brain and its components.
Different neural network algorithms are used for pattern recognition.
The algorithms differ in their learning mechanisms.
All learning methods used for adaptive neural networks
can be classified into two major categories:
Supervised learning
Unsupervised learning
6/4/2015 2
3. The network's capability for solving complex pattern recognition problems is studied under:
Noise in weights
Noise in inputs
Loss of connections
Missing information and added information
4. The primary function of the network is to retrieve a pattern stored in memory when an incomplete or noisy version of that pattern is presented.
The Hamming network is a two-layer classifier of binary bipolar vectors.
The first layer by itself is capable of selecting the stored class that is at the minimum Hamming distance (HD) from the test vector presented at the input.
The second layer, MAXNET, only suppresses outputs.
6. The hamming network is of the feed forward type. The
number of output neurons in this part equals the number
of classes.
The strongest response of a neuron in this layer indicates the minimum HD value between the input vector and the class this neuron represents.
The second layer is MAXNET, which operates as a recurrent
network. It involves both excitatory and inhibitory
connections.
8. The purpose of the first layer is to compute, in a feed-forward manner, the values of (n − HD), where HD is the Hamming distance between the search argument and the encoded class prototype vector.
For the Hamming net we have:
input vector X
p classes => p output neurons
output vector Y = [y1, …, yp]
9. For any output neuron m, m = 1, …, p, let
Wm = [wm1, wm2, …, wmn]t, m = 1, 2, …, p
be the weights between input X and each output neuron.
Also, assume that for each class m one has the prototype vector S(m) as the standard to be matched.
10. For classifying p classes, the outputs of the classifier are
XtS(1), XtS(2), …, XtS(m), …, XtS(p)
So when X = S(m), the m'th output is n and the other outputs are smaller than n.
The maximum output n happens only when X = S(m), which suggests taking W(m) = S(m).
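This maximality property can be checked numerically. The sketch below uses the C/I/T prototype vectors defined later in the example slides:

```python
import numpy as np

# Prototype vectors S(1), S(2), S(3) from the C/I/T example slides
S = np.array([
    [ 1, 1,  1,  1, -1, -1,  1, 1,  1],   # S(1): C
    [-1, 1, -1, -1,  1, -1, -1, 1, -1],   # S(2): I
    [ 1, 1,  1, -1,  1, -1, -1, 1, -1],   # S(3): T
])
n = S.shape[1]  # n = 9

# When X equals a prototype, X^t S(m) reaches its maximum value n;
# every other prototype scores strictly less.
X = S[0]                # X = S(1)
scores = S @ X          # [X^t S(1), X^t S(2), X^t S(3)]
print(scores)           # first entry is n = 9, the others are smaller
```

Here the score for the matching class is exactly n = 9, while the mismatched prototypes score lower because each disagreeing bit contributes −1 instead of +1.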
11. Since matching bits contribute +1 and mismatching bits contribute −1 to the inner product,
Xt S(m) = (n − HD(X, S(m))) − HD(X, S(m))
∴ ½ XtS(m) = n/2 − HD(X, S(m))
So the weight matrix is WH = ½S, i.e.

WH = ½ [ S1(1)  S2(1)  …  Sn(1)
         S1(2)  S2(2)  …  Sn(2)
           ⋮      ⋮          ⋮
         S1(p)  S2(p)  …  Sn(p) ]
12. By giving a fixed bias n/2 to the input,
netm = ½XtS(m) + n/2 for m = 1, 2, …, p
or
netm = n − HD(X, S(m))
To scale the outputs down from the range 0…n to 0…1, one can apply the transfer function
f(netm) = (1/n)·netm for m = 1, 2, …, p
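A minimal sketch of this first-layer computation, reusing the C/I/T prototypes from the later example slides; presenting a stored prototype as X should yield an output of exactly 1 for its own class:

```python
import numpy as np

S = np.array([
    [ 1, 1,  1,  1, -1, -1,  1, 1,  1],   # S(1): C
    [-1, 1, -1, -1,  1, -1, -1, 1, -1],   # S(2): I
    [ 1, 1,  1, -1,  1, -1, -1, 1, -1],   # S(3): T
])
n = S.shape[1]

def hamming_layer(X):
    """net_m = 1/2 X^t S(m) + n/2, then scale by 1/n so outputs lie in 0..1."""
    net = 0.5 * (S @ X) + n / 2   # equals n - HD(X, S(m)) for each class m
    return net / n

y0 = hamming_layer(S[1])   # present the stored prototype I itself
print(y0)                  # class 2 scores exactly 1.0, the others less
```

Each output equals 1 − HD/n, so a perfect match scores 1 and every flipped bit lowers the score by 1/n.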
14. So the node with the highest output is the node with the smallest HD between the input and the prototype vectors S(1), …, S(p), i.e. (for a perfect match)
f(netm) = 1
while for the other nodes
f(netm) < 1
The purpose of MAXNET is to let max{y1, …, yp} equal 1 and let the others equal 0.
17. And
netk = WM Yk
Yk+1 = f(netk)
So the transfer function is
f(net) = 0 for net < 0
f(net) = net for net ≥ 0
18. Each entry of the updated vector decreases at the k’th
recursion step under the MAXNET update algorithm,
with the largest entry decreasing slowest.
19. Step 1: Consider that the patterns to be classified are a(1), a(2), …, a(p); each pattern is n-dimensional. The weights connecting the inputs to the neurons of the Hamming network are given by the weight matrix

WH = ½ [ a11  a12  …  a1n
         a21  a22  …  a2n
          ⋮    ⋮        ⋮
         ap1  ap2  …  apn ]
20. Step 2: The n-dimensional input vector X is presented to the input.
Step 3: The net input of each neuron of the Hamming network is
netm = ½XtS(m) + n/2 for m = 1, 2, …, p
where n/2 is the fixed bias applied to the input of each neuron of this layer.
Step 4: The output of each neuron of the first layer is
f(netm) = (1/n)·netm for m = 1, 2, …, p
21. Step 5: The output of the Hamming network is applied as input to MAXNET:
Y0 = f(netm)
Step 6: The weights connecting the neurons of the Hamming network and MAXNET are taken as

WM = [  1  −ε  …  −ε
       −ε   1  …  −ε
        ⋮   ⋮       ⋮
       −ε  −ε  …   1 ]   (p×p)
22. Here ε must be bounded by 0 < ε < 1/p; the quantity ε can be called the lateral interaction coefficient. The dimension of WM is p×p.
Step 7: The output of MAXNET is calculated as
netk = WM Yk
Yk+1 = f(netk)
with
f(net) = 0 for net < 0
f(net) = net for net ≥ 0
where k = 1, 2, 3, … denotes the number of iterations.
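Steps 6–7 can be sketched as follows. The stopping rule used here (iterate until at most one entry remains positive) is an assumed convergence test, not one spelled out on the slides:

```python
import numpy as np

def maxnet(y0, eps=0.2, max_iter=100):
    """Iterate Y^{k+1} = f(W_M Y^k) until only the largest entry stays positive.
    eps must satisfy 0 < eps < 1/p, where p = len(y0)."""
    p = len(y0)
    # W_M: 1 on the diagonal, -eps everywhere else
    W_M = (1 + eps) * np.eye(p) - eps * np.ones((p, p))
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        net = W_M @ y
        y_next = np.maximum(net, 0.0)   # f(net) = 0 for net < 0, net otherwise
        if np.count_nonzero(y_next) <= 1:
            return y_next
        y = y_next
    return y

y = maxnet([0.6, 0.4, 0.5])
print(np.argmax(y))   # index 0: only the largest initial entry survives
```

Every recursion subtracts a fraction ε of the competing activations from each entry, so all entries shrink but the largest shrinks slowest, exactly as slide 18 states.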
23. Ex: To have a Hamming Net for classifying C , I , T
then
S(1) = [ 1 1 1 1 -1 -1 1 1 1 ]t
S(2) = [ -1 1 -1 -1 1 -1 -1 1 -1 ]t
S(3) = [ 1 1 1 -1 1 -1 -1 1 -1 ]t
So,

WH = ½ [  1  1  1  1 −1 −1  1  1  1
         −1  1 −1 −1  1 −1 −1  1 −1
          1  1  1 −1  1 −1 −1  1 −1 ]
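Building the weight matrix WH = ½S for this C/I/T example:

```python
import numpy as np

# The three prototype vectors from the slide (C, I, T)
S = np.array([
    [ 1, 1,  1,  1, -1, -1,  1, 1,  1],   # S(1): C
    [-1, 1, -1, -1,  1, -1, -1, 1, -1],   # S(2): I
    [ 1, 1,  1, -1,  1, -1, -1, 1, -1],   # S(3): T
])

W_H = 0.5 * S          # W_H = (1/2) S, one row per class
print(W_H.shape)       # (3, 9): p = 3 classes, n = 9 inputs
```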
28. K = 1:
net1 = [0.520  −0.120  0.200]t
Y2 = [0.520  0  0.200]t
K = 2:
net2 = [0.480  −0.144  0.096]t
Y3 = [0.480  0  0.096]t
29. K = 3:
net3 = [0.461  −0.115  0]t
Y4 = [0.461  0  0]t
The result computed by the network after four recurrences indicates that the vector x presented at the input is at the smallest HD from S(1).
So it represents the distorted character C.
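The whole C/I/T classification can be run end to end. The distorted-C input below is a hypothetical illustration (two components of S(1) zeroed out), not the exact vector x used on the slides:

```python
import numpy as np

S = np.array([
    [ 1, 1,  1,  1, -1, -1,  1, 1,  1],   # S(1): C
    [-1, 1, -1, -1,  1, -1, -1, 1, -1],   # S(2): I
    [ 1, 1,  1, -1,  1, -1, -1, 1, -1],   # S(3): T
])
n, eps = S.shape[1], 0.2

# Hypothetical distorted C: two components of S(1) suppressed to 0.
# (The slides' actual input vector x is not reproduced here.)
x = S[0].astype(float).copy()
x[4] = 0.0
x[7] = 0.0

y = (0.5 * (S @ x) + n / 2) / n              # Hamming-layer output Y0
W_M = (1 + eps) * np.eye(3) - eps * np.ones((3, 3))
for _ in range(100):                          # MAXNET recursions
    y_next = np.maximum(W_M @ y, 0.0)         # f(net) zeroes negative entries
    if np.count_nonzero(y_next) <= 1:
        y = y_next
        break
    y = y_next
print(np.argmax(y))                           # 0 -> class C wins
```

As on the slides, only the class at the smallest Hamming distance (here C) keeps a positive activation after the recursions.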
30. Noise is introduced in the input by adding random numbers.
The Hamming network and MAXNET recognize all the stored strings correctly even after introducing noise at the time of testing.
31. In the network, neurons are interconnected and every interconnection has an interconnecting coefficient called a weight.
If some of these weights are set to zero, how does this affect the classification or recognition?
Of interest is the number of connections that can be removed such that the network performance is not affected.
32. Missing information means that some of the on pixels in the pattern grid are made off.
For the algorithm, how much information can be missing while the strings are still recognized correctly varies from string to string.
Of interest is the number of pixels that can be switched off for all the stored strings in the algorithm.
33. Adding information means that some of the off pixels in the pattern grid are made on.
Of interest is the number of pixels that can be made on for all the strings stored in the network.
34. The network architecture is very simple.
This network is a counterpart of the Hopfield auto-associative network.
The advantage of this network is that it involves fewer neurons and fewer connections than its counterpart.
There is no capacity limitation.
35. The Hamming network retrieves only the closest class index, not the entire prototype vector.
It is not able to restore any of the key patterns; it provides passive classification only.
The network does not have any mechanism for data restoration: it cannot restore a distorted pattern.
36. Jacek M. Zurada, "Introduction to Artificial Neural Systems", Jaico Publishing House, New Delhi, India.
Amit Kumar Gupta, Yash Pal Singh, "Analysis of Hamming Network and MAXNET of Neural Network Method in the String Recognition", IEEE, 2011.
C. M. Bishop, "Neural Networks for Pattern Recognition", Oxford University Press, Oxford, 2003.