Artificial Neural Networks
Architectures
Perceptron Network
• Weights between the input & output units are adjusted.
• Weights between the sensory and associator units are fixed.
• Goal of the Perceptron net is to classify the input pattern as a member or not a member of a particular class.
[Figure: perceptron with inputs X1 … Xi … Xn, bias b (X0 = 1), weights W1 … Wn, and output unit Y]
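The classification rule above can be sketched in code. This is a minimal illustration, assuming bipolar (+1/−1) targets, a step activation, and a learning rate of 1; the function and variable names are my own, not from the slides.

```python
import numpy as np

def train_perceptron(X, t, lr=1.0, epochs=10):
    """Adjust the weights between input and output units until the
    net classifies each pattern as a member (+1) or non-member (-1)."""
    w = np.zeros(X.shape[1])   # adjustable weights
    b = 0.0                    # bias weight
    for _ in range(epochs):
        for x, target in zip(X, t):
            y = 1 if x @ w + b >= 0 else -1   # step activation
            if y != target:                   # update only on error
                w += lr * target * x
                b += lr * target
    return w, b

# Illustrative linearly separable task: bipolar AND
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
t = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, t)
```

After training, `x @ w + b` is positive only for patterns in the class.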
Adaline Network
• Receives input from several units and one bias unit.
• Inputs are +1 or −1; weights carry a sign (+ or −).
• The calculated net input is applied to a quantizer function to restore the output to +1 or −1.
• Compares the actual response with the target and uses the difference to adjust the weights (delta rule).
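One Adaline update can be sketched as follows. This is an illustrative fragment assuming the standard delta (LMS) rule, where weights are adjusted from the raw net input and the quantizer is applied only to produce the bipolar output; the names and learning rate are my own.

```python
import numpy as np

def adaline_step(w, b, x, target, lr=0.1):
    net = x @ w + b                  # net input from inputs plus bias unit
    err = target - net               # compare actual response with target
    w = w + lr * err * x             # delta-rule weight adjustment
    b = b + lr * err
    y = 1 if net >= 0 else -1        # quantizer restores output to +1/-1
    return w, b, y

w, b = np.zeros(2), 0.0
w, b, y = adaline_step(w, b, np.array([1, -1]), target=1)
```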
Madaline Network
• Contains "n" units in the input layer, "m" units in the Adaline layer, and "1" unit in the Madaline layer.
• Each neuron in the Adaline and Madaline layers has a bias of excitation 1.
• The Adaline layer sits between the input layer and the output (Madaline) layer.
• Used in communication systems, equalizers, and noise-cancellation devices.
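The layered structure can be illustrated with a forward pass: n input units feed m Adaline units, whose bipolar outputs feed the single Madaline unit. The weights below are hand-picked to realize XOR, a classic Madaline demonstration; they are illustrative, not taken from the slides.

```python
import numpy as np

def madaline_forward(x, W_hidden, b_hidden, w_out, b_out):
    # Adaline layer: each unit applies a bipolar step to its net input
    z = np.where(x @ W_hidden + b_hidden >= 0, 1, -1)
    # Madaline output unit combines the Adaline responses
    return 1 if z @ w_out + b_out >= 0 else -1

# Two Adalines, each detecting one "inputs differ" corner, OR-ed together
W_hidden = np.array([[1.0, -1.0], [-1.0, 1.0]])
b_hidden = np.array([-1.0, -1.0])
w_out, b_out = np.array([1.0, 1.0]), 1.0

outputs = [madaline_forward(np.array(x), W_hidden, b_hidden, w_out, b_out)
           for x in [(-1, -1), (-1, 1), (1, -1), (1, 1)]]
```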
Back Propagation Network
• A multilayer feed-forward network consisting of input, hidden and output layers.
• Hidden and output layers have biases whose activation is 1.
• Error signals are propagated in the reverse direction during the learning phase.
• Inputs sent to the BPN and outputs obtained from the net could be either binary (0, 1) or bipolar (−1, +1).
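One training step of such a network can be sketched as below: a forward pass through input, hidden and output layers, then error signals travelling in the reverse direction. This is an illustrative 2-2-1 sigmoid network with made-up initial weights and learning rate, not the slides' own example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpn_step(x, t, W1, b1, W2, b2, lr=0.5):
    # Forward phase: input -> hidden -> output
    h = sigmoid(x @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Learning phase: error signals move in the reverse direction
    delta_out = (t - y) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 += lr * np.outer(h, delta_out); b2 += lr * delta_out
    W1 += lr * np.outer(x, delta_hid); b1 += lr * delta_hid
    return y

# Deterministic toy initialization; repeated steps on one pattern
W1 = np.array([[0.1, -0.2], [0.3, 0.2]]); b1 = np.zeros(2)
W2 = np.array([[0.2], [-0.1]]);           b2 = np.zeros(1)
x, t = np.array([1.0, 0.0]), np.array([1.0])

y_first = bpn_step(x, t, W1, b1, W2, b2)
for _ in range(200):
    y_last = bpn_step(x, t, W1, b1, W2, b2)
```

Repeating the step drives the output toward the target as the weights absorb the back-propagated error.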
Auto Associative Memory Network
• Training input and target output vectors are the same.
• The input layer consists of n input units and the output layer consists of n output units.
• Input and output units are connected through weighted interconnections.
• Input and output vectors are perfectly correlated with each other, component by component.
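Storing and recalling a pattern can be sketched with the Hebb rule, where training input and target vectors are the same. The stored pattern and the noise example are illustrative.

```python
import numpy as np

def train_auto(patterns):
    """Hebbian training: weight w_ij accumulates x_i * x_j, so each
    stored vector is its own target."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for x in patterns:
        W += np.outer(x, x)
    return W

def recall(W, x):
    # Bipolar step on the net input recovers the stored pattern
    return np.where(x @ W >= 0, 1, -1)

P = np.array([[1, -1, 1, -1]])
W = train_auto(P)
```

Because input and output are component-by-component correlated, recall also cleans up a version of the pattern with one flipped component.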
Maxnet
• Symmetrical weights are present over the weighted interconnections.
• Weights between neurons are inhibitory and fixed.
• The Maxnet with this structure can be used as a subnet to select the particular node whose net input is the largest.
[Figure: Maxnet with units X1 … Xi … Xj … Xm, self-excitatory weight 1 on each unit and fixed mutual inhibitory weight −ε between units]
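The winner-selection behaviour can be sketched as an iteration: each unit keeps its own activation (self-weight 1) and is inhibited by every other unit through the fixed weight −ε, until only one unit stays positive. The ε value and the input activations below are illustrative (ε must be below 1/m for m units).

```python
import numpy as np

def maxnet(a, eps=0.15):
    """Iterate the fixed inhibitory dynamics until a single unit,
    the one with the largest initial net input, remains positive."""
    a = np.array(a, dtype=float)
    while np.count_nonzero(a > 0) > 1:
        # each unit: own activation minus eps times the sum of the others
        a = np.maximum(0.0, a - eps * (a.sum() - a))
    return a

winner = maxnet([0.2, 0.4, 0.6, 0.8])
```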
Mexican Hat Net
• Neurons are arranged in a linear order such that positive connections exist between Xi and its neighbouring units, and negative connections between Xi and far-away units.
• The positive region is cooperation and the negative region is competition.
• The size of these regions depends on the magnitude of the weights that exist between the positive and negative regions.
[Figure: Mexican Hat net over units X(i−3) … X(i−1), X(i), X(i+1) … X(i+3), with weights W0 … W3 decreasing (and turning negative) with distance from X(i)]
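One update of this lateral interaction can be sketched over a line of neurons: near neighbours contribute through positive weights (cooperation), far units through negative weights (competition). The kernel values below are illustrative, not the slides' W0 … W3.

```python
import numpy as np

def mexican_hat_step(x, kernel=(-0.4, 0.6, 1.0, 0.6, -0.4)):
    """Apply the symmetric cooperation/competition weights around
    each unit; activations are kept non-negative."""
    n, k = len(x), len(kernel) // 2
    out = np.zeros(n)
    for i in range(n):
        s = 0.0
        for j, w in enumerate(kernel, start=-k):  # offsets -k .. +k
            if 0 <= i + j < n:
                s += w * x[i + j]
        out[i] = max(0.0, s)
    return out

x = np.zeros(9); x[4] = 1.0        # single active unit in the middle
y = mexican_hat_step(x)
```

The active unit reinforces itself and its neighbours while suppressing units further away, sharpening the bump of activity.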


Editor's Notes

  • #3 The learning signal is the difference between the desired and the actual response of a neuron. For the perceptron learning rule, consider a finite number N of input training vectors x(n) with associated target (desired) values t(n), where n runs from 1 to N and each target is either +1 or −1. The output y is obtained on the basis of the net input calculated and the activation function applied over that net input.
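The rule described in the note can be written out compactly. This is a reconstruction of the standard perceptron learning rule; the learning rate α and threshold θ are conventional symbols not named in the note.

```latex
y_{in} = b + \sum_{i} x_i w_i, \qquad
y = f(y_{in}) =
\begin{cases}
+1 & \text{if } y_{in} > \theta \\
\;\;0 & \text{if } -\theta \le y_{in} \le \theta \\
-1 & \text{if } y_{in} < -\theta
\end{cases}
```

When the output does not match the target (y ≠ t), the weights and bias are adjusted; otherwise they are left unchanged:

```latex
w_i^{\text{new}} = w_i^{\text{old}} + \alpha\, t\, x_i, \qquad
b^{\text{new}} = b^{\text{old}} + \alpha\, t
```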