2. Introduction
Neural computing tries to model the working of neurons in the human
brain.
It is a subset of Artificial Intelligence, and represents one of the facets
of "intelligence".
Note that neural networks only try to model neurons; they do not
attempt to replicate the functioning of the human brain.
Amit Praseed Classification November 5, 2019 2 / 22
3. How do neurons work?
A neuron is composed of three main parts: dendrites, an axon, and axon
terminals.
Neurons are connected end to end, i.e. the dendrites of one neuron are
connected to the axon terminals of another neuron to enable information
flow.
However, between the axon terminals and the dendrites of the next
neuron there is a gap, which is called the synapse.
Electrical signals flowing through a neuron are passed on to the
next neuron across this synapse, which attenuates the signal.
[Figure: diagram of a neuron. By BruceBlaus - Own work, CC BY 3.0]
4. The Perceptron
The simplest model of a neuron is the Perceptron.
A perceptron consists of n inputs x1, x2, ..., xn, which model the electrical
signals flowing into a neuron.
To model the attenuation across a synapse, each of these inputs is
multiplied by a weight w1, w2, ..., wn.
So the total input that reaches the neuron is x1w1 + x2w2 + ... +
xnwn.
The perceptron is said to "fire" only if this total input exceeds a
threshold value or bias θ.
a(t) = 1, if Σ_{i=0}^{n} wi ∗ xi > 0
a(t) = 0, if Σ_{i=0}^{n} wi ∗ xi ≤ 0
where w0 = −θ and x0 = 1, so that the threshold is absorbed into the sum,
and η denotes the learning rate used in the update rule.
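The activation rule can be sketched in a few lines of Python (a minimal illustration; the weight and input values below are made up, with the bias folded into the sum):

```python
def activation(w, x):
    # Fires (returns 1) when the weighted input exceeds 0; the bias is
    # absorbed via w[0] = -theta and the fixed input x[0] = 1.
    weighted_input = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if weighted_input > 0 else 0

# Made-up example: theta = 0.5, so w[0] = -0.5
w = [-0.5, 1.0, 0.5]
x = [1, 1.0, 1.0]        # x[0] = 1, then x1 = 1, x2 = 1
print(activation(w, x))  # 1, since -0.5 + 1.0 + 0.5 = 1.0 > 0
```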
Amit Praseed Classification November 5, 2019 4 / 22
5. The Perceptron Update Rule
The weights of the perceptron are updated according to the following rule:
wi(t + 1) = wi(t) + η ∗ xi, if ExpectedOutput > ObservedOutput
wi(t + 1) = wi(t) − η ∗ xi, if ExpectedOutput < ObservedOutput
When the expected output exceeds the observed output, the weighted input
was too small, so the weights are increased; in the opposite case they are
decreased. If the classification is correct, the weights are left unchanged.
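The update rule can be sketched as follows (a minimal illustration; η = 0.2 and the weights and point are taken from the worked example later in the slides):

```python
def update(w, x, expected, observed, eta=0.2):
    # Expected > observed: the weighted input was too small, raise weights.
    if expected > observed:
        return [wi + eta * xi for wi, xi in zip(w, x)]
    # Expected < observed: the weighted input was too large, lower weights.
    if expected < observed:
        return [wi - eta * xi for wi, xi in zip(w, x)]
    return w  # correct classification: leave the weights unchanged

# Weights and misclassified point from slide 16: expected 1, observed 0
new_w = update([-0.2, 0.6, 0.9], [1, -2, 1], expected=1, observed=0)
print([round(v, 10) for v in new_w])  # [0.0, 0.2, 1.1]
```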
6. The Perceptron
[Plot: the x1–x2 plane, axes from −2 to 2, showing the decision boundary]
Let η = 0.2 and W = (w0, w1, w2) = (0, 1, 0.5).
The line represented by the weight vector is
w0x0 + w1x1 + w2x2 = 0, i.e. x1 + 0.5x2 = 0.
7. The Perceptron
Point: (1, 1)
Weighted Input = 1(0) + 1(1) + 1(0.5) = 1.5 > 0
Activation: 1 (Correct Classification)
Action: Do Nothing
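This check can be reproduced directly, using the weights W = (0, 1, 0.5) from the previous slide:

```python
# Point (1, 1) with the fixed input x0 = 1; weights from slide 6.
w = [0, 1, 0.5]
x = [1, 1, 1]
weighted_input = sum(wi * xi for wi, xi in zip(w, x))
print(weighted_input)  # 1.5, which is > 0, so the perceptron fires
```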
16. A Look Back at the Finale
Point: (-2, 1)
Weighted Input = 1(-0.2) + (-2)(0.6) + (1)(0.9) = -0.5 < 0
Activation: 0 (Incorrect Classification)
Update the weight vector:
(w0, w1, w2) = (-0.2, 0.6, 0.9) + 0.2 ∗ (1, -2, 1) = (0, 0.2, 1.1)
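The arithmetic of this update step can be checked in a few lines:

```python
# Weight vector, misclassified point (with x0 = 1), and learning rate
# exactly as on this slide.
w = [-0.2, 0.6, 0.9]
x = [1, -2, 1]
eta = 0.2
new_w = [wi + eta * xi for wi, xi in zip(w, x)]
print([round(v, 10) for v in new_w])  # [0.0, 0.2, 1.1]
```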
18. A Look Back at the Finale
Point: (1.5, -0.4)
Weighted Input = 1(0) + (1.5)(0.2) + (-0.4)(1.1) = -0.14 < 0
Activation: 0 (Incorrect Classification)
Update the weight vector:
(w0, w1, w2) = (0, 0.2, 1.1) + 0.2 ∗ (1, 1.5, -0.4) = (0.2, 0.5, 1.02)
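Using the point (1.5, -0.4) consistently in both the activation check and the update, the step can be verified as:

```python
# Weights from the previous update; point (1.5, -0.4) with x0 = 1.
w = [0, 0.2, 1.1]
x = [1, 1.5, -0.4]
eta = 0.2
weighted_input = sum(wi * xi for wi, xi in zip(w, x))
print(round(weighted_input, 10))      # -0.14, so the activation is 0
new_w = [wi + eta * xi for wi, xi in zip(w, x)]
print([round(v, 10) for v in new_w])  # [0.2, 0.5, 1.02]
```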
20. Widrow-Hoff Delta Rule
Modifying the weights by the full input value on every misclassification
seems excessive, and can cause the decision boundary to swing back
and forth before it stabilizes.
Intuitively, it is more reasonable to modify the weights in proportion to
the difference between the threshold and the weighted input:
wi(t + 1) = wi(t) + η ∗ ∆ ∗ xi(t)
∆ = θ − Σ_{i=0}^{n} wi ∗ xi
Note that in this way, there is no need for separate equations for the
weighted input lying on either side of the threshold; the sign of ∆
automatically takes care of this.
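A sketch of the delta rule in the slide's notation, where the size of the correction is proportional to ∆ (the target value θ and all numbers below are made-up illustrations):

```python
def delta_update(w, x, theta, eta=0.2):
    # Delta is the gap between the target theta and the weighted input:
    # a large miss produces a large correction, a near-miss a small one.
    delta = theta - sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * delta * xi for wi, xi in zip(w, x)]

# Made-up example: the weighted input starts at 0.7, the target is 1.0.
new_w = delta_update([0.0, 0.2, 1.1], [1, -2, 1], theta=1.0)
```

After one step the weighted input moves from 0.7 to 1.06, so the gap to the target shrinks from 0.3 to 0.06 in a single, sign-aware correction.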