Neural network problems
Outline
1. Single-layer perceptron
a) Using perceptron learning algorithm
b) Using delta rule
2. Single-layer perceptron with multiple outputs
1. Single-layer perceptron
• Before using an SLP, make sure the data is linearly separable
• Visualize the data (not practical for more than 2 features)
1. Single-layer perceptron
• Visualization example (2 features)
x1 x2 t
2 3 0
-3 3 1
3 4 0
-1 2 1
7 2 0
0 1 1
0 -2 1
[Scatter plot of the seven points in the (x1, x2) plane, built up point by point across the original slides]
Linearly separable: we can use SLP
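A quick way to do this check in practice (a minimal Python sketch, assuming matplotlib is available; the data is taken from the table above):

import matplotlib.pyplot as plt

# (x1, x2, t) rows from the table above
data = [(2, 3, 0), (-3, 3, 1), (3, 4, 0), (-1, 2, 1),
        (7, 2, 0), (0, 1, 1), (0, -2, 1)]

# Plot the two classes with different markers and check by eye
# whether one straight line can separate them.
for x1, x2, t in data:
    plt.scatter(x1, x2, marker="o" if t == 0 else "x",
                color="blue" if t == 0 else "red")
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()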
1. Single-layer perceptron
• Visualization example (2 features)
x1 x2 t
2 3 0
-3 3 1
3 4 0
-1 2 1
7 2 0
0 1 1
0 -2 1
3 8 1
7 5 1
[Scatter plot: with the two added points, no straight line separates the classes]
Non-linearly separable: we can NOT use SLP
1. Single-layer perceptron
• Visualization example (1 feature)
x t
1.1 0
2.8 1
-0.5 0
1.2 0
4 1
2.6 0
3.4 1
[Number line with the class-0 points (-0.5, 1.1, 1.2, 2.6) and the class-1 points (2.8, 3.4, 4), plotted one per slide in the original deck]
We can separate
the two classes
with one point:
We can use SLP
Note: in 1D, SLP is a point separator
1. a) Perceptron learning algorithm
[The algorithm statement and update rule appear as figures in the original slides]
• The step size depends on oi and is not controlled, because oi can be large.
• It assumes the output is either 0 or 1.
• This may cause oscillations.
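As a sketch, the update rule that the following walkthrough applies, written in Python (add the input when the output is too low, subtract it when it is too high):

def perceptron_step(w, x, t, y):
    # w and x include the bias term; x[-1] is the constant 1.
    if y == t:
        return w                                  # correct output: keep weights
    if t == 1:                                    # y = 0 but we want 1: increase
        return [wi + xi for wi, xi in zip(w, x)]
    return [wi - xi for wi, xi in zip(w, x)]      # y = 1 but we want 0: decrease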
1. a) Perceptron learning algorithm example
x1 x2 t
0 0 0
0 1 1
1 0 1
1 1 1
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 0
0 1 1
1 0 1
1 1 1
Add new columns
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0
0 1 1 1
1 0 1 1
1 1 1 1
Bias node always
produces 1
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 0
0 1 1 1
1 0 1 1
1 1 1 1
Enter the initial weights (given)
If they are not given: assume random weights
(but not 0)
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0
0 1 1 1
1 0 1 1
1 1 1 1
Calculate net = x1*w1 + x2*w2 + bias*w_bias
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 1
1 0 1 1
1 1 1 1
Calculate y =
1 if net >= threshold,
0 if net < threshold
The threshold should be given. If not, assume a random
threshold
Here we assume threshold = 0.1 → net < threshold → y = 0
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 1
1 0 1 1
1 1 1 1
y = t ? yes
Weights will not be changed
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 1
1 0 1 1
1 1 1 1
Use the same weights for next pattern
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 1
1 0 1 1
1 1 1 1
Calculate net = x1*w1 + x2*w2 + bias*w_bias
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 1
1 1 1 1
net < 0.1 → y = 0
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 1
1 1 1 1
y != t
We need to change the weights
y = 0 but we want y = 1 → increase the weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 1
1 1 1 1
w1 := w1 + x1
w2 := w2 + x2
w_bias := w_bias + bias
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 0.9 1
1 1 1 1
Calculate net = x1*w1 + x2*w2 + bias*w_bias
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 0.9 1 1
1 1 1 1
net >= 0.1 → y = 1
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 0.9 1 1
1 1 1 0.1 1.2 0.8 1
y = t
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 0.9 1 1
1 1 1 0.1 1.2 0.8 2.1 1
Calculate net = x1*w1 + x2*w2 + bias*w_bias
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 0.9 1 1
1 1 1 0.1 1.2 0.8 2.1 1 1
net >= 0.1 → y = 1
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 0.9 1 1
1 1 1 0.1 1.2 0.8 2.1 1 1
Weights for next epoch 0.1 1.2 0.8
y = t
Don’t change weights
1. a) Perceptron learning algorithm
Epoch 1 complete:
But we still have 1 error
We need to run another epoch
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 0.9 1 1
1 1 1 0.1 1.2 0.8 2.1 1 1
Weights for next epoch 0.1 1.2 0.8
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 1.2 0.8 0.9 1 1
1 1 1 0.1 1.2 0.8 2.1 1 1
Weights for next epoch 0.1 1.2 0.8
Use these as
initial weights
for next epoch
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0
0 1 1 1
1 0 1 1
1 1 1 1
New epoch with initial weights from previous slide
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 1
1 0 1 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 0.1 1.2 -0.2 1
1 0 1 1
1 1 1 1
y != t: y = 1 and we want y = 0
Decrease weights:
w1 := w1 - x1
w2 := w2 - x2
w_bias := w_bias - bias
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 0.1 1.2 -0.2 1 1 1
1 0 1 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 0.1 1.2 -0.2 1 1 1
1 0 1 0.1 1.2 -0.2 1
1 1 1 1
y = t
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 0.1 1.2 -0.2 1 1 1
1 0 1 0.1 1.2 -0.2 -0.1 0 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 0.1 1.2 -0.2 1 1 1
1 0 1 0.1 1.2 -0.2 -0.1 0 1
1 1 1 1.1 1.2 0.8 1
y = 0 and we want y = 1
Increase weights:
w1 := w1 + x1
w2 := w2 + x2
w_bias := w_bias + bias
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 0.1 1.2 -0.2 1 1 1
1 0 1 0.1 1.2 -0.2 -0.1 0 1
1 1 1 1.1 1.2 0.8 3.1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 0.1 1.2 -0.2 1 1 1
1 0 1 0.1 1.2 -0.2 -0.1 0 1
1 1 1 1.1 1.2 0.8 3.1 1 1
Weights for next epoch 1.1 1.2 0.8
y = t
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 0.1 1.2 0.8 0.8 1 0
0 1 1 0.1 1.2 -0.2 1 1 1
1 0 1 0.1 1.2 -0.2 -0.1 0 1
1 1 1 1.1 1.2 0.8 3.1 1 1
Weights for next epoch 1.1 1.2 0.8
Second epoch done
We still have 2 errors
We need to run another epoch
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0
0 1 1 1
1 0 1 1
1 1 1 1
Starting third epoch
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1
1 0 1 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1.1 1.2 -0.2 1
1 0 1 1
1 1 1 1
Decrease weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 1
1 1 1 1
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 1
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 2.1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 2.1 1 1
Weights for next epoch 1.1 1.2 -0.2
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 0.8 0.8 1 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 2.1 1 1
Weights for next epoch 1.1 1.2 -0.2
We still have one error
We need to run another epoch
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 0
0 1 1 1
1 0 1 1
1 1 1 1
Fourth epoch
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1
1 0 1 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1
1 0 1 1
1 1 1 1
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 1
1 1 1 1
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 1
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 2.1 1 1
Calculate net and y
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 2.1 1 1
Weights for next epoch 1.1 1.2 -0.2
Don’t change weights
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 2.1 1 1
Weights for next epoch 1.1 1.2 -0.2
Fourth epoch done
No errors → Stop training
1. a) Perceptron learning algorithm
x1 x2 bias w1 w2 w_bias net y t
0 0 1 1.1 1.2 -0.2 -0.2 0 0
0 1 1 1.1 1.2 -0.2 1 1 1
1 0 1 1.1 1.2 -0.2 0.9 1 1
1 1 1 1.1 1.2 -0.2 2.1 1 1
Weights for next epoch 1.1 1.2 -0.2
Final weights
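The whole walkthrough can be reproduced with a short loop (a minimal Python sketch; data, initial weights, and threshold as in the example above):

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # ((x1, x2), t)
w1, w2, wb = 0.1, 0.2, -0.2
threshold = 0.1

while True:
    errors = 0
    for (x1, x2), t in data:
        net = x1 * w1 + x2 * w2 + 1 * wb       # the bias input is always 1
        y = 1 if net >= threshold else 0
        if y != t:
            errors += 1
            sign = 1 if t > y else -1          # increase or decrease
            w1 += sign * x1
            w2 += sign * x2
            wb += sign * 1
    if errors == 0:                            # an epoch with no errors: stop
        break

print(round(w1, 2), round(w2, 2), round(wb, 2))  # 1.1 1.2 -0.2, as in the table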
1. b) SLP using delta rule
• Same as the previous example; only the weight update is different:
w_i,Ω := w_i,Ω + η · o_i · (t_Ω - y_Ω)
• For the previous example:
w1 := w1 + η * x1 * (t - y)
w2 := w2 + η * x2 * (t - y)
w_bias := w_bias + η * bias * (t - y)
Notes:
• The term “bias” always equals 1 (it can be omitted).
• η is the learning rate (a given constant). If it is not given, assume a value between 0.01 and 0.9.
• We always add, even if y > t: when y > t, the term (t - y) is negative, causing the weights to be decreased.
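As code, the delta-rule update is one line per weight; a minimal sketch (with the thresholded output used here, (t - y) is 0, +1, or -1, so one formula covers "no change", "increase", and "decrease"):

def delta_step(w, x, t, y, eta=0.1):
    # w and x include the bias term; x[-1] is the constant 1.
    return [wi + eta * xi * (t - y) for wi, xi in zip(w, x)]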
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 0
0 1 1 1
1 0 1 1
1 1 1 1
Same example using delta rule
Assume learning rate = 0.1
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 1
1 0 1 1
1 1 1 1
Calculating net and y is not different
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 1
1 0 1 1
1 1 1 1
If we try to update weights: (even though y = t)
w1 := w1 + 0.1 * x1 * (t - y)
w2 := w2 + 0.1 * x2 * (t - y)
wb := wb + 0.1 * bias * (t - y)
(t - y) = 0 so the weights will not be changed
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 1
1 1 1 1
Calculate y and net
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 0.3 -0.1 1
1 1 1 1
Update weights:
w1 := w1 + 0.1 * x1 * (t - y)
w2 := w2 + 0.1 * x2 * (t - y)
wb := wb + 0.1 * bias * (t - y)
w1 := 0.1 + 0.1 * 0 * 1 → 0.1
w2 := 0.2 + 0.1 * 1 * 1 → 0.3
wb := -0.2 + 0.1 * 1 * 1 → -0.1
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 0.3 -0.1 0 0 1
1 1 1 1
Calculate net and y
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 0.3 -0.1 0 0 1
1 1 1 0.2 0.3 0 1
Update weights:
w1 := w1 + 0.1 * x1 * (t - y) → 0.1 + 0.1 * 1 * 1 → 0.2
w2 := w2 + 0.1 * x2 * (t - y) → 0.3 + 0.1 * 0 * 1 → 0.3
wb := wb + 0.1 * bias * (t - y) → -0.1 + 0.1 * 1 * 1 → 0
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 0.3 -0.1 0 0 1
1 1 1 0.2 0.3 0 0.5 1 1
Calculate net and y
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 0.3 -0.1 0 0 1
1 1 1 0.2 0.3 0 0.5 1 1
Weights for next epoch: 0.2 0.3 0
Weights will not be changed
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.1 0.2 -0.2 -0.2 0 0
0 1 1 0.1 0.2 -0.2 0 0 1
1 0 1 0.1 0.3 -0.1 0 0 1
1 1 1 0.2 0.3 0 0.5 1 1
Weights for next epoch: 0.2 0.3 0
First epoch done
We have 2 errors
We need to run another epoch
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.2 0.3 0 0
0 1 1 1
1 0 1 1
1 1 1 1
Second epoch
1. b) SLP using delta rule
x1 x2 bias w1 w2 wb net y t
0 0 1 0.2 0.3 0 0 0 0
0 1 1 0.2 0.3 0 0.3 1 1
1 0 1 0.2 0.3 0 0.2 1 1
1 1 1 0.2 0.3 0 0.5 1 1
Weights for next epoch: 0.2 0.3 0
Second epoch has no errors → stop training
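The delta-rule walkthrough can likewise be reproduced in a loop (a minimal sketch with the same data, initial weights, threshold 0.1, and learning rate 0.1):

data = [((0, 0, 1), 0), ((0, 1, 1), 1), ((1, 0, 1), 1), ((1, 1, 1), 1)]  # ((x1, x2, bias), t)
w = [0.1, 0.2, -0.2]
eta, threshold = 0.1, 0.1

while True:
    errors = 0
    for x, t in data:
        net = sum(wi * xi for wi, xi in zip(w, x))
        y = 1 if net >= threshold else 0
        errors += (y != t)
        w = [wi + eta * xi * (t - y) for wi, xi in zip(w, x)]  # no-op when y == t
    if errors == 0:
        break

print([round(wi, 2) for wi in w])  # [0.2, 0.3, 0.0], as in the table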
2. SLP with multiple outputs
x1 x2 t1 t2
0 0 0 0
0 1 1 0
1 0 1 0
1 1 1 1
The two output neurons can
have different threshold
values
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0 0
0 1 1 0
1 0 1 0
1 1 1 1
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 0
0 1 1 0
1 0 1 0
1 1 1 1
Initial weights
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 0 0
0 1 1 0
1 0 1 0
1 1 1 1
net1 = w11 * x1 + w21 * x2 + wb1
net1 = 0.1 * 0 + 0.2 * 0 - 0.2 = -0.2
Assume threshold1 = 0.1,
threshold2 = 1
net1 >= 0.1 ? → y1 = 1
else → y1 = 0
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 1 0
1 0 1 0
1 1 1 1
net2 = w12 * x1 + w22 * x2 + wb2
net2 = 0.2 * 0 + 0.3 * 0 + 1 = 1
net2 >= 1 ? → y2 = 1
else → y2 = 0
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 1 0
1 0 1 0
1 1 1 1
y1 is OK
don’t change its weights
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 1 0
1 0 1 0
1 1 1 1
y2 is wrong
Update its weights:
(We can use either the perceptron learning
algorithm or the delta rule)
Assume we are using the delta rule, η = 0.1
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 1 0
1 0 1 0
1 1 1 1
w12 := w12 + 0.1 * x1 * (t2 - y2) → 0.2 + 0.1 * 0 * -1 → 0.2
w22 := w22 + 0.1 * x2 * (t2 - y2) → 0.3 + 0.1 * 0 * -1 → 0.3
wb2 := wb2 + 0.1 * (t2 - y2) → 1 + 0.1 * -1 → 0.9
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 0
1 0 1 0
1 1 1 1
calculate net1, y1
net1 = 0.1 * 0 + 0.2 * 1 - 0.2 = 0 → net1 < 0.1 → y1 = 0
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 1 0
1 1 1 1
calculate net2, y2
net2 = 0.2 * 0 + 0.3 * 1 + 0.9 = 1.2 → net2 >= 1 → y2 = 1
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 1 0
1 1 1 1
Update weights of y1
w11 = 0.1 + 0.1 * 0 * (1 - 0) = 0.1
w21 = 0.2 + 0.1 * 1 * (1 – 0) = 0.3
wb1 = -0.2 + 0.1 * (1 – 0) = -0.1
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 1 0
1 1 1 1
Update weights of y2
w12 = 0.2 + 0.1 * 0 * (0 - 1) = 0.2
w22 = 0.3 + 0.1 * 1 * (0 – 1) = 0.2
wb2 = 0.9 + 0.1 * (0 – 1) = 0.8
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 0 1 0
1 1 1 1
Calculate net1 and y1
net1 = 0.1 * 1 + 0.3 * 0 - 0.1 = 0 → y1 = 0
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 0 1 1 0
1 1 1 1
Calculate net2 and y2
net2 = 0.2 * 1 + 0.2 * 0 + 0.8 = 1 → y2 = 1
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 0 1 1 0
1 1 0.2 0.3 0 1 1
Update weights of y1
w11 = 0.1 + 0.1 * 1 * (1 - 0) = 0.2
w21 = 0.3 + 0.1 * 0 * (1 – 0) = 0.3
wb1 = -0.1 + 0.1 * (1 – 0) = 0
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 0 1 1 0
1 1 0.2 0.3 0 0.1 0.2 0.7 1 1
Update weights of y2
w12 = 0.2 + 0.1 * 1 * (0 - 1) = 0.1
w22 = 0.2 + 0.1 * 0 * (0 - 1) = 0.2
wb2 = 0.8 + 0.1 * (0 - 1) = 0.7
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 0 1 1 0
1 1 0.2 0.3 0 0.1 0.2 0.7 1 1 1
Calculate net1 and y1
net1 = 0.2 * 1 + 0.3 * 1 + 0 = 0.5 → y1 = 1
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 0 1 1 0
1 1 0.2 0.3 0 0.1 0.2 0.7 1 1 1 1
Calculate net2 and y2
net2 = 0.1 * 1 + 0.2 * 1 + 0.7 = 1 → y2 = 1
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 0 1 1 0
1 1 0.2 0.3 0 0.1 0.2 0.7 1 1 1 1
Next
weights
0.2 0.3 0 0.1 0.2 0.7
both y1 and y2 are OK
Don’t update weights
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.1 0.2 -0.2 0.2 0.3 1 0 1 0 0
0 1 0.1 0.2 -0.2 0.2 0.3 0.9 0 1 1 0
1 0 0.1 0.3 -0.1 0.2 0.2 0.8 0 1 1 0
1 1 0.2 0.3 0 0.1 0.2 0.7 1 1 1 1
Next
weights
0.2 0.3 0 0.1 0.2 0.7
We need to run another epoch
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.2 0.3 0 0.1 0.2 0.7 0 0
0 1 1 0
1 0 1 0
1 1 1 1
Second epoch
2. SLP with multiple outputs
x1 x2 w11 w21 wb1 w12 w22 wb2 y1 y2 t1 t2
0 0 0.2 0.3 0 0.1 0.2 0.7 0 0 0 0
0 1 0.2 0.3 0 0.1 0.2 0.7 1 0 1 0
1 0 0.2 0.3 0 0.1 0.2 0.7 1 0 1 0
1 1 0.2 0.3 0 0.1 0.2 0.7 1 1 1 1
Next
weights
0.2 0.3 0 0.1 0.2 0.7
Second epoch: no errors → stop training; these are the final weights
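A sketch of the multi-output case in code: each output neuron keeps its own weight vector and threshold, and each is updated independently with the delta rule (data, initial weights, thresholds, and η = 0.1 as in the example):

data = [((0, 0, 1), (0, 0)), ((0, 1, 1), (1, 0)),
        ((1, 0, 1), (1, 0)), ((1, 1, 1), (1, 1))]  # ((x1, x2, bias), (t1, t2))
W = [[0.1, 0.2, -0.2], [0.2, 0.3, 1.0]]            # one weight vector per output neuron
thresholds = [0.1, 1.0]
eta = 0.1

for epoch in range(2):                             # the example converges after 2 epochs
    for x, targets in data:
        for k in range(2):                         # update each output independently
            net = sum(wi * xi for wi, xi in zip(W[k], x))
            y = 1 if net >= thresholds[k] else 0
            W[k] = [wi + eta * xi * (targets[k] - y) for wi, xi in zip(W[k], x)]

print([[round(wi, 2) for wi in w] for w in W])  # [[0.2, 0.3, 0.0], [0.1, 0.2, 0.7]]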