1. Matters of Discussion
1. Supervised Learning Neural Networks
2. Perceptron Networks
3. Adaptive Linear Neuron (Adaline)
4. Multiple Adaptive Linear Neurons (Madaline)
5. Back Propagation Network
Compiled By: Dr. Nilamadhab Mishra [(PhD-CSIE) Taiwan]
2. 1. A Supervised Learning Process
[Flowchart: the ANN model computes an output; if the desired output is achieved, learning stops; otherwise the weights are adjusted and the cycle repeats.]
Three-step process:
1. Compute temporary outputs
2. Compare outputs with desired targets
3. Adjust the weights and repeat the process
3. 1. Supervised Learning Neural Networks
Supervised learning takes place under the supervision of a teacher; the learning process therefore depends on external guidance.
During training under supervised learning, an input vector is presented to the network, which produces an output vector.
This output vector is compared with the desired/target output vector.
An error signal is generated if there is a difference between the actual output and the desired/target output.
On the basis of this error signal, the weights are adjusted until the actual output matches the desired output.
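The compute-compare-adjust loop described above can be sketched in code. This is a minimal illustration only: the single-neuron model, the AND-gate training data, and the learning rate are assumptions for the sketch, not part of the slides.

```python
import numpy as np

# Illustrative training data (AND gate) -- an assumption for this sketch
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = np.array([0, 0, 0, 1])

weights = np.zeros(2)
bias = 0.0
lr = 1.0  # learning rate (illustrative)

for epoch in range(20):
    errors = 0
    for x, t in zip(X, targets):
        # 1. Compute a temporary output (step activation)
        y = 1 if weights @ x + bias > 0 else 0
        # 2. Compare the output with the desired target
        if y != t:
            # 3. Adjust the weights and repeat the process
            weights += lr * (t - y) * x
            bias += lr * (t - y)
            errors += 1
    if errors == 0:  # desired output achieved -> stop learning
        break

print(weights, bias)
```

After training, the loop stops in the "desired output achieved" branch, mirroring the flowchart's stop condition.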
4. 2. Perceptron Concept
Developed by Frank Rosenblatt using the McCulloch-Pitts model, the perceptron is the basic operational unit of artificial neural networks.
It employs a supervised learning rule and is able to classify data into two classes.
5. 2. Schematic Representation of Perceptron
Architecture
Bias - an additional parameter that shifts the decision boundary to achieve a better fit.
6. 2. Operational Characteristics of Perceptron
It consists of a single neuron with an arbitrary number of inputs and adjustable weights; the output of the neuron is 1 or 0, depending on the threshold.
It also has a bias link whose input is always 1.
7. 2. Basic Elements of Perceptron
The perceptron thus has the following three basic elements:
Links - a set of connection links, each carrying a weight; this includes a bias link whose input is always 1.
Adder - sums the inputs after they are multiplied by their respective weights.
Activation function - limits the output of the neuron. The most basic activation function is the Heaviside step function, which has two possible outputs: it returns 1 if the input is positive and 0 otherwise.
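The three basic elements map directly onto a few lines of code. The OR-gate weights below are hand-picked for illustration, not derived from the slides.

```python
import numpy as np

def heaviside(v):
    # Activation function: 1 if the input is positive, 0 otherwise
    return 1 if v > 0 else 0

def perceptron_output(x, w, b):
    # Links carry the weights w; the bias link has a fixed input of 1
    # Adder: weighted sum of the inputs plus the bias contribution
    v = np.dot(w, x) + b * 1
    return heaviside(v)

# Example: a perceptron computing logical OR (illustrative weights)
w = np.array([1.0, 1.0])
b = -0.5
print(perceptron_output(np.array([0, 1]), w, b))  # -> 1
print(perceptron_output(np.array([0, 0]), w, b))  # -> 0
```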
8. 3. Adaptive Linear Neuron [Adaline]
Adaline is a network having a single linear unit.
It was developed by Widrow and Hoff in 1960 [Widrow-Hoff rule].
It is a single-layer perceptron [input layer and output layer].
The single-layer perceptron is the simplest feed-forward neural network.
9. 3. Architecture of Adaline
10. Cont..
The basic structure of Adaline is similar to the perceptron, with an extra feedback loop through which the computed output [CO] is compared with the desired/target output [TO].
Error = |CO - TO|
If CO = TO, training stops; otherwise the weights and bias are adjusted.
Used for linear classification problems.
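The Adaline feedback loop can be sketched with the Widrow-Hoff (delta) rule, which adjusts the weights in proportion to the raw linear error CO - TO. The OR-gate data, learning rate, and random initialization are assumptions for this sketch.

```python
import numpy as np

# Illustrative linearly separable data (OR gate) -- an assumption
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([0., 1., 1., 1.])  # desired/target outputs [TO]

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)
b = 0.0
lr = 0.1  # learning rate (illustrative)

for epoch in range(50):
    for x, t in zip(X, T):
        co = w @ x + b        # computed output [CO] of the linear unit
        error = t - co        # Widrow-Hoff uses the raw linear error
        w += lr * error * x   # adjust weights in proportion to the error
        b += lr * error

# Classify by thresholding the linear output at 0.5
pred = [1 if w @ x + b > 0.5 else 0 for x in X]
print(pred)
```

Unlike the perceptron, the weight update here uses the continuous linear output rather than the thresholded one, which is what makes the rule a least-squares method.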
11. 4. Multiple Adaptive Linear Neuron
Madaline consists of many Adalines in parallel.
It is structured like a multilayer perceptron: multiple input neurons [input layer] and a single output neuron [output layer].
The Adaline layer can be considered a hidden layer, as it sits between the input layer and the output layer.
12. 4. Architecture of Madaline
The computed output [CO] is compared with the desired/target output [TO].
13. Cont..
MADALINE (Many ADALINE) is a three-layer (input, hidden, output), fully connected, feed-forward artificial neural network architecture.
Used for non-linear classification problems.
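A Madaline forward pass can be sketched as several Adalines in parallel feeding a single output unit. The hand-set weights below (solving XOR, a non-linear problem) are an illustrative assumption; a trained Madaline would learn its own weights.

```python
import numpy as np

def adaline_step(x, w, b):
    # Each Adaline computes a weighted sum, then thresholds at 0
    return 1 if np.dot(w, x) + b > 0 else 0

def madaline(x):
    # Hidden layer: two Adalines in parallel (weights hand-set for XOR)
    h1 = adaline_step(x, np.array([1., -1.]), -0.5)   # x1 AND NOT x2
    h2 = adaline_step(x, np.array([-1., 1.]), -0.5)   # NOT x1 AND x2
    # Single output Adaline combines the hidden units (logical OR)
    return adaline_step(np.array([h1, h2]), np.array([1., 1.]), -0.5)

for x in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(x, madaline(np.array(x, dtype=float)))  # XOR truth table
```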
14. 5. Back Propagation Neural Network
Back propagation is the standard method of training artificial neural networks.
It fine-tunes the weights of a neural network based on the error rate.
Training a BPN has the following three phases:
Phase 1 - Feed-forward phase
Phase 2 - Back propagation of error
Phase 3 - Updating of weights
15. How Back propagation Works: Simple Algorithm
16. Computational Process of the BPN Algorithm
1. Inputs X arrive through the preconnected paths.
2. The inputs are weighted with real weights W, usually selected at random.
3. Calculate the output of every neuron, from the input layer through the hidden layers to the output layer.
4. Calculate the error in the outputs: Error = Actual Output - Desired Output.
5. Travel back from the output layer to the hidden layers, adjusting the weights so that the error decreases.
6. Repeat step 5 until the desired output is achieved.
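The six steps above can be sketched for a small one-hidden-layer network with sigmoid activations. The XOR data, layer sizes, learning rate, and random seed are all assumptions for this sketch; the code follows the three phases (feed-forward, back propagation of error, weight update).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative data (XOR) and network sizes -- assumptions for the sketch
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights (random, step 2)
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.5

for epoch in range(10000):
    # Phase 1: feed-forward (step 3)
    H = sigmoid(X @ W1 + b1)              # hidden activations
    Y = sigmoid(H @ W2 + b2)              # network outputs

    # Step 4: error at the outputs
    E = Y - T

    # Phase 2: back propagation of error (step 5)
    dY = E * Y * (1 - Y)                  # output-layer delta
    dH = (dY @ W2.T) * H * (1 - H)        # hidden-layer delta

    # Phase 3: update the weights, then repeat (step 6)
    W2 -= lr * H.T @ dY
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

print(np.round(Y, 2))
```

Each epoch travels forward once and backward once; the deltas carry the output error back through the hidden layer so every weight moves in the direction that reduces the error.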
17. Key Points on BPN
Back propagation is fast, simple, and easy to program.
It is a supervised machine learning algorithm.
It is especially useful for deep neural networks.
It is a widely used algorithm for training feed-forward neural networks under supervised learning.
Its biggest drawback is that it can be sensitive to noisy data.
Applications: object recognition, prediction, etc.
18. TUTORIAL ACTIVITY [A-8]
Exemplify the architecture and computational learning process of the Back Propagation Neural Network algorithm, and prepare an investigative report.
19. Questions [VVI]
1. Investigate the computational process of supervised learning neural networks.
2. Design the schematic representation of a perceptron and analyze its basic elements and operational characteristics.
3. Investigate the architectures of ADALINE and MADALINE and describe their computational processes.
20. Cheers for the Great Patience!
Any queries?