CHAPTER 3
SUPERVISED LEARNING NETWORK
“Principles of Soft Computing, 2nd Edition” by S.N. Sivanandam & SN Deepa
Copyright © 2011 Wiley India Pvt. Ltd. All rights reserved.
DEFINITION OF SUPERVISED LEARNING NETWORKS
 The available data are divided into training and test data sets.
 Training set: for each pattern, both the input and the target are specified.
PERCEPTRON NETWORKS
 Linear threshold unit (LTU): the inputs x1, x2, …, xn (with a fixed bias input x0 = 1) are weighted by w1, w2, …, wn (and bias weight w0) and summed to give Σi=0..n wi xi. The output o is
o = f(x) = 1 if Σi=0..n wi xi > 0
           -1 otherwise
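As a quick illustration, here is a minimal Python sketch of the LTU computation; the function and variable names are illustrative, and the bias is handled by the fixed input x0 = 1.

```python
import numpy as np

def ltu_output(x, w):
    """Linear threshold unit: x and w include the bias term (x[0] = 1, w[0] = w0)."""
    net = np.dot(w, x)            # net = sum over i of w_i * x_i
    return 1 if net > 0 else -1   # threshold at zero, bipolar output
```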
PERCEPTRON LEARNING
wi = wi + ∆wi
∆wi = η (t - o) xi
where
t = c(x) is the target value,
o is the perceptron output,
η is a small constant (e.g., 0.1) called the learning rate.
 If the output is correct (t = o) the weights wi are not changed
 If the output is incorrect (t ≠ o) the weights wi are changed such
that the output of the perceptron for the new weights is closer to t.
 The algorithm converges to the correct classification if:
• the training data is linearly separable, and
• η is sufficiently small.
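A minimal Python sketch of this update rule for a single LTU, assuming bipolar (+1/-1) targets, zero initial weights, and a fixed number of epochs (all of these are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def perceptron_train(X, t, eta=0.1, epochs=10):
    """Perceptron rule: wi = wi + eta * (t - o) * xi, applied pattern by pattern."""
    X = np.hstack([np.ones((len(X), 1)), X])    # prepend x0 = 1 so w[0] acts as the bias w0
    w = np.zeros(X.shape[1])                    # zero initial weights (an assumption)
    for _ in range(epochs):
        for x, target in zip(X, t):
            o = 1 if np.dot(w, x) > 0 else -1   # LTU output
            w += eta * (target - o) * x         # no change when target == o
    return w
```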
LEARNING ALGORITHM
 Epoch : Presentation of the entire training set to the neural
network.
 In the case of the AND function, an epoch consists of four sets of
inputs being presented to the network (i.e. [0,0], [0,1], [1,0], [1,1]).
 Error: The error value is the amount by which the network's output differs from the target value (Error = T - O). For example, if we required the network to output 0 and it outputs 1, then Error = -1.
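For instance, one epoch of the AND function can be run with the perceptron sketch from the previous slide (this reuses perceptron_train and assumes a bipolar encoding of the targets, which is an illustrative choice):

```python
import numpy as np

# The four AND patterns of one epoch: [0,0], [0,1], [1,0], [1,1]
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([-1, -1, -1, 1])                  # bipolar targets for AND
w = perceptron_train(X, t, eta=0.1, epochs=1)  # one epoch = one pass over all four patterns
print(w)                                       # with zero initial weights, only [1,1] changes the weights
```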
 Target Value, T : When we are training a network we not only
present it with the input but also with a value that we require the
network to produce. For example, if we present the network with
[1,1] for the AND function, the training value will be 1.
 Output, O : The output value from the neuron.
 Ij : Inputs being presented to the neuron.
 Wj : Weight from input neuron (Ij) to the output neuron.
 LR : The learning rate. This dictates how quickly the network converges. It is set by experimentation and is typically 0.1.
TRAINING ALGORITHM
 Adjust neural network weights to map inputs to outputs.
 Use a set of sample patterns where the desired output (given the
inputs presented) is known.
 The purpose is to learn to
• recognize features that are common to good and bad exemplars.
MULTILAYER PERCEPTRON
[Figure: a multilayer perceptron. Input signals (external stimuli) enter the input layer, pass through adjustable weights, and the output layer produces the output values.]
LAYERS IN NEURAL NETWORK
 The input layer:
• Introduces input values into the network.
• No activation function or other processing.
 The hidden layer(s):
• Perform classification of features.
• Two hidden layers are, in theory, sufficient to solve any problem.
• More features may imply that more layers are better.
 The output layer:
• Functionally, it is just like the hidden layers.
• Outputs are passed on to the world outside the neural network.
ADAPTIVE LINEAR NEURON (ADALINE)
In 1959, Bernard Widrow and Marcian Hoff of Stanford developed
models they called ADALINE (Adaptive Linear Neuron) and MADALINE
(Multilayer ADALINE). These models were named for their use of
Multiple ADAptive LINear Elements. MADALINE was the first neural network to be applied to a real-world problem: it is an adaptive filter that eliminates echoes on phone lines.
ADALINE MODEL
ADALINE LEARNING RULE
The Adaline network uses the delta learning rule, also called the Widrow-Hoff learning rule or the least mean square (LMS) rule. The delta rule for adjusting the weights is given as (i = 1 to n):
∆wi = η (t - net) xi, where net = Σi wi xi is the net input (linear output) of the Adaline unit.
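A minimal Python sketch of the delta (LMS) rule; the learning rate, epoch count, and initialization range are illustrative assumptions:

```python
import numpy as np

def adaline_train(X, t, eta=0.1, epochs=50, seed=0):
    """Delta (LMS) rule: weights are adjusted in proportion to (target - net)."""
    rng = np.random.default_rng(seed)
    X = np.hstack([np.ones((len(X), 1)), X])    # bias input x0 = 1
    w = rng.uniform(-0.1, 0.1, X.shape[1])      # random initial weights on all links
    for _ in range(epochs):
        for x, target in zip(X, t):
            net = np.dot(w, x)                  # linear output (no threshold during training)
            w += eta * (target - net) * x       # delta rule: wi <- wi + eta * (t - net) * xi
    return w
```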
USING ADALINE NETWORKS
 Initialize
• Assign random weights to all links.
 Training
• Feed in known inputs in random sequence.
• Simulate the network.
• Compute the error between the target and the output (error function).
• Adjust the weights (learning function).
• Repeat until the total error < ε.
 Thinking
• Simulate the network.
• The network will respond to any input.
• A correct solution is not guaranteed, even for trained inputs.
MADALINE NETWORK
MADALINE is a Multilayer Adaptive Linear Element. MADALINE was the first neural network to be applied to a real-world problem. It is used in several adaptive filtering processes.
BACK PROPAGATION NETWORK
 A training procedure that allows multilayer feedforward neural networks to be trained.
 Can theoretically perform “any” input-output mapping.
 Can learn to solve linearly inseparable problems.
MULTILAYER FEEDFORWARD
NETWORK
[Figure: a multilayer feedforward network with input units I0, I1, I2, I3, hidden units h0, h1, h2, and output units o0, o1.]
MULTILAYER FEEDFORWARD NETWORK:
ACTIVATION AND TRAINING
 For feedforward networks:
• A continuous activation function can be differentiated, allowing gradient descent.
• Back propagation is an example of a gradient-descent technique.
• It uses a sigmoid (binary or bipolar) activation function.
In multilayer networks, the activation function is usually more complex than a simple threshold function, e.g. the binary sigmoid 1/[1 + exp(-x)] or the bipolar sigmoid 2/[1 + exp(-x)] - 1, which allows for inhibition, etc.
GRADIENT DESCENT
 Gradient-Descent(training_examples, η)
 Each training example is a pair of the form <(x1, …, xn), t>, where (x1, …, xn) is the vector of input values, t is the target output value, and η is the learning rate (e.g., 0.1).
 Initialize each wi to some small random value.
 Until the termination condition is met, Do
• Initialize each ∆wi to zero.
• For each <(x1, …, xn), t> in training_examples, Do
– Input the instance (x1, …, xn) to the linear unit and compute the output o.
– For each linear unit weight wi, Do
∆wi = ∆wi + η (t - o) xi
• For each linear unit weight wi, Do
wi = wi + ∆wi
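A minimal Python sketch of this procedure for a linear unit; the fixed iteration count used as the termination condition and the weight-initialization range are assumptions:

```python
import numpy as np

def gradient_descent(training_examples, eta=0.1, n_iters=100, seed=0):
    """Batch gradient descent for a linear unit.
    training_examples: list of (x, t) pairs, x a 1-D numpy array, t a scalar target."""
    rng = np.random.default_rng(seed)
    n = len(training_examples[0][0])
    w = rng.uniform(-0.05, 0.05, n)             # initialize each wi to a small random value
    for _ in range(n_iters):                    # "until the termination condition is met"
        delta_w = np.zeros(n)                   # initialize each delta wi to zero
        for x, t in training_examples:
            o = np.dot(w, x)                    # linear unit output
            delta_w += eta * (t - o) * x        # delta wi = delta wi + eta * (t - o) * xi
        w += delta_w                            # wi = wi + delta wi
    return w
```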
MODES OF GRADIENT DESCENT
 Batch mode: gradient descent over the entire data set D
w = w - η ∇ED[w], where ED[w] = 1/2 Σd (td - od)²
 Incremental mode: gradient descent over individual training examples d
w = w - η ∇Ed[w], where Ed[w] = 1/2 (td - od)²
 Incremental gradient descent can approximate batch gradient descent arbitrarily closely if η is small enough.
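For contrast with the batch sketch above, an incremental-mode sketch updates the weights immediately after every single example (same illustrative assumptions about the example format and termination condition):

```python
import numpy as np

def incremental_gradient_descent(training_examples, eta=0.01, n_iters=100):
    """Incremental (per-example) gradient descent for a linear unit:
    w <- w - eta * grad(Ed[w]) immediately after each training example d."""
    n = len(training_examples[0][0])
    w = np.zeros(n)
    for _ in range(n_iters):
        for x, t in training_examples:
            o = np.dot(w, x)
            w += eta * (t - o) * x              # immediate update from this single example
    return w
```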
SIGMOID ACTIVATION FUNCTION
 A sigmoid unit takes inputs x1, x2, …, xn (with a fixed bias input x0 = 1) weighted by w1, w2, …, wn (and bias weight w0), computes net = Σi=0..n wi xi, and outputs o = σ(net) = 1/(1 + e^(-net)).
 σ(x) is the sigmoid function: σ(x) = 1/(1 + e^(-x))
 Its derivative is dσ(x)/dx = σ(x)(1 - σ(x)).
 Gradient descent rules can be derived to train:
• a single sigmoid unit: ∂E/∂wi = -Σd (td - od) od (1 - od) xi,d
• multilayer networks of sigmoid units: backpropagation.
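A short Python sketch of the sigmoid, its derivative, and the output of a single sigmoid unit (names are illustrative):

```python
import numpy as np

def sigmoid(x):
    """Binary sigmoid: sigma(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    """d sigma(x) / dx = sigma(x) * (1 - sigma(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

def sigmoid_unit_output(x, w):
    """o = sigma(net), with net = sum_i wi * xi (x includes the fixed bias input x0 = 1)."""
    return sigmoid(np.dot(w, x))
```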
BACKPROPAGATION TRAINING ALGORITHM
 Initialize each wi to some small random value.
 Until the termination condition is met, Do
• For each training example <(x1, …, xn), t>, Do
– Input the instance (x1, …, xn) to the network and compute the network outputs ok.
– For each output unit k:
δk = ok (1 - ok)(tk - ok)
– For each hidden unit h:
δh = oh (1 - oh) Σk wh,k δk
– For each network weight wi,j, Do
wi,j = wi,j + ∆wi,j, where ∆wi,j = η δj xi,j
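A minimal sketch of this algorithm in Python for a single hidden layer of sigmoid units; the network size, learning rate, epoch count, and random initialization are illustrative assumptions, and the termination condition is simply a fixed number of epochs:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def backprop_train(X, T, n_hidden=2, eta=0.5, epochs=5000, seed=0):
    """One-hidden-layer backpropagation sketch following the algorithm above.
    X: (n_samples, n_inputs) array; T: (n_samples, n_outputs) array of targets in [0, 1]."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    W_h = rng.uniform(-0.5, 0.5, (n_hidden, n_in + 1))   # hidden weights (incl. bias column)
    W_o = rng.uniform(-0.5, 0.5, (n_out, n_hidden + 1))  # output weights (incl. bias column)
    for _ in range(epochs):
        for x, t in zip(X, T):
            x_b = np.append(x, 1.0)                       # input with bias term
            o_h = sigmoid(W_h @ x_b)                      # hidden-unit outputs
            o_h_b = np.append(o_h, 1.0)                   # hidden outputs with bias term
            o_k = sigmoid(W_o @ o_h_b)                    # network outputs
            delta_k = o_k * (1 - o_k) * (t - o_k)         # output-unit error terms
            delta_h = o_h * (1 - o_h) * (W_o[:, :-1].T @ delta_k)  # hidden-unit error terms
            W_o += eta * np.outer(delta_k, o_h_b)         # delta w = eta * delta_j * x_ij
            W_h += eta * np.outer(delta_h, x_b)
    return W_h, W_o
```

For example, calling backprop_train(np.array([[0,0],[0,1],[1,0],[1,1]]), np.array([[0],[1],[1],[0]])) would attempt to learn the XOR mapping, which is linearly inseparable.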
BACKPROPAGATION
 Gradient descent over the entire network weight vector.
 Easily generalized to arbitrary directed graphs.
 Will find a local, not necessarily global, error minimum; in practice it often works well (it can be invoked multiple times with different initial weights).
 Often includes a weight momentum term:
∆wi,j(t) = η δj xi,j + α ∆wi,j(t - 1)
 Minimizes the error over the training examples.
 Will it generalize well to unseen instances (over-fitting)?
 Training can be slow: typically 1000-10,000 iterations (Levenberg-Marquardt can be used instead of gradient descent).
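As a small illustration of the momentum term, the previous weight change is carried forward and scaled by α (a typical value might be around 0.9; all names here are illustrative):

```python
def momentum_update(eta, delta_j, x_ij, alpha, prev_delta_w_ij):
    """Weight change with momentum: delta_w(t) = eta*delta_j*x_ij + alpha*delta_w(t-1)."""
    return eta * delta_j * x_ij + alpha * prev_delta_w_ij
```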
APPLICATIONS OF BACKPROPAGATION
NETWORK
 Load forecasting problems in power systems.
 Image processing.
 Fault diagnosis and fault detection.
 Gesture recognition, speech recognition.
 Signature verification.
 Bioinformatics.
 Structural engineering design (civil).
RADIAL BASIS FUNCTION NETWORK
 The radial basis function (RBF) network is a classification and function-approximation neural network developed by M.J.D. Powell.
 The network uses the most common nonlinearities, such as sigmoidal and Gaussian kernel functions.
 The Gaussian functions are also used in regularization networks.
 The Gaussian function is generally defined as an exponentially decaying function of the squared distance from a center (one common form is shown in the sketch below).
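One common form of the Gaussian function used in RBF networks is sketched below; the exact form and scaling used in the text may differ, so treat this as an illustrative assumption:

```python
import numpy as np

def gaussian_rbf(x, center, sigma=1.0):
    """phi(x) = exp(-||x - center||^2 / (2 * sigma^2)), one common Gaussian kernel form."""
    r2 = np.sum((np.asarray(x, dtype=float) - np.asarray(center, dtype=float)) ** 2)
    return np.exp(-r2 / (2.0 * sigma ** 2))

def rbf_network_output(x, centers, sigmas, weights):
    """RBF network output: a weighted sum of Gaussian units, one per center."""
    phis = np.array([gaussian_rbf(x, c, s) for c, s in zip(centers, sigmas)])
    return float(np.dot(weights, phis))
```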
RADIAL BASIS FUNCTION NETWORK
SUMMARY
This chapter discussed the following supervised learning networks:
 Perceptron,
 Adaline,
 Madaline,
 Backpropagation Network,
 Radial Basis Function Network.
Apart from those mentioned above, there are several other supervised neural networks, such as tree neural networks, wavelet neural networks, functional link neural networks, and so on.