Soft Computing: Artificial
Neural Networks
Dr. Baljit Singh Khehra
Professor
CSE Department
Baba Banda Singh Bahadur Engineering College
Fatehgarh Sahib-140407, Punjab, India
Soft Computing
 Soft Computing is a new field that aims to construct a new generation of AI, known as
Computational Intelligence.
 Soft Computing is a branch in which the goal is to build intelligent machines.
 Hard Computing requires a precisely stated analytical model and often a lot
of computation time.
 Many Analytical models are valid for ideal cases.
 Real world problems exist in a non-ideal environment.
 Soft Computing is a collection of methodologies that aim to exploit the
tolerance for imprecision and uncertainty to achieve tractability, robustness
and low solution cost.
 The role model for Soft Computing is the human mind.
Soft Computing Techniques
 Soft Computing is defined as a collection of techniques, spanning many fields, that fall under
various categories in computational intelligence.
 Soft Computing has three main branches:
 Artificial Neural Networks (ANNs)
 Fuzzy logic: To handle uncertainty (partial information about the problem, unreliable
information, information from more than one source about the problem that are conflicting)
 Evolutionary Computing : contains optimization Algorithms
 Genetic Algorithm (GA)
 Ant Colony Optimization (ACO) algorithm
 Biogeography based Optimization (BBO) approach
 Bacterial foraging optimization algorithm
 Gravitational search algorithm
 Cuckoo optimization algorithm
 Teaching-Learning-Based Optimization (TLBO)
 Big Bang-Big Crunch Optimization (BBBCO) algorithm
Neural Networks (NNs)
 A social network is a group of interconnected people who interact with each other to exchange
information.
 A computer network (CN) is a group of two or more computer systems linked together to exchange
information.
 A neural network is a network of neurons.
 Neurons are the cells in the brain that convey information about the world around us.
 A human brain has about 86 billion neurons of different kinds.
 The popular belief that we use only 10% of them is, however, a myth.
Comparison b/w Real & Artificial Neurons
Artificial Neural Networks (ANNs)
 To simulate human brain behavior
 Mimic information processing capability of Human Brain (Human
Nervous System).
 Computational or Mathematical Models of Human Brain based on
some assumptions:
 Information processing occurs at many simple elements called
Neurons.
 Signals are passed b/w neurons by connection Links.
 Each connection link has an Associated Weight.
 The output of each neuron is obtained by passing its input through
Activation Function.
A Simple Artificial Neural Network
The activation function used here is the binary sigmoid function:
y = f(y-in), where f(x) = 1 / (1 + e^(-x))
y-in = x1w1 + x2w2 + x3w3
y = f(y-in) = 1 / (1 + e^(-y-in))
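A minimal MATLAB sketch of this single-neuron forward pass (the input and weight values below are illustrative assumptions, not taken from the slide):

x = [1; 0.5; -1];          % inputs x1, x2, x3 (illustrative values)
w = [0.2; -0.4; 0.1];      % weights w1, w2, w3 (illustrative values)
yin = w' * x;              % net input y-in = x1*w1 + x2*w2 + x3*w3
y = 1 / (1 + exp(-yin))    % binary sigmoid output y = f(y-in)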
A Simple Artificial Neural Network with Multi-layers
 Each ANN is composed of a collection of neurons grouped in layers.
 Note the three layers: input, intermediate (called the hidden layer) and output.
 Several hidden layers can be placed between the input and output layers.
y = f(y-in), where f(x) = 1 / (1 + e^(-x))
y-in = v1z1 + v2z2
z-inj = w1jx1 + w2jx2 + w3jx3 (j = 1, 2)
zj = f(z-inj)
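A minimal MATLAB sketch of this 3-2-1 forward pass (all input and weight values below are illustrative assumptions):

x = [1; 0.5; -1];                   % three inputs (illustrative)
W = [0.2 -0.1; 0.4 0.3; -0.2 0.5];  % hidden weights, W(i,j) = wij (illustrative)
v = [0.7; -0.6];                    % hidden-to-output weights v1, v2 (illustrative)
f = @(s) 1 ./ (1 + exp(-s));        % binary sigmoid activation
zin = W' * x;                       % net inputs of the two hidden units, z-in1 and z-in2
z = f(zin);                         % hidden outputs z1, z2
yin = v' * z;                       % net input of the output unit, y-in
y = f(yin)                          % network output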
Artificial Neural Networks (ANNs)
 An ANN is characterized by
 Its pattern of connections b/w neurons
(called its architecture)
 Its method of determining weights on connections
(Training or Learning Algorithm)
 Its Activation function.
 Features of ANN
 Adaptive Learning
 Self-organization
 Real-Time operation
 Fault Tolerance via redundant information coding.
 Information processing is local
 Memory is distributed:
 Long term: Weights
 Short term: the signals being sent
Advantages of ANNs
 Lower interpolation error
 Good extrapolation capabilities.
 Generalization ability
 Fast response time in operational phase
 Free from numerical instability
 Learning not programming
 Parallelism in approach
 Distributed memory
 Intelligent behavior
 Capability to operate on multivariate, noisy, or error-prone training data sets.
 Capability for modeling non-linear characteristics.
Applications of ANNs
 Designing fuzzy logic controllers
 Parameter estimation for nonlinear systems
 Optimization methods in real time traffic control
 Power system identification and control
 Power Load forecasting
 Weather forecasting
 Solving NP-Hard problems
 VLSI design
 Learning the topology and weights of neural networks
 Performance enhancement of neural networks
 Distributed data base design
 Allocation and scheduling on multi-computers.
 Signature verification study
 Computer assisted drug design
 Computer-aided disease diagnosis system
 CPU Job scheduling
 Pattern Recognition
 Speech Recognition
 Finger print Recognition
 Face Recognition
 Character/ Digit Recognition
 Signal processing applications in virtual instrumentation systems
Basic Building Blocks of ANNs
 Network Architecture
 Learning Algorithms
 Activation Functions
 Network Architecture: The arrangement of neurons into layers and the pattern of
connections within and between layers is called the architecture of the network.
 Commonly used network architectures include feedforward (single-layer and multilayer) and recurrent networks.
Learning of ANNs
 Learning or training algorithms are used to set weights and bias in Neural
Networks.
 Types of Learning
– Supervised learning
– Unsupervised learning
 Supervised learning
• Learning with a teacher
• Learning by examples
 Training set
 Examples: Perceptron, ADALINE, MADALINE, Backpropagation etc.
Supervised Learning
Unsupervised Learning
 Self-organizing
 Clustering
– Form proper clusters by discovering the similarities and
dissimilarities among objects
 Examples: Kohonen Self-organizing MAP, ART1,ART2 etc.
Activation Functions
 Activation Function: Activation Function is used to calculate the output
response of a neuron.
 Various types of activation functions
 Step function
 Hard Limiter function
Activation Functions
 Various types of activation functions
 Ramp function
 Unipolar Sigmoid function
Activation Functions
 Various types of activation functions
 Bipolar Sigmoid function
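As a quick reference, a minimal MATLAB sketch of these activation functions as anonymous functions (the threshold value theta is an assumed example, and the definitions follow the usual textbook forms):

theta = 0.5;                                   % threshold (assumed value)
step    = @(x) double(x >= theta);             % binary step: 1 if x >= theta, else 0
hardlim = @(x) sign(x);                        % hard limiter: +1, 0 or -1
ramp    = @(x) min(max(x, 0), 1);              % ramp: 0 below 0, linear up to 1, then saturates
unisig  = @(x) 1 ./ (1 + exp(-x));             % unipolar (binary) sigmoid
bipsig  = @(x) (1 - exp(-x)) ./ (1 + exp(-x)); % bipolar sigmoid, equivalent to tanh(x/2)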
Rosenblatt’s Perceptron
 In 1962, Frank Rosenblatt developed an ANN called Perceptron.
 Perceptron is a computational model of the retina of the eye.
 Weights b/w S and A are fixed
 Weights b/w A and R are adjusted by Perceptron Learning Rule.
 Learning of Perceptron is supervised.
 Training algorithm is suitable for either Bipolar or Binary input with Bipolar
target, fixed threshold and adjustable bias.
Perceptron Training Rule
 For each training pattern, the net calculates the response of the output unit.
 The net determines whether an error occurred for the pattern.
 This is done by comparing the calculated output with the target value.
 If an error occurred for a particular training pattern (y ≠ t), then weights are
changed according to the following formula:
wi (new) = wi (old) + Δwi
b (new) = b (old) + Δb
where Δwi = α t xi
Δb = α t
t is target output value for the current training example
y is Perceptron output
α is small constant (e.g., 0.5) called learning rate
 The role of the learning rate is to moderate the degree to which weights
are changed at each step.
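As an illustrative step (the numbers are assumed, not from the slides): with α = 0.5, target t = -1, and inputs (x1, x2) = (1, -1), an error would give Δw1 = 0.5 × (-1) × 1 = -0.5, Δw2 = 0.5 × (-1) × (-1) = 0.5, and Δb = 0.5 × (-1) = -0.5.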
Activation Function for Perceptron
 Binary Step Activation Function
 Output of Perceptron
 The Perceptron can only handle tasks that are linearly separable
y = f(y-in) =  1   if y-in > θ
               0   if -θ ≤ y-in ≤ θ
              -1   if y-in < -θ
Perceptron Training Algorithm
Step1. Initialize weights and bias.
Set weights and bias to small random values
Set Learning rate (0 < α ≤ 1)
Set Threshold Value (θ)
Step2. While stopping condition is False, do Steps 3-7
Step3. For each training pair (s : t), do Steps 4-7
Step 4. Set activation of input units, i = 1, …..,n
xi = si
Step 5. Compute response of output unit
y-in = b + w1x1 + … + wnxn
y = f (y-in)
y = f(y-in) =  1   if y-in > θ
               0   if -θ ≤ y-in ≤ θ
              -1   if y-in < -θ
Perceptron Training Algorithm
Step 6. Update weights and bias
If an error occurred for a particular training pattern (y ≠ t),
then weights are changed according to the following formula:
wi (new) = wi (old) + Δwi
b (new) = b (old) + Δb
where Δwi = α t xi
Δb = α t
t is target output value for the current training example
y is Perceptron output
α is small constant (e.g., 0.5) called learning rate
Else
wi (new) = wi (old)
b (new ) = b (old)
Step 7. Test stopping condition
Perceptron Testing Algorithm
Step1. Set calculated weights from training algorithm
Set Learning rate (0 < α ≤ 1)
Set Threshold Value (θ)
Step2. For each input and target (s : t), do Steps 3-5
Step 3. Set activation of input units, i = 1, …..,n
xi = si
Step 4. Compute response of output unit
y-in = b + w1x1 + … + wnxn
y = f (y-in)
Step 5. Calculate error
E=(t – y)
y = f(y-in) =  1   if y-in > θ
               0   if -θ ≤ y-in ≤ θ
              -1   if y-in < -θ
Development of Perceptron for AND Function
x1    x2    Output (t)
 1     1       1
 1    -1      -1
-1     1      -1
-1    -1      -1
Perceptron Training Algorithm for AND function
x=[1 1 -1 -1; 1 -1 1 -1];     % input patterns (each column is one pattern)
t=[1 -1 -1 -1];               % bipolar targets for AND
w=[0 0];                      % initial weights
b=0;                          % initial bias
alpha=input('Enter Learning Rate=');
theta=input('Enter Threshold Value=');
epoch=0;
maxepoch=100;                 % maximum number of epochs (stopping condition)
while epoch<maxepoch
    for i = 1:4
        yin = b + x(1,i)*w(1) + x(2,i)*w(2);   % net input y-in = b + w1*x1 + w2*x2
        if yin>theta
            y=1;
        end
        if yin<=theta && yin>=-theta
            y=0;
        end
        if yin<-theta
            y=-1;
        end
        if y ~= t(i)                           % error occurred: apply perceptron rule
            for j = 1:2
                w(j) = w(j) + alpha*t(i)*x(j,i);
            end
            b = b + alpha*t(i);
        end
    end
    epoch = epoch + 1;
end
disp('Perceptron for AND function');
disp('Final Weight Matrix');
disp(w);
disp('Final Bias');
disp(b);
 OUTPUT
 Enter Learning Rate=1
 Enter Threshold Value=0.5
 Perceptron for AND function
 Final Weight Matrix
0 2
 Final Bias
0
Perceptron Testing Algorithm for AND function
x=[1 1 -1 -1; 1 -1 1 -1];     % input patterns (columns)
w=[0 2];                      % weights obtained from the training run above
b=0;                          % bias obtained from the training run above
theta=0.5;                    % threshold value used during training (assumed from the run above)
for i=1:4
    yin = b + x(1,i)*w(1) + x(2,i)*w(2);   % net input y-in = b + w1*x1 + w2*x2
    if yin>theta
        y(i)=1;
    end
    if yin<=theta && yin>=-theta
        y(i)=0;
    end
    if yin<-theta
        y(i)=-1;
    end
end
y
OUTPUT: 1 -1 1 -1
x1    x2    Target    Actual Output
 1     1      1           1
 1    -1     -1          -1
-1     1     -1           1
-1    -1     -1          -1
ADALINE
 In 1960, Widrow and Hoff developed ADALINE.
 It uses Bipolar (+1 or -1) activations for its input signals and target output.
 Weights and bias are updated using Delta Rule.
wi (new) = wi (old) + Δwi
b (new) = b (old) + Δb
where Δwi = α (t - y-in) xi
Δb = α (t - y-in)
t is the target output value for the current training example
y-in is the net input of the output unit
α is the learning rate
ADALINE Training Algorithm
Step1. Initialize weights and bias.
Set weights and bias to small random values
Set Learning rate (0 < α ≤ 1)
Step2. While stopping condition is False, do Steps 3-7
Step3. For each training pair (s : t), do Steps 4-6
Step 4. Set activation of input units, i = 1, …..,n
xi = si
Step 5. Compute net input of output unit
y-in = b + w1x1 + … + wnxn
Step.6 Update Weight and bias
wi (new) = wi (old) + α(t-y-in) xi
b (new ) = b (old) + α(t-y-in)
Step 7. Test stopping condition
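A minimal MATLAB sketch of this ADALINE training loop for the AND data used earlier (the initial weights, learning rate, and fixed epoch count are assumed values):

x = [1 1 -1 -1; 1 -1 1 -1];     % bipolar input patterns (columns)
t = [1 -1 -1 -1];               % bipolar AND targets
w = [0.1 0.1]; b = 0.1;         % small initial weights and bias (assumed)
alpha = 0.1;                    % assumed learning rate
for epoch = 1:50                % assumed fixed number of epochs as the stopping condition
    for i = 1:4
        yin = b + w * x(:,i);                    % net input y-in = b + w1*x1 + w2*x2
        w = w + alpha * (t(i) - yin) * x(:,i)';  % delta rule: wi(new) = wi(old) + alpha*(t - y-in)*xi
        b = b + alpha * (t(i) - yin);            % bias update
    end
end
disp(w); disp(b);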
ADALINE Testing Algorithm
Step1. Set calculated weights from training algorithm
Set Learning rate (0 < α ≤ 1)
Step2. For each input and target (s : t), do Steps 3-5
Step 3. Set activation of input units, i = 1, …..,n
xi = si
Step 4. Compute response of output unit
y-in = b + w1x1 + … + wnxn
y = f (y-in)
Step 5. Calculate error
E=(t – y)
y = f(y-in) =  1   if y-in ≥ 0
              -1   if y-in < 0
MADALINE
 In 1960, Widrow & Hoff developed MADALINE.
 Many ADALINES arranged in a multilayer net.
 A MADALINE with two hidden ADALINES and one output ADALINE.
 MADALINE uses Bipolar (+1 or -1) activations for its input signals and target
output.
 Weights and bias on output ADALINE are fixed.
 Weights and bias on hidden ADALINES are updated using Widrow & Hoff rule.
MADALINE
 Activation Function
 Weights and bias on output ADALINE are fixed: v1 = v2 = b3 = 0.5
 Weights and bias on hidden ADALINES are updated using Widrow & Hoff rule:
If t = y,
then, no weights and bias are updated
Otherwise
If t = 1,
then, weights and bias are updated on zJ (the unit whose net input is closest to 0)
wiJ (new) = wiJ (old) + α (t –z-inJ)xi
bJ (new ) = bJ (old) + α (t –z-inJ)
If t = -1,
then, weights and bias are updated on zK (each unit whose net input is positive)
wiK (new) = wiK (old) + α (t –z-inK)xi
bK (new ) = bK (old) + α (t –z-inK)
y = f(y-in) =  1   if y-in ≥ 0
              -1   if y-in < 0
MADALINE Training Algorithm
Step1. Initialize weights and bias.
Set weights and bias on output units to v1 = v2 = b3 = 0.5
Set weights and bias on hidden ADALINES to small random values
Set Learning rate (0 < α ≤ 1)
Step2. While stopping condition is False, do Steps 3-10
Step3. For each training pair (s : t), do Steps 4-9
Step 4. Set activation of input units, i = 1, …..,n
xi = si
Step 5. Compute net input of each hidden ADALINE unit
z-in1 = b1 + w11x1 +w21x2
z-in2 = b2 + w12x1 +w22x2
Step 6. Determine output of each hidden ADALINE unit
z1 = f (z-in1)
z2 = f (z-in2)
MADALINE Training Algorithm
Step 7. Compute net input of the output ADALINE unit
y-in = b3 + v1z1 +v2z2
Step 8. Determine output of the output ADALINE unit
y = f (y-in)
Step9. Update Weights and bias using Widrow & Hoff rule:
If t = y, then, no weights and bias are updated
Otherwise
If t = 1, then, weights and bias are updated on zJ
wiJ (new) = wiJ (old) + α (1 –z-inJ)xi
bJ (new) = bJ (old) + α (1 - z-inJ)
If t = -1, then, weights and bias are updated on zK
wiK (new) = wiK (old) + α (-1 –z-inK)xi
bK (new ) = bK (old) + α (-1 –z-inK)
Step10. Test stopping condition
MADALINE Testing Algorithm
Step1. Set calculated weights from training algorithm
Set Learning rate (0 < α ≤ 1)
Step2. For each input and target (s : t), do Steps 3-7
Step 3. Set activation of input units, i = 1, …..,n
xi = si
Step 4. Compute net input of each hidden ADALINE unit
z-in1 = b1 + w11x1 +w21x2
z-in2 = b2 + w12x1 +w22x2
Step 5. Determine output of each hidden ADALINE unit
z1 = f (z-in1)
z2 = f (z-in2)
Step 6. Compute response of output unit
y-in = b3 + v1z1 +v2z2
y = f (y-in)
Step 7. Calculate error
E=(t – y)
y = f(y-in) =  1   if y-in ≥ 0
              -1   if y-in < 0
MADALINE Training Algorithm for XOR function
Step 1.
w11=0.05, w21=0.2,b1=0.3
w12=0.1,w22=0.2, b2=0.15
v1 = v2 = b3 = 0.5
α=0.5
Step2. Begin Training, do Steps 3-10
Step3. For 1st training pair (s : t) = (1 1:-1), do Steps 4-9
Step 4. Activation of input units, i = 1, 2
xi = si
x1 = 1, x2 = 1
Step 5. Compute net input of each hidden ADALINE unit
z-in1 = b1 + w11x1 + w21x2 = 0.3 + 0.05 + 0.2 = 0.55
z-in2 = b2 + w12x1 + w22x2 = 0.15 + 0.1 + 0.2 = 0.45
Step 6. Determine output of each hidden ADALINE unit
z1 = f (z-in1) z1 = 1
z2 = f (z-in2) z2 = 1
Input          Target
s1    s2         t
 1     1        -1
 1    -1         1
-1     1         1
-1    -1        -1
MADALINE Training Algorithm for XOR function
Step 7. Compute net input of the output ADALINE unit
y-in = b3 + v1z1 +v2z2 y-in = 0.5 + 0.5 +0.5=1.5
Step 8. Determine output of the output ADALINE unit
y = f (y-in) y = 1
Step 9. Update weights and bias because an error occurred (t - y = -1 - 1 = -2)
If t = -1, then weights and bias are updated on zK
(each unit whose net input is positive; here both z-in1 and z-in2 are positive)
wiK (new) = wiK (old) + α (-1 - z-inK)xi
bK (new) = bK (old) + α (-1 - z-inK)
b1 (new) = b1 (old) + α (-1 - z-in1) = 0.3 + 0.5(-1 - 0.55) = -0.475
w11 (new) = w11 (old) + α (-1 - z-in1) x1 = 0.05 + 0.5(-1 - 0.55)(1) = -0.725
Similarly
w21(new) = -0.575, b2(new) = -0.575
w12(new) = -0.625, w22(new) = -0.525
Step10. Test stopping condition
MADALINE Training Algorithm for XOR function
After 1st Training pair of 1st Iteration, New Weights and bias
w11= -0.725, w21= -0.575, b1= -0.475
w12= -0.625, w22= -0.525, b2= -0.575
These weights and bias are used for 2nd training pair (1 -1: 1) in 1st iteration to get
new weights and bias.
New weights and bias obtained from 2nd training pair are used for 3rd training pair
(-1 1: 1) in 1st iteration to get new weights and bias.
New weights and bias obtained from 3rd training pair are used for 4th training pair
(-1 -1: -1) in 1st iteration and get new weights and bias.
Thus the 1st iteration is completed.
The weights and bias obtained in the 1st iteration (from the 4th training pair) are used
for the 1st training pair in the 2nd iteration to get new weights and bias.
Step 10. Test stopping condition
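A minimal MATLAB sketch of this MADALINE (MRI) training procedure for XOR, seeded with the initial weights, biases, and learning rate from the worked example above (the fixed epoch count used as the stopping condition is an assumption):

x = [1 1 -1 -1; 1 -1 1 -1];     % input patterns (columns), s1 and s2
t = [-1 1 1 -1];                % bipolar XOR targets
w = [0.05 0.1; 0.2 0.2];        % hidden weights, w(i,j) connects xi to zj
b = [0.3 0.15];                 % hidden biases b1, b2
v = [0.5 0.5]; b3 = 0.5;        % fixed output weights and bias
alpha = 0.5;                    % learning rate from the worked example
f = @(s) 2*(s >= 0) - 1;        % bipolar activation: +1 if s >= 0, else -1
for epoch = 1:100               % assumed fixed number of epochs
    for p = 1:4
        xi = x(:,p);
        zin = b' + w' * xi;     % net inputs of the hidden ADALINEs
        z = f(zin);             % hidden outputs z1, z2
        y = f(b3 + v * z);      % response of the output ADALINE
        if y ~= t(p)
            if t(p) == 1        % update the unit whose net input is closest to 0
                [~, J] = min(abs(zin));
                w(:,J) = w(:,J) + alpha * (1 - zin(J)) * xi;
                b(J)   = b(J)   + alpha * (1 - zin(J));
            else                % t = -1: update every unit with positive net input
                for K = find(zin' > 0)
                    w(:,K) = w(:,K) + alpha * (-1 - zin(K)) * xi;
                    b(K)   = b(K)   + alpha * (-1 - zin(K));
                end
            end
        end
    end
end
disp(w); disp(b);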
Thanks