UNIT – II
ARTIFICIAL NEURAL NETWORKS
 Basics of ANN
 Comparison between Artificial and Biological Neural Networks
 Basic Building Blocks of ANN
 Artificial Neural Network Terminologies
 McCulloch Pitts Neuron Model
 Learning Rules
 ADALINE and MADALINE Models
 Perceptron Networks
 Back Propagation Neural Networks
 Associative Memories.
 Neural networks are parallel computing devices, which is basically an attempt to
make a computer model of the brain.
 The main objective is to develop a system to perform various computational
tasks faster than the traditional systems.
 These tasks include pattern recognition and classification, approximation,
optimization, and data clustering.
BASICS OF ANN
DEFINITION:-
A neural network is a highly interconnected network of a large number
of processing elements called neurons, in an architecture inspired by the brain.
BIOLOGICAL NEURONS
 A human brain consists of approximately 10¹¹ computing elements called
neurons.
 They communicate through a connection network of axons and synapses having
a density of approximately 10⁴ synapses per neuron.
 The elementary nerve cell, called a neuron, is the fundamental building block of
the biological neural network.
 Its schematic diagram is shown in Figure.
A typical cell has four major regions:
 Cell body (which is also called the soma)
 Axon
 Synapses
 Dendrites.
MODEL OF ARTIFICIAL NEURAL NETWORK
The output can be calculated by applying the activation function over
the net input.
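As a minimal sketch of this computation (function and variable names here are illustrative, not from the original), the net input is the bias plus the weighted sum of the inputs, and the output is the activation function applied to it:

```python
# Minimal sketch of a single artificial neuron (illustrative names).
def neuron_output(inputs, weights, bias, activation):
    # Net input: bias plus the weighted sum of the inputs.
    net = bias + sum(x * w for x, w in zip(inputs, weights))
    # Output: activation function applied over the net input.
    return activation(net)

# Example with a binary step activation (threshold 0).
step = lambda net: 1 if net >= 0 else 0
print(neuron_output([1, 0, 1], [0.5, -0.3, 0.8], bias=-1.0, activation=step))  # -> 1
```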
BASIC BUILDING BLOCKS OF ANN
ANN depends upon the following three building blocks:-
 Network Architecture
 Adjustments of Weights or Learning
 Activation Functions
NETWORK ARCHITECTURE
 A network topology is the arrangement of a network along with its nodes and
connecting lines.
 According to the topology, ANN can be classified as the following types:
1) Single layer feedforward network
2) Multilayer feedforward network
3) Recurrent network
SINGLE LAYER FEEDFORWARD NETWORK
 This is a feedforward ANN having only one weighted layer.
 In other words, the input layer is directly and fully connected to the output layer.
MULTILAYER FEEDFORWARD NETWORK
 This is a feedforward ANN having more than one weighted layer.
 As this network has one or more layers between the input and the output layer,
these intermediate layers are called hidden layers.
RECURRENT NETWORKS
They are feedback networks with closed loops.
ADJUSTMENTS OF WEIGHTS OR LEARNING
 Learning, in an artificial neural network, is the method of modifying the weights
of the connections between the neurons of a specified network.
 Learning in ANN can be classified into three categories namely:
1) supervised learning,
2) unsupervised learning,
3) reinforcement learning.
SUPERVISED LEARNING
As the name suggests, this type of learning is done under the supervision
of a teacher: the learning process depends on target outputs supplied for each training input.
UNSUPERVISED LEARNING
As the name suggests, this type of learning is done without the
supervision of a teacher: the learning process is independent, and the network must discover structure in the inputs on its own.
REINFORCEMENT LEARNING
 As the name suggests, this type of learning is used to reinforce or strengthen
the network based on some critic information.
 This learning process is similar to supervised learning; however, we may have
much less information: an evaluative critic signal rather than exact target values.
ACTIVATION FUNCTIONS
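The functions themselves appear only as figures in the original slides. As a rough sketch, assuming the choices usually covered at this point (binary step, bipolar step, binary sigmoid, and bipolar sigmoid):

```python
import math

def binary_step(net, theta=0.0):
    return 1 if net >= theta else 0          # output in {0, 1}

def bipolar_step(net, theta=0.0):
    return 1 if net >= theta else -1         # output in {-1, 1}

def binary_sigmoid(net, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * net))        # smooth output in (0, 1)

def bipolar_sigmoid(net, lam=1.0):
    return 2.0 / (1.0 + math.exp(-lam * net)) - 1.0  # smooth output in (-1, 1)
```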
ARTIFICIAL NEURAL NETWORK TERMINOLOGIES
1) Weights
2) Bias
3) Threshold
4) Learning rate
5) Notations
MCCULLOCH-PITTS NEURON MODEL
 The first computational model of a neuron was proposed by Warren McCulloch
(neuroscientist) and Walter Pitts (logician) in 1943.
 The McCulloch-Pitts model was an extremely simple artificial neuron.
 The inputs could be either a zero or a one, and the output was likewise a zero
or a one.
 Each input could be either excitatory or inhibitory, and the model simply sums
the inputs.
 An excitatory input of one adds one to the sum; an inhibitory input of one
subtracts one. This is done for all inputs, and a final sum is calculated.
 If this final sum is less than some chosen value (the threshold, say T), then
the output is zero. Otherwise, the output is one.
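A minimal sketch of the model just described (names are illustrative):

```python
def mcculloch_pitts(inputs, kinds, T):
    """inputs: 0/1 values; kinds: '+' (excitatory) or '-' (inhibitory);
    T: the threshold you decide."""
    # Excitatory ones add 1 to the sum; inhibitory ones subtract 1.
    total = sum(x if k == '+' else -x for x, k in zip(inputs, kinds))
    # Output is zero if the sum is below T, one otherwise.
    return 0 if total < T else 1

# Two excitatory inputs with T = 2 realize the AND function.
print(mcculloch_pitts([1, 1], ['+', '+'], T=2))  # -> 1
print(mcculloch_pitts([1, 0], ['+', '+'], T=2))  # -> 0
```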
NEURAL NETWORK LEARNING RULES
1) Hebbian Learning Rule
2) Perceptron Learning Rule
3) Delta Learning Rule (or) Widrow−Hoff Rule
4) Competitive Learning Rule (or) Winner−takes−all rule
5) Outstar Learning Rule
HEBBIAN LEARNING RULE
 This rule, one of the oldest and simplest, was introduced by Donald Hebb.
 It is a kind of feed-forward, unsupervised learning.
 “When an axon of cell A is near enough to excite a cell B and repeatedly or
persistently takes part in firing it, some growth process or metabolic change takes
place in one or both cells such that A’s efficiency, as one of the cells firing B, is
increased.”
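In update form, Hebb's statement is usually written as Δwᵢ = η·xᵢ·y: a connection is strengthened when its input and the neuron's output are active together. A small sketch under that assumption:

```python
def hebbian_update(weights, x, y, eta=0.1):
    # Strengthen each weight in proportion to the product of the
    # presynaptic activity x[i] and the postsynaptic activity y.
    return [w + eta * xi * y for w, xi in zip(weights, x)]
```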
PERCEPTRON LEARNING RULE
 This rule, introduced by Rosenblatt, is an error-correcting supervised learning
algorithm for single-layer feedforward networks with a linear activation function.
 Being supervised in nature, the error is calculated by comparing the
desired/target output with the actual output.
 If any difference is found, a change must be made to the weights of the
connections.
DELTA LEARNING RULE (OR) WIDROW−HOFF RULE
 Introduced by Bernard Widrow and Marcian Hoff, it is also called the Least Mean
Square (LMS) method; it minimizes the error over all training patterns.
 It is a kind of supervised learning algorithm for units with a continuous
activation function.
 This rule is based on the gradient-descent approach, applied iteratively.
The delta rule updates the synaptic weights so as to minimize the difference
between the net input to the output unit and the target value.
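Written out, the rule adjusts each weight in proportion to that difference: Δwᵢ = α(t − y_in)·xᵢ. A sketch of one update (illustrative names):

```python
def delta_rule_step(weights, bias, x, t, alpha=0.1):
    # Net input of the linear output unit.
    y_in = bias + sum(w * xi for w, xi in zip(weights, x))
    error = t - y_in   # difference between target and net input
    # Gradient-descent step on the squared error.
    weights = [w + alpha * error * xi for w, xi in zip(weights, x)]
    bias = bias + alpha * error
    return weights, bias
```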
COMPETITIVE LEARNING RULE
(OR)
WINNER−TAKES−ALL
 It is concerned with unsupervised training in which the output nodes try to
compete with each other to represent the input pattern.
 This network is just like a single layer feedforward network with feedback
connection between outputs.
 The connections between outputs are of the inhibitory type (shown by dotted lines
in the usual diagram), which means the competitors never support one another.
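A sketch of one competitive step under the usual formulation (assumed here, not spelled out in the slides): the output unit whose weight vector is closest to the input wins, and only the winner's weights move toward the input, Δw_winner = α(x − w_winner):

```python
def winner_takes_all_step(W, x, alpha=0.5):
    # W holds one weight vector per output unit; the unit closest to x wins.
    dist = lambda w: sum((wi - xi) ** 2 for wi, xi in zip(w, x))
    j = min(range(len(W)), key=lambda k: dist(W[k]))
    # Only the winner is updated; the inhibited losers stay unchanged.
    W[j] = [wi + alpha * (xi - wi) for wi, xi in zip(W[j], x)]
    return j, W
```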
OUTSTAR LEARNING RULE
 This rule, introduced by Grossberg, is concerned with supervised learning
because the desired outputs are known. It is also called Grossberg learning.
 This rule is applied over the neurons arranged in a layer.
 It is specially designed to produce a desired output d of the layer of p neurons.
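In update form, the outstar rule is commonly written Δw_k = α(d_k − w_k): the weights fanning out from a node are driven toward the desired outputs of the layer. A sketch under that assumption:

```python
def outstar_update(w, d, alpha=0.1):
    # w: weights from one source node to the p neurons of the layer;
    # d: desired output of the layer. The weights move toward d.
    return [wk + alpha * (dk - wk) for wk, dk in zip(w, d)]
```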
PERCEPTRON
 Developed by Frank Rosenblatt using the McCulloch-Pitts model, the
perceptron is the basic operational unit of artificial neural networks.
 It employs a supervised learning rule and is able to classify data into two
classes.
 Operational characteristics of the perceptron: It consists of a single neuron with
an arbitrary number of inputs along with adjustable weights, but the output of the
neuron is 1 or 0 depending upon the threshold.
 It also consists of a bias whose weight is always 1.
Perceptron thus has the following three basic elements −
Links − A set of connection links, each of which carries a weight,
including a bias link whose weight is always 1.
Adder − It sums the inputs after they are multiplied by their respective
weights.
Activation function − It limits the output of the neuron. The most basic
activation function is a Heaviside step function that has two possible outputs.
This function returns 1 if the input is positive, and 0 for any negative input.
Perceptron network can be trained for single output unit as well as
multiple output units.
Training Algorithm for Single Output Unit
Step 1 − Initialize the following to start the training −
• Weights
• Bias
• Learning rate α
For easy calculation and simplicity, the weights and bias may be set equal to 0
and the learning rate set equal to 1.
Step 2 − Continue steps 3−8 while the stopping condition is not true.
Step 3 − Continue steps 4−6 for every training vector x.
Step 4 − Activate each input unit as follows −
Step 5 − Now obtain the net input with the following relation −
Here ‘b’ is bias and ‘n’ is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output.
Step 7 − Adjust the weight and bias as follows −
Case 1 − if y ≠ t then,
Case 2 − if y = t then,
Here ‘y’ is the actual output and ‘t’ is the desired/target output.
Step 8 − Test for the stopping condition, which would happen when there is no
change in weight.
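The update equations on the original slides were images and did not survive extraction. The sketch below follows Steps 1−8 under the standard formulation (assumed here): xᵢ = sᵢ, y_in = b + Σ xᵢwᵢ, a step activation with threshold θ, and on error wᵢ(new) = wᵢ(old) + α·t·xᵢ, b(new) = b(old) + α·t:

```python
def train_perceptron(samples, n, alpha=1.0, theta=0.0):
    """samples: (s, t) pairs with s an n-vector and t the target (sketch)."""
    w, b = [0.0] * n, 0.0                      # Step 1: zero weights and bias
    changed = True
    while changed:                             # Step 2: stopping condition
        changed = False
        for s, t in samples:                   # Step 3: each training vector
            x = s                              # Step 4: activate input units
            y_in = b + sum(xi * wi for xi, wi in zip(x, w))   # Step 5
            # Step 6: step activation with threshold theta
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
            if y != t:                         # Step 7, Case 1: update
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b = b + alpha * t
                changed = True                 # Case 2 (y == t): no change
    return w, b                                # Step 8: stop when no change
```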
Training Algorithm for Multiple Output Units
Step 1 − Initialize the following to start the training −
• Weights
• Bias
• Learning rate α
For easy calculation and simplicity, the weights and bias may be set equal to 0 and the
learning rate set equal to 1.
Step 2 − Continue steps 3−8 while the stopping condition is not true.
Step 3 − Continue steps 4−6 for every training vector x.
Step 4 − Activate each input unit as follows −
Step 5 − Obtain the net input with the following relation −
Here ‘b’ is bias and ‘n’ is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output for each
output unit j = 1 to m −
Step 7 − Adjust the weight and bias for x = 1 to n and j = 1 to m as follows −
Case 1 − if yj ≠ tj then,
Case 2 − if yj = tj then,
Here ‘y’ is the actual output and ‘t’ is the desired/target output.
Step 8 − Test for the stopping condition, which will happen when there is no
change in weight.
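For multiple output units the same loop runs once per output j, with weights w_ij and biases b_j updated independently. A compact sketch under the same assumptions as the single-output version above:

```python
def train_perceptron_multi(samples, n, m, alpha=1.0, theta=0.0):
    """samples: (s, t) pairs, s an n-vector, t an m-vector of targets."""
    W = [[0.0] * m for _ in range(n)]          # W[i][j]: input i -> output j
    b = [0.0] * m
    changed = True
    while changed:
        changed = False
        for x, t in samples:
            for j in range(m):                 # each output unit separately
                y_in = b[j] + sum(x[i] * W[i][j] for i in range(n))
                y = 1 if y_in > theta else (-1 if y_in < -theta else 0)
                if y != t[j]:
                    for i in range(n):
                        W[i][j] += alpha * t[j] * x[i]
                    b[j] += alpha * t[j]
                    changed = True
    return W, b
```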
ADAPTIVE LINEAR NEURON (ADALINE)
 Adaline, which stands for Adaptive Linear Neuron, is a network having a single
linear unit.
 It was developed by Widrow and Hoff in 1960. Some important points about
Adaline are as follows −
 It uses a bipolar activation function.
 It uses the delta rule for training to minimize the Mean Squared
Error (MSE) between the actual output and the desired/target
output.
 The weights and the bias are adjustable.
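A sketch of one Adaline training epoch with the delta (LMS) rule, assuming bipolar inputs and targets. Note that the linear net input, not the thresholded output, drives the update:

```python
def adaline_epoch(samples, w, b, alpha=0.1):
    """One LMS pass; samples are (x, t) pairs with bipolar targets t."""
    for x, t in samples:
        y_in = b + sum(wi * xi for wi, xi in zip(w, x))   # linear unit
        err = t - y_in                   # error before the bipolar step
        w = [wi + alpha * err * xi for wi, xi in zip(w, x)]
        b += alpha * err
    return w, b

# Prediction applies the bipolar activation to the trained linear unit.
predict = lambda x, w, b: 1 if b + sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1
```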
MULTIPLE ADAPTIVE LINEAR NEURON
(MADALINE)
 Madaline, which stands for Multiple Adaptive Linear Neuron, is a network
which consists of many Adalines in parallel.
 It has a single output unit.
 Some important points about Madaline are as follows −
1) It is just like a multilayer perceptron, where the Adalines act
as hidden units between the input and the Madaline layer.
2) The weights and the bias between the input and Adaline
layers, as in the Adaline architecture, are adjustable.
3) The Adaline and Madaline layers have fixed weights and a bias
of 1.
Training can be done with the help of the delta rule.
BACK PROPAGATION NEURAL NETWORKS
 A Back Propagation Network (BPN) is a multilayer neural network consisting of an
input layer, at least one hidden layer, and an output layer.
 As its name suggests, back-propagation of error takes place in this network.
 The error, which is calculated at the output layer by comparing the target output
and the actual output, is propagated back towards the input layer.
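A minimal sketch of one back-propagation step for a network with a single hidden layer and sigmoid activations (shapes and names are illustrative): the error is computed at the output layer and then propagated back to update both weight layers:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, t, W1, b1, W2, b2, alpha=0.5):
    """One gradient step; x: input, t: target, W1: (h, n), W2: (o, h)."""
    h = sigmoid(W1 @ x + b1)                      # forward: hidden activations
    y = sigmoid(W2 @ h + b2)                      # forward: actual output
    delta_out = (t - y) * y * (1 - y)             # error at the output layer
    delta_hid = (W2.T @ delta_out) * h * (1 - h)  # error propagated back
    W2 += alpha * np.outer(delta_out, h); b2 += alpha * delta_out
    W1 += alpha * np.outer(delta_hid, x); b1 += alpha * delta_hid
    return y
```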
ASSOCIATIVE MEMORY NETWORK
 These kinds of neural networks work on the basis of pattern association,
which means they can store different patterns, and when producing an
output they recall the stored pattern that best matches the given input
pattern.
 These types of memories are also called Content-Addressable
Memory (CAM).
 Associative memory makes a parallel search with the stored patterns as data
files.
Following are the two types of associative memories we can observe −
 Auto Associative Memory
 Hetero Associative memory
AUTO ASSOCIATIVE MEMORY
 This is a single layer neural network in which the input training vector and the
output target vectors are the same.
 The weights are determined so that the network stores a set of patterns.
HETERO ASSOCIATIVE MEMORY
 Similar to Auto Associative Memory network, this is also a single layer neural
network.
 However, in this network the input training vector and the output target vectors
are not the same.
 The weights are determined so that the network stores a set of patterns.
 The hetero associative network is static in nature; hence, there are no non-
linear or delay operations.
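A sketch of Hebbian (outer-product) weight determination for both memory types, assuming bipolar patterns: an auto-associative memory stores each pattern against itself (W = Σ s·sᵀ), a hetero-associative memory against a different target (W = Σ s·tᵀ):

```python
import numpy as np

def hebb_weights(inputs, targets):
    # Pass targets = inputs for an auto-associative memory.
    return sum(np.outer(s, t) for s, t in zip(inputs, targets))

def recall(W, x):
    # Net input to each output unit, thresholded to a bipolar pattern.
    return np.where(W.T @ x >= 0, 1, -1)
```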
BIDIRECTIONAL ASSOCIATIVE MEMORY (BAM)
 Bidirectional Associative Memory (BAM) is a supervised learning model in
Artificial Neural Network.
 It is a hetero-associative memory: for an input pattern, it returns another
pattern which is potentially of a different size.
 This phenomenon is very similar to the human brain. Human memory is
necessarily associative.
 It uses a chain of mental associations to recover a lost memory, such as
associations of faces with names, or exam questions with answers.
 To associate one type of object with another in such a memory, a recurrent
neural network is needed that receives a pattern over one set of neurons as input and
generates a related, but different, output pattern over another set of neurons.
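A sketch of BAM storage and bidirectional recall, assuming bipolar pattern pairs: the weights are W = Σ x·yᵀ, and recall alternates between the two layers until the pair stabilizes:

```python
import numpy as np

sign = lambda v: np.where(v >= 0, 1, -1)

def bam_weights(pairs):
    # pairs: (x, y) bipolar vectors; x and y may have different sizes.
    return sum(np.outer(x, y) for x, y in pairs)

def bam_recall(W, x, steps=10):
    y = sign(W.T @ x)          # forward pass: x-layer -> y-layer
    for _ in range(steps):
        x = sign(W @ y)        # backward pass: y-layer -> x-layer
        y = sign(W.T @ x)      # forward pass again
    return x, y
```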
HOPFIELD NETWORKS
 The Hopfield neural network was introduced by John J. Hopfield in 1982.
 It consists of a single layer which contains one or more fully connected
recurrent neurons.
 The Hopfield network is commonly used for auto-association and optimization
tasks.
DISCRETE HOPFIELD NETWORK
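The slide under this heading did not survive extraction. As a sketch of the usual formulation (assumed here), a discrete Hopfield network stores bipolar patterns with zero-diagonal outer-product weights and recalls by asynchronous threshold updates:

```python
import numpy as np

def hopfield_weights(patterns):
    # Outer-product storage; the diagonal is zeroed (no self-connections).
    n = len(patterns[0])
    return sum(np.outer(p, p) for p in patterns) - len(patterns) * np.eye(n)

def hopfield_recall(W, x, sweeps=5):
    x = np.array(x, dtype=float)
    for _ in range(sweeps):
        for i in np.random.permutation(len(x)):   # asynchronous updates
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x
```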
CONTINUOUS HOPFIELD NETWORK
 In comparison with the discrete Hopfield network, the continuous network treats
time as a continuous variable.
 It is also used in auto-association and optimization problems such as the
travelling salesman problem.