UNIT – II
ARTIFICIAL NEURAL NETWORKS
 Basics of ANN
 Comparison between Artificial and Biological Neural Networks
 Basic Building Blocks of ANN
 Artificial Neural Network Terminologies
 McCulloch Pitts Neuron Model
 Learning Rules
 ADALINE and MADALINE Models
 Perceptron Networks
 Back Propagation Neural Networks
 Associative Memories.
 Neural networks are parallel computing devices; in essence, they are an attempt to build a computer model of the brain.
 The main objective is to develop a system to perform various computational
tasks faster than the traditional systems.
 These tasks include pattern recognition and classification, approximation,
optimization, and data clustering.
BASICS OF ANN
DEFINITION:
A neural network is a highly interconnected network of a large number
of processing elements called neurons, in an architecture inspired by the brain.
BIOLOGICAL NEURONS
 A human brain consists of approximately 10^11 computing elements called neurons.
 They communicate through a connection network of axons and synapses having a density of approximately 10^4 synapses per neuron.
 The elementary nerve cell, called a neuron, is the fundamental building block of
the biological neural network.
 Its schematic diagram is shown in Figure.
A typical cell has four major regions:
 Cell body (which is also called the soma)
 Axon
 Synapses
 Dendrites.
MODEL OF ARTIFICIAL NEURAL NETWORK
The output can be calculated by applying the activation function over
the net input.
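As a minimal sketch of this calculation (the weights, inputs, and step activation below are illustrative assumptions, not values from the slides), the net input is the weighted sum of the inputs plus the bias, and the output is the activation function applied over it:

```python
def neuron_output(x, w, b=0.0):
    # net input: bias plus the weighted sum of the inputs
    net = b + sum(xi * wi for xi, wi in zip(x, w))
    # activation (binary step, as an example) applied over the net input
    return 1 if net >= 0 else 0

print(neuron_output([1, 0, 1], [0.5, -0.3, 0.2]))  # -> 1 (net = 0.7)
```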
BASIC BUILDING BLOCKS OF ANN
An ANN is characterized by the following three building blocks:
 Network Architecture
 Adjustments of Weights or Learning
 Activation Functions
NETWORK ARCHITECTURE
 A network topology is the arrangement of a network along with its nodes and
connecting lines.
 According to the topology, ANN can be classified as the following types:
1) Single layer feedforward network
2) Multilayer feedforward network
3) Recurrent network
SINGLE LAYER FEEDFORWARD NETWORK
 This is a feedforward ANN with only one weighted layer.
 In other words, the input layer is fully connected to the output layer.
MULTILAYER FEEDFORWARD NETWORK
 This is a feedforward ANN with more than one weighted layer.
 As this network has one or more layers between the input and the output layer, these are called hidden layers.
RECURRENT NETWORKS
They are feedback networks with closed loops.
ADJUSTMENTS OF WEIGHTS OR LEARNING
 Learning, in an artificial neural network, is the process of modifying the weights of the connections between the neurons of a given network.
 Learning in ANN can be classified into three categories namely:
1) supervised learning,
2) unsupervised learning,
3) reinforcement learning.
SUPERVISED LEARNING
As the name suggests, this type of learning is done under the supervision of a teacher: the network is trained with input vectors together with their desired/target outputs.
UNSUPERVISED LEARNING
As the name suggests, this type of learning is done without the supervision of a teacher: the network is trained with input vectors only, and must discover patterns in the data on its own.
REINFORCEMENT LEARNING
 As the name suggests, this type of learning is used to reinforce or strengthen the network based on some critic information.
 This learning process is similar to supervised learning; however, much less information is available.
ACTIVATION FUNCTIONS
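The activation functions most commonly covered at this point are the binary step, bipolar step, sigmoid, and tanh functions; as an illustrative sketch (the threshold parameter is an assumption):

```python
import math

def binary_step(x, theta=0.0):   # outputs 0 or 1
    return 1 if x >= theta else 0

def bipolar_step(x, theta=0.0):  # outputs -1 or 1
    return 1 if x >= theta else -1

def sigmoid(x):                  # smooth, range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):                     # smooth, range (-1, 1)
    return math.tanh(x)
```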
ARTIFICIAL NEURAL NETWORK TERMINOLOGIES
1) Weights
2) Bias
3) Threshold
4) Learning rate
5) Notations
MCCULLOCH-PITTS NEURON MODEL
 The first computational model of a neuron was proposed by Warren McCulloch (neuroscientist) and Walter Pitts (logician) in 1943.
 The McCulloch-Pitts model was an extremely simple artificial neuron: each input is either a zero or a one, and the output is a zero or a one.
 Each input is either excitatory or inhibitory. The neuron sums its inputs.
 If an input is one and is excitatory in nature, it adds one to the sum; if it is one and inhibitory, it subtracts one. This is done for all inputs, and a final sum is calculated.
 If this final sum is less than some threshold value T (which you choose), the output is zero; otherwise, the output is one.
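The summation-and-threshold behaviour described above can be sketched as follows (the AND-gate example and threshold value are illustrative):

```python
def mcp_neuron(inputs, excitatory, threshold):
    # McCulloch-Pitts neuron: binary inputs; an active excitatory input
    # adds one, an active inhibitory input subtracts one; the neuron
    # fires (outputs 1) only if the sum reaches the threshold T
    s = sum(x if e else -x for x, e in zip(inputs, excitatory))
    return 1 if s >= threshold else 0

# AND gate: two excitatory inputs with threshold T = 2
print(mcp_neuron([1, 1], [True, True], 2))  # -> 1
print(mcp_neuron([1, 0], [True, True], 2))  # -> 0
```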
NEURAL NETWORK LEARNING RULES
1) Hebbian Learning Rule
2) Perceptron Learning Rule
3) Delta Learning Rule (or) Widrow−Hoff Rule
4) Competitive Learning Rule (or) Winner−takes−all rule
5) Outstar Learning Rule
HEBBIAN LEARNING RULE
 This rule, one of the oldest and simplest, was introduced by Donald Hebb.
 It is a kind of feed-forward, unsupervised learning.
 “When an axon of cell A is near enough to excite a cell B and repeatedly or
persistently takes part in firing it, some growth process or metabolic change takes
place in one or both cells such that A’s efficiency, as one of the cells firing B, is
increased.”
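Hebb's postulate translates into the weight update Δwᵢ = η·xᵢ·y: a weight grows when its input and the output are active together. A minimal sketch (the bipolar AND data and learning rate are illustrative assumptions):

```python
def hebb_update(w, x, y, eta=1.0):
    # Hebb rule: delta_w_i = eta * x_i * y -- a weight grows when
    # its input and the output are active together
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

# one pass over bipolar AND data: weights end up reflecting the
# correlation between each input and the output
w = [0.0, 0.0]
for x, y in [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]:
    w = hebb_update(w, x, y)
print(w)  # -> [2.0, 2.0]
```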
PERCEPTRON LEARNING RULE
 This rule is an error-correcting, supervised learning algorithm for single-layer feedforward networks with a linear activation function, introduced by Rosenblatt.
 Being supervised in nature, the error is calculated by comparing the desired/target output with the actual output.
 If any difference is found, the connection weights are adjusted accordingly.
DELTA LEARNING RULE (OR) WIDROW−HOFF RULE
 Introduced by Bernard Widrow and Marcian Hoff, this rule is also called the Least Mean Square (LMS) method; it minimizes the error over all training patterns.
 It is a supervised learning algorithm that uses a continuous activation function.
 The rule is based on the gradient-descent approach. The delta rule updates the synaptic weights so as to minimize the difference between the net input to the output unit and the target value.
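One Widrow-Hoff step can be sketched as follows (the inputs, target, and learning rate in the example are illustrative assumptions):

```python
def delta_update(w, b, x, t, eta=0.1):
    # One Widrow-Hoff (LMS) step: gradient descent on (t - yin)^2,
    # where yin = b + sum(wi * xi) is the net input to the output unit
    y_in = b + sum(wi * xi for wi, xi in zip(w, x))
    err = t - y_in                       # target minus net input
    w = [wi + eta * err * xi for wi, xi in zip(w, x)]
    b = b + eta * err
    return w, b

w, b = delta_update([0.0, 0.0], 0.0, [1, 1], 1, eta=0.5)
print(w, b)  # -> [0.5, 0.5] 0.5
```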
COMPETITIVE LEARNING RULE
(OR)
WINNER−TAKES−ALL
 It is concerned with unsupervised training, in which the output nodes compete with each other to represent the input pattern.
 This network is just like a single layer feedforward network with feedback
connection between outputs.
 The connections between outputs are inhibitory type, shown by dotted lines,
which means the competitors never support themselves.
OUTSTAR LEARNING RULE
 This rule, introduced by Grossberg, is concerned with supervised learning
because the desired outputs are known. It is also called Grossberg learning.
 This rule is applied over the neurons arranged in a layer.
 It is specially designed to produce a desired output d of the layer of p neurons.
PERCEPTRON
 Developed by Frank Rosenblatt by using McCulloch and Pitts model,
perceptron is the basic operational unit of artificial neural networks.
 It employs a supervised learning rule and is able to classify the data into two classes.
 Operational characteristics of the perceptron: it consists of a single neuron with an arbitrary number of inputs along with adjustable weights, but the output of the neuron is 1 or 0 depending upon the threshold.
 It also has a bias, whose input is always 1 (the bias weight itself is adjustable).
Perceptron thus has the following three basic elements −
Links − A set of connection links, each of which carries a weight, including a bias link whose input is always 1.
Adder − Adds the inputs after they are multiplied by their respective weights.
Activation function − Limits the output of the neuron. The most basic activation function is the Heaviside step function, which has two possible outputs: it returns 1 if the input is positive and 0 for any negative input.
Perceptron network can be trained for single output unit as well as
multiple output units.
Training Algorithm for Single Output Unit
Step 1 − Initialize the following to start the training −
• Weights
• Bias
• Learning rate α
For easy calculation and simplicity, the weights and bias may be initialized to 0 and the learning rate set to 1.
Step 2 − Continue step 3-8 when the stopping condition is not true.
Step 3 − Continue step 4-6 for every training vector x.
Step 4 − Activate each input unit as follows −
Step 5 − Now obtain the net input with the following relation −
yin = b + Σ xi wi (summing over i = 1 to n)
Here 'b' is the bias and 'n' is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output.
Step 7 − Adjust the weight and bias as follows −
Case 1 − if y ≠ t then,
w(new) = w(old) + α t x
b(new) = b(old) + α t
Case 2 − if y = t then, there is no change in weights and bias.
Here 'y' is the actual output and 't' is the desired/target output.
Step 8 − Test for the stopping condition, which would happen when there is no
change in weight.
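Steps 1-8 above can be sketched in code as follows. The bipolar step activation and the bipolar AND training data are illustrative assumptions (the slides leave the activation unspecified); with them, training converges in three epochs:

```python
def train_perceptron(samples, n, eta=1.0, max_epochs=100):
    w, b = [0.0] * n, 0.0                       # Step 1: weights, bias = 0
    for _ in range(max_epochs):                 # Step 2
        changed = False
        for x, t in samples:                    # Steps 3-4
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))  # Step 5
            y = 1 if y_in >= 0 else -1          # Step 6 (bipolar step, assumed)
            if y != t:                          # Step 7, Case 1
                w = [wi + eta * t * xi for wi, xi in zip(w, x)]
                b += eta * t
                changed = True
        if not changed:                         # Step 8: stop when no
            break                               # weight changed
    return w, b

# bipolar AND function
samples = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train_perceptron(samples, n=2)
print(w, b)  # -> [1.0, 1.0] -1.0
```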
Training Algorithm for Multiple Output Units
Step 1 − Initialize the following to start the training −
• Weights
• Bias
• Learning rate α
For easy calculation and simplicity, the weights and bias may be initialized to 0 and the learning rate set to 1.
Step 2 − Continue step 3-8 when the stopping condition is not true.
Step 3 − Continue step 4-6 for every training vector x.
Step 4 − Activate each input unit as follows −
Step 5 − Obtain the net input with the following relation −
yinj = bj + Σ xi wij (summing over i = 1 to n)
Here 'bj' is the bias of output unit j and 'n' is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output for each
output unit j = 1 to m −
Step 7 − Adjust the weight and bias for i = 1 to n and j = 1 to m as follows −
Case 1 − if yj ≠ tj then,
wij(new) = wij(old) + α tj xi
bj(new) = bj(old) + α tj
Case 2 − if yj = tj then, there is no change in weights and bias.
Here 'yj' is the actual output and 'tj' is the desired/target output of unit j.
Step 8 − Test for the stopping condition, which will happen when there is no
change in weight.
ADAPTIVE LINEAR NEURON (ADALINE)
 Adaline, which stands for Adaptive Linear Neuron, is a network having a single linear unit.
 It was developed by Widrow and Hoff in 1960. Some important points about Adaline are as follows −
 It uses a bipolar activation function.
 It uses the delta rule for training to minimize the Mean Squared Error (MSE) between the actual output and the desired/target output.
 The weights and the bias are adjustable.
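A minimal Adaline training sketch using the delta (LMS) rule; the bipolar AND data, learning rate, and epoch count are illustrative assumptions:

```python
def train_adaline(samples, n, eta=0.1, epochs=30):
    # delta (LMS) rule: minimise the squared difference between
    # the net input and the bipolar target, pattern by pattern
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))
            err = t - y_in
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err
    return w, b

# bipolar AND: after training, the sign of the net input gives the class
samples = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train_adaline(samples, n=2)
```

The weights converge towards the least-squares solution (roughly w = [0.5, 0.5], b = -0.5 for this data), so thresholding the net input reproduces the targets.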
MULTIPLE ADAPTIVE LINEAR NEURON
(MADALINE)
 Madaline, which stands for Multiple Adaptive Linear Neuron, is a network consisting of many Adalines in parallel.
 It has a single output unit.
 Some important points about Madaline are as follows −
1) It is just like a multilayer perceptron, where the Adalines act as hidden units between the input and the Madaline (output) layer.
2) The weights and the bias between the input and Adaline layers, as in the Adaline architecture, are adjustable.
3) The weights and bias between the Adaline and Madaline layers are fixed.
Training can be done with the help of the Delta rule.
BACK PROPAGATION NEURAL NETWORKS
 A Back Propagation Network (BPN) is a multilayer neural network consisting of an input layer, at least one hidden layer, and an output layer.
 As its name suggests, back-propagation takes place in this network.
 The error, which is calculated at the output layer by comparing the target output with the actual output, is propagated back towards the input layer.
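A minimal sketch of this forward-then-backward pass, for one hidden layer of sigmoid units trained with squared error. The network sizes, learning rate, epoch count, random seed, and the AND-function training data are all illustrative assumptions:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(V, W, x):
    # forward pass: input -> hidden -> output (last entry of each weight
    # row is the bias weight, with a bias input of 1)
    h = [sigmoid(sum(v[i] * xi for i, xi in enumerate(x)) + v[-1]) for v in V]
    y = sigmoid(sum(W[j] * hj for j, hj in enumerate(h)) + W[-1])
    return h, y

def train_bpn(samples, n_in, n_hid, eta=0.5, epochs=5000, seed=0):
    rnd = random.Random(seed)
    V = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hid)]
    W = [rnd.uniform(-0.5, 0.5) for _ in range(n_hid + 1)]
    for _ in range(epochs):
        for x, t in samples:
            h, y = forward(V, W, x)
            # error term at the output, propagated back to the hidden layer
            dy = (t - y) * y * (1 - y)
            dh = [dy * W[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
            for j in range(n_hid):          # update hidden->output weights
                W[j] += eta * dy * h[j]
            W[-1] += eta * dy
            for j in range(n_hid):          # update input->hidden weights
                for i, xi in enumerate(x):
                    V[j][i] += eta * dh[j] * xi
                V[j][-1] += eta * dh[j]
    return V, W

# train on the AND function (binary targets)
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
V, W = train_bpn(samples, n_in=2, n_hid=2)
```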
ASSOCIATIVE MEMORY NETWORKS
 These kinds of neural networks work on the basis of pattern association, which means they can store different patterns and, when producing an output, yield one of the stored patterns by matching it with the given input pattern.
 These types of memories are also called Content-Addressable
Memory (CAM).
 Associative memory makes a parallel search with the stored patterns as data
files.
Following are the two types of associative memories we can observe −
 Auto Associative Memory
 Hetero Associative memory
AUTO ASSOCIATIVE MEMORY
 This is a single layer neural network in which the input training vector and the
output target vectors are the same.
 The weights are determined so that the network stores a set of patterns.
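The storage and recall just described can be sketched with Hebbian outer-product weights; the 4-unit bipolar pattern and the zero-diagonal convention below are illustrative assumptions:

```python
def store_auto(patterns, n):
    # Hebbian outer-product storage: W[i][j] = sum over patterns of
    # p[i] * p[j], keeping the diagonal at zero (a common convention)
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, x):
    # single feedforward pass: threshold W.x back to a bipolar pattern
    return [1 if sum(W[i][j] * x[j] for j in range(len(x))) >= 0 else -1
            for i in range(len(W))]

p = [1, -1, 1, -1]
W = store_auto([p], 4)
print(recall(W, [1, -1, 1, 1]))  # noisy input -> [1, -1, 1, -1]
```

Note how the noisy input (last bit flipped) is pulled back to the stored pattern.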
HETERO ASSOCIATIVE MEMORY
 Similar to Auto Associative Memory network, this is also a single layer neural
network.
 However, in this network the input training vector and the output target vectors
are not the same.
 The weights are determined so that the network stores a set of patterns.
 A hetero-associative network is static in nature; hence, there are no nonlinear or delay operations.
BIDIRECTIONAL ASSOCIATIVE MEMORY (BAM)
 Bidirectional Associative Memory (BAM) is a supervised learning model in
Artificial Neural Network.
 It is a hetero-associative memory: for an input pattern, it returns another pattern, which is potentially of a different size.
 This phenomenon is very similar to the human brain. Human memory is
necessarily associative.
 It uses a chain of mental associations to recover a lost memory, such as associating faces with names, or exam questions with answers.
 To form such associations of one type of object with another, a recurrent neural network is needed that receives a pattern over one set of neurons as input and generates a related, but different, output pattern over another set of neurons.
HOPFIELD NETWORKS
 Hopfield neural network was invented by Dr. John J. Hopfield in 1982.
 It consists of a single layer which contains one or more fully connected
recurrent neurons.
 The Hopfield network is commonly used for auto-association and optimization
tasks.
DISCRETE HOPFIELD NETWORK
CONTINUOUS HOPFIELD NETWORK
 In comparison with the Discrete Hopfield network, the continuous network has time as a continuous variable.
 It is also used in auto-association and optimization problems such as the travelling salesman problem.
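A sketch of discrete Hopfield storage and recall; the Hebbian outer-product weights, the 5-unit pattern, and the asynchronous update order are illustrative assumptions:

```python
def hopfield_store(patterns, n):
    # Hebbian outer-product storage with a zero diagonal
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def hopfield_recall(W, x, max_sweeps=10):
    # recurrent recall: update units one at a time (asynchronously)
    # until the state stops changing, i.e. a stored pattern is reached
    s = list(x)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            net = sum(W[i][j] * s[j] for j in range(len(s)))
            new = 1 if net >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
        if not changed:
            break
    return s

p = [1, -1, 1, -1, 1]
W = hopfield_store([p], 5)
print(hopfield_recall(W, [1, -1, 1, -1, -1]))  # -> [1, -1, 1, -1, 1]
```

Starting from a corrupted state (last bit flipped), the network settles back into the stored pattern, which is the auto-association behaviour mentioned above.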
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCL
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
 
chaitra-1.pptx fake news detection using machine learning
chaitra-1.pptx  fake news detection using machine learningchaitra-1.pptx  fake news detection using machine learning
chaitra-1.pptx fake news detection using machine learning
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
young call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Serviceyoung call girls in Green Park🔝 9953056974 🔝 escort Service
young call girls in Green Park🔝 9953056974 🔝 escort Service
 
Artificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptxArtificial-Intelligence-in-Electronics (K).pptx
Artificial-Intelligence-in-Electronics (K).pptx
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineering
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and uses
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 

ANN Basics, Models, and Learning Rules

  • 1. UNIT – II ARTIFICIAL NEURAL NETWORKS
  • 2.  Basics of ANN  Comparison between Artificial and Biological Neural Networks  Basic Building Blocks of ANN  Artificial Neural Network Terminologies  McCulloch Pitts Neuron Model  Learning Rules  ADALINE and MADALINE Models  Perceptron Networks  Back Propagation Neural Networks  Associative Memories.
  • 3.  Neural networks are parallel computing devices, which is basically an attempt to make a computer model of the brain.  The main objective is to develop a system to perform various computational tasks faster than the traditional systems.  These tasks include pattern recognition and classification, approximation, optimization, and data clustering. BASICS OF ANN
  • 4. DEFINITION:- A neural network is a highly interconnected network of a large number of processing elements called neurons, in an architecture inspired by the brain.
  • 5. BIOLOGICAL NEURONS  A human brain consists of approximately 10¹¹ computing elements called neurons.  They communicate through a connection network of axons and synapses having a density of approximately 10⁴ synapses per neuron.  The elementary nerve cell, called a neuron, is the fundamental building block of the biological neural network.  Its schematic diagram is shown in Figure.
  • 6. A typical cell has four major regions:  Cell body (which is also called the soma)  Axon  Synapses  Dendrites.
  • 7.
  • 8. MODEL OF ARTIFICIAL NEURAL NETWORK
  • 9. The output can be calculated by applying the activation function over the net input.
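The net-input and activation computation described above can be sketched in a few lines of Python; the weights, bias, and step threshold below are illustrative values, not taken from the slides.

```python
# Net input of a single artificial neuron: the weighted sum of its inputs
# plus a bias. A binary step function then produces the final output.

def net_input(inputs, weights, bias):
    return sum(x * w for x, w in zip(inputs, weights)) + bias

def step_activation(y_in, threshold=0.0):
    # Fire (output 1) when the net input reaches the threshold.
    return 1 if y_in >= threshold else 0

y_in = net_input([1, 0], weights=[0.6, 0.4], bias=-0.5)
y = step_activation(y_in)   # net input 0.1 -> output 1
```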
  • 10. BASIC BUILDING BLOCKS OF ANN ANN depends upon the following three building blocks:-  Network Architecture  Adjustments of Weights or Learning  Activation Functions
  • 11. NETWORK ARCHITECTURE  A network topology is the arrangement of a network along with its nodes and connecting lines.  According to the topology, ANN can be classified as the following types: 1) Single layer feedforward network 2) Multilayer feedforward network 3) Recurrent network
  • 12. SINGLE LAYER FEEDFORWARD NETWORK  This is a feedforward ANN having only one weighted layer.  In other words, the input layer is fully connected to the output layer.
  • 13. MULTILAYER FEEDFORWARD NETWORK  This is a feedforward ANN having more than one weighted layer.  As this network has one or more layers between the input and the output layer, these are called hidden layers.
  • 14. RECURRENT NETWORKS They are feedback networks with closed loops.
  • 15. ADJUSTMENTS OF WEIGHTS OR LEARNING  Learning, in an artificial neural network, is the method of modifying the weights of connections between the neurons of a specified network.  Learning in ANN can be classified into three categories, namely: 1) supervised learning, 2) unsupervised learning, 3) reinforcement learning.
  • 16. SUPERVISED LEARNING As the name suggests, this type of learning is done under the supervision of a teacher. This learning process is dependent.
  • 17. UNSUPERVISED LEARNING As the name suggests, this type of learning is done without the supervision of a teacher. This learning process is independent.
  • 18. REINFORCEMENT LEARNING  As the name suggests, this type of learning is used to reinforce or strengthen the network over some critic information.  This learning process is similar to supervised learning; however, we might have much less information.
  • 20.
  • 21.
  • 22. ARTIFICIAL NEURAL NETWORK TERMINOLOGIES 1) Weights 2) Bias 3) Threshold 4) Learning rate 5) Notations
  • 23. MCCULLOCH-PITTS NEURON MODEL  The first computational model of a neuron was proposed by Warren McCulloch (neuroscientist) and Walter Pitts (logician) in 1943.  The McCulloch-Pitts model was an extremely simple artificial neuron.  The inputs could be either a zero or a one. And the output was a zero or a one.  And each input could be either excitatory or inhibitory. Now the whole point was to sum the inputs.  If an input is one, and is excitatory in nature, it added one. If it was one, and was inhibitory, it subtracted one from the sum. This is done for all inputs, and a final sum is calculated.  Now, if this final sum is less than some value (which you decide, say T), then the output is zero. Otherwise, the output is a one.
  • 24.
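Slide 23's rule (sum excitatory inputs as +1, inhibitory as -1, fire when the sum reaches a chosen threshold T) can be sketched as follows; the AND-gate wiring is an illustrative choice.

```python
# McCulloch-Pitts neuron: binary inputs, each connection either excitatory
# (+1) or inhibitory (-1); output 1 only when the signed sum reaches T.

def mcp_neuron(inputs, kinds, threshold):
    """inputs: 0/1 values; kinds: +1 for excitatory, -1 for inhibitory."""
    total = sum(x * k for x, k in zip(inputs, kinds))
    return 1 if total >= threshold else 0

# Two excitatory inputs with T = 2 realise an AND gate.
and_table = [mcp_neuron([a, b], [1, 1], 2) for a in (0, 1) for b in (0, 1)]
# and_table == [0, 0, 0, 1]
```

An active inhibitory input can veto the excitatory ones: with kinds `[1, -1]` and T = 1, the neuron fires for input `[1, 0]` but not for `[1, 1]`.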
  • 25. NEURAL NETWORK LEARNING RULES 1) Hebbian Learning Rule 2) Perceptron Learning Rule 3) Delta Learning Rule (or) Widrow−Hoff Rule 4) Competitive Learning Rule (or) Winner−takes−all rule 5) Outstar Learning Rule
  • 26. HEBBIAN LEARNING RULE  This rule, one of the oldest and simplest, was introduced by Donald Hebb.  It is a kind of feed-forward, unsupervised learning.  “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”
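Hebb's statement translates directly into the weight update Δwᵢ = η·xᵢ·y: a weight grows when its input and the output are active together. The bipolar AND data and learning rate below are illustrative choices, not from the slides.

```python
# Hebbian update: strengthen w_i whenever input x_i and output y fire together.

def hebb_update(weights, x, y, eta=1.0):
    return [w + eta * xi * y for w, xi in zip(weights, x)]

# One pass over bipolar AND patterns, using the target as the "output" y.
patterns = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w = [0.0, 0.0]
for x, y in patterns:
    w = hebb_update(w, x, y)
# After the pass, w == [2.0, 2.0]
```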
  • 27. PERCEPTRON LEARNING RULE  This rule is an error-correcting supervised learning algorithm for single-layer feedforward networks with a threshold activation function, introduced by Rosenblatt.  Being supervised in nature, it calculates the error by comparing the desired/target output with the actual output.  If any difference is found, then a change must be made to the weights of connection.
  • 28.
  • 29. DELTA LEARNING RULE (OR) WIDROW−HOFF RULE  It was introduced by Bernard Widrow and Marcian Hoff, and is also called the Least Mean Square (LMS) method, as it minimizes the error over all training patterns.  It is a kind of supervised learning algorithm having a continuous activation function.  This rule is based on the gradient-descent approach. The delta rule updates the synaptic weights so as to minimize the difference between the net input to the output unit and the target value.
  • 30.
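The delta rule above amounts to a gradient-descent step on the squared error between the target t and the net input y_in. The one-dimensional data and learning rate below are illustrative.

```python
# Delta (Widrow-Hoff / LMS) update:
#   w_i += lr * (t - y_in) * x_i,   b += lr * (t - y_in)

def delta_update(weights, bias, x, t, lr=0.1):
    y_in = sum(w * xi for w, xi in zip(weights, x)) + bias
    err = t - y_in
    weights = [w + lr * err * xi for w, xi in zip(weights, x)]
    return weights, bias + lr * err

# Fit the toy mapping x -> x on two samples; w approaches 1 and b approaches 0.
w, b = [0.0], 0.0
for _ in range(500):
    for x, t in [([1.0], 1.0), ([2.0], 2.0)]:
        w, b = delta_update(w, b, x, t)
```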
  • 31. COMPETITIVE LEARNING RULE (OR) WINNER−TAKES−ALL  It is concerned with unsupervised training in which the output nodes try to compete with each other to represent the input pattern.  This network is just like a single layer feedforward network with feedback connection between outputs.  The connections between outputs are inhibitory type, shown by dotted lines, which means the competitors never support themselves.
  • 32.
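A minimal winner-takes-all step can be sketched as below, assuming the winner is the unit whose weight vector is nearest the input (Euclidean distance) and that only the winner's weights move toward the input; the points and learning rate are illustrative.

```python
# Competitive (winner-takes-all) step: find the output unit whose weight
# vector is nearest the input, and move only that unit toward the input.

def competitive_step(weights, x, lr=0.5):
    dists = [sum((wi - xi) ** 2 for wi, xi in zip(w, x)) for w in weights]
    winner = dists.index(min(dists))
    weights[winner] = [wi + lr * (xi - wi)
                       for wi, xi in zip(weights[winner], x)]
    return winner

# Two units gradually specialise on two clusters of points.
weights = [[0.0, 0.0], [1.0, 1.0]]
for x in [[0.1, 0.0], [0.9, 1.0], [0.0, 0.1], [1.0, 0.9]]:
    competitive_step(weights, x)
```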
  • 33. OUTSTAR LEARNING RULE  This rule, introduced by Grossberg, is concerned with supervised learning because the desired outputs are known. It is also called Grossberg learning.  This rule is applied over the neurons arranged in a layer.  It is specially designed to produce a desired output d of the layer of p neurons.
  • 34.
  • 35. PERCEPTRON  Developed by Frank Rosenblatt by using McCulloch and Pitts model, perceptron is the basic operational unit of artificial neural networks.  It employs supervised learning rule and is able to classify the data into two classes.  Operational characteristics of the perceptron: It consists of a single neuron with an arbitrary number of inputs along with adjustable weights, but the output of the neuron is 1 or 0 depending upon the threshold.  It also consists of a bias whose weight is always 1.
  • 36.
  • 37. A perceptron thus has the following three basic elements − Links − It has a set of connection links, which carry weights, including a bias link that always has weight 1. Adder − It adds the inputs after they are multiplied with their respective weights. Activation function − It limits the output of the neuron. The most basic activation function is a Heaviside step function that has two possible outputs. This function returns 1 if the input is positive, and 0 for any negative input. A perceptron network can be trained for a single output unit as well as multiple output units.
  • 38. Training Algorithm for Single Output Unit Step 1 − Initialize the following to start the training − • Weights • Bias • Learning rate α For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1. Step 2 − Continue step 3-8 when the stopping condition is not true. Step 3 − Continue step 4-6 for every training vector x. Step 4 − Activate each input unit as follows −
  • 39. Step 5 − Now obtain the net input with the following relation − Here ‘b’ is bias and ‘n’ is the total number of input neurons. Step 6 − Apply the following activation function to obtain the final output. Step 7 − Adjust the weight and bias as follows − Case 1 − if y ≠ t then,
  • 40. Case 2 − if y = t then, Here ‘y’ is the actual output and ‘t’ is the desired/target output. Step 8 − Test for the stopping condition, which would happen when there is no change in weight.
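Steps 1-8 above can be sketched as a small training loop. Following the slides, weights and bias start at 0 with α = 1; the bipolar AND data and the zero threshold θ are illustrative choices.

```python
# Perceptron training for a single output unit (steps 1-8 above): update the
# weights and bias only when the computed output y differs from the target t,
# and stop once a full epoch produces no weight change.

def step(y_in, theta=0.0):
    if y_in > theta:
        return 1
    if y_in < -theta:
        return -1
    return 0

def train_perceptron(samples, n_inputs, alpha=1.0, max_epochs=20):
    w, b = [0.0] * n_inputs, 0.0                              # Step 1
    for _ in range(max_epochs):                               # Step 2
        changed = False
        for x, t in samples:                                  # Steps 3-4
            y_in = sum(wi * xi for wi, xi in zip(w, x)) + b   # Step 5
            y = step(y_in)                                    # Step 6
            if y != t:                                        # Step 7, Case 1
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:                                       # Step 8
            break
    return w, b

AND = [([1, 1], 1), ([1, -1], -1), ([-1, 1], -1), ([-1, -1], -1)]
w, b = train_perceptron(AND, 2)   # converges to w = [1, 1], b = -1
```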
  • 41. Training Algorithm for Multiple Output Units
  • 42. Step 1 − Initialize the following to start the training − • Weights • Bias • Learning rate α For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1. Step 2 − Continue step 3-8 when the stopping condition is not true. Step 3 − Continue step 4-6 for every training vector x. Step 4 − Activate each input unit as follows −
  • 43. Step 5 − Obtain the net input with the following relation − Here ‘b’ is bias and ‘n’ is the total number of input neurons. Step 6 − Apply the following activation function to obtain the final output for each output unit j = 1 to m − Step 7 − Adjust the weight and bias for x = 1 to n and j = 1 to m as follows − Case 1 − if yj ≠ tj then,
  • 44. Case 2 − if yj = tj then, Here ‘y’ is the actual output and ‘t’ is the desired/target output. Step 8 − Test for the stopping condition, which will happen when there is no change in weight.
  • 45. ADAPTIVE LINEAR NEURON (ADALINE)  Adaline which stands for Adaptive Linear Neuron, is a network having a single linear unit.  It was developed by Widrow and Hoff in 1960. Some important points about Adaline are as follows −  It uses bipolar activation function.  It uses delta rule for training to minimize the Mean-Squared Error (MSE) between the actual output and the desired/target output.  The weights and the bias are adjustable.
  • 46. MULTIPLE ADAPTIVE LINEAR NEURON (MADALINE)  Madaline, which stands for Multiple Adaptive Linear Neuron, is a network which consists of many Adalines in parallel.  It will have a single output unit.  Some important points about Madaline are as follows − 1) It is just like a multilayer perceptron, where Adaline will act as a hidden unit between the input and the Madaline layer. 2) The weights and the bias between the input and Adaline layers, as we see in the Adaline architecture, are adjustable. 3) The Adaline and Madaline layers have fixed weights and a bias of 1. Training can be done with the help of the Delta rule.
  • 47. BACK PROPAGATION NEURAL NETWORKS  A Back Propagation Network (BPN) is a multilayer neural network consisting of an input layer, at least one hidden layer and an output layer.  As its name suggests, back propagation of error takes place in this network.  The error, calculated at the output layer by comparing the target output and the actual output, is propagated back towards the input layer.
  • 48.
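A minimal sketch of one backprop pass, assuming a tiny network (two inputs, one sigmoid hidden unit, one sigmoid output) and squared error E = ½(t − y)²; the weights are illustrative. The analytic gradients it returns can be checked against finite differences.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_backward(x, t, w_h, w_o):
    """One backprop pass: inputs -> sigmoid hidden unit -> sigmoid output.
    Returns (error, gradient w.r.t. hidden weights, gradient w.r.t. w_o)."""
    h = sigmoid(sum(wi * xi for wi, xi in zip(w_h, x)))   # forward: hidden
    y = sigmoid(w_o * h)                                  # forward: output
    E = 0.5 * (t - y) ** 2
    delta_o = (y - t) * y * (1 - y)          # error term at the output layer
    delta_h = delta_o * w_o * h * (1 - h)    # error propagated back to hidden
    return E, [delta_h * xi for xi in x], delta_o * h

E, g_h, g_o = forward_backward([1.0, -1.0], 1.0, [0.3, -0.2], 0.7)
```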
  • 49. ASSOCIATIVE MEMORY NETWORK  These kinds of neural networks work on the basis of pattern association, which means they can store different patterns and, at the time of giving an output, they can produce one of the stored patterns by matching them with the given input pattern.  These types of memories are also called Content-Addressable Memory (CAM).  Associative memory makes a parallel search with the stored patterns as data files.
  • 50. Following are the two types of associative memories we can observe −  Auto Associative Memory  Hetero Associative memory
  • 51. AUTO ASSOCIATIVE MEMORY  This is a single layer neural network in which the input training vector and the output target vectors are the same.  The weights are determined so that the network stores a set of patterns.
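One common way to determine the weights of such a network is Hebb's outer-product rule, W = Σ s·sᵀ, with recall y = sign(W·x); the bipolar pattern below is illustrative.

```python
# Auto-associative memory: store bipolar patterns with the Hebb outer product,
# then recall by thresholding W x. A noisy input settles to the stored pattern.

def store(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                W[i][j] += s[i] * s[j]
    return W

def recall(W, x):
    return [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1
            for row in W]

s = [1, -1, 1, -1]
W = store([s])
# recall(W, s) returns s; so does the noisy recall(W, [1, -1, 1, 1]).
```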
  • 52. HETERO ASSOCIATIVE MEMORY  Similar to the Auto Associative Memory network, this is also a single layer neural network.  However, in this network the input training vector and the output target vectors are not the same.  The weights are determined so that the network stores a set of patterns.  A hetero associative network is static in nature; hence, there would be no non-linear and delay operations.
  • 53.
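The same outer-product idea extends to the hetero-associative case, with W = Σ s·tᵀ mapping input patterns to different output patterns; the orthogonal input pair below is chosen so that recall is exact.

```python
# Hetero-associative memory: weights store pairs (s, t), possibly of different
# sizes; recall computes y_j = sign(sum_i x_i * W[i][j]).

def hetero_store(pairs):
    n_in, n_out = len(pairs[0][0]), len(pairs[0][1])
    W = [[0.0] * n_out for _ in range(n_in)]
    for s, t in pairs:
        for i in range(n_in):
            for j in range(n_out):
                W[i][j] += s[i] * t[j]
    return W

def hetero_recall(W, x):
    return [1 if sum(x[i] * W[i][j] for i in range(len(x))) >= 0 else -1
            for j in range(len(W[0]))]

# Two orthogonal 2-bit inputs mapped to two different 3-bit outputs.
pairs = [([1, 1], [1, -1, 1]), ([1, -1], [-1, 1, 1])]
W = hetero_store(pairs)
```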
  • 54. BIDIRECTIONAL ASSOCIATIVE MEMORY (BAM)  Bidirectional Associative Memory (BAM) is a supervised learning model in Artificial Neural Networks.  This is a hetero-associative memory: for an input pattern, it returns another pattern which is potentially of a different size.  This phenomenon is very similar to the human brain. Human memory is necessarily associative.  It uses a chain of mental associations to recover a lost memory, like associations of faces with names, or exam questions with answers.  To form such associations of one type of object with another, a recurrent neural network is needed to receive a pattern of one set of neurons as an input and generate a related, but different, output pattern of another set of neurons.
  • 55.
  • 56. HOPFIELD NETWORKS  The Hopfield neural network was invented by Dr. John J. Hopfield in 1982.  It consists of a single layer which contains one or more fully connected recurrent neurons.  The Hopfield network is commonly used for auto-association and optimization tasks.
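A discrete Hopfield recall loop can be sketched as below, assuming Hebbian storage with a zeroed diagonal and asynchronous unit-by-unit updates until the state stops changing; the stored pattern is illustrative.

```python
# Discrete Hopfield network: symmetric weights (zero diagonal) from Hebb
# storage; recall flips units asynchronously until a stable state is reached.

def hopfield_store(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += s[i] * s[j]
    return W

def hopfield_recall(W, x, max_sweeps=10):
    x = list(x)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x)):      # asynchronous update, one unit at a time
            net = sum(W[i][j] * x[j] for j in range(len(x)))
            new = 1 if net >= 0 else -1
            if new != x[i]:
                x[i], changed = new, True
        if not changed:              # stable state reached
            break
    return x

W = hopfield_store([[1, -1, 1, -1]])
# A one-bit-corrupted input settles back to the stored pattern.
```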
  • 58. CONTINUOUS HOPFIELD NETWORK  In comparison with Discrete Hopfield network, continuous network has time as a continuous variable.  It is also used in auto association and optimization problems such as travelling salesman problem.