FEEDFORWARD
BY
S.RAGAVI
M.SC(COMP.SCI)
NADAR SARASWATHI COLLEGE OF ARTS AND SCIENCE
CONTENT
•Multilayer Feedforward Neural Network
•Learning Methods in Neural Network
MULTILAYER FEEDFORWARD
NEURAL NETWORK
*Consists of multiple layers.
*Besides the input and output layers, there is an intermediate layer
called the hidden layer.
*A unit of the hidden layer is called a hidden unit or hidden neuron.
*The hidden layer performs intermediate computations.
*Input layer neurons are linked to hidden layer neurons, and the
weights on these links are referred to as input-hidden layer
weights.
*Hidden layer neurons are linked to output layer neurons, and the
weights on these links are referred to as hidden-output layer
weights.
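A minimal sketch of this structure in Python (assuming one hidden layer, sigmoid activations, and NumPy; the layer sizes and weights are illustrative, not taken from the slides):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    W_ih = rng.normal(size=(3, 4))   # input-hidden layer weights (3 inputs, 4 hidden neurons)
    W_ho = rng.normal(size=(4, 2))   # hidden-output layer weights (4 hidden neurons, 2 outputs)

    def forward(x):
        # Propagate an input pattern through the hidden layer to the output layer.
        hidden = sigmoid(x @ W_ih)   # intermediate computation in the hidden layer
        return sigmoid(hidden @ W_ho)

    print(forward(np.array([0.5, -1.0, 2.0])))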
MULTILAYER FEEDFORWARD
NETWORK DIAGRAM
LEARNING METHODS IN NEURAL
NETWORKS
*Supervised Learning.
*Unsupervised Learning.
*Reinforced Learning.
*Hebbian Learning.
*Gradient descent Learning.
*Competitive Learning.
*Stochastic Learning.
Classifications of Neural Network Learning
Algorithms
1. Supervised Learning
   . Gradient descent Learning
   . Stochastic Learning
2. Unsupervised Learning
   . Hebbian Learning
   . Competitive Learning
3. Reinforced Learning
Supervised Learning
*Every input pattern used to train the network is associated
with an output pattern or target pattern.
*A teacher is assumed to be present during the learning
process.
*A comparison is made between the network's computed output and
the correct expected output to determine the error.
*The error can be used to change the parameters, which results in
an improvement in performance.
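A minimal sketch of this error-driven supervised update for a single linear unit (the training pair, learning rate, and update rule are illustrative assumptions):

    import numpy as np

    x = np.array([1.0, 0.5, -0.3])   # input pattern
    target = 0.8                     # teacher-supplied target pattern
    w = np.zeros(3)                  # network parameters
    eta = 0.1                        # learning rate (illustrative)

    for step in range(20):
        output = w @ x               # network-computed output
        error = target - output      # comparison with the correct expected output
        w += eta * error * x         # the error changes the parameters
    print(w @ x)                     # the output has moved toward the target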
Unsupervised Learning
*Target output is not presented to the network.
*No teacher is there to present the desired pattern.
*So, the system learns on its own by discovering and adapting to
structural features in the input patterns.
Reinforced learning
*Though a teacher is available in this learning method, it does
not present the expected answer but only indicates whether the
computed output is correct or incorrect.
*The information provided helps the network in its learning
process.
*A reward is given for a correctly computed answer.
*A penalty is given for a wrong answer.
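One simple way this reward/penalty signal can drive learning, sketched for a single binary-output neuron (the task, update rule, and constants here are illustrative assumptions, not the only possible scheme):

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=3)
    eta = 0.05

    def teacher_says_correct(x, answer):
        # The teacher only says right or wrong, never the expected answer.
        return answer == (x.sum() > 0)   # hypothetical task: sign of the input sum

    for _ in range(100):
        x = rng.normal(size=3)
        answer = (w @ x) > 0                      # network's computed answer
        s = 1.0 if answer else -1.0
        if teacher_says_correct(x, answer):
            w += eta * s * x                      # reward: reinforce the current response
        else:
            w -= eta * s * x                      # penalty: weaken the current response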
Hebbian Learning
*The rule was proposed by Hebb (1949).
*It is based on correlative weight adjustment.
*It is the oldest method of learning in neural networks.
*A synapse between two neurons is strengthened when the
neurons on either side of the synapse (input and output) have
highly correlated outputs.
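A minimal sketch of this correlative weight adjustment, ∆w = η·x·y, for one linear layer (the sizes, learning rate, and initial weights are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    eta = 0.01
    W = rng.normal(scale=0.1, size=(3, 2))   # weights from 3 input neurons to 2 output neurons

    def hebbian_step(x, W):
        y = x @ W                            # post-synaptic (output) activity
        W += eta * np.outer(x, y)            # strengthen links whose input and output
        return W                             # activities are highly correlated

    W = hebbian_step(np.array([1.0, 0.5, -0.2]), W)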
Gradient descent Learning
*This is based on the minimization of the error E, defined in
terms of the weights and the activation function of the network.
*The activation function employed by the network must be
differentiable, as the weight update depends on the gradient of
the error E.
*∆Wij = -η ∂E/∂Wij
*η is the learning rate parameter.
*∂E/∂Wij is the error gradient with respect to the weight Wij.
*The Widrow-Hoff Delta rule and the Backpropagation rule are
examples of this type of learning mechanism.
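A minimal sketch of this gradient descent update for a single sigmoid unit with squared error E = (y − t)²/2 (input, target, and learning rate are illustrative assumptions):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.2, -0.4, 0.7])   # input pattern
    t = 0.9                          # target output
    w = np.zeros(3)
    eta = 0.5                        # learning rate η

    for _ in range(200):
        y = sigmoid(w @ x)                   # differentiable activation function
        grad = (y - t) * y * (1 - y) * x     # ∂E/∂w for squared error and a sigmoid unit
        w -= eta * grad                      # ∆w = -η ∂E/∂w: descend the error gradient
    print(sigmoid(w @ x))                    # output approaches the target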
Competitive Learning
*In this method, only the neurons that respond strongly to the
input stimuli have their weights updated.
*When an input pattern is presented, all the neurons in the layer
compete, and the winning neuron undergoes weight adjustment.
*Hence, it is called a "winner-takes-all" strategy.
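A minimal sketch of a winner-takes-all step, where the neuron whose weight vector lies closest to the input wins the competition and is pulled toward it (the layer size, distance measure, and learning rate are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))          # 4 competing neurons, 3-dimensional inputs
    eta = 0.1

    def competitive_step(x, W):
        distances = np.linalg.norm(W - x, axis=1)
        winner = np.argmin(distances)            # all neurons compete; the closest one wins
        W[winner] += eta * (x - W[winner])       # only the winner's weights are adjusted
        return W

    W = competitive_step(np.array([0.3, -0.1, 0.8]), W)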
Stochastic Learning
*Stochastic neural networks are a type of artificial neural
network built by introducing random variations into the network,
either by giving the network's neurons stochastic transfer
functions or by giving them stochastic weights.
*In this method, weights are adjusted in a probabilistic fashion.
*This learning mechanism is employed by Boltzmann and
Cauchy machines.
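A minimal sketch of probabilistic weight adjustment in the spirit of a Boltzmann machine's simulated-annealing acceptance rule (the error function, perturbation size, and cooling schedule are placeholder assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def error(w):
        # Placeholder error surface; a real network would compute E from its outputs.
        return np.sum((w - 1.0) ** 2)

    w = rng.normal(size=3)
    T = 1.0                                         # "temperature" controlling randomness

    for step in range(500):
        candidate = w + rng.normal(scale=0.1, size=3)   # random perturbation of the weights
        dE = error(candidate) - error(w)
        # Accept a worse candidate with probability exp(-dE/T): adjustment is probabilistic.
        if dE < 0 or rng.random() < np.exp(-dE / T):
            w = candidate
        T *= 0.99                                   # gradually reduce the randomness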
Conclusion
*In conclusion, regarding the learning rules in neural networks,
we can say that the most promising feature of an Artificial
Neural Network is its ability to learn.
*The learning process of the brain alters its neural structure,
increasing or decreasing the strength of its synaptic
connections depending on their activity.
*More relevant information produces a stronger synaptic
connection.