Next Assignment
ART1 Demo
Increasing vigilance causes the network to be more selective, introducing a new prototype when the fit is not good. Try different patterns.
Hebbian Learning
Hebb’s Postulate
“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” (D. O. Hebb, 1949)
In other words, when a weight contributes to firing a neuron, the weight is increased. (If the neuron doesn’t fire, the weight is not.)
Colloquial Corollaries
“Cells that fire together, wire together.”
Colloquial Corollaries?
Generalized Hebb Rule
Flavors of Hebbian Learning (as covered in this lecture): unsupervised (associative) learning, the instar rule (recognition), the outstar rule (recall), and supervised Hebbian learning.
Unsupervised Hebbian Learning (aka Associative Learning)
Simple Associative Network: a single input $p$ and a single output $a = \mathrm{hardlim}(wp + b)$, where the stimulus is $p = 1$ when present and $p = 0$ when absent.
Banana Associator: an unconditioned stimulus (banana shape) and a conditioned stimulus (banana smell). Didn’t Pavlov anticipate this?
Banana Associator Demo (the sight and smell sensors can be toggled).
Unsupervised Hebb Rule
$w_{ij}(q) = w_{ij}(q-1) + \alpha\,a_i(q)\,p_j(q)$   (actual response times input)
Vector form: $\mathbf{W}(q) = \mathbf{W}(q-1) + \alpha\,\mathbf{a}(q)\,\mathbf{p}^T(q)$
Training sequence: $\mathbf{p}(1), \mathbf{p}(2), \ldots, \mathbf{p}(Q)$
Learning Banana Smell
Initial weights: $w^0 = 1$ (unconditioned, shape), $w(0) = 0$ (conditioned, smell)
Training sequence ($\alpha = 1$): $\{p^0(1) = 0,\ p(1) = 1\},\ \{p^0(2) = 1,\ p(2) = 1\},\ \ldots$
First iteration (sight fails, smell present):
$a(1) = \mathrm{hardlim}\bigl(w^0 p^0(1) + w(0)\,p(1) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 0 + 0 \cdot 1 - 0.5) = 0$ (no banana)
$w(1) = w(0) + a(1)\,p(1) = 0$
Example
Second iteration (sight works, smell present):
$a(2) = \mathrm{hardlim}\bigl(w^0 p^0(2) + w(1)\,p(2) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 1 + 0 \cdot 1 - 0.5) = 1$ (banana)
$w(2) = w(1) + a(2)\,p(2) = 0 + 1 \cdot 1 = 1$
Third iteration (sight fails, smell present):
$a(3) = \mathrm{hardlim}\bigl(w^0 p^0(3) + w(2)\,p(3) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 0 + 1 \cdot 1 - 0.5) = 1$ (banana)
The banana will now be detected if either sensor works.
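A minimal Python sketch of this associator, assuming the $w^0 = 1$, bias $-0.5$, $\alpha = 1$ values above and a hardlim that returns 1 for nonnegative net input:

```python
import numpy as np

def hardlim(n):
    # hardlim(n) = 1 if n >= 0, else 0
    return 1.0 if n >= 0 else 0.0

w0, b = 1.0, -0.5    # fixed unconditioned weight and bias from the slides
w, alpha = 0.0, 1.0  # conditioned weight, learning rate

# training sequence: (sight p0, smell p) pairs from the example
sequence = [(0, 1), (1, 1), (0, 1)]
for q, (p0, p) in enumerate(sequence, start=1):
    a = hardlim(w0 * p0 + w * p + b)
    w = w + alpha * a * p            # unsupervised Hebb rule
    print(f"iteration {q}: a = {a:.0f}, w = {w:.1f}")
```

Running this reproduces the three iterations above: the output is 0, 1, 1 and the smell weight grows to 2, so the smell alone now triggers detection.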
Problems with the Hebb Rule: weights can grow without bound as learning continues, and there is no mechanism for forgetting old associations.
Hebb Rule with Decay
$\mathbf{W}(q) = \mathbf{W}(q-1) + \alpha\,\mathbf{a}(q)\,\mathbf{p}^T(q) - \gamma\,\mathbf{W}(q-1)$
This keeps the weight matrix from growing without bound, which can be demonstrated by setting both $a_i$ and $p_j$ to 1:
$w_{ij}^{max} = w_{ij}^{max} + \alpha - \gamma\,w_{ij}^{max} \;\Rightarrow\; w_{ij}^{max} = \alpha/\gamma$
Banana Associator with Decay
Example: Banana Associator with Decay ($\gamma = 0.1$, $\alpha = 1$)
First iteration (sight fails, smell present):
$a(1) = \mathrm{hardlim}\bigl(w^0 p^0(1) + w(0)\,p(1) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 0 + 0 \cdot 1 - 0.5) = 0$ (no banana)
Second iteration (sight works, smell present):
$a(2) = \mathrm{hardlim}\bigl(w^0 p^0(2) + w(1)\,p(2) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 1 + 0 \cdot 1 - 0.5) = 1$ (banana)
Example
Third iteration (sight fails, smell present), with $w(2) = w(1) + a(2)\,p(2) - \gamma\,w(1) = 0 + 1 - 0 = 1$:
$a(3) = \mathrm{hardlim}\bigl(w^0 p^0(3) + w(2)\,p(3) - 0.5\bigr) = \mathrm{hardlim}(1 \cdot 0 + 1 \cdot 1 - 0.5) = 1$ (banana)
General Decay Demo (no decay vs. larger decay): $w_{ij}^{max} = \dfrac{\alpha}{\gamma}$
Problem of Hebb with Decay
Associations will be lost if stimuli are not occasionally presented. If $a_i = 0$, then
$w_{ij}(q) = (1 - \gamma)\,w_{ij}(q-1)$.
If $\gamma = 0.1$, this becomes
$w_{ij}(q) = 0.9\,w_{ij}(q-1)$.
Therefore the weight decays by 10% at each iteration where there is no stimulus.
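A short numeric check of both effects, the $\alpha/\gamma$ ceiling and the 10% decay per unstimulated step, using the slide values $\alpha = 1$, $\gamma = 0.1$ (a sketch, not the textbook demo itself):

```python
alpha, gamma = 1.0, 0.1
w = 0.0

# stimulus always present (a = p = 1): w approaches alpha/gamma = 10
for _ in range(200):
    w = w + alpha * 1 * 1 - gamma * w
print(f"steady state: w = {w:.3f} (alpha/gamma = {alpha / gamma})")

# stimulus absent (a = 0): the weight decays by 10% each step
for q in range(3):
    w = w - gamma * w
    print(f"no-stimulus step {q + 1}: w = {w:.3f}")
```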
Solution to the Hebb Decay Problem: make the decay term proportional to the neuron’s output, so that forgetting occurs only when the neuron is active. This leads to the instar rule, next.
Instar (Recognition Network)
Instar Operation
The instar will be active when $\mathbf{w}_1^T\,\mathbf{p} \ge -b$, or
$\mathbf{w}_1^T\,\mathbf{p} = \|\mathbf{w}_1\|\,\|\mathbf{p}\|\cos\theta \ge -b$.
For normalized vectors, the largest inner product occurs when the angle between the weight vector and the input vector is zero, that is, when the input vector equals the weight vector.
The rows of a weight matrix represent patterns to be recognized.
Vector Recognition
If we set $b = -\|\mathbf{w}_1\|\,\|\mathbf{p}\|$, the instar will be active only when $\theta = 0$.
If we set $b > -\|\mathbf{w}_1\|\,\|\mathbf{p}\|$, the instar will be active for a range of angles. As $b$ is increased, more patterns (over a wider range of $\theta$) will activate the instar.
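A quick numeric illustration of the activation condition $\mathbf{w}_1^T\mathbf{p} \ge -b$ for unit-norm vectors; the 2-D vectors here are made up for the sketch:

```python
import numpy as np

w1 = np.array([1.0, 0.0])  # weight vector (unit norm)
for theta_deg in [0, 30, 60, 90]:
    theta = np.radians(theta_deg)
    p = np.array([np.cos(theta), np.sin(theta)])   # unit-norm input at angle theta
    # tightest bias: b = -||w1|| * ||p||  ->  active only at theta = 0
    b_tight = -np.linalg.norm(w1) * np.linalg.norm(p)
    active = w1 @ p + b_tight >= 0
    print(f"theta = {theta_deg:2d} deg, w1.p = {w1 @ p:.3f}, active (tight b): {active}")
```

Raising $b$ above this tight value makes the `active` test pass for a widening cone of angles, exactly as the slide states.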
Instar Rule
Hebb with decay: $w_{ij}(q) = w_{ij}(q-1) + \alpha\,a_i(q)\,p_j(q) - \gamma\,w_{ij}(q-1)$
Modify so that learning and forgetting occur only when the neuron is active (instar rule):
$w_{ij}(q) = w_{ij}(q-1) + \alpha\,a_i(q)\,p_j(q) - \gamma\,a_i(q)\,w_{ij}(q-1)$
or, setting $\gamma = \alpha$:
$w_{ij}(q) = w_{ij}(q-1) + \alpha\,a_i(q)\,\bigl(p_j(q) - w_{ij}(q-1)\bigr)$
Vector form: $\mathbf{w}_i(q) = \mathbf{w}_i(q-1) + \alpha\,a_i(q)\,\bigl(\mathbf{p}(q) - \mathbf{w}_i(q-1)\bigr)$
Graphical Representation
For the case where the instar is active ($a_i = 1$):
$\mathbf{w}_i(q) = \mathbf{w}_i(q-1) + \alpha\,\bigl(\mathbf{p}(q) - \mathbf{w}_i(q-1)\bigr)$, or $\mathbf{w}_i(q) = (1-\alpha)\,\mathbf{w}_i(q-1) + \alpha\,\mathbf{p}(q)$;
the weight vector moves a fraction $\alpha$ of the way toward the input vector.
For the case where the instar is inactive ($a_i = 0$): $\mathbf{w}_i(q) = \mathbf{w}_i(q-1)$, no change.
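A sketch of that geometry, showing the weight vector converging on an input pattern when the neuron is active (initial vectors chosen arbitrarily for illustration):

```python
import numpy as np

alpha = 0.5
w = np.array([1.0, 0.0])         # initial weight vector
p = np.array([0.0, 1.0])         # normalized input pattern

for q in range(5):
    a = 1.0                      # assume the instar is active for this input
    w = w + alpha * a * (p - w)  # instar rule: w moves toward p
    print(f"q = {q + 1}: w = {np.round(w, 4)}")
# w converges to p; with a = 0 the weight would not change at all
```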
Instar Demo (weight vector, input vector, $\mathbf{W}$)
Outstar (Recall Network)
Outstar Operation
Suppose we want the outstar to recall a certain pattern $\mathbf{a}^*$ whenever the input $p = 1$ is presented to the network. Let $\mathbf{W} = \mathbf{a}^*$. Then, when $p = 1$, $\mathbf{a} = \mathbf{W}p = \mathbf{a}^*$, and the pattern is correctly recalled.
The columns of a weight matrix represent patterns to be recalled.
Outstar Rule
For the instar rule we made the weight decay term of the Hebb rule proportional to the output of the network. For the outstar rule we make the decay term proportional to the input of the network:
$w_{ij}(q) = w_{ij}(q-1) + \alpha\,a_i(q)\,p_j(q) - \gamma\,p_j(q)\,w_{ij}(q-1)$
If we make the decay rate $\gamma$ equal to the learning rate $\alpha$:
$w_{ij}(q) = w_{ij}(q-1) + \alpha\,\bigl(a_i(q) - w_{ij}(q-1)\bigr)\,p_j(q)$
Vector form: $\mathbf{w}_j(q) = \mathbf{w}_j(q-1) + \alpha\,\bigl(\mathbf{a}(q) - \mathbf{w}_j(q-1)\bigr)\,p_j(q)$, where $\mathbf{w}_j$ is the $j$th column of $\mathbf{W}$.
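The mirror-image sketch for the outstar: a column of weights learns the output pattern whenever the input is present. The pattern `a_star` is arbitrary, chosen only for illustration:

```python
import numpy as np

alpha = 0.5
a_star = np.array([1.0, -1.0, 0.5])  # pattern we want recalled
w = np.zeros(3)                      # column of W driven by the input p

for q in range(8):
    p = 1.0                          # input present during training
    a = a_star                       # network output during training
    w = w + alpha * (a - w) * p      # outstar rule: w moves toward a
print(f"learned column: {np.round(w, 4)} (target {a_star})")
# after training, presenting p = 1 alone recalls a pattern close to a_star
```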
Example - Pineapple Recall
Definitions
Outstar Demo
Iteration 1 ($\alpha = 1$)
Convergence
Supervised Hebbian Learning
Linear Associator: $\mathbf{a} = \mathbf{W}\mathbf{p}$
Training set: $\{\mathbf{p}_1, \mathbf{t}_1\}, \{\mathbf{p}_2, \mathbf{t}_2\}, \ldots, \{\mathbf{p}_Q, \mathbf{t}_Q\}$
Hebb Rule (presynaptic signal $\times$ postsynaptic signal)
Simplified form: $w_{ij}^{new} = w_{ij}^{old} + a_{iq}\,p_{jq}$   (actual output, input pattern)
Supervised form (substitute the desired output for the actual output): $w_{ij}^{new} = w_{ij}^{old} + t_{iq}\,p_{jq}$
Matrix form: $\mathbf{W}^{new} = \mathbf{W}^{old} + \mathbf{t}_q\,\mathbf{p}_q^T$
Batch Operation (zero initial weights)
$\mathbf{W} = \mathbf{t}_1\mathbf{p}_1^T + \mathbf{t}_2\mathbf{p}_2^T + \cdots + \mathbf{t}_Q\mathbf{p}_Q^T = \sum_{q=1}^{Q}\mathbf{t}_q\mathbf{p}_q^T$
Matrix form: $\mathbf{W} = \begin{bmatrix}\mathbf{t}_1 & \mathbf{t}_2 & \cdots & \mathbf{t}_Q\end{bmatrix}\begin{bmatrix}\mathbf{p}_1^T\\ \mathbf{p}_2^T\\ \vdots\\ \mathbf{p}_Q^T\end{bmatrix} = \mathbf{T}\mathbf{P}^T$
where $\mathbf{T} = \begin{bmatrix}\mathbf{t}_1 & \mathbf{t}_2 & \cdots & \mathbf{t}_Q\end{bmatrix}$ and $\mathbf{P} = \begin{bmatrix}\mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_Q\end{bmatrix}$.
Performance Analysis
$\mathbf{a} = \mathbf{W}\mathbf{p}_k = \Bigl(\sum_{q=1}^{Q}\mathbf{t}_q\mathbf{p}_q^T\Bigr)\mathbf{p}_k = \sum_{q=1}^{Q}\mathbf{t}_q\,(\mathbf{p}_q^T\mathbf{p}_k)$
Case I: input patterns are orthonormal, so $\mathbf{p}_q^T\mathbf{p}_k = 1$ for $q = k$ and $0$ for $q \ne k$. Therefore the network output equals the target: $\mathbf{a} = \mathbf{t}_k$.
Case II: input patterns are normalized but not orthogonal:
$\mathbf{a} = \mathbf{t}_k + \sum_{q \ne k}\mathbf{t}_q\,(\mathbf{p}_q^T\mathbf{p}_k)$, where the second sum is an error term.
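A sketch of the batch Hebb rule and the orthogonality argument, using two made-up orthonormal prototypes:

```python
import numpy as np

# two orthonormal prototypes as columns of P, targets as columns of T
P = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)
T = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

W = T @ P.T                        # batch Hebb rule: W = T P^T
for k in range(P.shape[1]):
    print(f"W p_{k + 1} = {W @ P[:, k]}, t_{k + 1} = {T[:, k]}")
# orthonormal inputs give p_q^T p_k = 0 for q != k, so W p_k = t_k exactly
```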
Example
Normalized prototype patterns: banana and apple
Weight matrix (Hebb rule): $\mathbf{W} = \mathbf{T}\mathbf{P}^T$
Tests: banana, apple
Pseudoinverse Rule - (1)
Performance index (mean-squared error): $F(\mathbf{W}) = \sum_{q=1}^{Q}\|\mathbf{t}_q - \mathbf{W}\mathbf{p}_q\|^2$
Matrix form: $F(\mathbf{W}) = \|\mathbf{T} - \mathbf{W}\mathbf{P}\|^2 = \|\mathbf{E}\|^2$, where $\mathbf{E} = \mathbf{T} - \mathbf{W}\mathbf{P}$,
$\mathbf{T} = \begin{bmatrix}\mathbf{t}_1 & \mathbf{t}_2 & \cdots & \mathbf{t}_Q\end{bmatrix}$, $\mathbf{P} = \begin{bmatrix}\mathbf{p}_1 & \mathbf{p}_2 & \cdots & \mathbf{p}_Q\end{bmatrix}$, and $\|\mathbf{E}\|^2 = \sum_i\sum_j e_{ij}^2$.
Pseudoinverse Rule - (2)
Minimize $F(\mathbf{W}) = \|\mathbf{T} - \mathbf{W}\mathbf{P}\|^2$.
If an inverse exists for $\mathbf{P}$, $F(\mathbf{W})$ can be made zero: $\mathbf{W} = \mathbf{T}\mathbf{P}^{-1}$.
When an inverse does not exist, $F(\mathbf{W})$ can be minimized using the pseudoinverse: $\mathbf{W} = \mathbf{T}\mathbf{P}^{+}$, where $\mathbf{P}^{+} = (\mathbf{P}^T\mathbf{P})^{-1}\mathbf{P}^T$ when the columns of $\mathbf{P}$ are independent.
Relationship to the Hebb Rule
Hebb rule: $\mathbf{W} = \mathbf{T}\mathbf{P}^T$. Pseudoinverse rule: $\mathbf{W} = \mathbf{T}\mathbf{P}^{+}$.
If the prototype patterns are orthonormal, $\mathbf{P}^T\mathbf{P} = \mathbf{I}$, so $\mathbf{P}^{+} = (\mathbf{P}^T\mathbf{P})^{-1}\mathbf{P}^T = \mathbf{P}^T$ and the two rules coincide.
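A sketch comparing the two rules on normalized but non-orthogonal patterns (invented for the example); NumPy's `np.linalg.pinv` computes $\mathbf{P}^{+}$:

```python
import numpy as np

# normalized but non-orthogonal prototypes as columns of P
P = np.array([[1.0, 0.6],
              [0.0, 0.8]])
T = np.array([[1.0, -1.0]])

W_hebb = T @ P.T                 # Hebb rule: contaminated by p_q^T p_k terms
W_pinv = T @ np.linalg.pinv(P)   # pseudoinverse rule
print("Hebb outputs:         ", W_hebb @ P)   # [0.4, -0.4], off target
print("Pseudoinverse outputs:", W_pinv @ P)   # [1.0, -1.0], reproduces T
```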
Example
Autoassociative Memory: set the targets equal to the inputs ($\mathbf{T} = \mathbf{P}$), so $\mathbf{W} = \mathbf{P}\mathbf{P}^T$; the network reproduces a stored pattern from a degraded version of it.
Tests: 50% occluded, 67% occluded, noisy patterns (7 pixels)
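A small autoassociative sketch in the same spirit as these tests: store one $\pm 1$ pattern (made up here) with $\mathbf{W} = \mathbf{p}\mathbf{p}^T$ and recover it from an occluded probe through a symmetric hard limit:

```python
import numpy as np

p = np.array([1, -1, 1, 1, -1, 1], dtype=float)  # stored +/-1 pattern
W = np.outer(p, p)                               # autoassociative Hebb: W = p p^T

probe = p.copy()
probe[:3] = 0                                    # occlude 50% of the pattern
recalled = np.sign(W @ probe)                    # symmetric hard limit on the output
print("recovered:", recalled, "match:", np.array_equal(recalled, p))
```

The surviving half of the probe has a positive inner product with the stored pattern, so every output element lands on the correct side of zero and the full pattern is restored.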
Supervised Hebbian Demo
Spectrum of Hebbian Learning
Basic supervised rule: $\mathbf{W}^{new} = \mathbf{W}^{old} + \mathbf{t}_q\,\mathbf{p}_q^T$
Supervised with learning rate: $\mathbf{W}^{new} = \mathbf{W}^{old} + \alpha\,\mathbf{t}_q\,\mathbf{p}_q^T$
Smoothing: $\mathbf{W}^{new} = (1-\gamma)\,\mathbf{W}^{old} + \alpha\,\mathbf{t}_q\,\mathbf{p}_q^T$
Delta rule (target minus actual): $\mathbf{W}^{new} = \mathbf{W}^{old} + \alpha\,(\mathbf{t}_q - \mathbf{a}_q)\,\mathbf{p}_q^T$
Unsupervised: $\mathbf{W}^{new} = \mathbf{W}^{old} + \alpha\,\mathbf{a}_q\,\mathbf{p}_q^T$
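A sketch of the delta-rule end of this spectrum, iterating $\mathbf{W} \leftarrow \mathbf{W} + \alpha(\mathbf{t} - \mathbf{a})\mathbf{p}^T$ until the outputs match the targets; the patterns are the same made-up non-orthogonal pair used in the pseudoinverse sketch:

```python
import numpy as np

P = np.array([[1.0, 0.6],
              [0.0, 0.8]])       # normalized, non-orthogonal inputs
T = np.array([[1.0, -1.0]])      # targets
W = np.zeros((1, 2))
alpha = 0.5

for epoch in range(100):
    for q in range(P.shape[1]):
        p, t = P[:, [q]], T[:, [q]]
        a = W @ p                      # linear associator output
        W = W + alpha * (t - a) @ p.T  # delta rule update
print("W @ P =", W @ P)                # converges toward T
```

Unlike the one-shot Hebb rule, the repeated error-driven updates remove the cross-talk between correlated patterns, reaching the same solution the pseudoinverse rule computes directly.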