Autoassociative Memory performance with and without pseudoinverse weight matrix
Submitted by:- Bhupender Singh (151602)
Submitted to:- Dr. Rajesh Mehra
NITTTR, Chandigarh
Introduction
 A content-addressable memory is a type of memory that allows data to be recalled based on the degree of similarity between the input pattern and the patterns stored in memory.
 The memory is robust and fault-tolerant.
 Associative memories are of two types:
 Auto-associative
 Hetero-associative
 An auto-associative memory is used to retrieve a previously stored pattern that most closely resembles the current input pattern.
 In an auto-associative memory the stored patterns are y(1), y(2), y(3), ..., y(m); the stored pattern vector y(m) can be recovered even when a noisy version of y(m) is presented.
 Hetero-associative memory:- the retrieved pattern is, in general, different from the input pattern, not only in content but possibly also in type and format.
 In a hetero-associative memory the stored pairs are {c(1), y(1)}, {c(2), y(2)}, ..., {c(m), y(m)}; the memory outputs the pattern vector y(m) when a noisy or incomplete version of c(m) is input.
Applications:-
 Image segmentation
 Face detection
 Computer graphics, multimedia and multifractal analysis
 Signature detection
 Image recognition
Encoding or memorization:-
An associative memory is formed by constructing a matrix W of connection weights. The entries of the correlation (weight) matrix for the k-th pattern pair are computed as
(W_ij)_k = (x_i)_k (y_j)_k
where (x_i)_k is the i-th component of pattern x_k and (y_j)_k is the j-th component of pattern y_k.
The full weight matrix is the scaled sum over all p pattern pairs:
W = α Σ_{k=1}^{p} W_k
where α is a proportionality or normalizing constant.
With the stored patterns collected as the columns of a target matrix T, this correlation encoding can be written as
W = T * T^T
• The output of the network is a = hardlim(W * t_noise).
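A minimal NumPy sketch of this encoding step, assuming two hypothetical bipolar (+1/-1) patterns stored as the columns of T; a bipolar hard limiter stands in here for MATLAB's hardlim:

```python
import numpy as np

# Columns of T are the stored bipolar patterns (hypothetical example data).
T = np.array([[ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1]])                # two 6-dimensional patterns

# Correlation (outer-product) encoding: W = sum_k x_k x_k^T = T * T^T
W = T @ T.T

# Recall of a noisy probe (last bit of pattern 0 flipped)
t_noise = np.array([1, 1, 1, 1, 1, -1])
a = np.where(W @ t_noise > 0, 1, -1)    # bipolar hard limiter
print(a)                                 # recovers pattern 0: [1 1 1 1 1 1]
```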
Retrieval or recalling:-
The process of retrieving a stored pattern is called decoding.
 Calculate the net input to each output unit:
Y_inj = Σ_{i=1}^{n} x_i w_ij
 Apply the following activation function to calculate the output:
Y_j = f(Y_inj) = +1 if Y_inj > 0, -1 if Y_inj < 0
 To minimize the error, the pseudoinverse of the target matrix T is used, which minimizes the cross-correlation between the stored input vectors:
• W = T * T^+
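The same sketch with the pseudoinverse weight matrix, reusing the hypothetical T above; np.linalg.pinv computes T^+, and the cross-correlation terms are suppressed:

```python
import numpy as np

T = np.array([[ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1],
              [ 1,  1],
              [ 1, -1]])                 # same hypothetical stored patterns

# Pseudoinverse weight matrix: W = T * T^+ (projects any input onto the
# subspace spanned by the stored patterns, minimizing cross-correlation)
W = T @ np.linalg.pinv(T)

t_noise = np.array([1, 1, 1, 1, 1, -1])  # noisy version of pattern 0
a = np.where(W @ t_noise > 0, 1, -1)
print(a)                                  # again recovers [1 1 1 1 1 1]
```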
Literature survey:-
Publication: IEEE World Congress on Computational Intelligence, June 10-15, 2012, Brisbane, Australia
Author: Kazuaki Masuda, Faculty of Engineering, Kanagawa University
Title: A Weighting Approach for Autoassociative Memories to Improve Accuracy in Memorization
Problem: cause of errors with memorization rules
Strength: proposes a weighting approach for the memorization rules so that the structure of the energy function can be altered in a desirable manner
Weakness: capacity of the memory in terms of feasibility is not calculated

Publication: IEEE Transactions on Neural Networks, Vol. 15, No. 1, January 2004
Author: Mehmet Kerem Müezzinoğlu, Student Member, IEEE, and Cüneyt Güzeliş
Title: A Boolean Hebb Rule for Binary Associative Memory Design
Problem: a binary associative memory design procedure that gives a Hopfield network with a symmetric binary weight matrix
Strength: introduces the memory vectors as maximal independent sets of an undirected graph constructed by Boolean operations analogous to the conventional Hebb rule
Weakness: does not give signed integer-valued weights
Publication: IEEE Transactions on Neural Networks, Vol. 16, No. 6, November 2005
Author: Donq-Liang Lee and Thomas C. Chuang
Title: Designing Asymmetric Hopfield-Type Associative Memory With Higher Order Hamming Stability
Problem: optimal asymmetric Hopfield-type associative memory (HAM) design based on perceptron-type learning algorithms
Strength: recall capability as well as the number of spurious memories are improved by increasing the basin width around each prototype vector
Weakness: at the cost of slightly increasing the number of spurious memories in the state space

Publication: International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007
Author: Vicente O. Baez-Monroy and Simon O'Keefe
Title: An Associative Memory for Association Rule Mining
Problem: generation of association rules
Strength: an auto-associative memory based on a correlation matrix memory is chosen from the large taxonomy of ANNs
Weakness: errors in the recalls have resulted from ...

Publication: IEEE Transactions on Neural Networks and Learning Systems, Vol. 24, No. 4, April 2013
Author: Sri Garimella and Hynek Hermansky
Title: Factor Analysis of Auto-Associative Neural Networks With Application in Speaker Verification
Problem: when the amount of speaker data ...
Strength: yields a 23% relative improvement in equal error rate over the previously ...
Weakness: ...
The problem is divided into five parts (a sketch of these steps follows the list):
1) generating the alphabetical target vectors
2) calculating the weight matrix W with the pseudoinverse
3) testing the auto-associative memory without noise
4) testing the auto-associative memory with noise
5) comparing to the results without using the pseudoinverse
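A hedged end-to-end NumPy sketch of these five steps. The actual 5x7 alphabet bitmaps used in the experiment are not reproduced in the slides, so random bipolar vectors stand in for the target characters, and the error counts will differ from the reported results:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Target vectors: 26 bipolar "characters" of length 35 (random stand-ins
#    for the real 5x7 alphabet bitmaps).
n, p = 35, 26
T = np.where(rng.random((n, p)) > 0.5, 1, -1)

def recall(W, x):
    # One-pass recall with a bipolar hard limiter.
    return np.where(W @ x > 0, 1, -1)

def character_errors(W, noise_bits):
    # Flip `noise_bits` random components of every character and count
    # how many are recalled incorrectly.
    errors = 0
    for k in range(p):
        x = T[:, k].copy()
        flip = rng.choice(n, size=noise_bits, replace=False)
        x[flip] *= -1
        if not np.array_equal(recall(W, x), T[:, k]):
            errors += 1
    return errors

# 2) Weight matrix with the pseudoinverse, and the plain correlation matrix
#    for the comparison in step 5.
W_pinv = T @ np.linalg.pinv(T)
W_corr = T @ T.T

# 3) Test without noise, 4) test with noise, 5) compare the two matrices.
for noise_bits in (0, 3, 7):
    print(f"{noise_bits} flipped bits: "
          f"pinv errors = {character_errors(W_pinv, noise_bits)}, "
          f"no-pinv errors = {character_errors(W_corr, noise_bits)}")
```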
Training algorithm using the Hebb or Delta learning rule
Step 1 − Initialize all the weights to zero: W_ij = 0 (i = 1 to n, j = 1 to n)
Step 2 − Perform steps 3-5 for each input training vector.
Step 3 − Activate each input unit: X_i = s_i (i = 1 to n)
Step 4 − Activate each output unit: Y_j = s_j (j = 1 to n)
Step 5 − Adjust the weights: W_ij(new) = W_ij(old) + X_i Y_j
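A direct transcription of these training steps in NumPy, assuming bipolar training vectors; for an auto-associative memory the input and output vectors are the same pattern s:

```python
import numpy as np

def hebb_train(patterns):
    # Step 1: initialise all weights to zero.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for s in patterns:          # Step 2: repeat steps 3-5 for each training vector
        x = s                   # Step 3: activate input units,  X_i = s_i
        y = s                   # Step 4: activate output units, Y_j = s_j
        W += np.outer(x, y)     # Step 5: W_ij(new) = W_ij(old) + X_i * Y_j
    return W

# Hypothetical bipolar patterns, one per row.
patterns = np.array([[ 1,  1,  1,  1,  1,  1],
                     [ 1, -1,  1, -1,  1, -1]])
W = hebb_train(patterns)
```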
Testing Algorithm
Step 1 − Set the weights obtained during training with Hebb's rule.
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activation of the input units equal to that of the input vector.
Step 4 − Calculate the net input to each output unit (j = 1 to n):
Y_inj = Σ_{i=1}^{n} x_i w_ij
Step 5 − Apply the following activation function to calculate the output:
Y_j = f(Y_inj) = +1 if Y_inj > 0, -1 if Y_inj < 0
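And the matching test loop, again in NumPy, with the output unit set to +1 when its net input is positive; the hypothetical patterns and weights from the training sketch above are rebuilt so the snippet runs on its own:

```python
import numpy as np

def hebb_recall(W, x):
    y_in = x @ W                        # Step 4: net input Y_inj = sum_i x_i * w_ij
    return np.where(y_in > 0, 1, -1)    # Step 5: +1 if Y_inj > 0, -1 if Y_inj < 0

# Step 1: weights obtained during training (same hypothetical patterns as above).
patterns = np.array([[ 1,  1,  1,  1,  1,  1],
                     [ 1, -1,  1, -1,  1, -1]])
W = sum(np.outer(s, s) for s in patterns)

# Steps 2-3: present each stored pattern with its last component flipped.
for s in patterns:
    probe = s.copy()
    probe[-1] *= -1
    print(probe, "->", hebb_recall(W, probe))   # both probes are corrected
```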
RESULT and DISCUSSION (Performance with and without pseudoinverse weight matrix):-
[Figure: weight matrix with pseudoinverse | weight matrix without pseudoinverse]
Using the pseudoinverse limits the range of the weight matrix values to 0 to 1. Not using the pseudoinverse results in a much larger range of values, about -20 to 25, and the off-diagonal elements are much larger, which indicates cross-correlation between the stored patterns.
[Figure: recall results with pseudoinverse | without pseudoinverse]
There are significantly more character errors, even without noise, when the pseudoinverse is not used in the weight matrix.
[Figure: recall results with pseudoinverse | without pseudoinverse]
The auto-associative memory using the pseudoinverse performs much better under noise, which follows from its performance without noise.
Conclusion:-
 There are significantly more character errors, even without noise, when the pseudoinverse is not used in the weight matrix.
 Using the pseudoinverse limits the range of the weight matrix values to 0 to 1; not using it results in a much larger range, about -20 to 25, with much larger off-diagonal elements, which indicates cross-correlation between the stored patterns.
 The auto-associative memory using the pseudoinverse performs much better under noise, which follows from its performance without noise.
References:-
[1] D.-L. Lee and T. C. Chuang, "Designing Asymmetric Hopfield-Type Associative Memory With Higher Order Hamming Stability," IEEE Trans. Neural Netw., vol. 16, no. 6, November 2005.
[2] K. Masuda, "A Weighting Approach for Autoassociative Memories to Improve Accuracy in Memorization," IEEE World Congress on Computational Intelligence, Brisbane, Australia, June 10-15, 2012.
[3] V. O. Baez-Monroy and S. O'Keefe, "An Associative Memory for Association Rule Mining," International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007.
[4] S. Garimella and H. Hermansky, "Factor Analysis of Auto-Associative Neural Networks With Application in Speaker Verification," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 4, April 2013.
[5] M. K. Müezzinoğlu and C. Güzeliş, "A Boolean Hebb Rule for Binary Associative Memory Design," IEEE Trans. Neural Netw., vol. 15, no. 1, January 2004.
Contd.
[6] J. A. Anderson, "A simple neural network generating an interactive memory," Mathematical Biosciences, vol. 14, no. 3-4, pp. 197–220, 1972.
[7] T. Kohonen, "Correlation matrix memories," IEEE Trans. Comput., vol. C-21, no. 4, pp. 353–359, 1972.
[8] K. Nakano, "Associatron - a model of associative memory," IEEE Trans. Syst., Man, Cybern., vol. 2, no. 3, pp. 380–388, 1972.
[9] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, no. 8, pp. 2554–2558, 1982.
Thanks