1. Autoassociative Memory: performance with and without a pseudoinverse weight matrix
Submitted by: Bhupender Singh (151602)
Submitted to: Dr. Rajesh Mehra, NITTTR, Chandigarh
2. Introduction
A content-addressable memory is a type of memory that allows data to be recalled based on the degree of similarity between the input pattern and the patterns stored in memory. Such a memory is robust and fault-tolerant. Associative memories are of two types:
• auto associative
• hetero associative
3. An auto associative memory is used to retrieve a previously stored
pattern that most closely resembles the current input pattern.
In an auto associative memory, y(1), y(2), y(3), …, y(M) are the stored
patterns, and the output pattern vector y(m) can be recovered from a noisy
or incomplete version of y(m).
Hetero associative memory: the retrieved pattern is, in general,
different from the input pattern, not only in content but possibly also
in type and format.
A hetero associative memory stores the pairs {c(1), y(1)}, {c(2), y(2)}, …, {c(M), y(M)}
and outputs the pattern vector y(m) when a noisy or incomplete version of c(m) is
input.
5. Encoding or memorization:
An associative memory can be formed by constructing a matrix W of connection
weights. The entries of the correlation (weight) matrix are computed as
$(w_{ij})_k = (x_i)_k \, (y_j)_k$
where $(x_i)_k$ is the i-th component of pattern $x_k$ and $(y_j)_k$ is the j-th component of pattern $y_k$.
The full weight matrix sums the contribution of all p pattern pairs:
$W = \alpha \sum_{k=1}^{p} W_k$
where $W_k$ is the outer-product matrix for the k-th pattern pair and $\alpha$ is a proportionality or normalizing constant.
For autoassociation, with the stored patterns as the columns of a target matrix T, this becomes
• $W = T \, T^{T}$
• The output of the network is $a = \mathrm{hardlim}(W \, t_{noise})$.
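As a concrete illustration, here is a minimal NumPy sketch of this outer-product encoding and the thresholded recall. The pattern size, the noise level, and the random bipolar vectors standing in for the stored characters are assumptions for the demo, not the original experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 35, 5                          # assumed sizes, e.g. 5x7-pixel characters
T = rng.choice([-1, 1], size=(n, p))  # columns = stored bipolar patterns

# Outer-product (correlation) rule: W = T * T^T
W = T @ T.T

# Recall from a noisy probe: threshold W @ t_noise at 0 to get a bipolar
# output (the bipolar counterpart of MATLAB's hardlim, which returns 0/1).
t_noise = T[:, 0].copy()
flip = rng.choice(n, size=3, replace=False)
t_noise[flip] *= -1                   # flip 3 components to simulate noise
a = np.where(W @ t_noise >= 0, 1, -1)
print(np.array_equal(a, T[:, 0]))     # True if the stored pattern is recovered
```

With few stored patterns relative to n, this rule usually recovers the pattern; it is the cross-correlation terms, discussed next, that degrade recall as p grows.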
6. Retrieval or recalling:
The process of retrieving a stored pattern is called decoding. The net input to output unit j is
$y_{in,j} = \sum_{i=1}^{n} x_i \, w_{ij}$
Apply the following activation function to calculate the output:
$y_j = f(y_{in,j}) = +1$ if $y_{in,j} > 0$; $-1$ if $y_{in,j} < 0$.
To minimize the recall error, W can instead be built from the pseudoinverse of the target
matrix T, which minimizes the cross-correlation between the input vectors:
• $W = T \, T^{+}$
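A matching sketch of the pseudoinverse rule, under the same assumptions as above (random bipolar stand-in patterns):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.choice([-1, 1], size=(35, 5))   # columns = stored bipolar patterns

# Pseudoinverse rule: W = T * T^+.  Since T^+ T = I when the stored patterns
# are linearly independent, W @ T == T, so every stored pattern is a fixed
# point of recall; the cross-correlation terms of W = T @ T.T are suppressed.
W = T @ np.linalg.pinv(T)
print(np.allclose(W @ T, T))            # True: stored patterns recalled exactly
```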
7. Literature survey:
Publication: IEEE World Congress on Computational Intelligence, June 10-15, 2012, Brisbane, Australia
Author: Kazuaki Masuda, Faculty of Engineering, Kanagawa University
Title: A Weighting Approach for Autoassociative Memories to Improve Accuracy in Memorization
Problem: the cause of errors with memorization rules
Strength: proposes a weighting approach for the memorization rules so that the structure of the energy function can be altered in a desirable manner
Weakness: the capacity of the memory in terms of feasibility is not calculated

Publication: IEEE Transactions on Neural Networks, vol. 15, no. 1, January 2004
Author: Mehmet Kerem Müezzinoğlu, Student Member, IEEE, and Cüneyt Güzeliş
Title: A Boolean Hebb Rule for Binary Associative Memory Design
Problem: a binary associative memory design procedure that gives a Hopfield network with a symmetric binary weight matrix
Strength: introduces the memory vectors as maximal independent sets of an undirected graph, which is constructed by Boolean operations analogous to the conventional Hebb rule
Weakness: does not give the weights as signed integer values
8. Literature survey (contd.):
Publication: IEEE Transactions on Neural Networks, vol. 16, no. 6, November 2005
Author: Donq-Liang Lee and Thomas C. Chuang
Title: Designing Asymmetric Hopfield-Type Associative Memory With Higher Order Hamming Stability
Problem: optimal asymmetric Hopfield-type associative memory (HAM) design based on perceptron-type learning algorithms
Strength: recall capability as well as the number of spurious memories are improved by increasing the basin width around each prototype vector
Weakness: at the cost of slightly increasing the number of spurious memories in the state space

Publication: International Joint Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007
Author: Vicente O. Baez-Monroy and Simon O'Keefe
Title: An Associative Memory for Association Rule Mining
Problem: generation of association rules
Strength: an auto-associative memory based on a correlation matrix memory has been chosen from the large taxonomy of ANNs
Weakness: errors in the recalls have resulted from …

Publication: IEEE Transactions on Neural Networks and Learning Systems, vol. 24, no. 4, April 2013
Author: Sri Garimella and Hynek Hermansky
Title: Factor Analysis of Auto-Associative Neural Networks With Application in Speaker Verification
Problem: when the amount of speaker data …
Strength: yields a 23% relative improvement in equal error rate over the previously …
9. The problem is divided into 5 sections:
1) generating the alphabetical target vectors,
2) calculating the weight matrix W with the pseudoinverse,
3) testing the auto associative memory without noise,
4) testing the auto associative memory with noise,
5) comparing with the results obtained without using the pseudoinverse.
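A minimal end-to-end sketch of these five steps. Random bipolar vectors stand in for the 26 alphabetical target vectors, and the 5x7-pixel size and noise level are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 35, 26                           # assumed 5x7 pixels, 26 letters
T = rng.choice([-1, 1], size=(n, p))    # 1) target vectors (stand-ins)

W_corr = T @ T.T                        # weight matrix without pseudoinverse
W_pinv = T @ np.linalg.pinv(T)          # 2) weight matrix with pseudoinverse

def recall_errors(W, n_flips):
    """Count stored patterns not recovered after one recall step."""
    errors = 0
    for k in range(p):
        t = T[:, k].copy()
        if n_flips:
            idx = rng.choice(n, size=n_flips, replace=False)
            t[idx] *= -1                # flip pixels to simulate noise
        a = np.where(W @ t >= 0, 1, -1)
        errors += not np.array_equal(a, T[:, k])
    return errors

for flips in (0, 3):                    # 3) without noise, 4) with noise
    print(f"{flips} flipped pixels: "
          f"without pinv = {recall_errors(W_corr, flips)} errors, "
          f"with pinv = {recall_errors(W_pinv, flips)} errors")  # 5) compare
```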
10. Training algorithm using the Hebb or Delta
learning rule
Step 1 − Initialize all the weights to zero:
$w_{ij} = 0$ (i = 1 to n, j = 1 to n).
Step 2 − Perform steps 3-5 for each input
training vector.
Step 3 − Activate each input unit as
follows: $x_i = s_i$ (i = 1 to n).
Step 4 − Activate each output unit as
follows: $y_j = s_j$ (j = 1 to n).
Step 5 − Adjust the weights as follows:
$w_{ij}(\mathrm{new}) = w_{ij}(\mathrm{old}) + x_i \, y_j$
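A direct transcription of these steps into Python, assuming bipolar (+1/-1) training vectors; the name `hebb_train` is chosen for this sketch:

```python
import numpy as np

def hebb_train(patterns):
    """Hebb-rule training for an autoassociative net (steps 1-5 above)."""
    n = len(patterns[0])
    W = np.zeros((n, n))        # Step 1: initialize all weights to zero
    for s in patterns:          # Step 2: repeat steps 3-5 per training vector
        x = np.asarray(s)       # Step 3: input activations  x_i = s_i
        y = np.asarray(s)       # Step 4: output activations y_j = s_j
        W += np.outer(x, y)     # Step 5: w_ij(new) = w_ij(old) + x_i * y_j
    return W
```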
11. Testing Algorithm
Step 1 − Set the weights obtained during training with Hebb's
rule.
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activations of the input units equal to the
input vector.
Step 4 − Calculate the net input to each output unit (j = 1 to n):
$y_{in,j} = \sum_{i=1}^{n} x_i \, w_{ij}$
Step 5 − Apply the following activation function to calculate
the output:
$y_j = f(y_{in,j}) = +1$ if $y_{in,j} > 0$; $-1$ if $y_{in,j} < 0$.
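The same steps as code, reusing a weight matrix from the training sketch above; a zero net input is mapped to 0 here since the slides leave that case unspecified:

```python
import numpy as np

def hebb_test(W, x):
    """Recall with a trained weight matrix W (steps 1-5 above)."""
    x = np.asarray(x)               # Step 3: input activations from the vector
    y_in = x @ W                    # Step 4: y_in_j = sum_i x_i * w_ij
    return np.where(y_in > 0, 1,    # Step 5: +1 if y_in_j > 0
           np.where(y_in < 0, -1,   #         -1 if y_in_j < 0
                    0))             #         (ties at 0 left unspecified)
```

For example, `hebb_test(hebb_train([s]), s_noisy)` should return `s` whenever `s_noisy` differs from `s` in fewer than half its components.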
12. RESULT AND DISCUSSION (performance with and without the pseudoinverse weight matrix):
[Figure: weight matrix with pseudoinverse vs. weight matrix without pseudoinverse]
Using the pseudoinverse limits the range of the weight matrix entries to 0 to 1. Not using the
pseudoinverse results in a much larger range of values, −20 to 25, and the off-diagonal
elements have much higher values. This indicates cross-correlation between the stored patterns.
13. [Figure: weight matrix with pseudoinverse vs. weight matrix without pseudoinverse]
Even in the no-noise test, there are significantly more character errors when the pseudoinverse is not used in the weight matrix.
14. [Figure: weight matrix with pseudoinverse vs. weight matrix without pseudoinverse]
The autoassociative memory using the pseudoinverse has much better performance in noise, which follows from its
performance without noise.
15. Conclusion:
• There are significantly more character errors, even without noise, when the
pseudoinverse is not used in the weight matrix.
• Using the pseudoinverse limits the range of the weight matrix entries
to 0 to 1. Not using the pseudoinverse results in a larger range of
values, −20 to 25, with much higher off-diagonal elements,
which indicates cross-correlation between the stored patterns.
• The autoassociative memory using the pseudoinverse has much
better performance in noise, which follows from its performance
without noise.
16. References:
[1] D.-L. Lee and T. C. Chuang, "Designing Asymmetric Hopfield-Type Associative Memory With Higher
Order Hamming Stability," IEEE Trans. Neural Netw., vol. 16, no. 6, November 2005.
[2] K. Masuda, "A Weighting Approach for Autoassociative Memories to Improve Accuracy in
Memorization," IEEE World Congress on Computational Intelligence, Brisbane, Australia, June 10-15, 2012.
[3] V. O. Baez-Monroy and S. O'Keefe, "An Associative Memory for Association Rule Mining," International Joint
Conference on Neural Networks, Orlando, Florida, USA, August 12-17, 2007.
[4] S. Garimella and H. Hermansky, "Factor Analysis of Auto-Associative Neural Networks With Application in
Speaker Verification," IEEE Trans. Neural Netw. Learn. Syst., vol. 24, no. 4, April 2013.
[5] M. K. Müezzinoğlu and C. Güzeliş, "A Boolean Hebb Rule for Binary Associative Memory Design,"
IEEE Trans. Neural Netw., vol. 15, no. 1, January 2004.
17. References (contd.):
[6] J. A. Anderson, "A simple neural network generating an interactive
memory," Mathematical Biosciences, vol. 14, no. 3-4, pp. 197–220, 1972.
[7] T. Kohonen, "Correlation matrix memories," IEEE Trans. Comput., vol.
C-21, no. 4, pp. 353–359, 1972.
[8] K. Nakano, "Associatron–a model of associative memory," IEEE Trans.
Syst., Man, Cybern., vol. 2, no. 3, pp. 380–388, 1972.
[9] J. J. Hopfield, "Neural networks and physical systems with emergent
collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79,
no. 8, pp. 2554–2558, 1982.