SUBJECT 20EEE523T
INTELLIGENT CONTROLLER
UNIT 2 PATTERN ASSOCIATION
Presented by
Mrs. R.SATHIYA
Reg. No: PA2313005023001
Research Scholar, Department of Electrical and Electronics Engineering
SRM Institute of Technology & Science, Chennai
HEBB RULE – PATTERN ASSOCIATION
Contd ..
• It learns associations between input patterns and output patterns.
• A pattern associator can be trained to respond with a certain output pattern when presented with an input pattern.
• The connection weights can be adjusted in order to change the input/output behaviour.
• The learning rule specifies how the network changes its weights for a given input/output association.
• The most commonly used learning rules with pattern associators are the Hebb rule and the Delta rule.
Training Algorithms For Pattern Association
• Used for finding the weights of an associative memory neural network.
• The patterns are represented in binary or bipolar form.
• A similar algorithm, with a slight extension, finds the weights by outer products.
• We want to consider examples in which the input to the net after training is a pattern that is similar to, but not the same as, one of the training inputs.
• Each association is an input-output vector pair, s:t.
Training Algorithms For Pattern Association
Contd ..
• To store a set of associations s(p):t(p), p = 1, …, P, where s(p) and t(p) are the input and target vectors of the p-th pair
• The weight matrix W is given by
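Written in the usual Hebb-rule notation (the symbol conventions here are the standard textbook ones, assumed rather than copied from the slide), with s(p) of length n and t(p) of length m:

w_{ij} = \sum_{p=1}^{P} s_i(p)\, t_j(p), \qquad \mathbf{W} = \sum_{p=1}^{P} \mathbf{s}(p)^{T}\, \mathbf{t}(p)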
Outer product
• Instead of obtaining W by iterative updates, it can be computed from the training set by calculating the outer product of s and t.
• The weights are initially zero.
• The outer product of two vectors:
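A minimal sketch of this computation in NumPy (the bipolar pattern values below are made up for illustration): the weight matrix for one pair is the outer product of s and t, and the stored target is recovered by thresholding s·W.

```python
import numpy as np

# Hypothetical bipolar training pair (s, t)
s = np.array([ 1, -1,  1, -1])          # input pattern, n = 4
t = np.array([ 1,  1])                  # target pattern, m = 2

W = np.outer(s, t)                      # n x m weight matrix for this single pair
print(W)

# Recall: the stored target is recovered by thresholding s W
print(np.sign(s @ W))                   # -> [1 1]
```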
example
contd..
Example
Perfect recall versus cross talk
• The suitability of the Hebb rule for a particular problem depends on the correlation among the input training vectors.
• If the input vectors are uncorrelated (orthogonal), the Hebb rule will produce the correct weights, and the response of the net when tested with one of the training vectors will be perfect recall of that input vector's associated target.
• If the input vectors are not orthogonal, the response will include a portion of each of their target values. This is commonly called cross talk.
Contd ..
• Two vectors are orthogonal if their dot product is 0.
• Orthogonality between the input patterns can be checked only for binary or bipolar patterns.
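A quick check on two illustrative bipolar vectors:

```python
import numpy as np

# Two bipolar input vectors are orthogonal when their dot product is zero.
s1 = np.array([1,  1, -1, -1])
s2 = np.array([1, -1,  1, -1])

print(int(s1 @ s2))        # 0 -> orthogonal, perfect recall expected
print(int(s1 @ s1))        # 4 -> a vector is never orthogonal to itself
```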
Delta rule
• In its original form, as introduced in Chapter 2, the delta rule assumed that the activation function for the output unit was the identity function.
• A simple extension allows the use of any differentiable activation function; we shall call this the extended delta rule.
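A sketch of the two update forms for a single output unit (the learning rate, input and target values below are illustrative, and the sigmoid is just one possible choice of differentiable activation):

```python
import numpy as np

alpha = 0.1
x = np.array([1.0, -1.0, 1.0])     # input vector
w = np.zeros(3)                    # weights
t = 1.0                            # target

# Original delta rule: identity activation, dw = alpha * (t - y) * x
net = w @ x
y = net
w += alpha * (t - y) * x

# Extended delta rule: any differentiable f, dw = alpha * (t - y) * f'(net) * x
f = lambda n: 1.0 / (1.0 + np.exp(-n))   # logistic sigmoid
net = w @ x
y = f(net)
w += alpha * (t - y) * y * (1 - y) * x   # f'(net) = y(1-y) for the sigmoid
```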
Associate Memory Network
• These kinds of neural networks work on the basis of pattern association, which means they can store different patterns and, when producing an output, they return one of the stored patterns by matching it with the given input pattern.
• These types of memories are also called Content-Addressable Memory (CAM). Associative memory makes a parallel search with the stored patterns as data files.
• Example
Contd ..
The two types of associative memories:
• Auto Associative Memory
• Hetero Associative memory
Auto associative Memory
• The training input and output vectors are the same.
• Determination of the weights is called storing of the vectors.
• The weights are initialized to zero.
• An auto-associative net may be built with no self-connections.
• Its performance is judged by its ability to reproduce a stored pattern from noisy input.
• Its performance is, in general, better for bipolar vectors than for binary vectors.
Architecture
• The input and output vectors are the same.
• The input vector has n components and the output vector has n components.
• The inputs and outputs are connected through weighted connections.
Training Algorithm
Testing Algorithm
• An auto-associative net can be used to determine whether a given vector is a 'known' or an 'unknown' vector.
• The net is said to recognize a 'known' vector if it produces a pattern of activation on the output that is the same as one of the stored patterns.
• The testing procedure is as follows:
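A minimal sketch of this store-and-test procedure (the bipolar patterns below are made up for illustration):

```python
import numpy as np

# Store two bipolar patterns in an auto-associative net (no self-connections),
# then test whether a vector is "known" (recalled exactly) or "unknown".
patterns = np.array([[ 1, -1,  1, -1],
                     [ 1,  1, -1, -1]])

W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)                 # remove self-connections

def recall(x):
    return np.sign(x @ W)

def is_known(x):
    out = recall(x)
    return any((out == p).all() for p in patterns)

print(is_known(patterns[0]))           # True  -> stored vector is recognized
print(is_known(np.array([-1, 1, 1, 1])))  # False -> unrelated vector is 'unknown'
```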
Example
Hetero Associative Memory
• The input training vectors and the output target vectors are not the same.
• The weights are determined by the Hebb rule or the delta rule.
• The weights are determined so that the network stores a set of patterns.
• A hetero-associative network is static in nature; hence there are no non-linear or delay operations.
Architecture
• The input layer has 'n' units and the output layer has 'm' units.
• There are weighted interconnections between input and output.
• Associative memory neural networks are nets in which the weights are determined in such a way that the net can store a set of P pattern associations.
• Each association is a pair of vectors (s(p), t(p)), with p = 1, 2, ….., P
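A small sketch of hetero-associative storage and recall (illustrative bipolar pairs; 4-component inputs mapped to 2-component targets):

```python
import numpy as np

S = np.array([[ 1, -1, -1, -1],
              [-1,  1,  1, -1]])        # input vectors s(p), n = 4
T = np.array([[ 1, -1],
              [-1,  1]])                # target vectors t(p), m = 2

W = sum(np.outer(s, t) for s, t in zip(S, T))   # n x m weight matrix

for s, t in zip(S, T):
    print(np.sign(s @ W), t)            # each stored input recalls its own target
```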
Testing Algorithm
Example
Contd..
Artificial Neural Network - Hopfield
Networks
• The Hopfield neural network was invented by Dr. John J. Hopfield in 1982.
• It consists of a single layer which contains one or more fully connected recurrent neurons.
• The Hopfield network is commonly used for auto-association and optimization tasks.
• Two types of network:
1. Discrete Hopfield network
2. Continuous Hopfield network
Discrete hopfield network
• A Hopfield network which operates in a discrete-time fashion; in other words, the input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, -1) in nature.
• The network has symmetrical weights with no self-connections, i.e., wij = wji and wii = 0.
• Only one unit updates its activation at a time.
• The asynchronous updating of the units allows a function, known as an energy or Lyapunov function, to be found for the net.
Architecture
Following are some important points to keep in mind about the discrete Hopfield network:
• This model consists of neurons with one inverting and one non-inverting output.
• The output of each neuron should be the input of the other neurons but not the input of itself.
• Weight/connection strength is represented by wij.
• Connections can be excitatory as well as inhibitory: a connection is excitatory if the output of the neuron is the same as the input, otherwise inhibitory.
• Weights should be symmetrical, i.e. wij = wji.
• The outputs from Y1 going to Y2, Yi and Yn have the weights W12, W1i and W1n respectively. Similarly, the other arcs have their own weights.
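A minimal sketch of the asynchronous recall procedure described above (the stored patterns and the corrupted test vector are illustrative):

```python
import numpy as np

# Discrete Hopfield recall: symmetric weights, zero diagonal,
# one-unit-at-a-time updates until the state stops changing.
patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(x, max_sweeps=10):
    y = x.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in np.random.permutation(len(y)):      # asynchronous update order
            net = x[i] + y @ W[:, i]                 # external input + feedback
            new = 1 if net > 0 else (-1 if net < 0 else y[i])
            if new != y[i]:
                y[i] = new
                changed = True
        if not changed:                              # stable state reached
            break
    return y

corrupted = np.array([1, -1, 1, -1, -1, -1])         # pattern 0 with one bit flipped
print(recall(corrupted))                             # recovers [ 1 -1  1 -1  1 -1]
```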
Algorithm
Testing algorithm
Example – recalling of corrupted
pattern
Example
Contd..
Energy function
• An energy function is defined as a function that is a bounded and non-increasing function of the state of the system.
• The energy function Ef, also called a Lyapunov function, determines the stability of the discrete Hopfield network, and is characterized as follows
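A commonly used form of this energy function (written here with external inputs x_i, thresholds θ_i and unit activations y_i; the exact symbols are an assumption of this sketch) is

E_f = -\frac{1}{2}\sum_{i}\sum_{j \neq i} y_i\, y_j\, w_{ij} \;-\; \sum_{i} x_i\, y_i \;+\; \sum_{i} \theta_i\, y_i

When a single unit Y_i changes its activation by \Delta y_i, the energy changes by \Delta E = -\bigl(x_i + \sum_j y_j w_{ji} - \theta_i\bigr)\, \Delta y_i, which is never positive.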
Contd ..
The change in energy depends on the fact that only one unit can update its
activation at a time.
Storage capacity
• The number of binary patterns that can be stored and recalled in a net with reasonable accuracy is given approximately by
• For bipolar patterns
where n is the number of neurons in the net
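The commonly quoted estimates (the constants below are the standard textbook values, stated here as an assumption rather than copied from the slide) are

P_{\text{binary}} \approx 0.15\, n, \qquad P_{\text{bipolar}} \approx \frac{n}{2\log_2 n}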
Continuous Hopfield network
• In comparison with the discrete Hopfield network, the continuous network has time as a continuous variable.
• It is also used in auto-association and in optimization problems such as the travelling salesman problem.
• Nodes have a continuous, graded output.
• The energy decreases continuously with time.
• It can be realized as an electrical circuit which uses non-linear amplifiers and resistors.
• Used in building Hopfield networks with VLSI technology.
Energy function
Iterative Autoassociative networks
• The net may not respond to the input signal with the stored target pattern on the first pass.
• It may respond with a pattern that merely resembles a stored pattern.
• In that case, use the first response as input to the net again.
• An iterative auto-associative network recovers the original stored vector when presented with a test vector close to it.
• Also called recurrent auto-associative networks.
Example
Contd ..
Linear Autoassociative Memory
• Proposed by James Anderson, 1977.
• Based on the Hebbian rule.
• Linear algebra is used for analyzing the performance of the net.
• Each stored vector is an eigenvector of the weight matrix.
• The corresponding eigenvalue is the number of times the vector was presented.
• When the input vector is X, the output response is XW, where W is the weight matrix.
Brain In The Box Network
• An activity pattern inside the box receives positive feedback on certain components, which forces it outward.
• When it hits the walls, it moves to a corner of the box, where it remains.
• The walls represent the saturation limit of each state.
• Activations are restricted between -1 and +1.
• Self-connections exist.
Training Algorithm
Autoassociative With Threshold unit
• If a threshold unit is set, then a threshold function is used as the activation function.
• Training algorithm
EXAMPLE
Contd ..
Temporal Associative Memory
Network
• Stores a sequence of patterns as dynamic transitions.
• An associative memory with this capacity for temporal patterns is called a temporal associative memory network.
Bidirectional associative
memory(BAM)
• It was first proposed by Bart Kosko in the year 1988.
• Performs backward and forward search.
• It associates patterns, say from set A to patterns from set B, and vice versa.
• Encodes bipolar/binary patterns using the Hebbian learning rule.
• Human memory is necessarily associative.
• It uses a chain of mental associations to recover a lost memory, e.g. if we have lost an umbrella.
BAM Architecture
• Weights are bidirectional
• The X layer has 'n' input units
• The Y layer has 'm' output units
• The weight matrix from X to Y is W and from Y to X is WT
• The process is repeated until the input and output vectors become unchanged (reach a stable state)
• Two types:
1. Discrete BAM
2. Continuous BAM
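A minimal sketch of this bidirectional recall (the training pairs and the noisy test vector are illustrative; net inputs of exactly zero do not occur in this small example, so plain sign thresholding is enough):

```python
import numpy as np

S = np.array([[ 1,  1,  1, -1, -1, -1],
              [ 1, -1,  1, -1,  1, -1]])     # X-layer patterns (n = 6)
T = np.array([[ 1, -1],
              [ 1,  1]])                     # Y-layer patterns (m = 2)

W = sum(np.outer(s, t) for s, t in zip(S, T))   # n x m; used as W (X->Y) and W.T (Y->X)

def bam_recall(x, iters=10):
    y = np.sign(x @ W)                       # forward pass X -> Y
    for _ in range(iters):
        x_new = np.sign(y @ W.T)             # backward pass Y -> X
        y_new = np.sign(x_new @ W)           # forward pass again
        if (x_new == x).all() and (y_new == y).all():
            break                            # stable state reached
        x, y = x_new, y_new
    return x, y

print(bam_recall(np.array([1, 1, -1, -1, -1, -1])))   # noisy version of S[0]
```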
DISCRETE BIDIRECTIONAL AUTO
ASSOCIATIVE MEMORY
• Here the weights are found as the sum of the outer products of the bipolar-form training vector pairs.
• The activation function is a step function with a non-zero threshold.
• Determination of weights:
1. Let the input vectors be denoted by s(p) and the target vectors by t(p).
2. The weight matrix stores a set of input and target vectors, where s(p) = (s1(p), ..., si(p), ..., sn(p)) and t(p) = (t1(p), ..., tj(p), ..., tm(p)).
3. It can be determined by the Hebb rule training algorithm.
4. If the input is binary, the weight matrix W = {wij} is given by
contd
• If the input vectors are bipolar, the weight matrix W = {wij} can be defined as
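In the usual textbook notation (assumed here rather than copied from the slide), the two cases are

\text{binary input pairs: } w_{ij} = \sum_{p=1}^{P} \bigl(2 s_i(p) - 1\bigr)\bigl(2 t_j(p) - 1\bigr), \qquad \text{bipolar input pairs: } w_{ij} = \sum_{p=1}^{P} s_i(p)\, t_j(p)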
• Activation function for BAM
• The activation function is chosen based on whether the input-target vector pairs used are binary or bipolar.
• The activation function for the Y-layer:
1. with binary input vectors is
2. with bipolar input vectors is
Testing Algorithm for Discrete
Bidirectional Associative Memory
Continuous BAM
• A continuous BAM [Kosko, 1988] transforms input smoothly and continuously into output in the range [0, 1] using the logistic sigmoid function as the activation function for all units.
• For binary input vectors, the weights are
• The activation function is the logistic sigmoid.
• With bias included, the net input is
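In the usual notation (the symbol names are an assumption of this sketch), the logistic sigmoid and the net input with bias are

f(y_{in_j}) = \frac{1}{1 + e^{-y_{in_j}}}, \qquad y_{in_j} = b_j + \sum_{i} x_i\, w_{ij}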
Hamming distance ,analysis of energy
function and storage capacity
• Hamming distance
• the number of mismatched components of two given bipolar/binary vectors.
• Denoted by
• Average distance =
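A small illustration (the two vectors are made up):

```python
import numpy as np

# Hamming distance: number of positions in which two bipolar vectors differ.
x1 = np.array([ 1, -1, -1,  1,  1])
x2 = np.array([ 1,  1, -1, -1,  1])

H = int(np.sum(x1 != x2))       # 2 mismatched components
avg = H / len(x1)               # average (normalized) Hamming distance = 0.4
print(H, avg)
```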
Contd..
• Energy function
• Stability is determined by a Lyapunov function (energy function).
Storage capacity
• The memory capacity is min(m, n).
• "n" is the number of units in the X layer and "m" is the number of units in the Y layer.
• A more conservative capacity is estimated as follows
Application of BAM
• Fault Detection
• Pattern Association
• Real Time Patient Monitoring
• Medical Diagnosis
• Pattern Mapping
• Pattern Recognition systems
• Optimization problems
• Constraint satisfaction problem
Example
Competitive learning network
• It is concerned with unsupervised training in which the output nodes compete with each other to represent the input pattern.
• Basic concept of a competitive network:
• This network is just like a single-layer feed-forward network, but with feedback connections between the outputs.
• The connections between the outputs are of the inhibitory type, shown by dotted lines, which means the competitors never support themselves.
Contd..
• Example
• Consider a set of students: if you want to classify them on the basis of evaluation performance, their scores may be calculated, and the one whose score is higher than the others' should be the winner.
• This is called a competitive net. The extreme form of these competitive nets is called winner-take-all,
• i.e. only one neuron in the competing group will possess a non-zero output signal at the end of the competition.
• Only one neuron is active at a time. Only the winner has its weights updated; the rest remain unchanged.
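A minimal sketch of one winner-take-all learning step (the unit count, input and learning rate are illustrative choices):

```python
import numpy as np

# The output unit whose weight vector is closest to the input wins,
# and only the winner's weights are moved toward the input.
rng = np.random.default_rng(0)
W = rng.random((3, 4))                 # 3 competing units, 4-dimensional inputs
x = np.array([0.9, 0.1, 0.8, 0.2])
alpha = 0.5

winner = np.argmin(np.linalg.norm(W - x, axis=1))    # competition
W[winner] += alpha * (x - W[winner])                  # only the winner learns
print(winner, W[winner])
```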
Contd..
• Some of the neural networks that come under this category:
1. Kohonen self-organizing feature maps
2. Learning vector quantization
3. Adaptive resonance theory
Kohonen self organizing feature map
• A Self-Organizing Map (or Kohonen Map or SOM) is a type of Artificial Neural Network.
• It follows an unsupervised learning approach and trains its network through a competitive learning algorithm.
• SOM is used for clustering and mapping (or dimensionality reduction), mapping multidimensional data onto a lower-dimensional space, which allows people to reduce complex problems for easy interpretation.
• SOM has two layers:
1. Input layer
2. Output layer.
operation
• SOM operates in two modes: (1) Training (2) Mapping
• Training process: develops the map using a competitive procedure (vector quantization).
• Mapping process: classifies a newly supplied input based on the training outcomes.
• Basic competitive learning implies that the competition process takes place before the cycle of learning.
• The competition process suggests that some criteria select a winning processing element.
• After the winning processing element is selected, its weight vector is adjusted according to the learning law used.
• Feature mapping is a process which converts patterns of arbitrary dimensionality into a response of a one- or two-dimensional array of neurons.
• The network performing such a mapping is called a feature map. Besides reducing the higher dimensionality, it has the ability to preserve the neighbourhood topology.
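A minimal sketch of a typical SOM training loop (a one-dimensional map; the sizes, learning rate, neighbourhood radius and decay factors are illustrative choices, not values from the original slides):

```python
import numpy as np

# 1-D output map of 5 units, 3-dimensional inputs.
# The best-matching unit (BMU) and its map neighbours move toward each sample.
rng = np.random.default_rng(1)
n_units, dim = 5, 3
W = rng.random((n_units, dim))                  # one weight vector per map unit
data = rng.random((100, dim))                   # unlabeled training samples

alpha, radius = 0.5, 2.0
for epoch in range(20):
    for x in data:
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))        # competition
        d = np.abs(np.arange(n_units) - bmu)                  # distance on the map
        h = np.exp(-(d ** 2) / (2 * radius ** 2))             # neighbourhood function
        W += alpha * h[:, None] * (x - W)                     # cooperative update
    alpha *= 0.9                                              # decay learning rate
    radius *= 0.9                                             # shrink neighbourhood
```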
Training algorithm
Application-speech recognition
• Short segments of the speech waveform are given as input.
• The map groups the same kinds of phonemes onto the output array; this is called a feature extraction technique.
• After extracting the features, with the help of some acoustic models as back-end processing, it recognizes the utterance.
Learning vector quantization(LVQ)
• Purpose: dimensionality reduction and data compression.
• A self-organizing map (SOM) encodes a large set of input vectors {x} by finding a smaller set of representatives/prototypes/clusters.
• LVQ is a supervised version of vector quantization that can be used when we have labelled input data.
• It is a two-stage process: a SOM is followed by LVQ.
• The first stage is feature selection: the unsupervised identification of a reasonably small set of features.
• The second stage is classification, where the feature domains are assigned to individual classes.
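A minimal sketch of the LVQ1 update rule (the prototype positions, labels and learning rate below are illustrative): the nearest prototype is pulled toward a correctly classified sample and pushed away otherwise.

```python
import numpy as np

prototypes = np.array([[0.2, 0.2],
                       [0.8, 0.8]])       # one prototype per class (illustrative)
proto_labels = np.array([0, 1])
alpha = 0.3

def lvq1_step(x, label):
    j = np.argmin(np.linalg.norm(prototypes - x, axis=1))   # nearest prototype
    if proto_labels[j] == label:
        prototypes[j] += alpha * (x - prototypes[j])          # attract
    else:
        prototypes[j] -= alpha * (x - prototypes[j])          # repel

lvq1_step(np.array([0.25, 0.15]), 0)
lvq1_step(np.array([0.9, 0.7]), 1)
print(prototypes)
```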
Architecture
Example
• The first step is to train the machine with all the different fruits one by one, like this:
• If the shape of the object is rounded, has a depression at the top and is red in colour, then it will be labelled as Apple.
• If the shape of the object is a long curving cylinder having a green-yellow colour, then it will be labelled as Banana.
• After training on the data, you are given a new, separate fruit, say a banana from the basket, and asked to identify it.
• It will first classify the fruit by its shape and colour, confirm the fruit name as BANANA, and put it in the Banana category. Thus the machine learns from the training data (the basket of fruits) and then applies that knowledge to the test data (the new fruit).
Flowchart
Adaptive resonance theory
• Adaptive: they are open to new learning,
• Resonance: without discarding the previous or old information.
• ART networks are known to solve the stability-plasticity dilemma:
• stability refers to their nature of memorizing what has been learned, and
• plasticity refers to the fact that they are flexible enough to gain new information.
• Due to this nature, ART networks are always able to learn new input patterns without forgetting the past.
Contd..
• Invented by Grossberg in 1976 and based on an unsupervised learning model.
• Resonance means a target vector matches the input vector closely enough.
• ART matching leads to resonance, and only in the resonance state does the ART network learn.
• Suitable for problems that use large, online, dynamic databases.
• Types: (1) ART 1 classifies binary input vectors.
(2) ART 2 clusters real-valued (continuous-valued) input vectors.
• Used to solve the plasticity-stability dilemma.
Architecture
• It consists of:
1. A comparison field
2. A recognition field, composed of neurons
3. A vigilance parameter
4. A reset module
• Comparison phase − In this phase, the input vector is compared with the comparison layer vector.
• Recognition phase − The input vector is compared with the classification represented at every node in the output layer. The output of a neuron becomes "1" if it best matches the classification applied, otherwise it becomes "0".
• Vigilance parameter − After the input vectors are classified, a reset module compares the strength of the match to the vigilance parameter (defined by the user).
• Higher vigilance produces fine, detailed memories; a lower vigilance value gives more general memories.
• Reset module − compares the strength of the recognition match. When the vigilance threshold is met, training starts; otherwise the neurons are inhibited until a new input is provided.
• There are two sets of weights:
(1) Bottom-up weights – from the F1 layer to the F2 layer
(2) Top-down weights – from the F2 layer to the F1 layer
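A minimal sketch of the ART1 vigilance test that the reset module performs (the input vector, top-down weights of the winning cluster and the vigilance value are illustrative):

```python
import numpy as np

x = np.array([1, 0, 1, 1, 0])          # binary input vector
t_J = np.array([1, 0, 1, 0, 0])        # top-down weights of the winning F2 unit J
rho = 0.6                              # vigilance parameter (user-defined)

# Match ratio: |x AND t_J| / |x|
match = np.logical_and(x, t_J).sum() / x.sum()
if match >= rho:
    print("resonance: cluster J learns this pattern")   # weights would be updated here
else:
    print("reset: inhibit J and search for another cluster")
```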
Notation used in algorithm
Training algorithm
Contd..
Application
• ART neural networks, used for fast, stable learning and prediction, have been applied in different areas.
• Applications of ART:
• target recognition, face recognition, medical diagnosis, signature verification, mobile robot control.
• Signature verification:
• Signature verification is used in bank cheque confirmation, ATM access, etc.
• The training of the network is done using ART1, which uses global features as the input vector.
• The testing phase has two steps: 1. the verification phase and 2. the recognition phase.
• In the initial step, the input vector is matched with the stored reference vector, which was used as a training set, and in the second step, cluster formation takes place.
Signature verification -flowchart