This document discusses pattern association in neural networks using the Hebb rule and delta rule. It provides examples of how pattern association works in autoassociative and heteroassociative networks. The key points are:
- Pattern association networks can learn associations between input and output patterns by adjusting connection weights. The Hebb rule and delta rule are commonly used learning algorithms.
- Autoassociative networks have the same input and output patterns, while heteroassociative networks have different input and output patterns.
- Examples show how these networks can accurately recall a stored pattern when presented with a partial or corrupted input pattern.
- Other association models discussed include bidirectional associative memory (BAM) networks, temporal association networks, and competitive learning networks such as self-organizing maps, learning vector quantization, and adaptive resonance theory.
1. SUBJECT 20EEE523T
INTELLIGENT CONTROLLER
UNIT 2 PATTERN ASSOCIATION
Presented by
Mrs. R. SATHIYA
Reg. no: PA2313005023001
Research Scholar, Department of Electrical and Electronics Engineering
SRM Institute of Technology & Science, Chennai
3. Contd ..
• It learns associations between input patterns and output patterns.
• A pattern associator can be trained to respond with a certain output pattern when presented with an input pattern.
• The connection weights can be adjusted in order to change the input/output behaviour.
• A learning rule specifies how a network changes its weights for a given input/output association.
• The most commonly used learning rules with pattern associators are the Hebb rule and the Delta rule.
4. Training Algorithms For Pattern Association
• Used for finding the weights of an associative memory NN.
• Patterns are represented in binary or bipolar form.
• A similar algorithm, with a slight extension, finds the weights by outer products.
• We want to consider examples in which the input to the net after training is a pattern that is similar to, but not the same as, one of the training inputs.
• Each association is an input-output vector pair s:t.
6. Contd ..
• To store a set of associations s(p):t(p), p = 1, ..., P, the weight matrix W is given by the Hebb rule as the sum over all pairs: w_ij = Σ_p s_i(p) t_j(p).
7. Outer product
• Instead of obtaining W by iterative updates, it can be computed from the training set by calculating the outer product of s and t.
• The weights are initially zero.
• The outer product of two vectors s (with n components) and t (with m components) is the n x m matrix with entries s_i t_j.
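As a minimal sketch (plain NumPy, with function and variable names of our own choosing, and an illustrative vector pair rather than one from the slides), the weight matrix can be accumulated as the sum of outer products:

```python
import numpy as np

# Hebb rule by outer products: W = sum over p of outer(s(p), t(p)).
# The weights start at zero; no iterative updating is needed.
def hebb_weights(pairs):
    s0, t0 = pairs[0]
    W = np.zeros((len(s0), len(t0)), dtype=int)
    for s, t in pairs:
        W += np.outer(s, t)  # outer product of one input/target pair
    return W

s = np.array([1, -1, 1])   # illustrative bipolar input vector
t = np.array([1, -1])      # illustrative bipolar target vector
W = hebb_weights([(s, t)])
print(W.tolist())          # the outer product of s and t
print((s @ W).tolist())    # recall with s gives a positive multiple of t
```

Presenting the stored input s to the net yields (s . s) t, so thresholding the response with the sign function recovers the target exactly.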
11. Perfect recall versus cross talk
• The suitability of the Hebb rule for a particular problem depends on the correlation among the input training vectors.
• If the input vectors are uncorrelated (orthogonal), the Hebb rule will produce the correct weights, and the response of the net when tested with one of the training vectors will be perfect recall of the input vector's associated target.
• If the input vectors are not orthogonal, the response will include a portion of each of their target values. This is commonly called cross talk.
12. Contd ..
• Two vectors are orthogonal if their dot product is 0.
• Orthogonality between the input patterns can be checked only for binary or bipolar patterns.
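A small sketch (illustrative bipolar vectors, names ours) shows the orthogonality check and how a non-orthogonal input produces cross talk:

```python
import numpy as np

# Two bipolar vectors are orthogonal iff their dot product is 0.
a = np.array([1, 1, -1, -1])
b = np.array([1, -1, 1, -1])   # orthogonal to a (dot product 0)
c = np.array([1, 1, -1, 1])    # not orthogonal to a (dot product 2)

# Store two associations with the Hebb rule (sum of outer products)
t_a, t_c = np.array([1, -1]), np.array([-1, 1])
W = np.outer(a, t_a) + np.outer(c, t_c)

# Recall with a: the response is (a.a) t_a + (a.c) t_c, so a portion
# of t_c leaks into the output because a and c are not orthogonal.
print(int(a @ b), int(a @ c), (a @ W).tolist())
```

Here the response to a is 4 t_a + 2 t_c rather than a pure multiple of t_a; the sign is still correct, but the cross-talk term shrinks the margin.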
13. Delta rule
• In its original form, as introduced in chapter 2, the delta rule assumed that the activation function for the output unit was the identity function.
• A simple extension allows for the use of any differentiable activation function; we shall call this the extended delta rule.
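A single extended-delta-rule update can be sketched as follows (helper names and values are illustrative, not from the slides); with the identity activation it reduces to the original delta rule:

```python
import numpy as np

# Extended delta rule for one output unit:
#   delta_w_i = alpha * (t - y) * f'(y_in) * x_i
# where f may be any differentiable activation function.
def delta_update(w, x, t, alpha, f, f_prime):
    y_in = w @ x             # net input to the output unit
    y = f(y_in)              # activation
    return w + alpha * (t - y) * f_prime(y_in) * x

identity = lambda z: z       # original delta rule: identity activation
unit = lambda z: 1.0         # derivative of the identity

w = delta_update(np.zeros(3), np.array([1.0, -1.0, 1.0]), t=1.0,
                 alpha=0.1, f=identity, f_prime=unit)
print(w.tolist())            # weights move toward reproducing the target
```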
14. Associative Memory Network
• These kinds of neural networks work on the basis of pattern association, which means they can store different patterns and, at the time of giving an output, they can produce one of the stored patterns by matching it with the given input pattern.
• These types of memories are also called Content-Addressable Memory (CAM). Associative memory makes a parallel search with the stored patterns as data files.
15. Contd ..
The two types of associative memories:
• Auto Associative Memory
• Hetero Associative Memory
16. Auto associative Memory
• Training input and output vectors are the same.
• Determination of the weights is called storing of the vectors.
• The weights are initialized to zero.
• An auto associative net may have no self-connections.
• Its performance is judged by its ability to reproduce a stored pattern from noisy input.
• Its performance is, in general, better for bipolar vectors than for binary vectors.
17. Architecture
• Input and output vectors are the same.
• The input vector has n inputs and the output vector has n outputs.
• The inputs and outputs are connected through weighted connections.
19. Testing Algorithm
• An auto associative net can be used to determine whether a given vector is a "known" or "unknown" vector.
• The net is said to recognize a "known" vector if it produces a pattern of activation on the output units which is the same as one of the stored vectors.
• The testing procedure is as follows.
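The store-and-test procedure can be sketched as follows (the stored pattern is illustrative, and the names are ours): store one vector with the Hebb outer product, zero the diagonal so there are no self-connections, then test with a noisy copy:

```python
import numpy as np

# Store one bipolar pattern in an autoassociative net.
stored = np.array([1, 1, 1, -1])
W = np.outer(stored, stored)
np.fill_diagonal(W, 0)           # no self-connections

# Test with a corrupted copy (one component flipped).
noisy = np.array([1, -1, 1, -1])
recalled = np.sign(noisy @ W)    # threshold the net input with sign
print(recalled.tolist())         # matches the stored vector -> "known"
```

Because the recalled pattern matches the stored one, the noisy test vector is recognized as "known".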
21. Hetero Associative Memory
• The input training vectors and the output target vectors are not the same.
• The weights are determined by the Hebb rule or the delta rule.
• The weights are determined so that the network stores a set of patterns.
• A hetero associative network is static in nature; hence, there are no non-linear or delay operations.
22. Architecture
• The input layer has 'n' units and the output layer has 'm' units.
• There are weighted interconnections between the inputs and outputs.
• Associative memory neural networks are nets in which the weights are determined in such a way that the net can store a set of P pattern associations.
• Each association is a pair of vectors (s(p), t(p)), with p = 1, 2, ..., P.
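A heteroassociative net with n = 4 inputs and m = 2 outputs can be sketched as below (the association pairs are illustrative, not taken from the slides); the weights are the sum of outer products over all pairs:

```python
import numpy as np

# Two illustrative bipolar associations (s(p), t(p)), p = 1, 2.
pairs = [(np.array([1, -1, -1, -1]), np.array([1, -1])),
         (np.array([-1, -1, -1, 1]), np.array([-1, 1]))]

W = np.zeros((4, 2), dtype=int)
for s, t in pairs:
    W += np.outer(s, t)          # Hebb rule: sum of outer products

# Each stored input recalls its own target after thresholding with sign.
recalls = [np.sign(s @ W).tolist() for s, _ in pairs]
print(W.tolist(), recalls)
```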
26. Artificial Neural Network - Hopfield Networks
• The Hopfield neural network was invented by Dr. John J. Hopfield in 1982.
• It consists of a single layer which contains one or more fully connected recurrent neurons.
• The Hopfield network is commonly used for auto-association and optimization tasks.
• There are two types of network:
1. Discrete Hopfield network
2. Continuous Hopfield network
27. Discrete Hopfield network
• A Hopfield network which operates in a discrete fashion; in other words, the input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, -1) in nature.
• The network has symmetrical weights with no self-connections, i.e., w_ij = w_ji and w_ii = 0.
• Only one unit updates its activation at a time.
• The asynchronous updating of the units allows a function, known as an energy or Lyapunov function, to be found for the net.
28. Architecture
Following are some important points to keep in mind about the discrete Hopfield network:
• This model consists of neurons with one inverting and one non-inverting output.
• The output of each neuron should be an input of the other neurons, but not an input of itself.
• Weight/connection strength is represented by Wij.
• Connections can be excitatory as well as inhibitory. A connection is excitatory if the output of the neuron is the same as the input, otherwise inhibitory.
• Weights should be symmetrical, i.e. Wij = Wji.
• The outputs from Y1 going to Y2, Yi and Yn have the weights W12, W1i and W1n respectively. Similarly, the other arcs have their own weights.
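The points above can be sketched with a minimal discrete Hopfield net (illustrative pattern and a fixed update order; names are ours): symmetric weights, zero diagonal, and asynchronous updates of one unit at a time:

```python
import numpy as np

# Store one bipolar pattern; weights are symmetric with w_ii = 0.
stored = np.array([1, 1, -1, -1])
W = np.outer(stored, stored)
np.fill_diagonal(W, 0)

def energy(W, y):
    return -0.5 * y @ W @ y      # Lyapunov function of the current state

y = np.array([1, -1, -1, -1])    # corrupted copy of the stored pattern
for i in range(len(y)):          # asynchronous: one unit updates at a time
    net = W[:, i] @ y
    if net != 0:
        y[i] = 1 if net > 0 else -1
print(y.tolist(), energy(W, y))
```

Each asynchronous update can only keep the energy the same or lower it, so the state settles into the stored pattern.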
34. Energy function
• An energy function is defined as a function that is bounded and non-increasing in the state of the system.
• The energy function Ef, also called a Lyapunov function, determines the stability of the discrete Hopfield network, and is characterized as Ef = -0.5 Σi Σj (j ≠ i) yi yj wij - Σi xi yi + Σi θi yi.
35. Contd ..
The change in energy depends on the fact that only one unit can update its
activation at a time.
36. Storage capacity
• The number of binary patterns that can be stored and recalled in a net with reasonable accuracy is given approximately by P ≈ 0.15 n.
• For bipolar patterns, P ≈ n / (2 log2 n),
where n is the number of neurons in the net.
37. Continuous Hopfield network
• In comparison with the discrete Hopfield network, the continuous network has time as a continuous variable.
• It is also used in auto association and in optimization problems such as the travelling salesman problem.
• Nodes have continuous, graded outputs.
• The energy decreases continuously with time.
• It can be realized as an electrical circuit which uses non-linear amplifiers and resistors.
• Used in building Hopfield networks with VLSI technology.
39. Iterative Autoassociative networks
• Sometimes the net does not respond to the input signal with the stored target pattern, but responds with a pattern similar to the stored one.
• In that case, the first response can be used as input to the net again.
• An iterative auto associative network recovers the original stored vector when presented with a test vector close to it.
• Also called recurrent autoassociative networks.
42. Linear Autoassociative Memory
• Proposed by James Anderson, 1977.
• Based on the Hebbian rule.
• Linear algebra is used for analyzing the performance of the net.
• Each stored vector is an eigenvector of the weight matrix.
• Its eigenvalue is the number of times the vector was presented.
• When the input vector is X, the output response is XW, where W is the weight matrix.
43. Brain In The Box Network
• An activity pattern inside the box receives positive feedback on certain components, which will force it outward.
• When it hits a wall, it moves along the wall to a corner of the box, where it remains.
• The box represents the saturation limit of each state.
• Activations are restricted to between -1 and +1.
• Self-connections exist.
48. Temporal Associative Memory Network
• Stores sequences of patterns as dynamic transitions.
• An associative memory with the capacity to store and recall temporal patterns is a temporal associative memory network.
49. Bidirectional associative memory (BAM)
• First proposed by Bart Kosko in the year 1988.
• Performs backward and forward search.
• It associates patterns, say from set A to patterns from set B, and vice versa.
• Encodes binary/bipolar patterns using the Hebbian learning rule.
• Human memory is necessarily associative: it uses a chain of mental associations to recover a lost memory, e.g. when we have lost an umbrella.
50. BAM Architecture
• Weights are bidirectional.
• The X layer has 'n' input units.
• The Y layer has 'm' output units.
• The weight matrix from X to Y is W, and from Y to X it is W^T.
• The process is repeated until the input and output vectors become unchanged (reach a stable state).
• Two types:
1. Discrete BAM
2. Continuous BAM
51. DISCRETE BIDIRECTIONAL AUTO ASSOCIATIVE MEMORY
• Here the weights are found as the sum of the outer products of the bipolar form of the training vector pairs.
• The activation function is a step function with a nonzero threshold.
• Determination of weights:
1. Let the input vectors be denoted by s(p) and the target vectors by t(p).
2. The weight matrix stores a set of input and target vectors, where
s(p) = (s1(p), ..., si(p), ..., sn(p))
t(p) = (t1(p), ..., tj(p), ..., tm(p)).
3. It can be determined by the Hebb rule training algorithm.
4. If the input is binary, the weight matrix W = {wij} is given by wij = Σp (2si(p) - 1)(2tj(p) - 1).
52. Contd ..
• If the input vectors are bipolar, the weight matrix W = {wij} can be defined as wij = Σp si(p) tj(p).
• Activation function for BAM:
• The activation function is based on whether the input-target vector pairs used are binary or bipolar.
• The activation function for the Y-layer:
1. With binary input vectors: yj = 1 if y_inj > 0; yj unchanged if y_inj = 0; 0 if y_inj < 0.
2. With bipolar input vectors: yj = 1 if y_inj > θj; yj unchanged if y_inj = θj; -1 if y_inj < θj.
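Forward and backward recall in a discrete BAM can be sketched for one bipolar pair (illustrative vectors, zero thresholds, names ours):

```python
import numpy as np

# One bipolar association: X-layer pattern s (n = 3), Y-layer pattern t (m = 2).
s = np.array([1, 1, -1])
t = np.array([1, -1])
W = np.outer(s, t)            # Hebb outer product of the bipolar pair

# Forward search X -> Y uses W; backward search Y -> X uses W transpose.
y = np.sign(s @ W)            # forward pass recovers t
x = np.sign(y @ W.T)          # backward pass recovers s
print(y.tolist(), x.tolist())
```

The pair (x, y) no longer changes under further passes, so the net has reached a stable state.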
55. Continuous BAM
• A continuous BAM [Kosko, 1988] transforms input smoothly and continuously into output in the range [0, 1], using the logistic sigmoid function as the activation function for all units.
• For binary input vectors, the weights are wij = Σp (2si(p) - 1)(2tj(p) - 1).
• The activation function is the logistic sigmoid f(y_inj) = 1 / (1 + e^(-y_inj)).
• With bias included, the net input is y_inj = bj + Σi xi wij.
56. Hamming distance, analysis of energy function and storage capacity
• Hamming distance: the number of mismatched components of two given bipolar/binary vectors.
• Denoted by H[x, x'].
• Average distance = H[x, x'] / n.
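The Hamming distance and average distance can be computed directly (illustrative vectors, names ours):

```python
# Hamming distance: count of mismatched components of two bipolar vectors;
# the average distance divides by the vector length n.
def hamming(a, b):
    return sum(1 for ai, bi in zip(a, b) if ai != bi)

a = [1, -1, 1, -1]
b = [1, 1, 1, 1]
H = hamming(a, b)
print(H, H / len(a))    # 2 mismatched components, average distance 0.5
```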
58. Storage capacity
• The memory capacity is bounded by min(m, n),
where 'n' is the number of units in the X layer and 'm' is the number of units in the Y layer.
• A more conservative capacity estimate is √(min(m, n)).
59. Application of BAM
⢠Fault Detection
⢠Pattern Association
⢠Real Time Patient Monitoring
⢠Medical Diagnosis
⢠Pattern Mapping
⢠Pattern Recognition systems
⢠Optimization problems
⢠Constraint satisfaction problem
62. Competitive learning network
• It is concerned with unsupervised training in which the output nodes compete with each other to represent the input pattern.
• Basic concept of a competitive network:
• This network is just like a single-layer feed-forward network, but with feedback connections between the outputs.
• The connections between the outputs are of the inhibitory type (shown by dotted lines), which means the competitors never support themselves.
63. Contd..
• Example
• Consider a set of students: to classify them on the basis of evaluation performance, their scores may be calculated, and the one whose score is higher than the others' is the winner.
• This is called a competitive net. The extreme form of such a competitive net is called winner-take-all:
• i.e., only one neuron in the competing group will possess a non-zero output signal at the end of the competition.
• Only one neuron is active at a time. Only the winner has its weights updated; the rest remain unchanged.
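The winner-take-all rule above can be sketched in a few lines (the scores are illustrative):

```python
# Winner-take-all: only the unit with the highest score keeps a non-zero
# output at the end of the competition; all other units are silenced.
scores = [0.3, 0.9, 0.5, 0.1]
winner = max(range(len(scores)), key=lambda i: scores[i])
outputs = [1 if i == winner else 0 for i in range(len(scores))]
print(winner, outputs)    # only the winning unit stays active
```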
64. Contd..
• Some of the neural networks that come under this category:
1. Kohonen self-organizing feature maps
2. Learning vector quantization
3. Adaptive resonance theory
65. Kohonen self organizing feature map
• A Self Organizing Map (or Kohonen Map or SOM) is a type of Artificial Neural Network.
• It follows an unsupervised learning approach and trains its network through a competitive learning algorithm.
• SOM is used for clustering and mapping (or dimensionality reduction), mapping multidimensional data onto a lower-dimensional space, which allows people to reduce complex problems for easy interpretation.
• SOM has two layers:
1. Input layer
2. Output layer
66. Operation
• SOM operates in two modes: (1) training and (2) mapping.
• Training process: develops the map using a competitive procedure (vector quantization).
• Mapping process: classifies newly supplied input based on the training outcomes.
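A single training step can be sketched as below. This is a deliberately minimal 1-D SOM (no neighbourhood shrinking or learning-rate decay; all names, sizes, and values are illustrative): the closest unit wins, and the winner and its map-neighbours move toward the input:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.random((4, 2))        # 4 output units, 2-dimensional inputs

def train_step(weights, x, lr=0.5, radius=1):
    # Competition: the unit whose weight vector is closest to x wins.
    winner = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    # Cooperation: winner and its neighbours on the 1-D map move toward x.
    for j in range(len(weights)):
        if abs(j - winner) <= radius:
            weights[j] += lr * (x - weights[j])
    return winner

x = np.array([0.1, 0.9])
before = np.linalg.norm(weights - x, axis=1).min()
train_step(weights, x)
after = np.linalg.norm(weights - x, axis=1).min()
print(after < before)               # the map moved closer to the input
```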
67. • Basic competitive learning implies that the competition process takes place before the cycle of learning.
• The competition process suggests that some criteria select a winning processing element.
• After the winning processing element is selected, its weight vector is adjusted according to the learning law used.
• Feature mapping is a process which converts patterns of arbitrary dimensionality into a response of a one- or two-dimensional array of neurons.
• A network performing such a mapping is called a feature map. Besides reducing the higher dimensionality, it has the ability to preserve the neighbour topology.
70. Application - speech recognition
• Short segments of the speech waveform are given as input.
• The map sends the same kinds of phonemes to the same region of the output array; this is called a feature extraction technique.
• After extracting the features, with the help of some acoustic models as back-end processing, it will recognize the utterance.
71. Learning vector quantization (LVQ)
• Purpose: dimensionality reduction and data compression.
• A self-organizing map (SOM) encodes a large set of input vectors {x} by finding a smaller set of representatives/prototypes/clusters.
• LVQ is a supervised version of vector quantization that can be used when we have labelled input data.
• It is a two-stage process: a SOM is followed by LVQ.
• The first step is feature selection: the unsupervised identification of a reasonably small set of features.
• The second step is classification, where the feature domains are assigned to individual classes.
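A single supervised LVQ1-style update can be sketched as follows (prototypes, labels, and the learning rate are illustrative, and the helper name is ours): the nearest prototype moves toward the input if their labels match, and away if they differ:

```python
import numpy as np

# Two labelled prototype vectors (illustrative).
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])

def lvq1_step(protos, labels, x, y, lr=0.2):
    # Nearest prototype wins the competition.
    winner = int(np.argmin(np.linalg.norm(protos - x, axis=1)))
    # Attract on a label match, repel on a mismatch.
    sign = 1.0 if labels[winner] == y else -1.0
    protos[winner] += sign * lr * (x - protos[winner])
    return winner

x, y = np.array([0.2, 0.1]), 0
w = lvq1_step(protos, labels, x, y)
print(w, protos[0].tolist())    # prototype 0 wins and moves toward x
```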
73. Example
• The first step is to train the machine with all the different fruits one by one, like this:
• If the shape of the object is rounded, has a depression at the top, and is red in color, then it will be labeled as "Apple".
• If the shape of the object is a long curving cylinder having a green-yellow color, then it will be labeled as "Banana".
• After training on the data, you are given a new, separate fruit, say a banana from the basket, and asked to identify it.
• It will first classify the fruit by its shape and color, confirm the fruit name as BANANA, and put it in the Banana category. Thus the machine learns from the training data (the basket of fruits) and then applies the knowledge to test data (the new fruit).
77. Adaptive resonance theory
• Adaptive: the networks are open to new learning.
• Resonance: without discarding the previous or old information.
• The ART networks are known to solve the stability-plasticity dilemma:
• stability refers to their nature of memorizing what has been learned, and
• plasticity refers to the fact that they are flexible enough to gain new information.
• Due to this nature, ART networks are always able to learn new input patterns without forgetting the past.
78. Contd..
• Invented by Grossberg in 1976 and based on an unsupervised learning model.
• Resonance means a target vector matches the input vector closely enough.
• ART matching leads to resonance, and only in the resonance state does the ART network learn.
• Suitable for problems that use online, dynamic, large databases.
• Types: (1) ART 1 classifies binary input vectors;
(2) ART 2 clusters real-valued (continuous-valued) input vectors.
• Used to solve the plasticity-stability dilemma.
79. Architecture
• It consists of:
1. A comparison field
2. A recognition field, composed of neurons
3. A vigilance parameter
4. A reset module
80. • Comparison phase: in this phase, the input vector is compared to the comparison-layer vector.
• Recognition phase: the input vector is compared with the classification presented at every node in the output layer. The output of a neuron becomes "1" if it best matches the classification applied; otherwise it becomes "0".
• Vigilance parameter: after the input vectors are classified, a reset module compares the strength of the match to the vigilance parameter (defined by the user).
• Higher vigilance produces fine, detailed memories, and a lower vigilance value gives more general memories.
81. • Reset module: compares the strength of the match from the recognition phase. When the vigilance threshold is met, training starts; otherwise neurons are inhibited until a new input is provided.
• There are two sets of weights:
(1) Bottom-up weights, from the F1 layer to the F2 layer
(2) Top-down weights, from the F2 layer to the F1 layer
85. Application
• ART neural networks, used for fast, stable learning and prediction, have been applied in different areas.
• Applications of ART: target recognition, face recognition, medical diagnosis, signature verification, mobile robot control.
• Signature verification:
• Signature verification is used in bank check confirmation, ATM access, etc.
• The training of the network is performed using ART1, which uses global features as the input vector.
• The testing phase has two steps: 1. the verification phase and 2. the recognition phase.
• In the initial step, the input vector is matched with the stored reference vector, which was used as the training set, and in the second step, cluster formation takes place.