+
levy
+
echo
+
void
+
evil
+
obey
RECALL
+
care
+
mate
+
sear
+
beat
+
mine
RECALL
IAM (Interactive Activation Model) overview
● Three layers of interconnected neurons (units)
– Layers:
● Word: each neuron corresponds to a word
● Letter: each neuron corresponds to a letter
● Feature: each neuron corresponds to a “feature” (edges in this case)
● Interactivity:
– Activation of the word level alters the lower levels (“top-down
information”)
● Lateral inhibition:
– Within each layer, neurons inhibit one another, so that only
one neuron ends up maximally active (“winner-takes-all”; see
the sketch after this slide)
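A minimal sketch of how lateral inhibition alone can yield winner-takes-all behaviour. The unit count, rates, and update rule here are toy assumptions, not the McClelland & Rumelhart (1981) parameters:

```python
import numpy as np

# Within one layer, each unit excites itself and inhibits its competitors.
# Unit count and rate constants are illustrative assumptions only.
rng = np.random.default_rng(0)
a = rng.uniform(0.0, 0.1, size=5)    # small random initial activations

for _ in range(50):
    rivals = a.sum() - a             # summed activity of each unit's competitors
    a = a + 0.1 * a - 0.05 * rivals  # self-excitation minus lateral inhibition
    a = np.clip(a, 0.0, 1.0)         # keep activations in [0, 1]

print(np.round(a, 3))  # one unit saturates at 1.0; the rest are driven to 0
```

Because each unit is suppressed in proportion to its competitors' summed activity, small initial differences are amplified until a single unit dominates.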
Spreading Activation
● Each neuron either inhibits or
activates its neighbours
● The same holds between the
interconnected layers (see the
sketch below)
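A toy sketch of these interaction cycles between a letter layer and a word layer: bottom-up input drives word units, which feed activation back down to their letters. The four-word lexicon, the ±1 weights, and the rate constants are illustrative assumptions, not the original model's values:

```python
import numpy as np

# Toy lexicon with position-specific letter units (the real model also has
# a feature layer and differently tuned parameters).
words = ["care", "mate", "sear", "beat"]
alphabet = sorted(set("".join(words)))
n_pos, n_let = 4, len(alphabet)

def letter_vec(s):
    """One-hot activation over (position, letter) units for a 4-letter string."""
    v = np.zeros((n_pos, n_let))
    for pos, ch in enumerate(s):
        v[pos, alphabet.index(ch)] = 1.0
    return v.ravel()

# Letter-to-word weights: excitatory (+1) where a word contains that letter
# in that position, inhibitory (-1) everywhere else (assumed values).
W = np.array([2.0 * letter_vec(w) - 1.0 for w in words])

letters = letter_vec("care")          # bottom-up input from the stimulus
word_act = np.zeros(len(words))

for _ in range(10):                   # a few interaction cycles
    word_act += 0.1 * (W @ letters)                 # bottom-up drive
    word_act -= 0.05 * (word_act.sum() - word_act)  # lateral inhibition (words)
    word_act = np.clip(word_act, 0.0, 1.0)
    letters = np.clip(letters + 0.05 * (W.T @ word_act), 0.0, 1.0)  # top-down

print(dict(zip(words, np.round(word_act, 2))))  # "care" dominates
```

The last line of the loop is the interactivity: word units reinforce the letter units consistent with them, which is the “top-down information” named on the previous slide.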
From data to theory
The IAM was originally devised to help account for the
so-called word-superiority effect: the observation that
letters are recognized faster within words than within
nonwords
Demo:
http://www.psychology.nottingham.ac.uk/staff/wvh/jiam/
Orthographic neighbours
● High (many neighbours)
– Care
– Mate
– Sear
– Beat
– Mine
● Low (few neighbours)
– Void
– Evil
– Echo
– Obey
– Levy
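The contrast here is neighbourhood density: how many real words can be made from an item by changing exactly one letter (often quantified as Coltheart's N). A minimal sketch of the count; the seven-word stand-in lexicon is an illustrative assumption, where a real study would use a full word corpus:

```python
def neighbours(word, lexicon):
    """Return the orthographic neighbours of `word`: lexicon entries of the
    same length that differ from it in exactly one letter position."""
    return [w for w in lexicon
            if len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1]

# Tiny stand-in lexicon for illustration only.
lexicon = {"care", "core", "cure", "bare", "cart", "void", "raid"}
print(neighbours("care", lexicon))       # ['bare', 'cart', 'core', 'cure'] in some order
print(len(neighbours("void", lexicon)))  # 0
```

In this toy set “care” picks up four neighbours while “void” picks up none, mirroring the slide's high/low split.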
Hypotheses
● At least two hypotheses can be derived from
the model
– Different levels of abstraction interact
● Interactivity assumption
– Neurons within each layer compete
● Lateral inhibition assumption
From theory to data
● Lexical decision task:
– Subjects are presented with words or non-words
on the screen and asked to decide whether each
item is a word or a non-word
● They are instructed to respond as quickly as possible
● Response time (RT) is the primary measure of
performance (see the sketch below for a common
model analogue)
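When a model of this kind is compared with such data, a common RT analogue is the number of processing cycles until some word unit crosses a response threshold. A minimal sketch; the threshold value and the stand-in growth curve are assumptions for illustration, not any published fitting procedure:

```python
def cycles_to_threshold(update, threshold=0.7, max_cycles=200):
    """Run `update` (one model cycle -> current max word activation) until a
    word unit crosses `threshold`; the cycle count is the RT analogue."""
    for cycle in range(1, max_cycles + 1):
        if update() >= threshold:
            return cycle
    return None  # no response within the deadline (treat as a timeout)

# Hypothetical usage with a stand-in growth curve in place of a full model:
act = [0.0]
def toy_update(rate=0.08):
    act[0] += rate * (1.0 - act[0])  # activation approaches 1.0 each cycle
    return act[0]

print(cycles_to_threshold(toy_update))  # 15 cycles -> the simulated "RT"
```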
From theory to data
● Segui & Grainger (1990) tested the interactivity
assumption in several lexical decision
experiments
● They used two different versions of the model
to simulate the data obtained from human
participants
– IAM: the standard interactive activation model
– NIAM: a non-interactive activation model (no
top-down feedback)
NIAM
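The NIAM keeps the architecture but removes the top-down word-to-letter feedback. One way to picture the contrast is as a single flag over the toy network from the spreading-activation sketch above; this is an illustrative assumption, not Segui & Grainger's actual implementation:

```python
import numpy as np

def run_model(W, letters, interactive=True, cycles=10):
    """Toy letter-to-word network. With interactive=False the top-down
    word-to-letter feedback is switched off, giving the NIAM variant;
    bottom-up input and lateral inhibition are identical in both."""
    letters = letters.copy()
    word_act = np.zeros(W.shape[0])
    for _ in range(cycles):
        word_act += 0.1 * (W @ letters)                 # bottom-up drive
        word_act -= 0.05 * (word_act.sum() - word_act)  # lateral inhibition
        word_act = np.clip(word_act, 0.0, 1.0)
        if interactive:                                 # the only difference
            letters = np.clip(letters + 0.05 * (W.T @ word_act), 0.0, 1.0)
    return word_act

# With W and letter_vec from the spreading-activation sketch above:
# run_model(W, letter_vec("care"), interactive=True)   # IAM
# run_model(W, letter_vec("care"), interactive=False)  # NIAM
```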
From theory to data
What now?
● Interactivity might not be necessary, since
lateral inhibition alone seems to account for the
data
● This suggests the model may be more complex
than it needs to be (not good)
– See also Mewhort & Johns (1988)
● More scrutiny is needed!
References
● McClelland, J. L., & Rumelhart, D. E. (1981). An interactive activation
model of context effects in letter perception: I. An account of basic
findings. Psychological Review, 88(5), 375.
● Jacobs, A. M., & Grainger, J. (1992). Testing a semistochastic variant
of the interactive activation model in different word recognition
experiments. Journal of Experimental Psychology: Human Perception
and Performance, 18(4), 1174.
● Mewhort, D. J. K., & Johns, E. E. (1988). Some tests of the
interactive-activation model for word identification. Psychological
Research, 50(3), 135-147.
“...the logical 'and' function of a computer can be
realized with switches operated by relays, with
vacuum tubes, or with transistors. A computer-
logic designer does not have to know the physics
of transistors to design with components based on
transistors. So, something in addition to structural
analysis is needed.”
From Sparse Distributed Memory,
by Pentti Kanerva
great book!
Kevin Shabahang: k.shabahang@gmail.com
BSc Psychology (honours), Queen's University
You can usually find me in the Human
Information Processing Lab at 315
Humphrey Hall...
Dr. D. J. K. Mewhort