Summary of a neural model of human image categorization


The original paper is available at: http://compneuro.uwaterloo.ca/publications/hunsberger2013a.html

A summary of one of the CogSci 2013 papers. It demonstrates how to evaluate a computational model as an account of human image categorization.


Transcript

  • 1. Summary of A Neural Model of Human Image Categorization. Methodology of Cognitive Science. Jin Hwa Kim, Cognitive Science Program, Seoul National University.
  • 2. What We Will See. 1. The computational neural model: the leaky integrate-and-fire (LIF) neuron model, a deep autoencoder, and circular convolution. 2. How are classes of visual objects represented in the brain? Prototype-based (Posner & Keele, 1968) vs. exemplar-based (Regehr & Brooks, 1993).
  • 3. LIF Neuron Model. The leaky integrate-and-fire (LIF) neuron model is one of the biological (spiking) neuron models [Gerstner and Kistler, 2002]. A simulation sketch follows below.
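
The LIF dynamics can be made concrete with a short simulation. This is a minimal sketch in Python, not the paper's implementation; the time constants, threshold, and input values are illustrative assumptions:

    import numpy as np

    def simulate_lif(current, dt=1e-3, tau_m=0.02, tau_ref=0.002, v_th=1.0):
        """Integrate dV/dt = (J - V) / tau_m; spike and reset when V crosses v_th."""
        v, refractory, spikes = 0.0, 0.0, []
        for t, j in enumerate(current):
            if refractory > 0:               # absolute refractory period after a spike
                refractory -= dt
                continue
            v += dt * (j - v) / tau_m        # leaky integration of the input current j
            if v >= v_th:                    # threshold crossing emits a spike
                spikes.append(t * dt)
                v = 0.0                      # reset the membrane potential
                refractory = tau_ref
        return spikes

    # Constant suprathreshold input produces a regular spike train.
    spike_times = simulate_lif(np.full(1000, 1.5))
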
  • 4. Deep Autoencoder
  • 5. Deep Autoencoder: Principal Component Analysis. The direction of the first principal component is the direction of greatest variance.
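
As a reminder of what the slide illustrates, a short NumPy sketch (toy data, not from the paper) recovers the direction of greatest variance as the first right singular vector of the centered data:

    import numpy as np

    rng = np.random.default_rng(0)
    # Correlated 2-D toy data: variance is concentrated along one direction.
    X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
    Xc = X - X.mean(axis=0)                  # center the data first
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = Vt[0]                              # unit vector along greatest variance
    print("first principal component:", pc1)
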
  • 6. Deep Autoencoder. A specialized neural network: try to make the output be the same as the input in a network with a central bottleneck. [Figure: input vector → encoding weights → semantic pointer code → decoding weights → output vector]
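
A minimal NumPy sketch of the bottleneck idea, assuming a single tanh hidden layer standing in for the semantic pointer code; the paper's network is far deeper and built from LIF neurons:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))            # toy data standing in for images

    d_in, d_code = X.shape[1], 2             # central bottleneck of width 2
    W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
    W_dec = rng.normal(scale=0.1, size=(d_code, d_in))

    lr = 0.01
    for _ in range(2000):
        code = np.tanh(X @ W_enc)            # encoding weights -> semantic pointer code
        X_hat = code @ W_dec                 # decoding weights -> reconstruction
        err = X_hat - X                      # the output should match the input
        W_dec -= lr * code.T @ err / len(X)
        grad_code = err @ W_dec.T * (1 - code ** 2)   # backprop through tanh
        W_enc -= lr * X.T @ grad_code / len(X)

    print("reconstruction MSE:", float(np.mean((np.tanh(X @ W_enc) @ W_dec - X) ** 2)))
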
  • 7. Deep Autoencoder. Solving the optimization problem: use unsupervised layer-by-layer pre-training, with LIF neurons in place of RBMs. [Figure: an encoder with weights W1–W4 maps 784 → 1000 → 500 → 250 → 30 linear units; it is unrolled into a decoder with transposed weights W4ᵀ through W1ᵀ back to 784. "We train a stack of 4 RBMs and then unroll them. Then we fine-tune with gentle backprop."] [Hinton & Salakhutdinov, 2006]
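
A hedged sketch of the greedy layer-by-layer recipe, with tied-weight tanh autoencoder layers standing in for RBMs and layer sizes shrunk from 784-1000-500-250-30 so the toy runs quickly:

    import numpy as np

    rng = np.random.default_rng(0)

    def pretrain_layer(X, d_code, steps=1000, lr=0.01):
        """Train one tied-weight autoencoder layer; return its weights and codes."""
        W = rng.normal(scale=0.1, size=(X.shape[1], d_code))
        for _ in range(steps):
            code = np.tanh(X @ W)
            err = code @ W.T - X             # reconstruct with the transposed weights
            grad_W = X.T @ (err @ W * (1 - code ** 2)) + err.T @ code
            W -= lr * grad_W / len(X)
        return W, np.tanh(X @ W)

    X = rng.normal(size=(256, 64))           # toy stand-in for 784-d images
    layers, H = [], X
    for width in (32, 16, 8):                # stand-ins for the 1000-500-250-30 stack
        W, H = pretrain_layer(H, width)      # each layer trains on the previous codes
        layers.append(W)                     # the stack is then unrolled and fine-tuned
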
  • 8. Circular Convolution. Storing semantic pointers: holographic reduced representations compose distributed representations using the circular convolution operator. [Figure: the categorization process] [Plate, 2003]
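
Circular convolution itself is easy to sketch. Following Plate (2003), binding can be computed with the FFT, and the standard involution serves as an approximate inverse for unbinding; the vectors and dimensionality below are illustrative:

    import numpy as np

    def cconv(a, b):
        """Circular convolution: (a (*) b)[n] = sum_k a[k] * b[(n - k) mod d]."""
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def involution(a):
        """Approximate inverse for unbinding: a'[n] = a[(-n) mod d]."""
        return np.concatenate(([a[0]], a[:0:-1]))

    d = 512
    rng = np.random.default_rng(0)
    role, filler = rng.normal(0, 1 / np.sqrt(d), size=(2, d))
    bound = cconv(role, filler)              # bind role and filler into one vector
    recovered = cconv(bound, involution(role))   # unbind by convolving with the inverse
    cos = recovered @ filler / (np.linalg.norm(recovered) * np.linalg.norm(filler))
    print("similarity to original filler:", float(cos))
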
  • 9. Visual Categorization Model
  • 10. Posner & Keele, 1968. Prototype theory: the experiment was designed to test whether human subjects are learning about class prototypes when they only ever see distorted examples. Figure 3: Sample stimuli for Experiment 1, modelling a classic study by Posner & Keele (1968). The dot patterns are created by distorting three randomly drawn prototype images (left) with low (centre) and high (right) levels of noise. Subjects are trained to classify a set of twelve high-distortion patterns and tested without feedback on the same prototypes at different distortion levels.
  • 11. Posner & Keele, 1968: Results. Figure 4: Comparison of human and model performance for Experiment 1. The model is able to account for human results when presented with the schema, low distortion (5), and high distortion (7) patterns. Occasional random errors by human subjects may explain the discrepancy on training examples. Error bars indicate 95% confidence intervals. Human data from Posner & Keele (1968).
  • 12. Regehr & Brooks, 1993. Exemplar theory: analytic vs. perceptual similarity. Figure 5: Sample stimuli for Experiment 2, modelling experiment 1C of Regehr and Brooks (1993). (Left) Images are composed of interchangeable (composite) feature manifestations. (Right) Images expressing the same attributes are drawn in a more coherent (individuated) style. Regehr & Brooks (1993) drew a distinction between good transfer and bad transfer test stimuli. A test stimulus is a good transfer case when the addition or removal of spots matches a training case with the same label, and a bad transfer case if adding or removing spots matches a training case with the opposite label. (Adapted from Regehr & Brooks (1993), Figure 2A.)
  • 13. Regehr & Brooks, 1993: Results. Figure 6: Comparison of human and model performance for Experiment 2. Our model accounts for the key difference in human performance on the good transfer (GT) versus bad transfer (BT) pairs for the individuated stimuli. Error bars indicate 95% confidence intervals. Human data from Regehr & Brooks (1993).
  • 14. Discussion. Biological vs. artificial neuron models. Doubly reduced representation: the deep autoencoder and circular convolution.
