Mechanisms of bottom-up and top-down processing in visual perception

This is a talk given in April 2009 at the Redwood Center at UC Berkeley.

  • Thank you very much, Charles, for inviting me. I am delighted to be here and to enjoy weather that we could never hope for in the spring in Boston...
  • Here is the problem I am trying to solve: you give me an image and I tell you, for instance, whether or not it contains an animal. Object recognition is a very hard computational problem. The reason is that, although all of these are images of a giraffe, they look quite different at the pixel level. Objects in the real world, and these animal images in particular, can vary drastically in appearance, shape and texture. In particular, changes in position and scale can create very large changes in the pattern of activity they elicit on the retina: even a small shift in position of 2 degrees of visual angle corresponds to shifting the image on the retina by more than 120 photoreceptors! This is an extremely difficult task, and today no artificial computer vision system can solve it as robustly and accurately as the primate visual system. Yet as primates we are extremely good at solving it despite all these variations...
  • A classical paradigm that has been used extensively to study object recognition and visual perception is what I would call the rapid recognition paradigm. Here I am flashing images in rapid succession. This paradigm, called RSVP, was introduced by Molly Potter in the 1970s. Images are presented at a rate of 7 per second. At this speed you probably do not get every detail of each image, but at the very least you are able to build a coarse description of the scene; for instance, most of you should be able to recognize, and perhaps memorize, objects in these images... While these tasks do not necessarily reflect natural, everyday vision, in which the visual world moves continuously and you are free to move your eyes and shift your attention, they isolate the first 100-150 ms of visual processing, during which a base representation of the image is formed before more complex visual routines can come into play...
  • In this talk I will argue that this base representation corresponds to the activation of a hierarchy of image fragments following a single feedforward sweep through the visual system. This bottom-up feedforward sweep rapidly activates specific sub-populations of neurons in the ventral stream of the visual cortex that are tuned to image fragments with different levels of selectivity and invariance. I will show that, consistent with human psychophysics, a key limitation of this architecture is that it is susceptible to clutter. While it does relatively well on images that contain a single object and limited clutter (like the ones I just showed you), we found that performance decreases significantly as the amount of clutter increases.
  • In the second part of my talk I will argue that the visual system solves this clutter problem via cortical feedback and shifts of attention. I will outline an integrated model of object recognition and attention and show that the object recognition performance of the model increases significantly when it is used in conjunction with attentional mechanisms. Using eye movements as a proxy for attention, I will show that the resulting model can account for a significant fraction of human eye movements during search tasks in complex natural images.
  • We have implemented a computational model (shown on the right) that implements this set of principles. The diagram on the left is Van Essen's; we do not try to account for the whole visual cortex, only the ventral stream. The model is hierarchical, with only feedforward connections.
  • Computational considerations suggest that you need two types of operations, and therefore two functional classes of cells, to explain those data. By analogy with Hubel & Wiesel's hierarchical model of processing in the visual cortex, we have called these two classes of cells simple and complex; the scheme I am going to describe essentially extends their proposal from striate to extra-striate visual areas. We assume that these two types of functional units implement two mathematical operations: Gaussian-like (bell-shaped) tuning and a max-like operation. The Gaussian tuning was motivated by a learning algorithm based on radial basis functions, while the max operation was motivated by the standard scanning approach in computer vision and by theoretical arguments from signal processing. The goal of the simple units is to increase the complexity of the representation, in this example by pooling together the activity of afferent units with different orientations via Gaussian-like tuning. This Gaussian tuning is ubiquitous in the visual cortex, from orientation tuning in V1 to tuning for complex objects around certain poses in IT. The complex units pool together afferent units with the same preferred stimulus (e.g. a vertical bar) but slightly different positions and scales; at the complex-unit level we thus build some tolerance with respect to the exact position and scale of the stimulus within the receptive field of the unit.
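To make the two operations concrete, here is a minimal sketch, assuming each unit receives a small vector of afferent responses; the prototype, sigma and the toy numbers are illustrative placeholders, not values from the model.

```python
import numpy as np

def simple_unit(afferents, prototype, sigma=1.0):
    """Gaussian (bell-shaped) tuning: the response peaks when the pattern of
    afferent activity matches the unit's stored prototype (template matching)."""
    d2 = np.sum((np.asarray(afferents, float) - np.asarray(prototype, float)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def complex_unit(simple_responses):
    """Max-like pooling over simple units with the same preferred stimulus but
    slightly different positions/scales: builds tolerance to position and scale."""
    return float(np.max(simple_responses))

# Toy example: one simple unit (tuned to the first orientation) replicated at
# three positions, pooled by a complex unit.
responses_at_positions = [[0.9, 0.1, 0.0, 0.2], [0.2, 0.8, 0.1, 0.0], [0.1, 0.1, 0.9, 0.1]]
prototype = [1.0, 0.0, 0.0, 0.0]
print(complex_unit([simple_unit(r, prototype) for r in responses_at_positions]))
```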
  • EMPHASIZE AFTER TRAINING: NO DATA FITTING. MENTION CHARLES. The model builds a simple-to-complex cell hierarchy and mimics as closely as possible the tuning properties of neurons in various areas of the ventral stream. It builds on earlier work in the lab by Max Riesenhuber.
  • I would argue that a key aspect of this model is the learning of a large dictionary of reusable features (I would call them shape components) from V1 to IT. These features represent a basic vocabulary of shape components that can be used to represent any visual input; they correspond to patches of images that appear with high probability in the natural world. We argue that the learning of this dictionary is done in an unsupervised manner during a developmental period. In this model, the goal of the ventral stream of the visual cortex, from V1 to IT, is to build a good representation for images, i.e. one that is compact and invariant with respect to 2D transformations such as translation and scale. With a good image representation, learning a new image category is relatively easy; we speculate that this can be done from a handful of labeled examples by training task-specific circuits running from IT to the PFC. We showed that the approach works well on multiple object categories in standard computer vision databases.
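As a hedged illustration of this unsupervised, developmental-style learning, the sketch below simply stores randomly sampled patches of natural images as the fragment dictionary; in the published model the fragments are sampled from intermediate (C1-level) responses rather than raw pixels and come in several sizes, so this is only a caricature.

```python
import numpy as np

def sample_fragment_dictionary(images, n_fragments=1000, patch_size=8, seed=0):
    """Sample patches at random from natural images; each stored patch later
    serves as the prototype (preferred stimulus) of one intermediate-level unit."""
    rng = np.random.default_rng(seed)
    fragments = []
    for _ in range(n_fragments):
        img = images[rng.integers(len(images))]
        y = rng.integers(0, img.shape[0] - patch_size + 1)
        x = rng.integers(0, img.shape[1] - patch_size + 1)
        fragments.append(img[y:y + patch_size, x:x + patch_size].copy())
    return fragments

# Toy usage with random arrays standing in for natural images.
toy_images = [np.random.rand(64, 64) for _ in range(10)]
dictionary = sample_fragment_dictionary(toy_images, n_fragments=50)
```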
  • For the sake of time, I am only going to show you that you can simulate a neurophysiology experiment with this model: you can record from a population of randomly selected model units and perform exactly the same analysis as in a real experiment. In the bar plot shown here we performed the same readout experiment as in the study by Hung et al.: the classification performance when training a classifier at a specific position and scale and evaluating its generalization to positions and scales not presented during training. This measures the built-in invariance inherited from the response properties of the population, and you can see that the fit to the neural data is quite good.
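A sketch of that readout analysis follows; the original study used a regularized linear classifier on population responses, and scikit-learn's LinearSVC stands in here, with the data arrays as placeholders.

```python
import numpy as np
from sklearn.svm import LinearSVC

def readout_invariance(train_responses, train_labels, test_conditions):
    """train_responses: (n_trials, n_units) population responses at the reference
    position/scale; test_conditions maps a condition name (e.g. 'shift 2 deg')
    to (responses, labels) collected at a position/scale never used for training."""
    clf = LinearSVC(C=1.0).fit(train_responses, train_labels)
    # Above-chance accuracy on untrained conditions reflects invariance already
    # present in the population code, rather than learned by the classifier.
    return {name: clf.score(X, y) for name, (X, y) in test_conditions.items()}
```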
  • In parallel we have used this model in real-world computer vision applications. For instance, we developed a computer vision system for the automatic parsing of street-scene images. Here are examples of automatic parsing by the system overlaid on the original images; the colors and bounding boxes indicate predictions from the model (e.g. green for trees, etc.). The computer vision system shown here is based exclusively on the response properties of the model's units.
  • More recently we have extended the approach to the recognition of human actions such as running, walking, jogging, jumping, waving, etc. In all cases we showed that the resulting biologically motivated computer vision systems performed on par with or better than state-of-the-art computer vision systems.
  • The goal of the model was not to explain natural, everyday vision, when you are free to move your eyes and shift your attention, but rather what is often called rapid or immediate recognition, which corresponds to the first 100-150 ms of visual processing when an image is briefly presented, i.e. when the visual system is forced to operate in a feedforward mode before eye movements and shifts of attention. Here is an example on the left: I flash an image for about 20 ms; you probably do not have time to get every fine detail, but most people are able to say whether or not it contains an animal. Here we divided our dataset into four subcategories (head, close-body, medium-body, far-body). Overall, both the model and human observers are about 80% correct on this very difficult task, and you can see that they agree quite well in terms of how they perform across the four subcategories...
  • We have seen that, in the model as in the visual cortex, when two stimuli fall within the receptive field of a neuron the two stimuli "compete", that is, they reduce the neuron's selectivity. I just showed you that, at the psychophysical level, the amount of clutter in an image largely determines the performance of the model and of human observers during rapid categorization tasks.
  • We use eye movements as a correlate of attention. The assumption is that attention reaches an item just before the eyes move to it, so when the eyes land somewhere we can assume that attention was deployed there just beforehand.
  • Here is the original model; we added back-projections to account for these attentional modulations. We assume that feature-based attention acts through a cascade of top-down connections through the ventral stream, originating in the PFC, where a template of the target object is held in memory, all the way down to V4 and possibly lower areas. We also assume a spatial attention modulation originating from the parietal cortex (here I am assuming LIP, based on limited experimental evidence). These attentional mechanisms can be cast in a probabilistic Bayesian framework in which the parietal cortex represents location variables and the ventral stream represents feature variables (these are our image fragments); variables for the target object are encoded in higher areas such as the PFC. This framework is inspired by an earlier model by Rao to explain spatial attention and is a special case of the computational model of the visual cortex described by David Mumford, which probably most of you know...
  • We implemented this via belief propagation in polytrees (the messages are shown here for the simplified case of a single feature, for clarity). Within this framework, spatial attention can be described as a series of messages from L to F_l^i to F^i to O, while feature-based attention goes the opposite way. The model thus makes specific predictions about the relative timing of visual areas in the ventral stream and the parietal cortex depending on the task at hand. Obviously I am leaving a lot of details open, unfortunately...
  • We have implemented this approach in the context of our animal search task; the model mostly improves in the medium-body and far-body conditions.
  • Unlike artificial search arrays, where arbitrary objects are simply placed at random on a display, natural scenes are highly structured. This is a point that has been made by Antonio Torralba and Aude Oliva: global features can provide a good representation of the gist of the scene, which is sufficient to associate contextual information from the visual scene with likely object locations, as here, for instance, where you would expect people to be found mostly in these darker regions...

Mechanisms of bottom-up and top-down processing in visual perception: Presentation Transcript

  • Mechanisms of bottom-up and top-down processing in visual perception. Thomas Serre, McGovern Institute for Brain Research, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology
  • The problem: recognition in natural scenes
  • Rapid recognition: human behavior. Gist of the scene at 7 images/s from an unpredictable, random sequence of images; no time for eye movements; no top-down expectations. Feedforward processing: coarse / base image representation. Potter 1971, 1975; see also Biederman 1972; Thorpe 1996. Movie courtesy of Jim DiCarlo
  • Outline. 1. Rapid recognition and feedforward processing: loose hierarchy of image fragments; the “clutter problem”. 2. Beyond feedforward processing: top-down cortical feedback and attention to solve the “clutter problem”; predicting human eye movements
  • Object recognition in the visual cortex: the ventral visual stream. Hierarchical architecture: latencies, anatomy, function. Source: Jim DiCarlo
  • Object recognition in the visual cortex Nobel prize 1981 Hubel & Wiesel 1959, 1962, 1965, 1968
  • Object recognition in the visual cortex gradual increase in complexity of preferred stimulus Kobatake & Tanaka 1994 see also Oram & Perrett 1993; Sheinberg & Logothetis 1996; Gallant et al 1996; Riesenhuber & Poggio 1999
  • Object recognition in the visual cortex Parallel increase in invariance properties (position and scale) of neurons Kobatake & Tanaka 1994 see also Oram & Perrett 1993; Sheinberg & Logothetis 1996; Gallant et al 1996; Riesenhuber & Poggio 1999
  • Model [figure: the model architecture shown alongside a Van Essen-style wiring diagram of the visual cortex; layers S1/C1/S2/C2/S2b/C2b/S3/C3/S4 are mapped onto V1, V2, V4, PIT, AIT and prefrontal cortex, with receptive-field sizes and numbers of units per layer, simple cells (tuning) and complex cells (MAX), main and bypass routes, and the dorsal ‘where’ vs. ventral ‘what’ pathways]. Increase in complexity (number of subunits), RF size and invariance along the hierarchy; unsupervised, task-independent learning up to IT, and supervised, task-dependent learning (e.g. animal vs. non-animal classification) in prefrontal cortex. Large-scale (~10^8 units), spans several areas of the visual cortex. Combination of forward and reverse engineering. Shown to be consistent with many experimental data across areas of visual cortex (V1, V2, V4, MT and IT). Serre Kouh Cadieu Knoblich Kreiman & Poggio 2005
  • Two functional classes of cells. Simple cells: template matching, Gaussian-like tuning (~“AND”). Complex cells: invariance, max-like operation (~“OR”). Riesenhuber & Poggio 1999 (building on Fukushima 1980 and Hubel & Wiesel 1962)
  • Hierarchy of image fragments (from V1 to IT, read out by category-selective units via a linear perceptron): unsupervised learning of frequent image fragments during development; reusable fragments shared across categories; large redundant vocabulary for implicit geometry. See also Ullman et al 2002
  • Model vs. IT [bar plot: classification performance of a linear readout trained at one reference condition (size 3.4°, center position) and tested at sizes 1.7° and 6.8° and at positions shifted 2° and 4° horizontally]. Model data: Serre Kouh Cadieu Knoblich Kreiman & Poggio 2005; experimental data: Hung* Kreiman* Poggio & DiCarlo 2005
  • Is this model sufficient to explain performance in rapid categorization tasks? Paradigm: image (20 ms), blank interval (30 ms ISI), 1/f-noise mask (80 ms); task: animal present or not? Thorpe et al 1996; Van Rullen & Koch 2003; Bacon-Mace et al 2005
  • Rapid categorization Head Close-body Medium-body Far-body Animals Natural distractors Artificial distractors Serre Oliva & Poggio 2007
  • Rapid categorization [bar plot: performance (d′) by subcategory (head, close-body, medium-body, far-body) for the model and for human observers, animals vs. natural distractors]: model 82% correct, human observers 80% correct. Serre Oliva & Poggio 2007
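For reference, the d′ values in this plot are the standard signal-detection sensitivity measure computed from hit and false-alarm rates; a quick sketch using SciPy's inverse normal CDF:

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# About 80% correct on both animal and non-animal trials corresponds to d' of roughly 1.7.
print(d_prime(0.80, 0.20))
```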
  • “Clutter effect” [recording site in monkey IT; panels: model, IT neurons, fMRI]: this limitation of the feedforward model is compatible with the reduced selectivity observed in V4 (Reynolds et al 1999) and IT (Zoccolan et al 2005, 2007; Rolls et al 2003) in the presence of clutter. Meyers Freiwald Embark Kreiman Serre Poggio in prep
  • Summary I. Rapid categorization seems compatible with a model based on a feedforward hierarchy of image fragments. Consistent with psychophysics, a key limitation of the architecture is recognition in clutter. How does the visual system overcome this limitation?
  • Outline. 1. Rapid recognition and feedforward processing: loose hierarchy of image fragments; the “clutter problem”. 2. Beyond feedforward processing: top-down cortical feedback and attention to solve the “clutter problem”; predicting human eye movements
  • Spatial attention solves the “clutter problem”: attending to the foreground suppresses the background clutter. See also Broadbent 1952, 1954; Treisman 1960; Treisman & Gelade 1980; Duncan & Desimone 1995; Wolfe 1997; and many others. Problem: how do we know where to attend?
  • Answer: parallel feature-based attention. Science, 22 April 2005, Vol. 308, no. 5721, pp. 529-534: “Parallel and Serial Neural Mechanisms for Visual Search in Macaque Area V4”, Narcisse P. Bichot, Andrew F. Rossi, Robert Desimone
  • Parallel feature-based attention modulation [plots: normalized spike activity vs. time from fixation (ms)]
  • Serial spatial attention modulation [plot: normalized spike activity vs. time from fixation (ms), attend within RF vs. attend away from RF, and RF stimulus is the target of the saccade vs. is not]. From the saccade enhancement analysis of Bichot et al.: neuronal measures compared when the monkey made a saccade to the RF stimulus versus a saccade away from the RF
  • Attention as Bayesian inference: a graphical model with an object variable O (PFC, object priors, feature-based attention), feature variables F^i (IT), a location variable L (FEF/LIP, location priors, spatial attention), feature-location variables F_l^i (V4/PIT), and image evidence I (V2). Chikkerur Serre & Poggio in prep; see also Rao 2005; Lee & Mumford 2003
  • Attention as Bayesian inference via belief propagation. Feature-based attention asks “where is object O?” and spatial attention asks “what is at location L?”; the messages are:
    $m_{LIP \to V4}(L) = P(L)$
    $m_{IT \to V4}(F^i) = P(F^i \mid O)$
    $m_{V4 \to IT}(F^i) = \sum_{L,\, F_l^i} P(F_l^i \mid F^i, L)\, P(L)\, P(I \mid F_l^i)$
    $m_{V4 \to LIP}(L) = \sum_{F^i,\, F_l^i} P(F_l^i \mid F^i, L)\, P(F^i \mid O)\, P(I \mid F_l^i)$
    Chikkerur Serre & Poggio in prep; see also Rao 2005; Lee & Mumford 2003
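A minimal numerical sketch of these messages for discrete variables and a single feature i (array shapes and names are my own; the full model uses many features and additional factors):

```python
import numpy as np

def attention_messages(P_L, P_Fi_given_O, P_Fli_given_Fi_L, P_I_given_Fli):
    """Belief-propagation messages on the polytree O -> F^i -> F_l^i <- L, with the
    image evidence I a child of F_l^i, following the equations above.
    Shapes: P_L (nL,), P_Fi_given_O (nF,), P_Fli_given_Fi_L (nFl, nF, nL),
    P_I_given_Fli (nFl,)."""
    m_lip_to_v4 = P_L                    # m_{LIP->V4}(L) = P(L)
    m_it_to_v4 = P_Fi_given_O            # m_{IT->V4}(F^i) = P(F^i | O)
    # m_{V4->IT}(F^i): marginalize the location L and the local feature F_l^i.
    m_v4_to_it = np.einsum('kfl,l,k->f', P_Fli_given_Fi_L, m_lip_to_v4, P_I_given_Fli)
    # m_{V4->LIP}(L): marginalize the feature F^i and the local feature F_l^i.
    m_v4_to_lip = np.einsum('kfl,f,k->l', P_Fli_given_Fi_L, m_it_to_v4, P_I_given_Fli)
    # Posteriors (up to normalization): spatial attention reads out L,
    # feature-based attention reads out F^i.
    post_L = m_lip_to_v4 * m_v4_to_lip
    post_F = m_it_to_v4 * m_v4_to_it
    return post_L / post_L.sum(), post_F / post_F.sum()
```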
  • Model performance improves with attention [bar plots: performance (d′) with one shift of attention vs. no attention, for the model and for human observers, with and without a mask]. Chikkerur Serre & Poggio in prep
  • Agreement with neurophysiology data Feature-based attention: Differential modulation for preferred vs. non-preferred stimulus (Bichot et al’ 05) Spatial attention: Gain modulation on neuron’s tuning curves (McAdams & Maunsell’99) Competitive mechanisms in V2 and V4 (Reynolds et al’ 99) Improved readout in clutter (being tested in collaboration with the Desimone lab)
  • IT readout improves with attention: train the readout classifier on the isolated object, then test during search in clutter [plot: average rank of the target object vs. time (0-2000 ms, cue and transient change marked) when attention is on the object, attention is away from the object, or the object is not shown; n = 34]. Zhang Meyers Serre Bichot Desimone Poggio in prep
  • Could these attentional mechanisms also explain search strategies in complex natural images?
  • Matching human eye movements. Dataset: 100 street-scene images with cars & pedestrians and 20 without. Experiment: 8 participants asked to count the number of cars/pedestrians; blocked/randomized presentations; each image presented twice; eye movements recorded using an infra-red eye tracker and used as a proxy for attention. Chikkerur Tan Serre & Poggio in sub
  • Matching human eye movements Car search Pedestrian search Chikkerur Tan Serre & Poggio in sub
  • Attention as Bayesian inference (model recap: O in PFC, F^i in IT, L in FEF/LIP, F_l^i in V4, I in V2). Chikkerur Serre & Poggio in prep
  • Matching human eye movements [plot: fraction of fixations (25-100%) falling within the thresholded saliency map vs. the percentage of the image covered by the map (10-30%); performance is summarized by the area under this ROC-like curve]
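A hedged sketch of this kind of metric (the exact procedure in the study may differ): threshold the model's saliency/priority map at increasing coverage levels, measure the fraction of human fixations that fall inside the selected region, and summarize the curve by its area.

```python
import numpy as np

def fixation_curve(saliency_map, fixations, coverages=np.linspace(0.01, 0.30, 30)):
    """fixations: list of (row, col) coordinates. For each coverage level, keep the
    top fraction of saliency values and count the fixations landing inside."""
    flat = np.sort(saliency_map.ravel())[::-1]
    fractions = []
    for c in coverages:
        k = max(1, int(round(c * flat.size)))
        cutoff = flat[k - 1]
        inside = [saliency_map[r, col] >= cutoff for (r, col) in fixations]
        fractions.append(float(np.mean(inside)))
    # Area under the coverage-vs-fraction curve, normalized by the coverage range.
    auc = np.trapz(fractions, coverages) / (coverages[-1] - coverages[0])
    return fractions, auc
```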
  • Results [bar plot: ROC area for car search and pedestrian search; humans, bottom-up, top-down (feature-based)]. Chikkerur Tan Serre & Poggio in sub
  • Local vs. global contextual cues [slide reproduces figures from a review on the role of context in object recognition: Figure 1 shows averages of hundreds of pictures containing a particular object (a face, keyboard or fire hydrant) at a fixed scale and pose, taken from the LabelMe dataset, revealing that the backgrounds of many objects do not average to a uniform field; Figure 2 shows a street scene in which a shape identical to the car, rotated 90°, is perceived as a pedestrian because of the context defined by the scene]. Torralba ’01; Torralba & Oliva ’02 ’03; Torralba ’03; Torralba Oliva et al ’06; Oliva & Torralba ’06 ’07
  • Local vs. global contextual cues: the graphical model is extended with a scene variable S providing global scene priors, in addition to the object priors on O and the location priors on L. Chikkerur Tan Serre & Poggio in sub
  • Integrating (local) feature-based + (global) context-based cues: the combined model accounts for 92% of inter-subject agreement [bar plot: ROC area for car and pedestrian search; humans, bottom-up, top-down (feature-based), feature-based + contextual cues]. Chikkerur Tan Serre & Poggio in sub; similar (independent) results by Ehinger, Hidalgo-Sotelo, Torralba & Oliva (in press)
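The integration itself can be pictured as combining two priority maps over image locations; the sketch below uses a pointwise product of normalized maps, which is only an approximation of the Bayesian combination used in the model.

```python
import numpy as np

def combine_priority_maps(feature_map, context_map, eps=1e-12):
    """Combine a local feature-based saliency map with a global scene-context prior
    over likely target locations by pointwise multiplication and renormalization."""
    f = feature_map / (feature_map.sum() + eps)
    c = context_map / (context_map.sum() + eps)
    combined = f * c
    return combined / (combined.sum() + eps)
```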
  • Summary II Attentional mechanisms may help solve the “clutter problem” Model combining (local) feature-based attentional cues with (global) scene contextual priors for object locations accounts for significant amount of human eye movements during complex visual searches
  • Acknowledgments Tomaso Poggio Aude Oliva (rapid categ. / psychophysics) Sharat Chikkerur (model of attention) Cheston Tan (eye tracking / psychophysics Ethan Meyers (attention neural data analysis) Other: Minjoon Kouh Gabriel Kreiman Narcisse Bichot Timothee Masquelier Stan Bileschi Leila Reddy Charles Cadieu David Sheinberg Robert Desimone Jed Singer Jim DiCarlo Andrew Steele Michelle Fabre-Thorpe Simon Thorpe Winrich Freiwald Nao Tsuchyia Estibaliz Garrote Lior Wolf Hueihan Jhuang Ying Zhang Ulf Knoblich Christof Koch