
Reading the mind’s eye: Decoding object information during mental imagery from fMRI patterns

This is a presentation done at the Vision Science Society in May 2009.


  1. Reading the mind’s eye: Decoding object information during mental imagery from fMRI patterns. T. Serre*¹, L. Reddy*², N. Tsuchiya³, T. Poggio¹, M. Fabre-Thorpe², C. Koch³ (* equal contribution). ¹McGovern Institute, MIT, Cambridge, MA; ²CerCo-CNRS, Université Paul Sabatier, Toulouse, France; ³Computation & Neural Systems, California Institute of Technology, Pasadena, CA
  2. Category readout from multivoxel patterns (MVPs) of fMRI activity
  3. Robust category readout from MVPs of fMRI activity for visually presented objects in ventral temporal cortex (Haxby et al ’01; Cox & Savoy ’03; Carlson et al ’03; Mitchell et al ’03; Kamitani & Tong ’05; Haynes & Rees ’05; Davatzikos et al ’05; LaConte et al ’05; Mourao-Miranda et al ’05; Kriegeskorte et al ’06, ’07; Reddy & Kanwisher ’07)
  4. Is readout still possible in the complete absence of bottom-up inputs, during mental visual imagery?
  5. Do imagery (I) and perception (P) share the same representation?
  6. I and P activations overlap: early visual areas for orientation (Kosslyn et al ’95; Ganis et al ’04); FFA and PPA during imagery of faces and houses (O’Craven & Kanwisher ’00); LOC for simple letter shapes (Stokes et al ’09)
  7. I and P activations differ: negative BOLD differentiates I and P (Amedi et al ’05); spatial overlap between I and P is much larger in frontal than in ventral temporal cortex (Ganis et al ’04); I elicits a weaker BOLD response (O’Craven & Kanwisher ’00; Ishai et al ’00)
  8. Univariate vs. multivariate pattern analysis
  9. Same ROI average but different patterns of activity: univariate analysis can overestimate the similarity of I and P mechanisms [schematic, slides 9–10: P and I voxel patterns with equal average BOLD]
  11. Visually responsive vs. category-selective voxels: can underestimate the similarity of I and P mechanisms [schematic, slides 11–12]
  13. Experimental paradigm (n = 10; built up over slides 13–17): Perception blocks (P), in which subjects viewed different exemplars of 4 object categories (Faces, Buildings, Food, Tools), and Imagery blocks (I). Overlaid excerpt from the paper: even if responses in a given region of interest (ROI) were indistinguishable between perception and imagery, or a complete spatial overlap of activated regions were observed, these measures could still be consistent with the two processes evoking different patterns of representation at the level of individual voxels; as several recent studies have shown, two activation patterns that are very different on a voxel-by-voxel basis can produce similar average BOLD signal responses.
  18. MVPA of fMRI data (built up over slides 18–22): independent localizers define object-responsive (OR) voxels; linear SVM classifier (one-vs-all, 4AFC); leave-one-run-out cross-validation [diagram: runs × voxels matrix, with runs 1 … n−1 used for TRAINING and run n for TEST]
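The decoding scheme outlined above (a linear one-vs-all SVM over four categories, cross-validated by leaving one run out) can be sketched with scikit-learn. The data here are synthetic stand-ins; the array sizes and signal model are illustrative assumptions, not the study's actual dimensions:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_runs, n_cat, n_voxels = 8, 4, 200                 # assumed sizes
y = np.tile(np.arange(n_cat), n_runs)               # one block per category per run
runs = np.repeat(np.arange(n_runs), n_cat)          # run label for each block
prototypes = rng.normal(size=(n_cat, n_voxels))     # one pattern per category
X = prototypes[y] + rng.normal(size=(len(y), n_voxels))  # category signal + noise

clf = LinearSVC(C=1.0)                              # linear SVM, one-vs-all by default
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"leave-one-run-out accuracy: {scores.mean():.2f} (chance = 0.25)")
```

With real data, `X` would hold one voxel pattern per block and `runs` the scanner run each block came from; `LeaveOneGroupOut` then guarantees that training and test patterns never share a run.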
  23. Within-condition decoding: P-P (train and test on perception) and I-I (train and test on imagery) [diagram: runs 1 … n−1 for TRAINING, run n for TEST]
  24. Cross-condition decoding: P-I (train on P, test on I) and I-P (train on I, test on P). Does the classifier generalize between I and P? See also Stokes et al ’09 [diagram: runs 1 … n−1 for TRAINING, run n for TEST]
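The cross-condition test on this slide — fit the classifier on perception blocks, score it on imagery blocks, and vice versa — is a small variation on the same pipeline. The synthetic data below, including the assumption that imagery carries a weaker copy of the perception signal, are purely illustrative:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_blocks, n_voxels = 40, 200
y = np.tile(np.arange(4), n_blocks // 4)
prototypes = rng.normal(size=(4, n_voxels))                 # shared category patterns
X_P = prototypes[y] + rng.normal(size=(n_blocks, n_voxels))        # perception blocks
X_I = 0.5 * prototypes[y] + rng.normal(size=(n_blocks, n_voxels))  # weaker imagery signal

p_to_i = LinearSVC(C=1.0).fit(X_P, y).score(X_I, y)   # train on P, test on I
i_to_p = LinearSVC(C=1.0).fit(X_I, y).score(X_P, y)   # train on I, test on P
print(f"P->I accuracy: {p_to_i:.2f}, I->P accuracy: {i_to_p:.2f} (chance = 0.25)")
```

Above-chance P→I and I→P accuracy is only possible if the two conditions share category-specific pattern structure, which is exactly what the slide's generalization test probes.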
  25. [Confusion matrix: predicted vs. actual category, color scale 0–1]
  26. Decoding accuracy (built up over slides 26–28): P-P 67%, I-I 50%, P-I 47%, I-P 52% (chance: 25%)
  29. Object-responsive areas (built up over slides 29–34): bar chart of readout performance (% correct, scale 0–75) for P-P, I-I, P-I and I-P, comparing intact data against scrambled-voxels and scrambled-labels controls; chance level 25% (4AFC); asterisks mark significant decoding on the final build
  35. Retinotopic areas (V1+V2) (built up over slides 35–41): bar chart of readout performance (% correct, scale 0–75) for P-P, I-I, P-I and I-P, comparing intact data against scrambled-voxels and scrambled-labels controls; chance level 25% (4AFC); an asterisk marks significant decoding on the final build
  42. Similarity between P and I classifiers (slides 42–43): matrices of correlations between P-classifier and I-classifier weights for OR and retinotopic (Ret) voxels [color scale −0.1 to 0.2]
  44. Similarity between P and I classifiers in OR voxels for two individual subjects (AH017 and JL003)
  45. Relating P-I similarity to VVIQ scores. Similarity index: P-I sim = (W − B) / (W + B). VVIQ instructions: “In answering items 1 to 4, think of some relative or friend whom you frequently see (but who is not with you at present) and consider carefully the picture that comes before your mind’s eye. 1. The exact contour of face, head, shoulders and body. 2. Characteristic poses of head, attitudes of body etc. 3. The precise carriage, length of step, etc. in walking. 4. The different colors worn in some familiar clothes. Visualize the rising sun. Consider carefully the picture that comes before your mind’s eye. 5. The sun is rising above the horizon into a hazy sky. 6. The sky clears and surrounds the sun with blueness. 7. Clouds. A storm blows up, with flashes of lightning. 8. A rainbow appears. Think of the front of a shop which you often go to. Consider the picture that comes before your mind’s eye. 9. The overall appearance of the shop from the opposite side of the road. 10. A window display including colors, shape and details of individual items for sale. 11. You are near the entrance. The color, shape and details of the door. 12. You enter the shop and go to the counter. The counter assistant serves you. Money changes hands. Finally, think of a country scene which involves trees, mountains and a lake. Consider the picture that comes before your mind’s eye. 13. The contours of the landscape. 14. The color and shape of the trees. 15. The color and shape of the lake. 16. A strong wind blows on the tree and on the lake, causing waves.” (Marks, 1973; Amedi et al ’05; Cui et al ’07)
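The similarity index on this slide, P-I sim = (W − B) / (W + B), can be illustrated numerically. Treating W as the mean correlation between P and I classifier weight maps for the same category, and B as the mean correlation across different categories, is an assumption of this sketch, as are the synthetic weight maps:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cat, n_voxels = 4, 200
shared = rng.normal(size=(n_cat, n_voxels))               # component common to P and I
w_P = shared + 0.5 * rng.normal(size=(n_cat, n_voxels))   # P classifier weight maps
w_I = shared + 0.5 * rng.normal(size=(n_cat, n_voxels))   # I classifier weight maps

corr = np.corrcoef(w_P, w_I)[:n_cat, n_cat:]    # 4x4 matrix of P x I weight-map correlations
W = corr[np.eye(n_cat, dtype=bool)].mean()      # within-category (diagonal)
B = corr[~np.eye(n_cat, dtype=bool)].mean()     # between-category (off-diagonal)
sim = (W - B) / (W + B)
print(f"W = {W:.2f}, B = {B:.2f}, P-I sim = {sim:.2f}")
```

The index is high when same-category weight maps agree across conditions much more than different-category maps do, i.e. when P and I classifiers rely on similar category-specific voxel patterns.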
  46. Predicting “vividness of imagery” from the fMRI signal [scatter plot across subjects: VVIQ score (1.0–5.0) vs. P-I similarity (1.85–2.05), R² = 0.7045]
  47. Summary: robust readout of mental imagery; robust generalization between I and P, suggesting that I and P elicit very similar patterns of neural activity; preliminary data suggest that this similarity predicts the subjective vividness of imagery in individual subjects
  48. [Activation maps: perception vs. fixation, imagery vs. fixation, and perception vs. imagery, each at p < 0.01]
  49. Object-responsive (OR) voxels: separate localizer runs for faces, scenes and objects vs. scrambled images (p < 10⁻⁴, uncorrected); distributed voxels in ventral temporal cortex (see Haxby et al ’01); includes LOC, FFA and PPA
  50. Feedback connections are more diffuse than feedforward connections [figure from Shmuel et al ’05, “Functional Organization of Feedback Projections”, showing V1 and V2]
  51. Significant main effects of category and classification condition, plus a significant interaction; performance for faces and buildings was higher than for food and tools; very similar results after excluding FFA and PPA
  52. Importance maps (built up over slides 52–54): perception vs. imagery. Overlaid excerpt from the paper (see Figure 6B for the retinotopic ROI): a 2-way repeated-measures ANOVA, weights (within-category / across-category) × ROI, revealed a significant main effect of weights (F(1,9) = 54.22, p < 0.0001), a significant main effect of ROI (F(1,9) = 52.67, p < 0.0001), and a significant interaction (F(1,9) = 48.94, p < 0.0005); post-hoc Bonferroni-corrected comparisons showed that weight-map overlap was higher within category than across category, and higher in OR than in retinotopic voxels.
  55. Subjective measure of vividness. Sample question from the VVIQ: “Visualize the sun rising above a rocky mountain range into a bright sky. How vivid is your mental picture on a scale from 1 to 5, where 1 is akin to a photograph, and 5 is a pictureless concept?” (Marks, 1973; McKelvie & Demers ’79; McKelvie ’94; Hatakeyama ’97; Amedi et al ’05; Cui et al ’07)
  56. Vividness of imagery: different people report experiencing different levels of vividness of imagery. Could the similarity between the P and I conditions explain the subjective vividness of mental imagery across participants?
