
Article overview: Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream

The article compares the complexity of visual feature representations in a deep convolutional neural network (DNN) with that in the human brain. Layer-by-layer DNN activity is used to predict voxel activations, showing that lower DNN layers are better at predicting activity in early visual areas V1 and V2, while higher DNN layers are better at predicting activity in LO and downstream areas of the ventral stream. This demonstrates that the layer-by-layer gradient in the complexity of visual features seen in a DNN is also present in the visual cortex.
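As a minimal sketch of that encoding analysis, with random stand-ins for the real DNN activations and voxel responses (all shapes and the noise level below are invented for illustration): fit a linear model from one layer's features to a voxel's activity, then score held-out predictions with the Pearson correlation r reported in the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented shapes: 200 training / 100 test stimuli, 50 DNN features per layer.
n_train, n_test, n_feat = 200, 100, 50

# Random stand-ins for DNN layer activations and measured voxel responses.
X_train = rng.standard_normal((n_train, n_feat))
X_test = rng.standard_normal((n_test, n_feat))
w_true = rng.standard_normal(n_feat)  # unknown "ground truth" weights
y_train = X_train @ w_true + 0.1 * rng.standard_normal(n_train)
y_test = X_test @ w_true + 0.1 * rng.standard_normal(n_test)

# Fit the linear encoding model: voxel response ≈ activations @ w.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Score held-out predictions with Pearson correlation.
r = np.corrcoef(X_test @ w, y_test)[0, 1]
print(f"r = {r:.2f}")
```

On real data, the same procedure run per DNN layer and per voxel is what produces the layer-to-area gradient described above.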

Published in: Science

  1. Article overview by Ilya Kuzovkin. Paper: Umut Güclü and Marcel A. J. van Gerven, "Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream". Computational Neuroscience Seminar, University of Tartu, 2015.
  2. Title slide.
  3. A linear model: pixels → classes ("cat", "spider").
  4. A non-linear model: pixels → hidden layer → classes.
  5. A deep model: pixels → hidden layer → hidden layer → classes.
  6. "Cat" vs. "spider": there is an important distinguishing feature.
  7. That feature matters: RUN!
  8. The feature is detected by a convolutional filter.
  9. Convolutional (and pooling) layer.
  10. The full architecture, a Deep Convolutional Neural Network: pixels → convolutional layer → hidden layer → hidden layer → classes.
  11-13. Figure slides (no transcript text beyond the title).
  14-18. Figures from Matthew D. Zeiler and Rob Fergus, "Visualizing and Understanding Convolutional Networks", 2013.
  19. Figure slide.
  20. The two-stream hypothesis.
  21-29. Figure slides.
  30-31. 96×37×37 = 131,424 (flattened activations of one convolutional layer).
  32-35. 256×17×17 = 73,984 (a second convolutional layer).
  36-38. Train a linear regression model on one layer's features, test it: r = 0.22.
  39-41. Train a second linear regression model on the other layer's features, test it: r = 0.67.
  42. Figure slide.
  43. NEXT COOL THING: CATEGORIES OF FEATURES. ImageNet validation set.
  44-46. 1888 DNN neurons.
  47. Deconvolution.
  48. Features human-assigned to 9 categories: Low (blob, contrast, edge), Mid (contour, shape, texture), High (pattern, object, object part).
  49-53. The procedure: 1. Divide the 1888 neurons into the 9 categories. 2. Predict the activity of each voxel group by group. 3. For each voxel, find the group that best predicts its activity. 4. Assign each of the 1888 DNN neurons to a visual area: V1, V2, V4, LO. 5. Map visual areas to categories.
  54. NEXT COOL THING: CATEGORIES OF FEATURES (results figure).
  55-58. OTHER RESULTS: correlation of predicted responses between pairs of voxel groups; selectivity of visual areas to feature maps of varying complexity; distribution of receptive-field centers; biclustering of voxels and feature maps.
  59. SUMMARY.
  60-61. Closing question: we have an intracranial dataset; how can we repeat the result with it?
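Slides 3-5 contrast linear, non-linear, and deep models. A tiny hand-built example of why the hidden layer matters (the weights below are chosen by hand purely for illustration): a linear map from pixels to classes cannot compute XOR-like feature interactions, but a single hidden layer with a non-linearity can.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

# Hand-crafted 2-2-1 network computing XOR, which no linear model can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
W1 = np.array([[1, 1], [1, 1]])
b1 = np.array([0, -1])
w2 = np.array([1, -2])

h = relu(X @ W1 + b1)  # hidden layer: the non-linearity is essential
y = h @ w2             # output for the four inputs
print(y.tolist())      # → [0, 1, 1, 0]
```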

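Slides 8-10 introduce convolutional filters and pooling layers. A bare-bones sketch of both operations (the helper names `conv2d` and `max_pool` and the toy filter are mine, not from the paper): slide a small filter over the image to produce a feature map, then downsample it with max pooling.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the filter over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling."""
    h, w = x.shape
    return x[:h - h % size, :w - w % size] \
        .reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
edge_filter = np.array([[1.0, -1.0]])  # simple horizontal edge detector
fmap = conv2d(image, edge_filter)      # feature map, shape (6, 5)
pooled = max_pool(fmap)                # shape (3, 2)
print(fmap.shape, pooled.shape)
```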
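The voxel-to-category assignment in slides 49-53 (predict each voxel from each feature group, keep the best-predicting group) can be sketched as follows. The group names, sizes, and data are hypothetical stand-ins; the real groups came from human annotation of deconvolution visualizations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for DNN neurons grouped by feature category.
n_stim = 160
groups = {
    "edge":    rng.standard_normal((n_stim, 40)),
    "texture": rng.standard_normal((n_stim, 40)),
    "object":  rng.standard_normal((n_stim, 40)),
}

# Simulate one voxel whose activity is driven by the "texture" group.
voxel = groups["texture"] @ rng.standard_normal(40)

train, test = slice(0, 120), slice(120, None)

def score(X, y):
    """Fit a linear model on the training split; Pearson r on the test split."""
    w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    return np.corrcoef(X[test] @ w, y[test])[0, 1]

# Predict the voxel from each category group and keep the best predictor
# (steps 2-3 of the slide's procedure).
scores = {name: score(X, voxel) for name, X in groups.items()}
best = max(scores, key=scores.get)
print(best)
```

Repeating this over all voxels, and combining with each voxel's visual-area label, yields the mapping from visual areas to feature categories in step 5.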