P01 Introduction CVPR 2012: Deep Learning Methods for Vision
Upload Details

Uploaded as Microsoft PowerPoint

Usage Rights

CC Attribution-ShareAlike License

Speaker Notes

  • All I am going to say about neuroscience, although the techniques do have strong connections.
  • Make clear that classic methods, e.g. convnets, are purely supervised.
  • Need to bring out differences w.r.t. existing ML work, mainly the unsupervised learning part: make use of unlabeled data (lots of it).
  • Restructure with a bigger emphasis on unsupervised learning. Make clear that classic methods, e.g. convnets, are purely supervised.
  • Winder and Brown paper. Slightly smoothed view of things.
  • Selection instead of normalization?
  • Note pooling is across space, not across Gabor channels. Normalization is really nonlinear (small elements are not rescaled).
  • Non-maximal suppression across visual words, like an L-infinity normalization. Max = k-means.
  • Graph not clear; explain better. Y-axis is change in value.
  • Mention the Leonardis & Fidler paper.
  • Too far for labels to trickle down (vanishing gradients). Only information from the layer below; the input is the supervision.
  • Add overall energy.
  • Not separate operations: do them at the same time.
  • Chris Williams oral link.
  • Occlusion mask: bottom-right quadrant for the sofa interpretation. Can't decide locally; if you knew the solution, you would know what features to extract.
  • DPM is a shape hierarchy of HOG templates.
  • Song Chun's clock.

Presentation Transcript

  • Deep Learning & Feature Learning Methods for Vision
  • Tutorial Overview
  • Overview
  • Existing Recognition Approach
  • Motivation
  • What Limits Current Performance?
  • Hand-Crafted Features
  • Mid-Level Representations
  • Why Learn Features?
  • Why Hierarchy?
  • Hierarchies in Vision
  • Hierarchies in Vision
  • Learning a Hierarchy of Feature Extractors
  • Multistage Hubel-Wiesel Architecture
  • Classic Approach to Training
  • Deep Learning
  • Single Layer Architecture
  • Example Feature Learning Architectures
  • SIFT Descriptor
  • Spatial Pyramid Matching
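Only the title of the Spatial Pyramid Matching slide survives, so here is a minimal sketch of what that descriptor computes: visual-word histograms over successively finer spatial grids, concatenated into one vector. The grid levels, vocabulary size, and point coordinates below are illustrative, and the per-level weighting of the original method is omitted.

```python
import numpy as np

def spatial_pyramid(points, labels, n_words, levels=(1, 2, 4)):
    """Concatenate visual-word histograms over coarse-to-fine grids.
    `points` are (x, y) feature locations in [0, 1); `labels` are the
    visual-word indices assigned to each feature."""
    parts = []
    for g in levels:
        for i in range(g):
            for j in range(g):
                in_cell = ((points[:, 0] * g).astype(int) == i) & \
                          ((points[:, 1] * g).astype(int) == j)
                parts.append(np.bincount(labels[in_cell], minlength=n_words))
    return np.concatenate(parts)

pts = np.array([[0.1, 0.2], [0.8, 0.7], [0.3, 0.9]])
words = np.array([0, 2, 1])
desc = spatial_pyramid(pts, words, n_words=3)
# length = 3 words * (1 + 4 + 16) cells = 63; each point is counted once per level
```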
  • Filtering
  • Filtering
  • Translation Equivariance
  • Filtering
  • Filtering
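The Filtering and Translation Equivariance slides cover the first stage of the single-layer architecture. A small sketch of the equivariance property they refer to; the kernel and impulse signal are illustrative (not from the slides), and circular convolution is used so the property holds exactly at the borders.

```python
import numpy as np

# Illustrative 1-D "edge" kernel and an impulse signal.
kernel = np.array([1.0, 0.0, -1.0])
signal = np.zeros(16)
signal[5] = 1.0

def circ_conv(x, k):
    """Circular convolution via the FFT: multiply spectra, invert."""
    kp = np.zeros_like(x)
    kp[:len(k)] = k
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kp)))

out = circ_conv(signal, kernel)
shifted_out = circ_conv(np.roll(signal, 3), kernel)

# Translation equivariance: shifting the input shifts the filter
# response by the same amount.
assert np.allclose(np.roll(out, 3), shifted_out)
```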
  • Normalization
  • Normalization
  • Normalization
  • Role of Normalization
  • Pooling
  • Role of Pooling
  • Role of Pooling
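The normalization and pooling stages can be sketched as below. The floor value and the 2x2 window are illustrative choices, not taken from the tutorial; the floor is what makes normalization nonlinear (weak responses are left essentially unscaled, echoing the speaker note above), and pooling is across space, not across filter channels.

```python
import numpy as np

def l2_normalize(v, floor=1.0):
    """Divisive (L2) normalization with a floor: strong responses are
    rescaled to roughly unit norm, weak ones pass through unchanged."""
    return v / max(np.linalg.norm(v), floor)

def max_pool2d(fmap, size=2):
    """Non-overlapping spatial max pooling: keep the strongest response
    in each size x size window, buying some translation invariance."""
    h, w = fmap.shape
    h2, w2 = h // size, w // size
    return fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

fmap = np.array([[1., 2., 0., 0.],
                 [3., 4., 0., 1.],
                 [0., 0., 5., 6.],
                 [1., 0., 7., 8.]])

pooled = max_pool2d(fmap)                  # -> [[4., 1.], [1., 8.]]
unit = l2_normalize(np.array([3.0, 4.0]))  # -> [0.6, 0.8]
```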
  • Unsupervised Learning
  • Auto-Encoder
  • Auto-Encoder Example 1
  • Auto-Encoder Example 2
  • Auto-Encoder Example 2
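The residue on these slides shows Example 1 using a tied decoder, z = σ(Wx) and x̂ = σ(Wᵀz), and Example 2 a linear decoder x̂ = Dz. A forward-pass sketch of Example 1; the layer sizes and random W are placeholders, since in practice W is learned by minimizing the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Tied-weight auto-encoder: the decoder reuses the encoder weights,
# transposed.  Sizes are illustrative.
n_in, n_hid = 8, 3
W = rng.normal(scale=0.1, size=(n_hid, n_in))

def encode(x):
    return sigmoid(W @ x)        # z = sigmoid(W x)

def decode(z):
    return sigmoid(W.T @ z)      # x_hat = sigmoid(W^T z)

x = rng.random(n_in)
z = encode(x)                              # low-dimensional code
x_hat = decode(z)                          # reconstruction of the input
loss = 0.5 * np.sum((x - x_hat) ** 2)      # squared reconstruction error
```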
  • Taxonomy of Approaches
  • Stacked Auto-Encoders
  • At Test Time
  • Information Flow in Vision Models
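The Stacked Auto-Encoders and At Test Time slides describe greedy layer-wise stacking: each layer is trained as an auto-encoder on the codes of the layer below, then frozen, and at test time only the encoders are composed. A sketch with training elided, so random weights stand in for the learned ones and the layer sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# One weight matrix per layer; in a real stack each would come from
# training that layer's auto-encoder on the outputs of the layer below.
layer_sizes = [16, 8, 4]
weights = [rng.normal(scale=0.1, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def feed_forward(x):
    # At test time only the stacked encoders are applied, layer by layer;
    # the decoders are discarded.
    for W in weights:
        x = sigmoid(W @ x)
    return x

code = feed_forward(rng.random(16))   # final 4-dimensional representation
```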
  • Deep Boltzmann Machines
  • Why is Top-Down Important?
  • Multi-Scale Models
  • Hierarchical Model
  • Multi-Scale vs. Hierarchical
  • Structure Spectrum
  • Structure Spectrum
  • Structure Spectrum
  • Structure Spectrum
  • Structure Spectrum
  • Structure Spectrum
  • Structure Spectrum
  • Structure Spectrum
  • Structure Spectrum
  • Performance of Deep Learning
  • Summary
  • Further Resources
  • References