
Learning to compare: relation network for few shot learning

PPT for paper reading.
Title: Learning to compare: relation network for few shot learning



  1. Sun Yifan (孙奕帆): Few-shot learning and meta-learning
  2. Few-shot learning: we must train a classifier to recognize new classes given only a few examples of each. The train set is relatively large; the support set for the new classes is relatively small.
  3. Few-shot learning (label spaces): the train set and the test set have disjoint label spaces, while the support set and the test set share the same label space.
  4. Relation network--classifying by comparison: the relation network (RN) compares support images with query images and classifies according to the returned "relation scores". Discussion: does this convert the classification task into a retrieval task?
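The classify-by-comparison idea on this slide can be sketched as follows. This is a minimal toy sketch, not the paper's architecture: in the paper both the embedding module f and the relation module g are small CNNs, whereas here they are fixed random linear maps purely to show the data flow (embed both sides, concatenate, score, pick the highest relation score).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical stand-ins for the learned modules (random linear maps,
# illustrative only).
rng = np.random.default_rng(0)
W_embed = rng.standard_normal((16, 8))  # "embedding module": 16-d input -> 8-d feature
W_rel = rng.standard_normal(16)         # "relation module": 16-d pair -> scalar

def embed(x):
    return x @ W_embed

def relation_score(support, query):
    pair = np.concatenate([embed(support), embed(query)])  # concatenate the features
    return float(sigmoid(pair @ W_rel))                    # score in (0, 1)

# 5-way 1-shot: one support image per class; classify the query by the
# highest relation score.
support_imgs = rng.standard_normal((5, 16))
query = rng.standard_normal(16)
scores = [relation_score(s, query) for s in support_imgs]
predicted_class = int(np.argmax(scores))
```

Note that classification here reduces to ranking support images by score, which is what the slide's retrieval-task question refers to.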
  5. Relation network--classifying by comparison: RN learns to compare via meta-learning, mimicking the test-time comparison procedure on the training set. In each episode, the train set is split into a sample set and a query set, which play the roles of the support and query images and yield relation scores.
  6. The episodes are sampled in the C-way K-shot configuration: each episode contains C classes with K labelled examples per class.
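Episode construction as described above can be sketched like this. The function name, the dict-of-lists dataset layout, and the toy data are assumptions for illustration, not from the paper:

```python
import random

def sample_episode(train_set, c_way=5, k_shot=1, n_query=2):
    """Sample one C-way K-shot episode from a dict {label: [examples]}.

    Returns (sample_set, query_set) as lists of (example, label) pairs,
    mimicking the support/query split that will be seen at test time.
    """
    classes = random.sample(sorted(train_set), c_way)
    sample_set, query_set = [], []
    for label in classes:
        picks = random.sample(train_set[label], k_shot + n_query)
        sample_set += [(x, label) for x in picks[:k_shot]]   # K shots per class
        query_set += [(x, label) for x in picks[k_shot:]]    # held-out queries
    return sample_set, query_set

# Hypothetical toy dataset: 7 classes with 10 examples each.
toy = {c: list(range(10)) for c in "abcdefg"}
sample_set, query_set = sample_episode(toy, c_way=5, k_shot=1, n_query=2)
```

Each training episode thus has the same shape as a test-time task, which is the core of the meta-learning setup on this slide.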
  7. Relation network--structure
  8. Relation network--structure: an embedding module followed by a relation module. The relation score r_{i,j} is bounded in (0, 1) by a sigmoid function.
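For reference, the relation score on this slide is computed as in the paper: the embedded support and query features are combined by a concatenation operator C(·,·) and passed to the relation module g_φ:

```latex
r_{i,j} = g_{\phi}\big(\mathcal{C}(f_{\varphi}(x_i),\, f_{\varphi}(x_j))\big), \qquad i = 1, \dots, C
```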
  9. Relation network--structure: the optimization target regresses the relation scores to the ground-truth matches.
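The optimization target shown on this slide is the paper's mean-squared-error regression of relation scores to the match indicator (1 when the query and support labels agree, 0 otherwise):

```latex
\varphi, \phi \leftarrow \operatorname*{argmin}_{\varphi, \phi} \sum_{i=1}^{m} \sum_{j=1}^{n} \big( r_{i,j} - \mathbf{1}(y_i == y_j) \big)^2
```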
  10. Relation network--structure, a detail for K-shot: the K embedded features are pooled by an element-wise (pixel-wise) sum.
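The K-shot pooling above can be sketched in one line; the 64x19x19 feature-map shape below is an illustrative assumption, not fixed by the paper:

```python
import numpy as np

K, C, H, W = 5, 64, 19, 19  # 5-shot; hypothetical feature-map shape
rng = np.random.default_rng(0)
shot_feats = rng.standard_normal((K, C, H, W))  # K embedded support features

# Pixel-wise (element-wise) sum over the K shots yields one class-level
# feature map, so the relation module always receives a single support
# feature regardless of K.
class_feat = shot_feats.sum(axis=0)
```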
  11. Relation network--structure: detailed architecture. Extension to zero-shot learning: a different embedding module is used for the sample side (semantic vectors) than for the query side (images).
  12. Experiments
  13. Why does RN work? Both the deep feature embedding and the deep distance metric are learnable (the concatenation operation happens in relatively early, bottom layers). By contrast, when training a siamese or triplet network, we impose a metric constraint on a specified feature and then use the Euclidean distance (or another fixed metric) at inference.
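The contrast on this slide can be sketched as follows: a siamese/triplet pipeline hard-wires the metric at inference, whereas RN's "metric" is itself a trainable function of the feature pair. The linear map below is a hypothetical stand-in for the relation module, used only to make the distinction concrete:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal(8), rng.standard_normal(8)  # two embedded features

# Siamese/triplet inference: the metric is fixed (Euclidean here) and has
# no trainable parameters of its own.
fixed_distance = float(np.linalg.norm(a - b))

# Relation-network inference: the comparison is a learned function of the
# concatenated pair, so its parameters can be trained jointly with the
# embedding. W_rel is a hypothetical stand-in for those weights.
W_rel = rng.standard_normal(16)
learned_score = float(np.concatenate([a, b]) @ W_rel)
```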
  14. Why does RN work? (Illustration in a 2D data space.)
  15. Why does RN work? The feature embeddings themselves are difficult to separate, but the relation module's pair representations are linearly separable.
  16. Why does RN work? My guesses: 1) the feature-concatenation operation happens at a very early stage (bottom layers); 2) the K sample images mimic the K-shot setting during training; 3) classification is converted into "comparison", a semi-parametric modelling approach.
