
Deep randomized embedding

yifan sun


  1. ECCV 2018, SOTA on CARS196, 9 pages.
  2. The number of embeddings: L. The dimension of each embedding: D. D equals the number of meta-classes. A Proxy-NCA loss is used for training each embedding (see the first sketch after this list).
  3. [Figure: the dimension of each embedding, D, vs. the number of embeddings, L]
  4. [Figure: results across D and L] Higher than our SSR with a 48-model ensemble (see the ensemble sketch after this list).
  5. Discussion 1: Do we really need attributes to enhance feature learning? Samples within a meta-class can be viewed as sharing a latent attribute, so meta-classes correspond to randomized attributes.
  6. Discussion 2: In the hidden layers we may expect some clusters within the dataset, and a cluster may be viewed as a meta-class. Does employing meta-classes amount to enforcing diversity of clustering? Discussion 3: Encode the original one-hot label into a sequential label. Would using an L2 loss (or a KL-divergence loss, etc.) to learn the embedding bring about a similar improvement? (See the last sketch after this list.)
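
Slide 2's recipe, random meta-classes plus a Proxy-NCA loss per embedding, can be made concrete with a short PyTorch sketch. This is a minimal reconstruction, not the paper's code: the helper names (`random_meta_class_maps`, `ProxyNCALoss`) and the balanced round-robin class assignment are my assumptions. Per slide 2, the embedding dimension D equals the number of meta-classes, so `embed_dim == num_meta` in this setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def random_meta_class_maps(num_classes: int, num_meta: int, num_embeddings: int,
                           seed: int = 0) -> torch.Tensor:
    """For each of the L embeddings, randomly assign the original classes to D
    meta-classes (roughly balanced via a shuffled round-robin; an assumption)."""
    g = torch.Generator().manual_seed(seed)
    maps = []
    for _ in range(num_embeddings):
        perm = torch.randperm(num_classes, generator=g)
        assign = torch.empty(num_classes, dtype=torch.long)
        assign[perm] = torch.arange(num_classes) % num_meta
        maps.append(assign)
    return torch.stack(maps)  # (L, num_classes): class id -> meta-class id

class ProxyNCALoss(nn.Module):
    """Proxy-NCA over D meta-class proxies, one learnable proxy per meta-class."""
    def __init__(self, num_meta: int, embed_dim: int):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_meta, embed_dim))

    def forward(self, embeddings: torch.Tensor, meta_labels: torch.Tensor) -> torch.Tensor:
        x = F.normalize(embeddings, dim=1)    # (B, embed_dim)
        p = F.normalize(self.proxies, dim=1)  # (D, embed_dim)
        d = torch.cdist(x, p).pow(2)          # squared distances to proxies, (B, D)
        pos = d.gather(1, meta_labels.unsqueeze(1)).squeeze(1)
        # Exclude the positive proxy from the denominator.
        neg = d.masked_fill(F.one_hot(meta_labels, d.size(1)).bool(), float("inf"))
        # -log( exp(-d_pos) / sum_neg exp(-d_neg) ) = d_pos + logsumexp(-d_neg)
        return (pos + torch.logsumexp(-neg, dim=1)).mean()
```

Each of the L learners `l` would then train on relabeled data, `meta_labels = maps[l][class_labels]`, so every learner sees a different random partition of the original classes.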
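Slide 4's comparison against a 48-model SSR ensemble suggests the L learners are combined at test time. A common way to do this (an assumption here, since the slides don't spell it out) is to concatenate the normalized per-learner embeddings into one descriptor:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ensemble_embed(models, x: torch.Tensor) -> torch.Tensor:
    """Concatenate the L per-learner embeddings into a single descriptor.

    models: list of L trained embedding networks, each mapping inputs -> R^D.
    Returns a (B, L*D) tensor; retrieval then uses cosine or Euclidean distance.
    """
    return torch.cat([F.normalize(m(x), dim=1) for m in models], dim=1)
```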
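Discussion 3 asks whether replacing the one-hot target with a dense per-class code (a "sequential label") and regressing it with an L2 or KL-divergence loss would bring a similar gain. A hypothetical sketch of that experiment; the fixed random `codebook` and the name `sequential_label_loss` are my reading of the slide, not its definition:

```python
import torch
import torch.nn.functional as F

def sequential_label_loss(embeddings: torch.Tensor, labels: torch.Tensor,
                          codebook: torch.Tensor, use_kl: bool = False) -> torch.Tensor:
    """Regress the embedding onto a fixed per-class target code.

    codebook: (num_classes, embed_dim) random codes fixed before training
    (assumption: one dense code per class replaces the one-hot label).
    """
    targets = codebook[labels]  # (B, embed_dim)
    if use_kl:
        # KL-divergence variant: treat both vectors as distributions.
        return F.kl_div(F.log_softmax(embeddings, dim=1),
                        F.softmax(targets, dim=1), reduction="batchmean")
    return F.mse_loss(embeddings, targets)  # L2 variant

# Usage (hypothetical): codebook = torch.randn(num_classes, embed_dim)
```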
