Deep randomized embedding


  1. ECCV 2018, SOTA on CARS196, 9 pages
  2. The number of embeddings: L. The dimension of each embedding: D, where D equals the number of meta-classes. A Proxy-NCA loss is used to train each embedding (see the sketch after this list).
  3. The dimension of each embedding: D. The number of embeddings: L.
  4. [Results table over the embedding dimension D and the number of embeddings L.] Higher than our SSR with a 48-model ensemble (see the retrieval sketch after this list).
  5. Discussion 1: Do we really need attributes to enhance feature learning? Samples within a meta-class can be viewed as sharing a latent attribute, so meta-classes correspond to randomized attributes.
  6. Discussion 2: In the hidden layers we may expect some clusters within the dataset, and a cluster may be viewed as a meta-class. Is employing meta-classes equivalent to enforcing diversity of clustering? Discussion 3: Encode the original one-hot label into a sequential label. Would learning the embedding with an L2 loss (or a KL-divergence loss, etc.) bring about a similar improvement? (A sketch of this idea follows below.)
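
For slide 2, here is a minimal PyTorch sketch of the two ingredients named there: a random partition of the original classes into D meta-classes (one independent partition per learner) and a Proxy-NCA loss over one learnable proxy per meta-class. The function names and the round-robin partitioning are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F

def random_meta_classes(num_classes, D, seed):
    # Randomly partition the original classes into D meta-classes.
    # Returns a lookup table: original class id -> meta-class id in [0, D).
    # Each of the L learners uses its own seed, hence its own partition.
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_classes, generator=g)
    table = torch.empty(num_classes, dtype=torch.long)
    table[perm] = torch.arange(num_classes) % D   # deal classes round-robin into D groups
    return table

def proxy_nca_loss(emb, meta_labels, proxies):
    # Proxy-NCA over D proxies (one per meta-class; an nn.Parameter in practice).
    #   emb:         (B, d) embeddings
    #   meta_labels: (B,)   meta-class id of each sample
    #   proxies:     (D, d) learnable proxy vectors
    emb = F.normalize(emb, dim=1)
    proxies = F.normalize(proxies, dim=1)
    dist = torch.cdist(emb, proxies) ** 2                       # (B, D) squared distances
    pos = dist.gather(1, meta_labels[:, None]).squeeze(1)       # distance to own proxy
    neg = dist.scatter(1, meta_labels[:, None], float("inf"))   # exclude own proxy
    # -log( exp(-d(x, p_y)) / sum_{z != y} exp(-d(x, p_z)) )
    return (pos + torch.logsumexp(-neg, dim=1)).mean()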
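For slides 3 and 4, a sketch of how the L independent D-dimensional embeddings can be combined at test time: concatenate the per-model codes and rank the gallery by dot product. `models` stands for the L trained embedding networks; the helper names are hypothetical.

def ensemble_embed(models, images):
    # Concatenate the L independent D-dim embeddings into one descriptor.
    #   models: list of the L trained embedding networks (one per partition)
    #   images: (B, C, H, W) batch
    with torch.no_grad():
        parts = [F.normalize(m(images), dim=1) for m in models]
    return torch.cat(parts, dim=1)                 # (B, L * D)

def retrieve(query_desc, gallery_desc, k=1):
    # Rank the gallery by dot product; with per-part-normalized codes this is
    # proportional to the mean per-model cosine similarity.
    sims = query_desc @ gallery_desc.T             # (num_query, num_gallery)
    return sims.topk(k, dim=1).indices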
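Discussion 3 on slide 6 can be made concrete as follows: replace the C-dimensional one-hot target with a fixed shorter code per class and regress the embedding onto it with an L2 loss. This is only a sketch of the question the slide raises (whether such a regression target yields a similar improvement); the random-code construction and all names here are assumptions.

def make_label_codes(num_classes, code_dim, seed=0):
    # One fixed random unit-norm target code per original class,
    # standing in for the "sequential label" of Discussion 3.
    g = torch.Generator().manual_seed(seed)
    return F.normalize(torch.randn(num_classes, code_dim, generator=g), dim=1)

def l2_embedding_loss(emb, labels, codes):
    # Pull each sample's embedding toward its class's fixed target code.
    return F.mse_loss(F.normalize(emb, dim=1), codes[labels])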
