Deep randomized embedding


yifan sun


  1. ECCV 2018, state of the art on CARS196, 9 pages.
  2. The number of embeddings: L. The dimension of each embedding: D. D equals the number of meta-classes. Proxy-NCA loss is used for training each embedding.
  3. The dimension of each embedding: D; the number of embeddings: L.
  4. Results for varying D and L: higher than our SSR with a 48-model ensemble.
  5. Discussion 1: Do we really need attributes to enhance feature learning? Samples within a meta-class can be viewed as sharing a latent attribute, so meta-classes correspond to randomized attributes.
  6. Discussion 2: In hidden layers, we may expect some clusters within the dataset. A cluster may be viewed as a meta-class. Does employing meta-classes amount to enforcing diversity of clustering? Discussion 3: Encode the original one-hot label into a sequential label. Would using an L2 loss (or KL-divergence loss, etc.) for learning the embedding bring about a similar improvement?
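The setup on slide 2 can be sketched in a few lines: each of the L ensemble members gets its own random assignment of the C original classes to D meta-classes, and each member is trained with a Proxy-NCA loss against D meta-class proxies. This is a minimal NumPy sketch under toy sizes (L=4, D=3, C=10); the function names and the squared-Euclidean proxy distance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def random_meta_classes(num_classes, num_meta, seed):
    # Randomly assign each of the original classes to one of
    # `num_meta` meta-classes (one independent mapping per ensemble member).
    rng = np.random.default_rng(seed)
    return rng.integers(0, num_meta, size=num_classes)

def proxy_nca_loss(embedding, meta_label, proxies):
    # Proxy-NCA on meta-classes: pull the embedding toward its own
    # meta-class proxy, push it away from all other proxies.
    # loss = d(x, p_y) + log(sum_{z != y} exp(-d(x, p_z)))
    d = np.sum((proxies - embedding) ** 2, axis=1)  # squared distances
    neg = np.delete(d, meta_label)                  # distances to wrong proxies
    return d[meta_label] + np.log(np.sum(np.exp(-neg)))

# Toy sizes (hypothetical): L members, D meta-classes, C original classes.
L, D, C = 4, 3, 10
mappings = [random_meta_classes(C, D, seed=i) for i in range(L)]
```

At test time, the L separately trained D-dimensional embeddings would be concatenated into a single descriptor, which is where the ensemble gain on slide 4 comes from.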
