SWWAE: Stacked What-Where Auto-Encoders (Ruijie Quan)

  1. STACKED WHAT-WHERE AUTO-ENCODERS. Ruijie Quan, 2018/08/05
  2. STACKED WHAT-WHERE AUTO-ENCODERS. Existing popular approaches: (1) pre-train auto-encoders in a layer-wise fashion and subsequently fine-tune the entire stack of encoders in a supervised, discriminative manner; (2) the deep Boltzmann machine (DBM) model, which provides a unified mechanism for unsupervised and supervised learning but exhibits poor convergence and mixing properties, ultimately due to its reliance on sampling during training. In contrast, the SWWAE integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and unsupervised learning without relying on sampling during training.
  3. STACKED WHAT-WHERE AUTO-ENCODERS. The "what" variables inform the next layer about the content, with incomplete information about position; the "where" variables inform the corresponding feed-back decoder about where the interesting (dominant) features are located (see the pooling sketch after the slides).
  4. MODEL ARCHITECTURE. Loss function of SWWAE: a weighted sum of three terms, L = λNLL·LNLL + λL2rec·LL2rec + λL2M·LL2M, i.e. a classification (negative log-likelihood) term plus reconstruction terms at the input level and at the intermediate levels (see the loss sketch after the slides). WHAT: the max values produced by each pooling layer, fed forward to the next encoder layer. WHERE: the pooling switch locations (argmax), fed to the corresponding decoder layer.
  5. MODEL ARCHITECTURE. Switching between three modalities (see the mode-switching sketch after the slides): (1) supervised learning: mask out the entire Deconvnet pathway by setting λL2∗ to 0, so the SWWAE falls back to a vanilla Convnet; (2) unsupervised learning: nullify the fully-connected layers on top of the Convnet together with the softmax classifier by setting λNLL = 0, so the SWWAE is equivalent to a deep convolutional auto-encoder; (3) semi-supervised learning: all three terms of the loss are active, and the gradient contributions from the Deconvnet can be interpreted as an information-preserving regularizer.
  6. EXPERIMENTS: NECESSITY OF "WHERE". (1) "where" is critical information for reconstruction; one can barely obtain well-reconstructed images without preserving "where". (2) This experiment can also be viewed as an example of using the SWWAE for generative purposes.
  7. EXPERIMENTS: INVARIANCE AND EQUIVARIANCE (see the translation probe sketch after the slides). (1) "where" learns a highly localized representation: each element of "where" has an approximately linear response to pixel-level translation in either the horizontal or vertical direction, and learns to be invariant to the other direction. (2) "what" learns to be locally stable and exhibits strong invariance to input-level translation. Figure: (a) "what" of horizontally translated digits versus original digits; (b) "where" of horizontally translated digits versus original digits; (c) "what" of vertically translated digits versus original digits; (d) "where" of vertically translated digits versus original digits.
  8. EXPERIMENTS: INVARIANCE AND EQUIVARIANCE (continued).
  9. EXPERIMENTS: CLASSIFICATION PERFORMANCE.
  10. EXPERIMENTS (continued).
  11. EXPERIMENTS: LARGE-SCALE EXPERIMENTS. (To compare with results from other approaches, we perform the experiments in the common experimental setting that adopts only contrast normalization, small translations, and horizontal mirroring for data preprocessing; see the preprocessing sketch after the slides.)
  12. Thank you for your attention.
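
Slide 3 splits the pooling output into "what" (the max values) and "where" (the switch locations). Below is a minimal PyTorch sketch, not the authors' code, of how the two components can be extracted and how a feed-back decoder can use "where"; the tensor shape is a placeholder.

```python
# Minimal sketch (not the authors' code): extracting "what" and "where" from
# max-pooling and letting the decoder use "where" to undo the pooling.
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 16, 8, 8)      # placeholder feature map from a conv layer
what, where = pool(x)             # "what": max values; "where": argmax switches
x_back = unpool(what, where)      # decoder places the values back where they came from
print(what.shape, where.shape, x_back.shape)
```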
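Slide 4's loss combines a classification term with input-level and intermediate-level reconstruction terms. The sketch below follows the λNLL / λL2rec / λL2M weighting described above; the function and argument names (swwae_loss, feats, feats_rec) and the default weights are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the SWWAE objective from slide 4:
# L = lambda_NLL * L_NLL + lambda_L2rec * L_L2rec + lambda_L2M * L_L2M
import torch.nn.functional as F

def swwae_loss(logits, labels, x, x_rec, feats, feats_rec,
               lambda_nll=1.0, lambda_l2rec=1.0, lambda_l2m=0.1):
    l_nll = F.cross_entropy(logits, labels)              # discriminative (softmax) term
    l_l2rec = F.mse_loss(x_rec, x)                       # input-level reconstruction
    l_l2m = sum(F.mse_loss(fr, f)                        # intermediate-level reconstructions
                for f, fr in zip(feats, feats_rec))
    return lambda_nll * l_nll + lambda_l2rec * l_l2rec + lambda_l2m * l_l2m
```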
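Slide 5's three modalities correspond to which loss terms are active. A hypothetical helper (the specific non-zero weight values are assumptions, not from the paper) could select them like this:

```python
# Minimal sketch: selecting the training modality from slide 5 by zeroing loss weights.
def loss_weights(mode):
    if mode == "supervised":        # Deconvnet pathway masked out (lambda_L2* = 0) -> vanilla Convnet
        return dict(lambda_nll=1.0, lambda_l2rec=0.0, lambda_l2m=0.0)
    if mode == "unsupervised":      # classifier nullified (lambda_NLL = 0) -> deep conv auto-encoder
        return dict(lambda_nll=0.0, lambda_l2rec=1.0, lambda_l2m=0.1)
    if mode == "semi-supervised":   # all three loss terms active
        return dict(lambda_nll=1.0, lambda_l2rec=1.0, lambda_l2m=0.1)
    raise ValueError(f"unknown mode: {mode}")

# Usage: total = swwae_loss(logits, labels, x, x_rec, feats, feats_rec,
#                           **loss_weights("semi-supervised"))
```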
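Slide 7 claims that "where" responds roughly linearly to input translation while "what" stays locally stable. The illustrative probe below is a toy stand-in for the paper's experiment (random image, single pooling layer), just to show one way such behaviour could be checked.

```python
# Illustrative probe (not the paper's experiment): translate the input horizontally
# and watch how "what" and "where" from a single pooling layer respond.
import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=4, stride=4, return_indices=True)
x = torch.randn(1, 1, 28, 28)                          # stand-in for an MNIST digit

what0, where0 = pool(x)
for shift in range(1, 4):
    xs = torch.roll(x, shifts=shift, dims=3)           # horizontal pixel-level translation
    what, where = pool(xs)
    # "what" should change little; "where" should drift roughly linearly with the shift
    print(shift,
          (what - what0).abs().mean().item(),
          (where - where0).float().abs().mean().item())
```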
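Slide 11's preprocessing (contrast normalization, small translation, horizontal mirroring) might look like the following torchvision pipeline; the 32x32 crop size and the use of per-channel normalization as a stand-in for contrast normalization are assumptions.

```python
# Minimal sketch (assumed crop size and statistics) of the preprocessing on slide 11.
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),                    # small random translation
    transforms.RandomHorizontalFlip(),                       # horizontal mirroring
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # stand-in for contrast normalization
])
```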
