Challenging Common Assumptions
in the Unsupervised Learning of
Disentangled Representations
(ICML 2019 Best Paper)
2019.07.17.
Sangwoo Mo
Outline
• Quick Review
• What is disentangled representation (DR)?
• Prior work on the unsupervised learning of DR
• Theoretical Results
• Unsupervised learning of DR is impossible without inductive biases
• Empirical Results
• Q1. Which method should be used?
• Q2. How to choose the hyperparameters?
• Q3. How to select the best model from a set of trained models?
Quick Review
• Disentangled representation: Learn a representation 𝑧 from the data 𝑥 s.t.
• Contains all the information of 𝑥 in a compact and interpretable structure
• Currently, there is no single formal definition ☹ (there are many definitions of the factors of variation)
* Image from BetaVAE (ICLR 2017)
Quick Review: Prior Methods
• BetaVAE (ICLR 2017)
• Use 𝛽 > 1 in the VAE objective (forcing the posterior toward the factorized Gaussian prior)
• FactorVAE (ICML 2018) & 𝜷-TCVAE (NeurIPS 2018)
• Penalize the total correlation of the representation, which is estimated¹ by
adversarial learning (FactorVAE) or a (biased) mini-batch approximation (𝛽-TCVAE)
• DIP-VAE (ICLR 2018)
• Match 𝑞(𝒛) to the disentangled prior 𝑝(𝒛), where 𝐷 is a (tractable) moment-matching divergence
1. This requires the aggregated posterior 𝑞(𝒛)
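To make the two most common objectives concrete, here is a minimal PyTorch sketch (my own illustration, not the authors' code) of the 𝛽-VAE loss and of FactorVAE's permute-dims trick for sampling from the product of marginals; function and variable names are assumptions, and a Bernoulli decoder with sigmoid outputs is assumed for the reconstruction term.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """beta-VAE objective: reconstruction + beta * KL(q(z|x) || N(0, I)).

    beta = 1 recovers the vanilla VAE; beta > 1 pushes q(z|x) toward the
    factorized Gaussian prior. Sketch assuming a Bernoulli decoder.
    """
    recon = F.binary_cross_entropy(x_recon, x, reduction='sum') / x.size(0)
    # Closed-form KL divergence between a diagonal Gaussian and N(0, I)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon + beta * kl

def permute_dims(z):
    """FactorVAE's sampling trick: shuffling each latent dimension
    independently across the batch yields samples from the product of
    marginals, so a discriminator can estimate the total correlation."""
    perm = [z[torch.randperm(z.size(0)), d] for d in range(z.size(1))]
    return torch.stack(perm, dim=1)
```

A FactorVAE-style discriminator would then be trained to distinguish z from permute_dims(z), and its logits give a density-ratio estimate of the total correlation.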
Quick Review: Evaluation Metrics
• Many heuristics have been proposed to quantitatively evaluate disentanglement
• Basic idea: Factors and representation dimensions should be in one-to-one correspondence
• BetaVAE (ICLR 2017) & FactorVAE (ICML 2018) metric
• Given a factor c_k, generate two (simulated) data points x, x′ with the same c_k but different c_{−k},
then train a classifier to predict c_k from the difference of the representations |z − z′|
• Indeed, the classifier will map the (near-)zero-valued index of |z − z′| to the factor c_k
• Mutual Information Gap (NeurIPS 2018)
• Compute the mutual information between each factor c_k and each latent dimension z_i
• For the dimensions i_1 and i_2 with the highest and second-highest mutual information,
measure the gap between them: I(c_k, z_{i_1}) − I(c_k, z_{i_2})
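A minimal NumPy/scikit-learn sketch of the MIG computation (my own illustration; the function name and equal-width binning scheme are assumptions, and, following the original paper, the gap is normalized by the factor entropy H(c_k), a detail the slide omits):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mutual_information_gap(factors, latents, n_bins=20):
    """MIG sketch, assuming discrete ground-truth factors and continuous
    latents that we discretize into equal-width bins.

    factors: (N, K) int array, latents: (N, D) float array.
    """
    K, D = factors.shape[1], latents.shape[1]
    # Discretize each latent dimension so mutual_info_score applies
    binned = np.stack(
        [np.digitize(latents[:, d],
                     np.histogram_bin_edges(latents[:, d], bins=n_bins)[1:-1])
         for d in range(D)], axis=1)
    gaps = []
    for k in range(K):
        mi = np.array([mutual_info_score(factors[:, k], binned[:, d])
                       for d in range(D)])
        order = np.argsort(mi)
        i1, i2 = order[-1], order[-2]  # highest and second-highest MI dims
        entropy = mutual_info_score(factors[:, k], factors[:, k])  # H(c_k)
        gaps.append((mi[i1] - mi[i2]) / entropy)
    return float(np.mean(gaps))
```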
Theoretical Results
• “Unsupervised learning of disentangled representations is fundamentally impossible
without inductive biases on both the models and the data”
• Theorem. For p(𝒛) = ∏_{i=1}^{d} p(z_i), there exists an infinite family of bijective functions f s.t.
• 𝒛 and f(𝒛) are completely entangled (i.e., ∂f_i(𝒖)/∂u_j ≠ 0 a.e. for all i, j)
• 𝒛 and f(𝒛) have the same marginal distribution (i.e., P(𝒛 ≤ 𝒖) = P(f(𝒛) ≤ 𝒖) for all 𝒖)
• Proof sketch. By construction.
• Let g: supp(𝒛) → [0,1]^d s.t. g_i(𝒗) = P(z_i ≤ v_i)
• Let h: [0,1]^d → ℝ^d s.t. h_i(𝒗) = ψ^{−1}(v_i), where ψ is the c.d.f. of the standard normal distribution
• Then, for any orthogonal matrix 𝑨 with all entries nonzero, the following f satisfies the conditions:
f(𝒖) = (h ∘ g)^{−1}(𝑨 (h ∘ g)(𝒖))
• Corollary. One cannot identify the disentangled representation r(𝒙) (w.r.t. the generative
model G(𝒙|𝒛)), as there exist two equivalent generative models G and G′ with the same
marginal distribution p(𝒙) for which 𝒛′ = f(𝒛) is completely entangled w.r.t. 𝒛 (and so is r(𝒙))
• Namely, inferring the representation 𝒛 from the observation 𝒙 is not a well-defined problem
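As a quick numerical sanity check of this construction (my own sketch, not from the paper), the snippet below builds f for a 2-D uniform prior with a 45° rotation 𝑨 and verifies that the marginals of f(𝒛) match those of 𝒛 even though every output coordinate depends on every input coordinate:

```python
import numpy as np
from scipy import stats

# d = 2, uniform prior p(z) = U(0,1)^2, so g (the per-dim c.d.f.) is the identity.
rng = np.random.default_rng(0)
n = 100_000
z = rng.uniform(size=(n, 2))

theta = np.pi / 4  # rotation matrix with all entries nonzero
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

h, h_inv = stats.norm.ppf, stats.norm.cdf  # h_i(v) = psi^{-1}(v_i)

# f(u) = (h . g)^{-1}(A (h . g)(u)); here g and g^{-1} are identities
f_z = h_inv(h(z) @ A.T)

# Same marginals: A is orthogonal, so A h(z) is again N(0, I) and each
# coordinate of f(z) is again U(0,1). Expect large KS p-values.
for i in range(2):
    print(stats.kstest(f_z[:, i], "uniform").pvalue)

# Entangled: each output depends on every input (corr ~ cos(theta) = 0.707).
print(np.corrcoef(h(z[:, 0]), h(f_z[:, 0]))[0, 1])
```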
Theoretical Results
• 𝛽-VAE learns some decorrelated features, but they are not semantically decomposed
• E.g., the width is entangled with the leg style in 𝛽-VAE
* Image from BetaVAE (ICLR 2017)
Empirical Results
• Q1. Which method should be used?
• A. Hyperparameters and random seeds matter more than the choice of the model
Empirical Results
• Q2. How to choose the hyperparameters?
• A. Selecting the best hyperparameters is extremely hard due to the randomness across seeds
Empirical Results
• Q2. How to choose the hyperparameters?
• A. Also, there is no consistent trend across hyperparameter settings
Empirical Results
• Q2. How to choose the hyperparameters?
• A. Good hyperparameters can often be transferred (e.g., dSprites → color-dSprites)
Rank correlation matrix
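To make the transfer claim operational, one could rank the same hyperparameter configurations by their disentanglement score on two datasets and compute a rank correlation. A hypothetical sketch with made-up scores (illustrative values only, not the paper's numbers):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores of five hyperparameter configs on two datasets
# (illustrative values only, NOT results from the paper).
scores_dsprites = np.array([0.61, 0.48, 0.73, 0.55, 0.69])
scores_color_dsprites = np.array([0.58, 0.44, 0.70, 0.59, 0.66])

rho, p_value = spearmanr(scores_dsprites, scores_color_dsprites)
print(f"Spearman rank correlation: {rho:.2f}")  # high rho -> settings transfer
```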
Empirical Results
• Q3. How to select the best model from a set of trained models?
• A. Unsupervised (training) scores do not correlate with the disentanglement metrics
Unsupervised scores vs. disentanglement metrics
Summary
• TL;DR: Current unsupervised learning of disentangled representations has fundamental limitations!
• Summary of findings:
• Q1. Which method should be used?
• A. Current methods should be rigorously validated (there is no significant difference among them)
• Q2. How to choose the hyperparameters?
• A. No rule of thumb, but transfer across datasets seems to help!
• Q3. How to select the best model from a set of trained models?
• A. (Unsupervised) model selection remains a key challenge!
Following Work & Future Direction
• “Disentangling Factors of Variation Using Few Labels”
(ICLR Workshop 2019, NeurIPS 2019 submission)
• Summary of findings: Using a few labels greatly improves disentanglement!
1. Existing disentanglement metrics + a few labels perform well for model selection,
even though the models are trained in a completely unsupervised manner
2. One can obtain even better results by incorporating the few labels into the learning
process (using a simple supervised regularizer)
• Take-home message: Future research should focus on “how to better utilize inductive biases”
with a few labels, rather than on the previous total-correlation-style approaches