Explaining in Style:
Training a GAN to explain a classifier in StyleSpace
Deep Learning Paper Reading Group
Image Processing Team: 김상현, 고형권, 허다운, 조경진, 김준철, 전선영 (presenter)
2021.10.31
[Image Processing Team]
Oran Lang, Yossi Gandelsman, Michal Yarom
Google Research
Contents
Abstract
StylEx
- Visualize the effect of changing multiple attributes per image.
- Provide image-specific explanations.
Explaining a classifier
Abstract
Dog vs Cat classifier
- We want to explain a dog vs cat classifier.
- We want to understand why this specific image was classified as a cat.
Heat-map
Limitation: heat-maps cannot visualize or explain attributes that are not
spatially localized, such as size or color.
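To make the heat-map approach concrete, here is a toy occlusion-based saliency map (an illustrative sketch, not the method of any specific heat-map paper): each patch is scored by how much the classifier's output drops when that patch is blanked out. The `classifier` used below is a dummy stand-in.

```python
def occlusion_heatmap(image, classifier, patch=2):
    """Toy occlusion saliency: per-pixel score = classifier drop when
    the patch covering that pixel is zeroed out."""
    h, w = len(image), len(image[0])
    base = classifier(image)
    heat = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = [row[:] for row in image]      # copy the image
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    occluded[di][dj] = 0.0            # blank this patch
            drop = base - classifier(occluded)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    heat[di][dj] = drop
    return heat

# Dummy "classifier": mean intensity of the top-left 2x2 quadrant.
clf = lambda img: sum(img[i][j] for i in range(2) for j in range(2)) / 4.0
img = [[1.0] * 4 for _ in range(4)]
heat = occlusion_heatmap(img, clf, patch=2)
```

Only the top-left patch gets a high score here. For a globally distributed attribute (overall color, size), every patch would score near zero, which is exactly the limitation above.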
Related Work
Counterfactual explanation
“If input X had been X’ ➞ classifier output Y would change to Y’”
Limitation: these visualizations change all relevant attributes at once
Our Approach
Automatically discover disentangled attributes
➞ generate counterfactual examples
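The attribute-discovery step can be sketched as a greedy probe over style coordinates (an illustrative stand-in for the paper's search procedure; the dummy classifier and all names below are assumptions): find the single coordinate whose shift most changes the classifier's output.

```python
import math

def top_attribute(style, classifier, delta=1.0):
    """Greedy one-step probe: return the (coordinate, direction) whose
    shift most changes the classifier's output probability."""
    base = classifier(style)
    best, best_effect = None, 0.0
    for idx in range(len(style)):
        for step in (+delta, -delta):
            shifted = list(style)
            shifted[idx] += step          # perturb one coordinate only
            effect = abs(classifier(shifted) - base)
            if effect > best_effect:
                best, best_effect = (idx, step), effect
    return best, best_effect

# Dummy classifier: probability depends only on coordinate 2.
clf = lambda s: 1.0 / (1.0 + math.exp(-s[2]))
best, effect = top_attribute([0.0, 0.0, 0.0, 0.0], clf)
```

Shifting the discovered coordinate in the chosen direction, while leaving all others fixed, is what yields a counterfactual image where a single disentangled attribute changes.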
Method
StylEx architecture
Method
1. Explain the classifier output on any given input image.
2. Ensure that the generative model captures classifier-related attributes
by using the Classifier-Loss.
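A minimal sketch of the Classifier-Loss idea: compare the classifier's output distribution on the input image with its output on the reconstruction, e.g. via a KL divergence (the function names and epsilon smoothing below are assumptions).

```python
import math

def classifier_loss(p_input, p_recon, eps=1e-8):
    """KL divergence between the classifier's probabilities on the
    original image and on its reconstruction; zero when they agree."""
    return sum(p * math.log((p + eps) / (q + eps))
               for p, q in zip(p_input, p_recon))

same = classifier_loss([0.9, 0.1], [0.9, 0.1])  # identical outputs
diff = classifier_loss([0.9, 0.1], [0.5, 0.5])  # reconstruction drifted
```

Penalizing this term during training pushes the generator to preserve exactly the attributes the classifier responds to, which is what makes the learned StyleSpace classifier-specific.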
StylEx architecture
Method
StylEx architecture
Method
Based on the standard GAN training procedure, with several modifications:
1. Train the generator G and an adversarial discriminator D simultaneously.
2. Train an encoder E jointly with the generator G, using a reconstruction loss.
3. Incorporate the classifier into the StyleGAN training procedure
➞ to obtain a classifier-specific StyleSpace.
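Putting the three modifications together, one generator update might look like the following schematic (pure-Python stand-ins: `E`, `G`, `D`, `C`, and `div` are placeholder callables, and the equal loss weighting is an assumption, not the paper's setting).

```python
def generator_step(x, E, G, D, C, div):
    """Schematic StylEx generator loss: adversarial + reconstruction +
    classifier-consistency, with E and G trained jointly and C frozen."""
    w = E(x)                                    # 2. encode to style coords
    x_rec = G(w)                                #    reconstruct the input
    l_adv = -D(x_rec)                           # 1. fool the discriminator
    l_rec = sum(abs(a - b) for a, b in zip(x, x_rec))  # 2. reconstruction
    l_cls = div(C(x), C(x_rec))                 # 3. Classifier-Loss term
    return l_adv + l_rec + l_cls

# With a perfect autoencoder and an indifferent discriminator, the loss is 0.
loss = generator_step(
    [0.2, 0.8],
    E=lambda x: x, G=lambda w: w, D=lambda x: 0.0,
    C=lambda x: [0.5, 0.5],
    div=lambda p, q: sum(abs(a - b) for a, b in zip(p, q)),
)
```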
Results
StylEx
We propose the StylEx model for classifier-based training of a StyleGAN2,
thus driving its StyleSpace to capture classifier-specific attributes.
• Discover classifier-related attributes in StyleSpace coordinates, and
use them for counterfactual explanations.
• Explain a large variety of classifiers across complex real-world domains.
Conclusion