Exploring Patch-wise Semantic Relation for Contrastive Learning
in Image-to-Image Translation Tasks
Chanyong Jung*, Gihyun Kwon*, Jong Chul Ye (*: co-first authors)
Bio Imaging, Signal Processing & Learning Lab, KAIST
CVPR 2022
Introduction
 Heterogeneous patch-wise semantic relation
We claim:
1. The patch-wise semantic relation should be preserved to enhance spatial correspondence.
2. Negative samples for the contrastive loss should be treated differently, since they have heterogeneous semantics.
Proposed method:
 Consistency of the semantic relation
 Contrastive loss using hard negative mining guided by the semantic relation
We propose: consistency of the contrastive semantic relation, with hard negative mining.
[Figure: patches from the horse are semantically related to one another and semantically unrelated to patches from the background. A shared encoder maps the input and output images to patch embeddings $\{z_k\}_{k=1}^{K}$ and $\{w_k\}_{k=1}^{K}$ in a common embedding space, where the patch-wise relations of the two sets are kept consistent.]
Method
1. Consistency of the semantic relation distribution
[Figure: for the k-th patch, the similarity distribution $P_k$ is computed over the input embeddings $z_1, \dots, z_K$, and $Q_k$ over the output embeddings $w_1, \dots, w_K$; consistency is enforced between the two distributions.]
The semantic relation of the $i$-th patch to the $k$-th patch is defined as:

$$P_k(i) = \frac{\exp(z_k^\top z_i)}{\sum_{j=1}^{K} \exp(z_k^\top z_j)}, \qquad Q_k(i) = \frac{\exp(w_k^\top w_i)}{\sum_{j=1}^{K} \exp(w_k^\top w_j)}$$

The Jensen-Shannon divergence (JSD) between $P_k$ and $Q_k$ is minimized for semantic relation consistency (SRC):

$$\mathcal{L}_{SRC} = \sum_{k=1}^{K} \mathrm{JSD}(P_k \,\|\, Q_k)$$
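A minimal PyTorch sketch of the SRC loss follows, assuming z and w are (K, D) patch embeddings from the input and output images; the function name and the absence of a temperature are illustrative choices, not the authors' released code.

import torch
import torch.nn.functional as F

def src_loss(z, w, eps=1e-12):
    # Rows of p and q are the similarity distributions P_k and Q_k.
    p = F.softmax(z @ z.t(), dim=1)   # (K, K)
    q = F.softmax(w @ w.t(), dim=1)   # (K, K)
    m = 0.5 * (p + q)                 # mixture distribution for the JSD
    kl_pm = (p * (p.clamp_min(eps).log() - m.clamp_min(eps).log())).sum(dim=1)
    kl_qm = (q * (q.clamp_min(eps).log() - m.clamp_min(eps).log())).sum(dim=1)
    return (0.5 * (kl_pm + kl_qm)).sum()   # sum of JSD(P_k || Q_k) over k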
Method
2. Contrastive loss with hard negative mining
Sampling negatives $z^-$ for a query $z$ is modeled by the von Mises-Fisher distribution:

$$z^- \sim q_{z^-}(z^-; z, \gamma) = \frac{1}{N_q} \exp(\gamma\, z^\top z^-)\, p_Z(z^-)$$

[Figure: embedding space of the input image $\mathcal{X}$, showing the query point $z$, the negative samples, and the hard negatives concentrated near the query.]
We use the contrastive loss given by the decoupled InfoNCE (DCE) with hard negatives (hDCE). For a positive pair $(w, z)$ and negative pairs $(w, z^-)$:

$$\mathcal{L}_{hDCE}(\gamma, \tau) = -\log\frac{\exp(w^\top z/\tau)}{\mathbb{E}_{q}\left[\exp(w^\top z^-/\tau)\right]} = -\log\frac{\exp(w^\top z/\tau)}{\mathbb{E}_{p}\left[\exp(\gamma\, z^\top z^-)\exp(w^\top z^-/\tau)\right]}$$

$\tau$: temperature parameter; $\gamma$: hardness of the negatives.
 Negatives are weighted by their semantic closeness, $\exp(\gamma\, z^\top z^-)$.
 The hardness of the negatives is explicitly controlled by $\gamma$: we train the networks by curriculum learning with a varying $\gamma$.
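Below is a minimal PyTorch sketch of the hDCE loss under the same assumptions (z, w are (K, D) normalized patch embeddings, with (w_k, z_k) as positive pairs and {z_j : j != k} as negatives); approximating the expectation over the vMF-tilted distribution q by re-weighting the in-batch negatives is an illustrative choice, not the authors' implementation.

import torch
import torch.nn.functional as F

def hdce_loss(z, w, gamma=1.0, tau=0.07):
    K = z.size(0)
    eye = torch.eye(K, dtype=torch.bool, device=z.device)
    logits = (w @ z.t()) / tau                # w_k^T z_j / tau for all pairs
    pos = logits.diagonal()                   # positive-pair logits w_k^T z_k / tau
    # Importance weights proportional to exp(gamma * z_k^T z_j), negatives only.
    neg_w = F.softmax((gamma * (z @ z.t())).masked_fill(eye, float('-inf')), dim=1)
    # The weighted sum approximates E_q[exp(w^T z^- / tau)].
    denom = (neg_w * logits.exp().masked_fill(eye, 0.0)).sum(dim=1)
    return (-pos + denom.log()).mean()

In training, gamma is increased progressively (curriculum learning), so the negatives become harder as the network matures.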
Results
𝐺𝑒𝑛𝑐−𝑑𝑒𝑐
Input
𝐺𝑒𝑛𝑐
𝐹
𝐹
hDCE +SRC
Output
𝐺𝑒𝑛𝑐−𝑑𝑒𝑐
Input
AdaIN
𝐺𝑒𝑛𝑐
𝐹
𝐹
hDCE +SRC
Output
𝐹
hDCE
+SRC
𝐺𝑡𝑒𝑎𝑐ℎ𝑒𝑟
𝐺𝑠𝑡𝑢𝑑𝑒𝑛𝑡 𝐹
fixed
Input Output
𝑧𝑘 𝑘=1
𝐾 𝑤𝑘 𝑘=1
𝐾
𝑧𝑘 𝑘=1
𝐾
𝑤𝑘 𝑘=1
𝐾
𝑧𝑘 𝑘=1
𝐾
𝑤𝑘 𝑘=1
𝐾
(a) Single-modal translation (c) GAN Compression
(b) Multi-modal translation
 Three tasks for the experiments:
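As a rough sketch of how the pieces fit together, a generator objective could combine the adversarial loss with the two proposed terms; the weights lambda_hdce and lambda_src and the function below are hypothetical, not the paper's exact configuration.

def generator_loss(gan_loss, z, w, gamma, lambda_hdce=1.0, lambda_src=1.0):
    # gan_loss: the usual adversarial term; z, w: patch embeddings from F.
    return gan_loss + lambda_hdce * hdce_loss(z, w, gamma=gamma) \
                    + lambda_src * src_loss(z, w)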
Results
1. Single-modal translation
 The output is improved by retaining the patch-wise semantic relation.
[Figure: source images and our outputs.]
Results
2. Multi-modal translation
 The output is improved by retaining the patch-wise semantic relation, for both latent-guided and reference-guided translation.
 Diverse outputs are obtained from random style codes for each class.
[Figure: source images and our outputs; an input translated to Spring, Summer, Autumn, and Winter styles.]
Results
3. GAN compression
 Our student inherits the patch-wise semantic relation from the teacher.
 The output shows improved correspondence with the teacher.
[Figure: input, teacher, ours, and baseline outputs on Horse-to-Zebra, Map-to-Satellite, and Cityscapes.]
Results
- Similarity map
 Semantic relation consistency (SRC) enhances the input-output correspondence.
 Hard negative mining (Hneg) sharpens the semantic relations.
[Figure: similarity maps for a query point on the input, comparing InfoNCE, DCE, DCE + SRC, and DCE + Hneg + SRC.]
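A similarity map like those above can be produced by correlating the encoder feature at the query location with every other location; the tensor layout and function name below are assumptions for illustration, not the authors' visualization code.

def similarity_map(feat, qy, qx):
    # feat: (C, H, W) encoder feature map; (qy, qx): query pixel location.
    C, H, W = feat.shape
    flat = F.normalize(feat.reshape(C, -1), dim=0)   # unit-norm embedding per location
    query = flat[:, qy * W + qx]                     # (C,) query embedding
    return (query @ flat).reshape(H, W)              # cosine similarity to all locations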
Thank You
Jong Chul Ye (e-mail: jong.ye@kaist.ac.kr)
Gihyun Kwon (e-mail: cyclomon@kaist.ac.kr)
Chanyong Jung (e-mail: jcy@kaist.ac.kr)
Editor's Notes
  1. Hi, I'm Chanyong Jung. I would like to introduce our work investigating the patch-wise semantic relation for image translation tasks.
  2. The motivation of the work is the heterogeneous semantic relations between the patches of a single image. We claim that the semantic relation should be preserved during the image translation procedure, and that the negative samples for the patch-wise contrastive loss should be treated differently. Following these claims, our method has two parts. In the first part, we enforce the consistency of the semantic relation between the input and the output. Next, we introduce a contrastive loss using hard negatives sampled according to the semantic relation.
  3. We first impose consistency to enhance the spatial correspondence between the input and the output. The figure shows the semantic relation between the k-th patch and the other patches. z and w denote the embedding vectors from the input and the output. The semantic relation is defined as a similarity distribution: P_k for the input and Q_k for the output. We minimize the Jensen-Shannon divergence between the two distributions to enforce consistency.
  4. Next, we introduce the contrastive loss with hard negative mining, which takes the semantic relation into account. We sample the hard negatives from the von Mises-Fisher distribution, as shown in the figure. The contrastive loss is then defined by the decoupled InfoNCE with hard negatives. In the loss function, the hardness of the negative mining is controlled by gamma. Using gamma, we apply curriculum learning by progressively increasing the hardness during training.
  5. We verified our method on three tasks: single-modal translation, multi-modal translation, and GAN compression. For GAN compression, we distill the patch-wise relational knowledge to enhance the spatial correspondence between the teacher and the student.
  6. For single-modal translation, we verified our method on the horse-to-zebra and Cityscapes datasets. Our method improves the output in both qualitative and quantitative evaluations. Specifically, the consistency of the semantic relation enhances the spatial correspondence between the images and yields output images with better visual quality.
  7. For multi-modal translation, the visual quality and the evaluation metrics also confirm the improvement from our method. Similarly to single-modal translation, the correspondence between the input and the output is enhanced, which results in satisfactory visual quality of the output images. We also demonstrate diverse outputs from random style codes for each class.
  8. In the case of GAN compression, we applied our method to enhance the correspondence between the teacher and the student. In our method, the student model additionally receives the patch-wise relational knowledge, which leads to better performance. The visual assessment also confirms the enhanced correspondence between the teacher and the student, and the quantitative scores demonstrate the improvement.
  9. Lastly, we demonstrate the consistency of the semantic relation by showing the similarity maps. As expected, the SRC loss enhances the consistency of the semantic relation. Also, the proposed hard negative mining sharpens the semantic relation, reducing redundant similarity.
  10. Thank you for your attention.