2. What’s domain adaptation?
Domain adaptation: achieve robustness when the training data (source domain) and the test data (target domain) come from different data-generating distributions.
3. Types of domain adaptation[1]
| Setting | Source domain | Target domain (for training) | Situation |
|---|---|---|---|
| Supervised learning | Data & labels | Data & labels | Can afford the labeling cost |
| Unsupervised learning | Data & labels | Data only | Unlabeled target data is accessible |
| Domain generalization | Data & labels | No data | New user/subject at test time |
4. Learning Transferable Features with Deep Adaptation Networks[4]
Discrepancy loss for the later layers: the multi-kernel maximum mean discrepancy (MK-MMD) is defined as

d_k^2(p, q) = || E_p[phi(x^s)] - E_q[phi(x^t)] ||^2_{H_k}

where phi is the feature mapping function, k is the kernel associated with phi (a convex combination of base kernels, k = sum_u beta_u k_u), and H_k is the reproducing kernel Hilbert space endowed with k.
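A minimal NumPy sketch of this definition, using a biased batch estimate and an equal-weight mixture of RBF kernels (the function names and the fixed bandwidths are illustrative; DAN learns the kernel weights beta_u rather than averaging):

```python
import numpy as np

def rbf_kernel(x, y, gamma):
    # k(x, y) = exp(-gamma * ||x - y||^2), computed for all pairs
    sq = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mk_mmd2(xs, xt, gammas=(0.5, 1.0, 2.0)):
    """Biased estimate of MK-MMD^2 between source features xs and
    target features xt, averaging several RBF kernels."""
    m = 0.0
    for g in gammas:
        k_ss = rbf_kernel(xs, xs, g).mean()  # E_p E_p k(x^s, x^s')
        k_tt = rbf_kernel(xt, xt, g).mean()  # E_q E_q k(x^t, x^t')
        k_st = rbf_kernel(xs, xt, g).mean()  # E_p E_q k(x^s, x^t)
        m += k_ss + k_tt - 2.0 * k_st
    return m / len(gammas)
```

Identical batches give zero discrepancy, and a shifted target distribution gives a larger value than a matched one.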
5. Domain-Adversarial Training of Neural Networks[2]
The gradient reversal layer prevents the feature extractor from
learning domain-specific features
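The layer's behavior can be sketched in plain NumPy: identity in the forward pass, sign-flipped (and scaled) gradient in the backward pass. The class name and `lam` coefficient are illustrative; in DANN the layer is wired into the framework's autograd:

```python
import numpy as np

class GradReverse:
    """Gradient reversal layer sketch: forward pass is the identity,
    backward pass multiplies the incoming gradient by -lam, so the
    feature extractor ascends the domain-classification loss."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # flip the gradient's sign
```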
6. Domain-Adversarial Training of Neural Networks[2]
In the feature space the two domains are inseparable,
which means a domain-invariant feature representation has been learned
8. Adversarial Discriminative Domain Adaptation[3]
1. Train the source CNN and the classifier
2. Fixing the source CNN weights, train the target CNN and the discriminator
3. At test time, use the target CNN with the pre-trained classifier
The target CNN is intended to learn a feature
representation similar to the source CNN's
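The adversarial objectives in step 2 can be sketched in plain NumPy (function names are illustrative; ADDA uses the standard GAN loss, with the label inverted when updating the target CNN so that its features look like source features to the discriminator):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, y):
    # binary cross-entropy; p = predicted P(domain = source)
    eps = 1e-12
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)).mean()

def adda_losses(d_src, d_tgt):
    """d_src / d_tgt: discriminator logits on source / target features.
    Returns (discriminator loss, target-encoder loss)."""
    p_src, p_tgt = sigmoid(d_src), sigmoid(d_tgt)
    # discriminator: label source features 1, target features 0
    loss_d = bce(p_src, np.ones_like(p_src)) + bce(p_tgt, np.zeros_like(p_tgt))
    # target CNN: inverted label -- try to be classified as "source"
    loss_m = bce(p_tgt, np.ones_like(p_tgt))
    return loss_d, loss_m
```

When the discriminator separates the domains well, its own loss is small while the target-encoder loss is large, and vice versa once the target CNN succeeds in fooling it.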
9. Unsupervised Domain Adaptation with Residual Transfer Networks[5]
1. MMD closes the distance between the domains in the encoded feature space
2. The source classifier (fs) contains a residual block, so fs = ft + Δf,
where ft is the target classifier and Δf the residual
3. ft is additionally trained with entropy minimization
This network handles a different P(Y|Z) in each domain.
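The entropy-minimization term in step 3 can be sketched in NumPy (names are illustrative): the mean Shannon entropy of the predicted class distribution, which pushes unlabeled target predictions away from the decision boundary.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def entropy_loss(logits):
    """Mean entropy of the predicted class distribution; minimizing it
    makes (unlabeled) target predictions more confident."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=1).mean()
```

Confident predictions score near zero; uniform predictions score log(C) for C classes.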
10. Asymmetric Tri-training for Unsupervised Domain Adaptation[6]
1. After training the encoder (F) and two source classifiers (F1 and F2),
assign pseudo labels to the target domain
2. Train F and the target classifier (Ft) with the pseudo labels
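The pseudo-labeling rule in step 1 can be sketched as: keep a target sample only when F1 and F2 agree and the prediction is confident (the threshold value and function names are illustrative; the paper's exact filtering criterion differs in detail):

```python
import numpy as np

def pseudo_labels(p1, p2, threshold=0.9):
    """p1, p2: class-probability outputs of the two source classifiers
    (F1, F2) on target samples. Returns the pseudo labels and a mask of
    which samples were kept."""
    y1, y2 = p1.argmax(axis=1), p2.argmax(axis=1)
    conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
    keep = (y1 == y2) & (conf > threshold)  # agreement + confidence
    return y1[keep], keep
```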
11. Maximum Classifier Discrepancy for Unsupervised Domain Adaptation[7]
Previous methods lose the class decision boundary during
domain adaptation because they do not take it into account.
12. Maximum Classifier Discrepancy for Unsupervised Domain Adaptation[7]
A. Train the generator (G) and
the classifiers (F1 and F2) on
the source dataset
B. Fixing G, train F1 and F2
with the negated discrepancy
loss on the target dataset
(i.e., maximize their disagreement)
C. Fixing F1 and F2, train G
to minimize the discrepancy loss
on the target dataset
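The discrepancy loss used in steps B and C can be sketched in NumPy as the L1 distance between the two classifiers' probability outputs, averaged over the batch (names are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def discrepancy(logits1, logits2):
    """MCD discrepancy: mean L1 distance between the softmax outputs
    of the two task classifiers F1 and F2."""
    return np.abs(softmax(logits1) - softmax(logits2)).mean()
```

F1 and F2 are updated to maximize this quantity on target samples (step B), while G is updated to minimize it (step C).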
13. Maximum Classifier Discrepancy for Unsupervised Domain Adaptation[7]
[Figure: feature-space view of Step A, Step B, and Step C]
By using two task-specific classifiers and training them and the generator
adversarially, the decision boundaries retain both domain and task information