Presenter – Vision AI, Kim Minha
Introduction
The paper points out the drawback of DA (target labels must be known) and of DG (unseen target data is not exploited), and proposes a domain-adaptive learning method that leverages unseen target data → novelty
The authors propose a self-domain adaptation framework that uses target data at inference time
Contribution: Adaptor learning based on meta-learning
Training → multiple source domains + meta-learning-based adaptor learning
Test → the adaptor is trained using the unlabeled target dataset
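The training scheme above can be illustrated with a toy MAML-style loop. This is a minimal sketch under heavy assumptions: a single scalar stands in for the adaptor's parameters, and the quadratic per-domain loss, targets, and learning rates are all illustrative, not the paper's actual objectives.

```python
# Toy sketch of meta-learning-based adaptor initialization (assumed MAML-style
# formulation; the scalar "theta" stands in for the adaptor's parameters).

def loss(theta, target):
    return (theta - target) ** 2          # stand-in per-domain adaptor loss

def grad(theta, target):
    return 2.0 * (theta - target)

def meta_step(theta, tr_target, te_target, lr_inner=0.1, lr_outer=0.05):
    # Meta-train: one inner gradient step of the adaptor on train samples.
    adapted = theta - lr_inner * grad(theta, tr_target)
    # Meta-test: evaluate the adapted adaptor on valid samples and
    # differentiate through the inner step (exact for this quadratic loss).
    d_adapted_d_theta = 1.0 - 2.0 * lr_inner
    outer_grad = grad(adapted, te_target) * d_adapted_d_theta
    return theta - lr_outer * outer_grad

theta = 2.0                                  # arbitrary adaptor init
domain_pairs = [(1.0, 1.1), (-1.0, -0.9)]    # (meta-train, meta-test) targets
for _ in range(100):
    for tr, te in domain_pairs:
        theta = meta_step(theta, tr, te)
# theta is now an initialization from which one inner step adapts well
# to either source domain's held-out samples.
```

The outer update differentiates through the inner update, which is the mechanism that makes the learned initialization "adapt well after one step" rather than merely fit the source domains.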
Comparison with DG and DA
Meta train on Train-samples
Meta train on Valid-samples (to prevent the feature mode collapse)
Meta test on Valid-samples
Adapt Step – only the 'Adaptor' is trained
Given the well-initialized adaptor, during testing, we first optimize the adaptor using only the unlabeled test domain data, with all the other parameters fixed, using Equation 7.
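The adapt step can be sketched as follows. This is a toy illustration, not the paper's method: the one-parameter "backbone" is hypothetical, and entropy minimization is used as a stand-in for the paper's adaptor loss (Equation 7 is not reproduced here). Only the adaptor parameter is updated; everything else stays frozen.

```python
import math

# Toy sketch of test-time adaptor optimization: update only the adaptor on
# unlabeled target data, with the backbone frozen. The entropy loss below is
# an illustrative stand-in for the paper's adaptor loss.

W_FROZEN = 1.0  # hypothetical backbone parameter, fixed during testing

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def entropy_loss(adaptor, batch):
    # Mean binary entropy of predictions: lower entropy = more confident.
    total = 0.0
    for x in batch:
        p = sigmoid(adaptor * W_FROZEN * x)
        p = min(max(p, 1e-7), 1.0 - 1e-7)  # clamp to avoid log(0)
        total += -(p * math.log(p) + (1.0 - p) * math.log(1.0 - p))
    return total / len(batch)

def adapt(adaptor, batch, lr=0.5, steps=50, eps=1e-4):
    # Gradient descent on the adaptor alone (finite-difference gradient).
    for _ in range(steps):
        g = (entropy_loss(adaptor + eps, batch)
             - entropy_loss(adaptor - eps, batch)) / (2.0 * eps)
        adaptor -= lr * g
    return adaptor

unlabeled_target = [0.5, 1.0, -0.8, 1.5]   # stand-in unlabeled test batch
adapted = adapt(1.0, unlabeled_target)
```

The key structural point matches the slide: the optimization at inference touches only the adaptor, so the source-trained backbone is never perturbed by the unlabeled target data.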
Experiments
● Finally, their conclusions are:
● This paper introduces meta-learning-based adaptor learning for adaptor initialization.
● This paper introduces an effective adaptor loss to learn the adaptor in an unsupervised manner.
Conclusion
Opinion
● The method operates under the assumption that an unlabeled target dataset is available.
● Does the model need to be retrained for every new target dataset to be predicted?
● RGB × HSV → training on 6 channels: is this a fair experimental condition?
● Could multiple target domains be handled as well?

[AAAI21] Self-Domain Adaptation for Face Anti-Spoofing


Editor's Notes

  • #5 Three main steps: adaptor learning with meta-learning, adaptor optimization at inference, and final testing.
  • #7 To prevent feature mode collapse, they use the Spectral Restricted Isometry Property (SRIP) regularization; σ denotes the spectral norm (spectral normalization).
  • #10 Regularized Fine-grained Meta Face Anti-spoofing (RFM): RFM utilizes meta-learning to learn domain-invariant features, while we utilize meta-learning to learn a domain adaptor which can adapt to the target domain efficiently. We believe that the two approaches are complementary, and combining them can further improve the performance. We leave it as future work.
  • #11 Ours wo/meta denotes our method without the first step (adaptor learning with meta-learning): the adaptor's parameters are initialized randomly and optimized directly at inference using the proposed adaptor loss. Ours wo/adapt denotes our method without the second step (adaptor optimizing): it neglects the information of the test domain and directly predicts on the test domain using the adaptor learned in the first step. Baseline denotes learning the model from the source domain data without the adaptor and predicting directly on the test domain. The results of Ours wo/meta verify that the adaptor pre-learned through meta-learning during training benefits adaptation at inference. The results of Ours wo/adapt verify that further optimizing the adaptor to leverage the distribution of the test domain is important to further improve the performance.
  • #13 HSV: hue, saturation, value
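The SRIP regularization mentioned in note #7 can be sketched as follows, assuming its common formulation: penalize σ_max(WᵀW − I), the spectral norm of the deviation from column-orthogonality, estimated with a few steps of power iteration. The matrices and iteration count below are illustrative, not taken from the paper.

```python
import numpy as np

# Sketch (assumed formulation) of SRIP regularization: penalize the spectral
# norm of W^T W - I, estimated via power iteration on the symmetric matrix M.

def srip_penalty(W, n_iter=30):
    M = W.T @ W - np.eye(W.shape[1])   # zero iff columns of W are orthonormal
    rng = np.random.default_rng(0)
    v = rng.normal(size=W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):            # power iteration on symmetric M
        v = M @ v
        nv = np.linalg.norm(v)
        if nv < 1e-30:                 # M is numerically zero: no penalty
            return 0.0
        v /= nv
    return float(abs(v @ M @ v))       # ~ largest-magnitude eigenvalue of M

# Orthonormal columns -> near-zero penalty; a collapsed (rank-1) weight
# matrix -> large penalty, which is why SRIP discourages feature collapse.
Q, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(8, 4)))
collapsed = np.ones((8, 4))
```

Because modes collapsing corresponds to weight columns becoming linearly dependent, pushing WᵀW toward the identity keeps the learned features spread out.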