
[PR12] DANN - Jaejun Yoo



Introduction to domain-adversarial training of neural networks.
(Korean) video: https://www.youtube.com/watch?v=n2J7giHrS-Y&t=1s
Papers: A Survey on Transfer Learning, S. J. Pan, 2009 / A Theory of Learning from Different Domains, S. Ben-David et al., 2010 / Domain-Adversarial Training of Neural Networks, Y. Ganin et al., 2016
Slides I referred to:
http://www.di.ens.fr/~germain/talks/nips2014_dann_slides.pdf
http://john.blitzer.com/talks/icmltutorial_2010.pdf (DA theory part)
https://epat2014.sciencesconf.org/conference/epat2014/pages/slides_DA_epat_17.pdf (DA theory part)
https://www.slideshare.net/butest/ppt-3860159 (DA theory part)



  1. Domain Adversarial Training of Neural Networks. A review, presented with PR12, based on Domain-Adversarial Training of Neural Networks, Y. Ganin et al., 2016. Jaejun Yoo, Ph.D. Candidate @ KAIST. PR12, 4 May 2017
  2. Usually we try to… Training (source), Test (target)
  3. For simplicity, let's consider the binary classification problem.
  4. The usual supervised learning setting: we assume that the training and test domains are the same.
  5. TAXONOMY OF TRANSFER LEARNING
  6. Electronics customer reviews (X) / positive or negative labels (Y)
  7. Electronics customer reviews (X) / positive or negative labels (Y); video game customer reviews (X)
  8. Electronics customer reviews (X) / positive or negative labels (Y); video game customer reviews (X). From the hypothesis space H represented by a neural network…
  9. Electronics customer reviews (X) / positive or negative labels (Y); video game customer reviews (X). From the hypothesis space H represented by a neural network, we learn a classifier h: although the target labels are unknown, we want an h that predicts labels well on both the source (X, Y) and the target (X) domains.
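Restating slides 6-9 in symbols (a minimal sketch; the notation follows the Ben-David et al. 2010 paper cited in the references):

S = \{(x_i, y_i)\}_{i=1}^{n} \sim (\mathcal{D}_S)^n, \qquad
T = \{x_j\}_{j=1}^{n'} \sim (\mathcal{D}_T^X)^{n'}, \qquad
\varepsilon_T(h) = \Pr_{(x,y) \sim \mathcal{D}_T}\big[ h(x) \neq y \big]

That is, the source sample S is labeled, the target sample T is not, and the goal is to pick h from the hypothesis space \mathcal{H} (here a neural network) with a small target risk \varepsilon_T(h), even though only the source risk \varepsilon_S(h) can be estimated from labels.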
  10. DANN
  11. DANN. Ordinary classification: TRY TO CLASSIFY WELL WITH THE EXTRACTED FEATURE! Customer review comments → POSITIVE / NEGATIVE
  12. DANN. Ordinary classification (customer review comments → POSITIVE / NEGATIVE): TRY TO CLASSIFY WELL WITH THE EXTRACTED FEATURE! Domain classification: electronics vs. video games
  13. DANN. Ordinary classification (customer review comments → POSITIVE / NEGATIVE): TRY TO CLASSIFY WELL WITH THE EXTRACTED FEATURE! Domain classification (electronics vs. video games): TRY TO EXTRACT DOMAIN-INDEPENDENT FEATURES!
  14. DANN. Ordinary classification (customer review comments → POSITIVE / NEGATIVE): TRY TO CLASSIFY WELL WITH THE EXTRACTED FEATURE! Domain classification (electronics vs. video games): TRY TO EXTRACT DOMAIN-INDEPENDENT FEATURES! e.g. f: compact, sharp, blurry → easy to discriminate the domain ⇓ f: good, excited, nice, never buy, …
  15. DANN: combining DA and feature learning within one training process; a principled way to learn a good representation based on the generalization guarantee, i.e. minimize the H-divergence directly (no heuristic). "When the DA algorithm works or does not work." "Why it works."
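A minimal code sketch of this three-part architecture (shared feature extractor, label predictor, and domain classifier joined through a gradient reversal layer), assuming PyTorch, toy layer sizes, and random placeholder batches; it only illustrates slides 11-15 and is not the paper's exact network (see the author's tf-dann-py35 repository in the references for a TensorFlow version):

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim=100, feat_dim=50):
        super().__init__()
        # Shared feature extractor
        self.feature = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Label predictor (positive / negative)
        self.label_clf = nn.Linear(feat_dim, 2)
        # Domain classifier (source / target), fed through gradient reversal
        self.domain_clf = nn.Linear(feat_dim, 2)

    def forward(self, x, lambd=1.0):
        f = self.feature(x)
        y_logits = self.label_clf(f)                              # classify well with the extracted feature
        d_logits = self.domain_clf(GradReverse.apply(f, lambd))   # adversarial branch: push f to be domain-independent
        return y_logits, d_logits

# One illustrative training step: label loss on labeled source data only,
# domain loss on both domains (random tensors stand in for real batches).
model = DANN()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
ce = nn.CrossEntropyLoss()

xs, ys = torch.randn(32, 100), torch.randint(0, 2, (32,))   # source batch (labeled)
xt = torch.randn(32, 100)                                    # target batch (unlabeled)

y_s, d_s = model(xs)
_, d_t = model(xt)
domain_labels = torch.cat([torch.zeros(32, dtype=torch.long), torch.ones(32, dtype=torch.long)])
loss = ce(y_s, ys) + ce(torch.cat([d_s, d_t]), domain_labels)
opt.zero_grad(); loss.backward(); opt.step()

The reversal layer lets one backward pass serve both objectives: the domain head descends its classification loss while the feature extractor ascends it, which is what drives the features toward domain independence.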
  16. Previous strategy: find the model that minimizes the training error with as few parameters as possible.
  17. Now the training domain (source) and the testing domain (target) differ, so an additional strategy is needed on top of the previous one.
  18. PREREQUISITE: Different distances. Slide courtesy of Sungbin Lim, DeepBio, 2017
  19. Different distances (cont.)
  20. A Bound on the Adaptation Error. 1. The difference across all measurable subsets cannot be estimated from finite samples. 2. We are only interested in differences related to classification error.
  21. Idea: measure subsets where hypotheses in H disagree; the subsets A are error sets of one hypothesis with respect to another. 1. Always lower than the L1 distance. 2. Computable from finite unlabeled samples (Kifer et al. 2004). 3. Train a classifier to discriminate between source and target data.
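In formulas, the divergence sketched on slide 21 and its "train a domain classifier" estimate read roughly as follows (a hedged reconstruction from the cited Ben-David et al. 2010 paper; see it for the precise statement):

d_{\mathcal{H}}(\mathcal{D}_S^X, \mathcal{D}_T^X)
  = 2 \sup_{h \in \mathcal{H}} \Big| \Pr_{x \sim \mathcal{D}_S^X}[h(x) = 1] - \Pr_{x \sim \mathcal{D}_T^X}[h(x) = 1] \Big|,
\qquad
\hat{d}_{\mathcal{A}} = 2\,(1 - 2\,\epsilon_{\mathrm{dom}})

Here \epsilon_{\mathrm{dom}} is the error of a classifier trained to tell (unlabeled) source samples from target samples: the better such a classifier separates the domains, the larger the estimated divergence.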
  22. A Computable Adaptation Bound: the divergence-estimation complexity term depends on the number of unlabeled samples.
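As a small illustration of point 3 on slide 21 (estimating the divergence from finite unlabeled samples with a domain classifier), here is a sketch assuming scikit-learn and random placeholder features in place of learned representations:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder unlabeled feature matrices; in practice these would be the
# representations extracted from source and target examples.
rng = np.random.RandomState(0)
Xs = rng.normal(0.0, 1.0, size=(500, 20))   # source features
Xt = rng.normal(0.5, 1.0, size=(500, 20))   # target features (shifted domain)

# Domain-discrimination dataset: 0 = source, 1 = target
X = np.vstack([Xs, Xt])
d = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])
X_tr, X_te, d_tr, d_te = train_test_split(X, d, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, d_tr)
err = 1.0 - clf.score(X_te, d_te)           # held-out domain-classification error
proxy_a_distance = 2.0 * (1.0 - 2.0 * err)  # near 0 if the domains overlap, near 2 if separable
print(proxy_a_distance)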
  23. The optimal joint hypothesis is the hypothesis with the minimal combined (source plus target) error; λ denotes that error.
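Putting slides 20-23 together, the generalization guarantee invoked on the next slide takes the following form in Ben-David et al. 2010 (reproduced here as a sketch; consult the paper for the exact constants and the finite-sample terms):

\varepsilon_T(h) \;\le\; \varepsilon_S(h)
  \;+\; \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S^X, \mathcal{D}_T^X)
  \;+\; \lambda,
\qquad
\lambda = \min_{h' \in \mathcal{H}} \big( \varepsilon_S(h') + \varepsilon_T(h') \big)

So a small target risk is obtained by keeping the source risk and the (estimable) divergence term small, provided λ itself is small, i.e. some hypothesis in \mathcal{H} does well on both domains at once.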
  24. THANKS TO THE GENERALIZATION GUARANTEE
  25. THEORETICAL RESULTS
  26. THEORETICAL RESULTS: h ∈ H ⟺ 1 − h ∈ H
  27. THEORETICAL RESULTS
  28. THEORETICAL RESULTS
  29. DANN
  30. DANN
  31. DANN
  32. DANN
  33. DANN ↔
  34. DANN ↔
  35. DANN
  36. SHALLOW DANN
  37. SHALLOW DANN
  38. tSNE RESULTS
  39. REFERENCE
PAPERS
1. A Survey on Transfer Learning, S. J. Pan, 2009
2. A Theory of Learning from Different Domains, S. Ben-David et al., 2010
3. Domain-Adversarial Training of Neural Networks, Y. Ganin et al., 2016
BLOG
1. http://jaejunyoo.blogspot.com/2017/01/domain-adversarial-training-of-neural.html
2. https://github.com/jaejun-yoo/tf-dann-py35
3. https://github.com/jaejun-yoo/shallow-DANN-two-moon-dataset
SLIDES
1. http://www.di.ens.fr/~germain/talks/nips2014_dann_slides.pdf
2. http://john.blitzer.com/talks/icmltutorial_2010.pdf (DA theory part)
3. https://epat2014.sciencesconf.org/conference/epat2014/pages/slides_DA_epat_17.pdf (DA theory part)
4. https://www.slideshare.net/butest/ppt-3860159 (DA theory part)
VIDEO
1. https://www.youtube.com/watch?v=h8tXDbywcdQ (Terry Um deep learning talk)
2. https://www.youtube.com/watch?v=F2OJ0fAK46Q (DA theory part)
3. https://www.youtube.com/watch?v=uc6K6tRHMAA&index=13&list=WL&t=2570s (DA theory part)
