
[PR12] Intro. to GANs, Jaejun Yoo

Introduction to Generative Adversarial Networks. TensorFlowKR PR12 (Paper Readers 12) presentation #1.
video: https://www.youtube.com/watch?v=kLDuxRtxGD8&t=1s
blog (Korean): http://jaejunyoo.blogspot.com



  1. Understanding GANs with PR12. * A review based on Generative Adversarial Nets, Ian Goodfellow et al. 2014. Jaejun Yoo, Ph.D. Candidate @KAIST. PR12, 16 Apr 2017
  2. Hello, I am Jaejun Yoo. Ph.D. Candidate. Medical Image Reconstruction, Topological Data Analysis, EEG. http://jaejunyoo.blogspot.com/
  3. Generative Adversarial Network
  4. Generative Adversarial Network
  5. PREREQUISITES: Generative Models. “FACE IMAGES”
  6. PREREQUISITES: Generative Models. Generated images by a neural network. * Figure adapted from the BEGAN paper (David Berthelot et al., Google), released 31 Mar 2017 (link)
  7. PREREQUISITES: Generative Models. “What I cannot create, I do not understand”
  8. PREREQUISITES: Generative Models. “What I cannot create, I do not understand” If the network can learn how to draw a cat and a dog separately, it must be able to classify them, i.e. feature learning follows naturally.
  9. PREREQUISITES: Taxonomy of Machine Learning. From Yann LeCun (NIPS 2016) and from David Silver, Reinforcement Learning (UCL course on RL, 2015)
  10. PREREQUISITES: y = f(x). Slide adapted from Namju Kim, Kakao Brain (SlideShare, AI Forum, 2017)
  11. PREREQUISITES. Slide adapted from Namju Kim, Kakao Brain (SlideShare, AI Forum, 2017)
  12. PREREQUISITES: Taxonomy of Machine Learning. From Yann LeCun (NIPS 2016) and from David Silver, Reinforcement Learning (UCL course on RL, 2015)
  13.–21. PREREQUISITES (image-only slides). Slides adapted from Namju Kim, Kakao Brain (SlideShare, AI Forum, 2017)
  22. PREREQUISITES. Slide adapted from Namju Kim, Kakao Brain (SlideShare, AI Forum, 2017). * Figure adapted from NIPS 2016 Tutorial: Generative Adversarial Networks, Ian Goodfellow 2016
  23. Generative Adversarial Network
  24. Generative Adversarial Network
  25. SCHEMATIC OVERVIEW. Diagram of the standard GAN: Gaussian noise z is the input to the generator G; the discriminator D receives real data x or generated samples and answers “Real or fake?”
  26. SCHEMATIC OVERVIEW. Diagram of the standard GAN, with the classic analogy: G is a counterfeiter and D is the police.
  27. SCHEMATIC OVERVIEW. Diagram of the standard GAN, with the counterfeiter/police analogy and the distributions Q and P annotated.
  28. SCHEMATIC OVERVIEW. Figure showing the data distribution, the model distribution, and the discriminator over the course of training. * Figure adapted from Generative Adversarial Nets, Ian Goodfellow et al. 2014
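As a concrete companion to this schematic (not part of the slides), here is a minimal sketch of the z -> G -> D pipeline, assuming PyTorch; the MLP architecture, layer sizes, and the 784-dimensional data are illustrative assumptions.

    # Minimal sketch of the z -> G -> D pipeline; all sizes are illustrative.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 100, 784          # e.g. flattened 28x28 images (assumption)

    # Generator G: maps Gaussian noise z to a fake sample G(z).
    G = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, data_dim), nn.Tanh(),
    )

    # Discriminator D: maps a sample x to the probability that it is real.
    D = nn.Sequential(
        nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    z = torch.randn(16, latent_dim)          # Gaussian noise as the input for G
    fake = G(z)                              # generated samples
    p_real = D(fake)                         # D's "real or fake?" score in (0, 1)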
  29. THEORETICAL RESULTS: the minimax problem of GAN. Two-step approach: show that (1) the minimax problem of GAN has a global optimum at p_g = p_data, and (2) the proposed algorithm can find that global optimum.
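For reference, the minimax value function from Goodfellow et al. 2014 that these slides analyze is

    min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 - D(G(z)))]

where D(x) is the probability that x came from the data rather than from the generator.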
  30. THEORETICAL RESULTS: Proposition 1.
  31. THEORETICAL RESULTS: Proposition 1.
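Proposition 1 of the paper, which these slides walk through, states that for a fixed generator G the optimal discriminator is

    D*_G(x) = p_data(x) / (p_data(x) + p_g(x))

so that at the optimum p_g = p_data the discriminator outputs 1/2 everywhere.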
  32. THEORETICAL RESULTS: Main Theorem.
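The main theorem referenced here (Theorem 1 of the paper): plugging the optimal discriminator D*_G into the value function gives the virtual training criterion

    C(G) = max_D V(G, D) = -log 4 + 2 * JSD(p_data || p_g)

which attains its global minimum -log 4 if and only if p_g = p_data, since the Jensen-Shannon divergence is non-negative and zero only when the two distributions coincide.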
  33. THEORETICAL RESULTS: Convergence of the proposed algorithm.
  34. THEORETICAL RESULTS: Convergence of the proposed algorithm. “The subderivatives of a supremum of convex functions include the derivative of the function at the point where the maximum is attained.”
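The “proposed algorithm” is the alternating minibatch SGD of Algorithm 1 in the paper. Below is a sketch that reuses the toy G, D, and latent_dim defined above; the optimizer, learning rate, and k are arbitrary choices here, and the generator loss uses the non-saturating variant that the paper itself recommends in practice.

    # Sketch of one alternating training step (Algorithm 1); hyperparameters are illustrative.
    import torch

    bce = torch.nn.BCELoss()
    opt_D = torch.optim.SGD(D.parameters(), lr=1e-3)
    opt_G = torch.optim.SGD(G.parameters(), lr=1e-3)
    k = 1                                    # discriminator updates per generator update

    def train_step(real_batch):
        n = real_batch.size(0)
        ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
        # (1) k steps ascending D's objective: log D(x) + log(1 - D(G(z))).
        for _ in range(k):
            z = torch.randn(n, latent_dim)
            loss_D = bce(D(real_batch), ones) + bce(D(G(z).detach()), zeros)
            opt_D.zero_grad()
            loss_D.backward()
            opt_D.step()
        # (2) One step updating G; maximizing log D(G(z)) instead of
        #     minimizing log(1 - D(G(z))) avoids early saturation.
        z = torch.randn(n, latent_dim)
        loss_G = bce(D(G(z)), ones)
        opt_G.zero_grad()
        loss_G.backward()
        opt_G.step()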
  35. RESULTS: What can a GAN do? * Figure adapted from DCGAN, Alec Radford et al. 2016 (link)
  36. RESULTS: What can a GAN do? Vector arithmetic (cf. word2vec)
  37. RESULTS: What can a GAN do? Vector arithmetic (cf. word2vec)
  38. RESULTS: What can a GAN do? Vector arithmetic (cf. word2vec). * Figure adapted from DCGAN, Alec Radford et al. 2016 (link)
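In code, the DCGAN-style vector arithmetic amounts to adding and subtracting latent vectors and decoding the result with G. A small sketch using the toy G above; the attribute names are purely illustrative, and in practice each z is averaged over several samples that show the attribute.

    # Latent-space vector arithmetic (DCGAN-style); the z_* stand-ins are illustrative.
    z_smiling_woman = torch.randn(1, latent_dim)   # would be an averaged latent code in practice
    z_neutral_woman = torch.randn(1, latent_dim)
    z_neutral_man   = torch.randn(1, latent_dim)

    z_new = z_smiling_woman - z_neutral_woman + z_neutral_man
    smiling_man = G(z_new)                         # ideally decodes to a smiling man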
  39. RESULTS: “We want to get a disentangled representation space EXPLICITLY.” The neural network appears to understand “rotation”. * Figure adapted from DCGAN, Alec Radford et al. 2016 (link)
  40. DIFFICULTIES
  41. DIFFICULTIES
  42. DIFFICULTIES: CONVERGENCE OF THE MODEL
  43. DIFFICULTIES: CONVERGENCE OF THE MODEL
  44. DIFFICULTIES: HOW TO EVALUATE THE QUALITY?
  45. DIFFICULTIES: HOW TO EVALUATE THE QUALITY?
  46. DIFFICULTIES: MODE COLLAPSE (SAMPLE DIVERSITY). * Slide adapted from NIPS 2016 Tutorial, Ian Goodfellow
  47. RELATED WORKS. * Unrolled GAN, Luke Metz et al. 2016
  48. RELATED WORKS. * Unrolled GAN, Luke Metz et al. 2016
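Roughly, Unrolled GAN replaces the generator's loss f(theta_G, theta_D) with a surrogate f(theta_G, theta_D^K(theta_G, theta_D)), where theta_D^K is the discriminator after K gradient steps on its own objective, so the generator anticipates how the discriminator will react; this is the mechanism the paper proposes to reduce mode collapse.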
  49. RELATED WORKS: Image-to-image translation and super-resolution. * CycleGAN, Jun-Yan Zhu et al. 2017; * SRGAN, Christian Ledig et al. 2017
  50. RELATED WORKS: Find a CODE. * infoGAN, Xi Chen et al. 2016
  51. RELATED WORKS: Find a CODE. * infoGAN, Xi Chen et al. 2016
  52. RELATED WORKS: Diagram of infoGAN. “The information in the latent code c should not be lost in the generation process.” The latent code c and noise z feed the generator G; the discriminator D answers “Real or fake?”, and an auxiliary output estimates the mutual information I. infoGAN imposes an extra constraint to learn a disentangled feature space: maximize I(c, G(z, c)).
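Concretely, infoGAN (Xi Chen et al. 2016) augments the GAN game with a mutual-information regularizer,

    min_G max_D V_I(D, G) = V(D, G) - lambda * I(c; G(z, c))

and, since I(c; G(z, c)) is intractable, maximizes a variational lower bound computed with an auxiliary network Q(c | x) that shares layers with D.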
  53. THANK YOU. jaejun.yoo@kaist.ac.kr
