
Variants of GANs - Jaejun Yoo


A picked-up list of GAN variants that provided insights to the community. (GANs-Improved GANs-DCGAN-Unrolled GAN-InfoGAN-f-GAN-EBGAN-WGAN)
After a short introduction to GANs, we look at the remaining difficulties of the standard GAN and their temporary solutions (Improved GANs). The following slides cover other solutions that tried to resolve the problems in various ways, e.g. careful architecture selection (DCGAN), a slight change in the update rule (Unrolled GAN), an additional constraint (InfoGAN), generalization of the loss function to various divergences (f-GAN), a new framework based on energy-based models (EBGAN), and a further generalization of the loss function (WGAN).



  1. 1. VARIANTS OF GANs Jaejun Yoo Ph.D. Candidate @KAIST 13th May, 2017 (understood from the perspective of a novice graduate student)
  2. 2. Hello, I am Jaejun Yoo - Ph.D. Candidate - Medical Image Reconstruction, Topological Data Analysis, EEG - http://jaejunyoo.blogspot.com/ - CVPR 2017 NTIRE Challenge: Ranked 3rd
  3. 3. Goals of this lecture 1. A deeper understanding of GANs 2. Building a foundation for following the subsequent flow of GAN research • Understand the problems of the standard GAN and the reasons behind them • Introduce variants of GAN, focusing on papers that resolve the major problems or suggest a new direction within a larger framework
  4. 4. BACKGROUND
  5. 5. PREREQUISITES Generative Models “FACE IMAGES”
  6. 6. PREREQUISITES Generative Models * Figure adopted from BEGAN paper released at 31. Mar. 2017 David Berthelot et al. Google (link) Generated Images by Neural Network
  7. 7. PREREQUISITES Generative Models “What I cannot create, I do not understand”
  8. 8. PREREQUISITES Generative Models “What I cannot create, I do not understand” If the network can learn how to draw a cat and a dog separately, it must be able to classify them, i.e. feature learning follows naturally.
  9. 9. PREREQUISITES Taxonomy of Machine Learning From Yann LeCun (NIPS 2016); from David Silver, Reinforcement Learning (UCL course on RL, 2015)
  10. 10. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017) y = f(x)
  11. 11. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  12. 12. PREREQUISITES Taxonomy of Machine Learning From Yann LeCun (NIPS 2016); from David Silver, Reinforcement Learning (UCL course on RL, 2015)
  13. 13. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  14. 14. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  15. 15. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  16. 16. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  17. 17. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  18. 18. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  19. 19. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  20. 20. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  21. 21. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017)
  22. 22. PREREQUISITES Slide adopted from Namju Kim, Kakao brain (SlideShare, AI Forum, 2017) * Figure adopted from NIPS 2016 Tutorial: GAN paper, Ian Goodfellow 2016
  23. 23. GAN
  24. 24. SCHEMATIC OVERVIEW z G D x Real or Fake? Diagram of Standard GAN Gaussian noise as an input for G
  25. 25. z G D x Real or Fake? Diagram of Standard GAN counterfeiter vs. police SCHEMATIC OVERVIEW
  26. 26. z G D x Real or Fake? Diagram of Standard GAN counterfeiter vs. police, distributions P and Q SCHEMATIC OVERVIEW
  27. 27. Diagram of Standard GAN Data distribution Model distribution Discriminator SCHEMATIC OVERVIEW * Figure adopted from Generative Adversarial Nets, Ian Goodfellow et al. 2014
  28. 28. Minimax problem of GAN THEORETICAL RESULTS min_G max_D V(D, G) = E_{x~p_data}[log D(x)] + E_{z~p_z}[log(1 − D(G(z)))] Show that… 1. The minimax problem of GAN has a global optimum at p_g = p_data 2. The proposed algorithm can find that global optimum TWO-STEP APPROACH
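To make the two-step minimax concrete, here is a minimal PyTorch sketch of the alternating updates. The layer sizes, optimizers, and learning rates are illustrative assumptions rather than the paper's settings, and the generator step uses the common non-saturating heuristic instead of the pure minimax loss:

```python
# Minimal GAN training sketch (illustrative sizes and hyperparameters).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                      # real: (batch, 784), scaled to [-1, 1]
    b = real.size(0)
    fake = G(torch.randn(b, 100))

    # Step 1: maximize V w.r.t. D <=> minimize BCE with labels real=1, fake=0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    loss_d.backward()
    opt_d.step()

    # Step 2: update G (non-saturating heuristic: maximize log D(G(z))).
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
```

In practice the discriminator step is run once or a few times per generator step, which is exactly the alternating scheme the convergence argument on the next slides assumes.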
  29. 29. THEORETICAL RESULTS Proposition 1.
  30. 30. THEORETICAL RESULTS Main Theorem
  31. 31. THEORETICAL RESULTS Convergence of the proposed algorithm
  32. 32. SUMMARY • Supervised / Unsupervised / Reinforcement Learning • Generative Models • Variational Inference Technique • Adversarial Training • Reduce the gap between Q_model and P_data TAKE HOME KEYPOINTS
  33. 33. RELATED WORKS (APPLICATIONS) * CycleGAN Jun-Yan Zhu et al. 2017 * SRGAN Christian Ledig et al. 2017 Super-resolution Domain Adaptation Img2Img Translation
  34. 34. DIFFICULTIES
  35. 35. DIFFICULTIES
  36. 36. DIFFICULTIES CONVERGENCE OF THE MODEL
  37. 37. DIFFICULTIES MODE COLLAPSE (SAMPLE DIVERSITY) * Slide adopted from NIPS 2016 Tutorial, Ian Goodfellow
  38. 38. DIFFICULTIES
  39. 39. DIFFICULTIES TEMPORARY SOLUTION
  40. 40. DIFFICULTIES HOW TO EVALUATE THE QUALITY?
  41. 41. DIFFICULTIES HOW TO EVALUATE THE QUALITY?
  42. 42. DIFFICULTIES HOW TO EVALUATE THE QUALITY? TEMPORARY SOLUTION
  43. 43. SUMMARY “TRAINING GAN IS HARD” • Power balance (NO learning) • Convergence (oscillation) • Mode collapse • Evaluation (GAN training loss is intractable)
  44. 44. SUMMARY “TRAINING GAN IS HARD” • Power balance (NO learning) • Convergence (oscillation) • Mode collapse • Evaluation (GAN training loss is intractable) HOW TO SOLVE THESE PROBLEMS?
  45. 45. DCGAN LET’S CAREFULLY SELECT THE ARCHITECTURE!
  46. 46. MOTIVATION TRAINING IS TOOOO HARD… (Ahhh…it just does not work…orz…)
  47. 47. SCHEMATIC OVERVIEW Guideline for stable learning
  48. 48. SCHEMATIC OVERVIEW Guideline for stable learning “However, after extensive model exploration we identified a family of architectures that resulted in stable training across a range of datasets and allowed for higher resolution and deeper generative models.”
  49. 49. SCHEMATIC OVERVIEW Guideline for stable learning “However, after extensive model exploration we identified a family of architectures that resulted in stable training across a range of datasets and allowed for higher resolution and deeper generative models.” "Most GANs today are at least loosely based on the DCGAN architecture." - NIPS 2016 Tutorial by Ian Goodfellow
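As a concrete illustration of that guideline family (all-convolutional, strided transposed convolutions instead of pooling, batch norm in most layers, ReLU in the generator with a Tanh output, no fully connected hidden layers), here is a hedged PyTorch sketch of a DCGAN-style generator for 64x64 RGB images; the channel widths are assumptions:

```python
import torch.nn as nn

def dcgan_generator(z_dim=100, ch=64):
    # Input: z of shape (batch, z_dim, 1, 1).
    # Strided transposed convolutions upsample 1x1 -> 4 -> 8 -> 16 -> 32 -> 64;
    # batch norm everywhere except the output layer, ReLU then Tanh output.
    return nn.Sequential(
        nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False), nn.BatchNorm2d(ch * 8), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False), nn.BatchNorm2d(ch * 4), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False), nn.BatchNorm2d(ch * 2), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False), nn.BatchNorm2d(ch), nn.ReLU(True),
        nn.ConvTranspose2d(ch, 3, 4, 2, 1, bias=False), nn.Tanh(),  # 64x64 RGB
    )
```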
  50. 50. Okay, learning is finished and the model converged. Then… KEYPOINTS “How to show that our network or generator learned A MEANINGFUL FUNCTION?”
  51. 51. KEYPOINTS • The generator DID NOT MEMORIZE the images. • There are NO SHARP TRANSITIONS while walking in the latent space. • The generator UNDERSTANDS the features of the data. Okay, learning is finished and the model converged. Then… Show that
  52. 52. RESULTS * Figure adopted from DCGAN, Alec Radford et al. 2016 (link) What can GAN do?
  53. 53. RESULTS * Figure adopted from DCGAN, Alec Radford et al. 2016 (link) What can GAN do? “Walking in the latent space” z-space
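The latent-space walk itself is easy to reproduce: linearly interpolate between two noise vectors and decode each point. A sketch, assuming a DCGAN-style generator G that takes z of shape (batch, z_dim, 1, 1):

```python
import torch

@torch.no_grad()
def latent_walk(G, steps=10, z_dim=100):
    z0, z1 = torch.randn(1, z_dim, 1, 1), torch.randn(1, z_dim, 1, 1)
    alphas = torch.linspace(0, 1, steps)
    # Smooth image transitions along this path suggest G did not memorize samples.
    return torch.cat([G((1 - a) * z0 + a * z1) for a in alphas])
```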
  54. 54. RESULTS * Figure adopted from DCGAN, Alec Radford et al. 2016 (link) What can GAN do?
  55. 55. RESULTS * Figure adopted from DCGAN, Alec Radford et al. 2016 (link) What can GAN do? “Forgetting the feature it learned”
  56. 56. RESULTS What can GAN do? “Vector arithmetic“ (e.g. word2vec)
  57. 57. RESULTS What can GAN do? “Vector arithmetic“ (e.g. word2vec)
  58. 58. RESULTS * Figure adopted from DCGAN, Alec Radford et al. 2016 (link) What can GAN do? “Vector arithmetic“ (e.g. word2vec)
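The vector arithmetic demo operates on z vectors, word2vec-style: average a few z's per visual concept, combine the averages, and decode. A sketch under the same generator assumption as above; the argument names are hypothetical:

```python
import torch

@torch.no_grad()
def vector_arithmetic(G, z_smiling_woman, z_neutral_woman, z_neutral_man):
    # Each argument: a (k, z_dim, 1, 1) batch of z vectors whose samples show
    # that attribute. Averaging ~3 vectors per concept stabilizes the result.
    z_new = (z_smiling_woman.mean(0, keepdim=True)
             - z_neutral_woman.mean(0, keepdim=True)
             + z_neutral_man.mean(0, keepdim=True))
    return G(z_new)  # expected: a smiling man, as in the DCGAN paper
```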
  59. 59. RESULTS Neural network understanding “Rotation” * Figure adopted from DCGAN, Alec Radford et al. 2016 (link) What can GAN do? “Understand the meaning of the data“ (e.g. code: rotation, category, and etc.)
  60. 60. SUMMARY 1. Guideline for stable learning 2. Good analysis on the results • Show that the generator DID NOT MEMORIZE the images • Show that there are NO SHARP TRANSITIONS while walking in the latent space • Show that the generator UNDERSTANDS the features of the data
  61. 61. Unrolled GAN LET’S GIVE EXTRA INFORMATION TO THE NETWORK (allow it to ‘see into the future’)
  62. 62. Convergence of the proposed algorithm MOTIVATION Impossible to achieve in practice
  63. 63. MOTIVATION WHAT HAPPENS? * Figure adopted from NIPS 2016 Tutorial GAN, Ian Goodfellow 2016
  64. 64. MOTIVATION WHAT HAPPENS? * Figure adopted from NIPS 2016 Tutorial GAN, Ian Goodfellow 2016 “GAN procedure normally do not cover the whole distribution, even when targeting a mode covering divergence such as KL”
  65. 65. MOTIVATION WHAT HAPPENS? G* = min_G max_D V(G, D) VS G* = max_D min_G V(G, D)
  66. 66. MOTIVATION WHAT HAPPENS? G* = min_G max_D V(G, D) VS G* = max_D min_G V(G, D) * In the max-min case, G maps every z to the single output which can fool the current discriminator most
  67. 67. SCHEMATIC OVERVIEW * https://www.youtube.com/watch?v=JmON4S0kl04 “Adversarial games are not guaranteed to converge using gradient descent, e.g. rock-paper-scissors.”
  68. 68. UNROLLED GAN * Figure adopted from Unrolled GAN, Luke Metz et al. 2016 Let’s “UNROLL” the discriminator to see several modes!
  69. 69. UNROLLED GAN * Figure adopted from Unrolled GAN, Luke Metz et al. 2016
  70. 70. UNROLLED GAN * Figure adopted from Unrolled GAN, Luke Metz et al. 2016 * Here, only the “G” part is unrolled (∵In practice, “D” usually over-powers “G”)
  71. 71. UNROLLED GAN The Missing Gradient Term “How the discriminator would react to a change in the generator.” (Let us consider both cases.) Trade off between
  72. 72. RESULTS Increased stability in terms of power balance * Figure adopted from Unrolled GAN, Luke Metz et al. 2016
  73. 73. SUMMARY 1. Address the mode collapsing problem 2. Unrolling the optimization problem • Make the discriminator as optimal as possible
  74. 74. IMPLEMENTATION * Codes from the jupyter notebook of Ben Poole (2nd Author) : https://github.com/poolio/unrolled_gan
  75. 75. IMPLEMENTATION * Codes from the jupyter notebook of Ben Poole (2nd Author) : https://github.com/poolio/unrolled_gan
  76. 76. IMPLEMENTATION * Codes from the jupyter notebook of Ben Poole (2nd Author) : https://github.com/poolio/unrolled_gan
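The notebook code itself did not survive this transcript; as a rough sketch of the unrolling idea, the snippet below takes k discriminator steps on a throwaway copy of D and scores the generator against the unrolled copy. Note this copy-based version is only a first-order approximation: it drops the gradient through D's updates, i.e. the paper's "missing gradient term" that a faithful implementation backpropagates through the unrolled optimization.

```python
import copy
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def unrolled_g_loss(G, D, make_d_opt, real, z, k=5):
    # Unroll k discriminator steps on a throwaway copy of D, then score G
    # against the unrolled copy ("seeing into the future" of D's reaction).
    # First-order sketch: gradients do NOT flow through D's k updates here.
    D_k = copy.deepcopy(D)
    opt = make_d_opt(D_k.parameters())  # e.g. lambda p: torch.optim.SGD(p, lr=1e-2)
    ones, zeros = torch.ones(real.size(0), 1), torch.zeros(z.size(0), 1)
    for _ in range(k):
        opt.zero_grad()
        loss_d = bce(D_k(real), ones) + bce(D_k(G(z).detach()), zeros)
        loss_d.backward()
        opt.step()
    return bce(D_k(G(z)), torch.ones(z.size(0), 1))  # generator loss vs. unrolled D
```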
  77. 77. InfoGAN LET’S USE ADDITIONAL CONSTRAINTS FOR THE GENERATOR
  78. 78. MOTIVATION “We want to get a disentangled representation space EXPLICITLY.” Neural network understanding “Rotation” * Figure adopted from DCGAN paper (link)
  79. 79. MOTIVATION “We want to get a disentangled representation space EXPLICITLY.” Neural network understanding “Digit Type” * Figure adopted from infoGAN paper (link) Code
  80. 80. MOTIVATION * Slide adopted from Takato Horii’s slides in SlideShare (link)
  81. 81. • While the generator learns data representations, infoGAN imposes an extra constraint to make the network learn the feature space in a disentangled way. • Unlike the standard GAN, the generator takes a pair of variables as an input: 1. Gaussian noise z (source of incompressible noise) 2. latent code c (semantic feature of the data distribution) infoGAN SCHEMATIC OVERVIEW
  82. 82. z G D x Real or Fake? Diagram of Standard GAN SCHEMATIC OVERVIEW
  83. 83. c z G D x Real or Fake? add an extra “code” variable Diagram of infoGAN 1. Gaussian noise z (source of incompressible noise) 2. latent code c (semantic feature of data distribution) SCHEMATIC OVERVIEW
  84. 84. c z G D x Real or Fake? add an extra “code” variable Diagram of infoGAN 1. Gaussian noise z (source of incompressible noise) 2. latent code c (semantic feature of data distribution) c ~ Cat(K = 10, p = 0.1), i.e. a one-hot code over the digits 0 … 9 SCHEMATIC OVERVIEW
  85. 85. c z G D x Real or Fake? add an extra “code” variable Diagram of infoGAN 1. Gaussian noise z (source of incompressible noise) 2. latent code c (semantic feature of data distribution) * SCHEMATIC OVERVIEW
  86. 86. c z G D x I Real or Fake? Mutual Info. infoGAN : maximize I(c,G(z,c)) Diagram of infoGAN Impose an extra constraint to learn disentangled feature space SCHEMATIC OVERVIEW
  87. 87. “The information in the latent code c should not be lost in the generation process.” c z G D x I Real or Fake? Mutual Info. infoGAN : maximize I(c,G(z,c)) Diagram of infoGAN Impose an extra constraint to learn disentangled feature space SCHEMATIC OVERVIEW
  88. 88. INFOGAN * Figure adopted from Wikipedia “Mutual Information” Changed minimax problem: min_G max_D V_I(D, G) = V(D, G) − λ I(c; G(z, c)) Mutual information: I(c; G(z, c)) = H(c) − H(c | G(z, c)) ∴ We need to minimize the entropy H(c | x), where x = G(z, c).
  89. 89. INFOGAN * Figure adopted from Wikipedia “Mutual Information” Changed minimax problem: min_G max_D V_I(D, G) = V(D, G) − λ I(c; G(z, c)) Mutual information: I(c; G(z, c)) = H(c) − H(c | G(z, c)) ∴ We need to minimize the entropy H(c | x), where x = G(z, c). Uncertainty
  90. 90. INFOGAN * Figure adopted from Wikipedia “Mutual Information” Changed minimax problem: min_G max_D V_I(D, G) = V(D, G) − λ I(c; G(z, c)) Mutual information: I(c; G(z, c)) = H(c) − H(c | G(z, c)) ∴ We need to minimize the entropy H(c | x), where x = G(z, c). Uncertainty * intractable
  91. 91. VARIATIONAL INFORMATION MAXIMIZATION Changed minimax problem: min_{G,Q} max_D V_InfoGAN(D, G, Q) = V(D, G) − λ L_I(G, Q), where L_I(G, Q) = E_{c~P(c), x~G(z,c)}[log Q(c | x)] + H(c) ≤ I(c; G(z, c)) Let's MAXIMIZE the LOWER BOUND, which is tractable!
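In code the bound is short: an auxiliary head Q shares the discriminator's body and predicts the code from the generated sample, and for a categorical c the bound reduces (up to the constant H(c)) to a cross-entropy term. A hedged sketch; the module names and sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DQ(nn.Module):
    # Shared trunk with two heads: D (real/fake logit) and Q (logits over code c).
    def __init__(self, n_code=10):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2))
        self.d_head = nn.Linear(128, 1)
        self.q_head = nn.Linear(128, n_code)

    def forward(self, x):
        h = self.trunk(x)
        return self.d_head(h), self.q_head(h)

def info_loss(q_logits, c_idx, lam=1.0):
    # -lambda * L_I(G, Q) up to the constant H(c): cross-entropy between Q(c|x)
    # and the code actually fed to G. Minimizing it maximizes the variational
    # lower bound on I(c; G(z, c)); add it to both G's and Q's losses.
    return lam * F.cross_entropy(q_logits, c_idx)
```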
  92. 92. RESULTS MNIST dataset * Figure adopted from infoGAN paper (link)
  93. 93. RESULTS 3D FACE dataset * Figure adopted from infoGAN paper (link)
  94. 94. RESULTS * Figure adopted from infoGAN paper (link)
  95. 95. RESULTS * Figure adopted from infoGAN paper (link)
  96. 96. Q IMPLEMENTATION c z G D x I Diagram of infoGAN Train Q separately
  97. 97. SUMMARY 1. Add an additional constraint to improve the performance • Mutual information • Negligible additional computational cost 2. Learn a better feature space 3. Unsupervised way to learn implicit features in the dataset 4. Variational method
  98. 98. IMPLEMENTATION
  99. 99. IMPLEMENTATION
  100. 100. f-GAN LET'S USE f-DIVERGENCES RATHER THAN FIXING A SINGLE ONE. Here, I have heavily reused the slides from S. Nowozin's (1st author) NIPS 2016 workshop talk on GANs. You can easily find the related information (slides) at: http://www.nowozin.net/sebastian/blog/nips-2016-generative-adversarial-training-workshop-talk.html
  101. 101. MOTIVATION LEARNING PROBABILISTIC MODELS Distributions P, Q in a model class 𝒫 * Please note that in S. Nowozin's slides, Q represents the real distribution and P stands for the parametric model we set.
  102. 102. MOTIVATION LEARNING PROBABILISTIC MODELS Assumptions on P: • tractable sampling • tractable parameter gradient with respect to sample • tractable likelihood function * Please note that in S. Nowozin's slides, Q represents the real distribution and P stands for the parametric model we set.
  103. 103. MOTIVATION Likelihood-free Model [Goodfellow et al., 2014] Random input z ~ Uniform_100 → Generator: Lin(100, 1200) → ReLU → Lin(1200, 1200) → ReLU → Lin(1200, 784) → Sigmoid → Output
  104. 104. SCHEMATIC OVERVIEW LEARNING PROBABILISTIC MODELS • Integral Probability Metrics [Müller, 1997], [Sriperumbudur et al., 2010]: γ_F(P, Q) = sup_{f∈F} |∫ f dP − ∫ f dQ| (P: expectation, Q: expectation, structure in F); examples: energy statistic [Szekely, 1997], kernel MMD [Gretton et al., 2012], [Smola et al., 2007], Wasserstein distance [Cuturi, 2013], DISCO Nets [Bouchacourt et al., 2016] • Proper scoring rules [Gneiting and Raftery, 2007]: S(P, Q) = ∫ S(P, x) dQ(x) (P: distribution, Q: expectation); examples: log-likelihood [Fisher, 1922], [Good, 1952], quadratic score [Bernardo, 1979] • f-divergences [Ali and Silvey, 1966]: D_f(P ‖ Q) = ∫ q(x) f(p(x)/q(x)) dx (P: distribution, Q: distribution); examples: Kullback-Leibler divergence [Kullback and Leibler, 1951], Jensen-Shannon divergence, total variation, Pearson χ²
  105. 105. SCHEMATIC OVERVIEW LEARNING PROBABILISTIC MODELS Variational representation of divergences [Nguyen et al., 2010], [Reid and Williamson, 2011], [Goodfellow et al., 2014]
  106. 106. SCHEMATIC OVERVIEW TRAINING NEURAL SAMPLERS Neural sampler samples vs. training samples: how do we measure the distance based only on empirical samples from P_θ(x) and Q(x)? * Please note that in S. Nowozin's slides, Q represents the real distribution and P stands for the parametric model we set.
  107. 107. “Show that the GAN approach is a special case of an existing more general variational divergence estimation approach.” Let’s generalize the GAN objective to arbitrary 𝒇𝒇-divergences! SCHEMATIC OVERVIEW
  108. 108. SCHEMATIC OVERVIEW TRAINING NEURAL SAMPLERS Neural sampler distribution P_θ(x), true distribution Q(x): we could minimize some distance (divergence) between the distributions if we had P_θ(x) and Q(x) * Please note that in S. Nowozin's slides, Q represents the real distribution and P stands for the parametric model we set.
  109. 109. f-DIVERGENCE [Ali and Silvey, 1966] • Divergence between two distributions: D_f(Q ‖ P) = ∫_𝒳 p(x) f(q(x)/p(x)) dx • f: generator function, convex, f(1) = 0
  110. 110. f-DIVERGENCE TRY WHAT YOU WANT! (the fun of picking your flavor)
  111. 111. f-DIVERGENCE Estimating f-divergences from samples • Divergence between two distributions: D_f(Q ‖ P) = ∫_𝒳 p(x) f(q(x)/p(x)) dx • Every convex function f has a Fenchel conjugate f* so that f(u) = sup_{t ∈ dom f*} {tu − f*(t)} [Nguyen, Wainwright, Jordan, 2010] “Any convex f can be represented as a point-wise max of linear functions”
  112. 112. f-DIVERGENCE Estimating f-divergences from samples (cont.) D_f(Q ‖ P) = ∫_𝒳 p(x) f(q(x)/p(x)) dx = ∫_𝒳 p(x) sup_{t ∈ dom f*} {t · q(x)/p(x) − f*(t)} dx ≥ sup_{T ∈ 𝒯} { ∫_𝒳 q(x) T(x) dx − ∫_𝒳 p(x) f*(T(x)) dx } = sup_{T ∈ 𝒯} { E_{x~Q}[T(x)] − E_{x~P}[f*(T(x))] } Approximate the first term using samples from Q and the second using samples from P * Please note that in S. Nowozin's slides, Q represents the real distribution and P stands for the parametric model we set.
  113. 113. f-GAN f-GAN and GAN objectives • GAN: min_θ max_ω E_{x~Q}[log D_ω(x)] + E_{x~P_θ}[log(1 − D_ω(x))] • f-GAN: min_θ max_ω E_{x~Q}[T_ω(x)] − E_{x~P_θ}[f*(T_ω(x))] • GAN discriminator-variational function correspondence: log D_ω(x) = T_ω(x) • GAN minimizes the Jensen-Shannon divergence (which was also pointed out in Goodfellow et al., 2014) * Please note that in S. Nowozin's slides, Q represents the real distribution and P stands for the parametric model we set.
  114. 114. f-GAN Comparison of the objectives: min_θ max_ω E_{x~Q}[g_f(V_ω(x))] + E_{x~P_θ}[−f*(g_f(V_ω(x)))] * Please note that in S. Nowozin's slides, Q represents the real distribution and P stands for the parametric model we set.
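To make the g_f / f* machinery concrete, here is a sketch of the saddle objective for two divergences. The (g_f, f*) pairs are the ones tabulated in the f-GAN paper; the function names and the rest of the scaffolding are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

# (output activation g_f, Fenchel conjugate f*) pairs, per the f-GAN paper's table.
F_DIV = {
    "kl":  (lambda v: v,               lambda t: torch.exp(t - 1)),
    "gan": (lambda v: F.logsigmoid(v), lambda t: -torch.log(1 - torch.exp(t))),
}

def fgan_objective(v_real, v_fake, divergence="kl"):
    # F(theta, omega) = E_{x~Q}[g_f(V_w(x))] - E_{x~P_theta}[f*(g_f(V_w(x)))],
    # where v_real/v_fake are the raw network outputs V_w(x) on real/model samples.
    # The variational function ascends this objective; the generator descends it.
    g_f, f_star = F_DIV[divergence]
    return g_f(v_real).mean() - f_star(g_f(v_fake)).mean()
```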
  115. 115. THEORETICAL RESULTS Algorithm: Double-Loop versus Single-Step • Double-loop algorithm [Goodfellow et al., 2014] • Algorithm: • Inner loop: tighten divergence lower bound • Outer loop: minimize generator loss • In practice the inner loop is run only for one step (two backprops) • Missing justification for this practice • Single-step algorithm (proposed) • Algorithm: simultaneously take (one backprop) • a positive gradient step w.r.t. the variational function T_ω(x) • a negative gradient step w.r.t. the generator function P_θ(x) • Does this converge?
  116. 116. GENERAL ALGORITHM f-GAN * Please note that in S. Nowozin's paper, P represents the real distribution and Q_θ stands for the parametric model we set.
  117. 117. THEORETICAL RESULTS Local convergence of Algorithm 1
  118. 118. THEORETICAL RESULTS Local convergence of Algorithm 1 • Assumptions: F is locally (strongly) convex with respect to θ and (strongly) concave with respect to ω, i.e. ∇²_θ F ≻ 0 and ∇²_ω F ≺ 0 in the Hessian ∇²F = [∇²_θ F, ∇_θ ∇_ω F; ∇_ω ∇_θ F, ∇²_ω F] • Local convergence: define J(θ, ω) = ½‖∇_θ F‖² + ½‖∇_ω F‖²; then J(θᵗ, ωᵗ) ≤ (1 − δ²/L)ᵗ J(θ⁰, ω⁰), where δ is the strong convexity parameter and L the smoothness parameter • Geometric rate of convergence!
  119. 119. VIDEOS V(x, y) = xy + (δ/2)(x² − y²) VS V(x, y) = xy² + (δ/2)(x² − y²)
  120. 120. RESULTS Synthetic 1D univariate: approximate a mixture of Gaussians by a Gaussian to • validate the approach • demonstrate the properties of different divergences [Minka, 2005] Compare the exact optimization of the divergence with the GAN approach * Please note that in S. Nowozin's paper, P represents the real distribution and Q_θ stands for the parametric model we set.
  121. 121. RESULTS * Figure adopted from f-GAN paper (link) Synthetic 1D univariate * Please note that in S. Nowozin's paper, P represents the real distribution and Q_θ stands for the parametric model we set.
  122. 122. RESULTS * Figure adopted from f-GAN paper (link)
  123. 123. SUMMARY • Generalize the GAN objective to arbitrary f-divergences • Simplify the GAN algorithm + local convergence proof • Demonstrate different divergences
  124. 124. ETC. WHY DOES GAN GENERATE SHARPER IMAGES? * Figure adopted from NIPS 2016 Tutorial GAN, Ian Goodfellow 2016
  125. 125. ETC. WHY DOES GAN GENERATE SHARPER IMAGES? * Figure adopted from NIPS 2016 Tutorial GAN, Ian Goodfellow 2016
  126. 126. • LSUN experiment: No (visually) • Empirical contradiction to intuition from [Theis et al., 2015], [Huszar, 2015] • Why? • Intuition: strong inductive bias of model class Q ETC. DOES THE DIVERGENCE MATTER?
  127. 127. EBGAN LET’S USE ENERGY-BASED MODEL FOR THE DISCRIMINATOR
  128. 128. MOTIVATION * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link) We want to learn the data manifold!
  129. 129. MOTIVATION * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link) We want to learn the data manifold! ⟺ Do not want to give a penalty to both y and ŷ.
  130. 130. MOTIVATION * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link) We want to learn the data manifold! ⟺ Do not want to give a penalty to both y and ŷ. ⟺ The pen can fall either way. (there is no wrong way to fall)
  131. 131. MOTIVATION ENERGY-BASED MODEL Energy based models capture dependencies between variables by associating a scalar energy (a measure of compatibility) to each configuration of the variables. Inference, i.e., making a prediction or decision, consists in setting the value of observed variables and finding values of the remaining variables that minimize the energy. Learning consists in finding an energy function that associates low energies to correct values of the remaining variables, and higher energies to incorrect values. A loss functional, minimized during learning, is used to measure the quality of the available energy functions. * Quoted from “A Tutorial on Energy-Based Learning”, Yann Lecun et al., 2006
  132. 132. MOTIVATION ENERGY-BASED MODEL Energy based models capture dependencies between variables by associating a scalar energy (a measure of compatibility) to each configuration of the variables. Inference, i.e., making a prediction or decision, consists in setting the value of observed variables and finding values of the remaining variables that minimize the energy. Learning consists in finding an energy function that associates low energies to correct values of the remaining variables, and higher energies to incorrect values. A loss functional, minimized during learning, is used to measure the quality of the available energy functions. WITHIN the COMMON inference/learning FRAMEWORK, the wide choice of energy functions and loss functionals allows for the design of many types of statistical models, both probabilistic and non-probabilistic. * Quoted from “A Tutorial on Energy-Based Learning”, Yann Lecun et al., 2006
  133. 133. MOTIVATION ENERGY-BASED MODEL * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link) “LET’S USE IT!”
  134. 134. MOTIVATION ENERGY-BASED MODEL * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link) “BUT HOW do we choose where to push up?“
  135. 135. MOTIVATION * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link)
  136. 136. MOTIVATION * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link) LIMITATIONS → Limit the space → Every interesting case is intractable → How to pick the point to push up? → Limit the model or space
  137. 137. MOTIVATION * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link)
  138. 138. MOTIVATION * Figure adopted from Yann Lecun’s slides, NIPS 2016 (link)
  139. 139. SCHEMATIC OVERVIEW * Slides from Yann Lecun’s talk in NIPS 2016 (link)
  140. 140. * Slides from Yann Lecun’s talk in NIPS 2016 (link) SCHEMATIC OVERVIEW
  141. 141. * Slides from Yann Lecun’s talk in NIPS 2016 (link) SCHEMATIC OVERVIEW
  142. 142. Architecture: discriminator is an auto-encoder Loss functions: EBGAN * Slides from Yann Lecun’s talk in NIPS 2016 (link)
  143. 143. Architecture: discriminator is an auto-encoder, with energy D(x) = ‖Dec(Enc(x)) − x‖ Loss functions (hinge loss): L_D = D(x) + [m − D(G(z))]⁺ and L_G = D(G(z)) EBGAN * Slides from Yann Lecun’s talk in NIPS 2016 (link)
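Both losses fit in a few lines; here is a hedged sketch with a toy autoencoder discriminator whose energy is the per-sample reconstruction error. The margin m and the layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AutoencoderD(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
        self.dec = nn.Linear(64, 784)

    def forward(self, x):
        # Energy = per-sample reconstruction error (low on the data manifold).
        return (self.dec(self.enc(x)) - x).pow(2).mean(dim=1)

def ebgan_losses(D, real, fake, m=10.0):
    # L_D = D(x) + [m - D(G(z))]^+  (push real energy down, fake energy up to margin m)
    # L_G = D(G(z))                 (generator produces contrastive, low-energy samples)
    loss_d = D(real).mean() + torch.relu(m - D(fake.detach())).mean()
    loss_g = D(fake).mean()
    return loss_d, loss_g
```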
  144. 144. THEORETICAL RESULTS * Slides from Yann Lecun’s talk in NIPS 2016 (link)
  145. 145. RESULTS * Slides from Yann Lecun’s talk in NIPS 2016 (link)
  146. 146. RESULTS * Slides from Yann Lecun’s talk in NIPS 2016 (link)
  147. 147. SUMMARY 1. New framework using an energy-based model • Discriminator as an energy function • Low values on the data manifold • Higher values everywhere else • Generator produces contrastive samples 2. Stable learning
  148. 148. WGAN
  149. 149. MOTIVATION LEARNING PROBABILISTIC MODELS What does it mean to learn a probability model? (we learned it from f-GAN!) Distributions Q, P_θ in a model class 𝒫 Assumptions on P: tractable sampling, parameter gradient with respect to sample, likelihood function
  150. 150. MOTIVATION LEARNING PROBABILISTIC MODELS What does it mean to learn a probability model? (we learned it from f-GAN!) Distributions Q, P_θ in a model class 𝒫 Assumptions on P: tractable sampling, parameter gradient with respect to sample, likelihood function … min_θ KL[Q ‖ P_θ] ⟺ max_θ Σ_{i=1}^{N} log P(x_i | θ)
  151. 151. MOTIVATION What if the supports of the two distributions do not overlap? Is KL[Q ‖ P_θ] still defined?
  152. 152. MOTIVATION What if the supports of the two distributions do not overlap? Add a noise term to the model distribution * Note that this is a very rough explanation
  153. 153. MOTIVATION What if the supports of the two distributions do not overlap? Add a noise term to the model distribution TEMPORARY? OR UNSATISFYING SOLUTION * Note that this is a very rough explanation
  154. 154. REVIEW! LEARNING PROBABILISTIC MODELS • Integral Probability Metrics [Müller, 1997], [Sriperumbudur et al., 2010]: γ_F(P, Q) = sup_{f∈F} |∫ f dP − ∫ f dQ| (P: expectation, Q: expectation, structure in F); examples: energy statistic [Szekely, 1997], kernel MMD [Gretton et al., 2012], [Smola et al., 2007], Wasserstein distance [Cuturi, 2013], DISCO Nets [Bouchacourt et al., 2016] • Proper scoring rules [Gneiting and Raftery, 2007]: S(P, Q) = ∫ S(P, x) dQ(x) (P: distribution, Q: expectation); examples: log-likelihood [Fisher, 1922], [Good, 1952], quadratic score [Bernardo, 1979] • f-divergences [Ali and Silvey, 1966]: D_f(P ‖ Q) = ∫ q(x) f(p(x)/q(x)) dx (P: distribution, Q: distribution); examples: Kullback-Leibler divergence [Kullback and Leibler, 1951], Jensen-Shannon divergence, total variation, Pearson χ²
  155. 155. REVIEW! Likelihood-free Model [Goodfellow et al., 2014] Random input z ~ Uniform_100 → Generator: Lin(100, 1200) → ReLU → Lin(1200, 1200) → ReLU → Lin(1200, 784) → Sigmoid → Output
  156. 156. REVIEW! Likelihood-free Model [Goodfellow et al., 2014] Random input z ~ Uniform_100 → Generator: Lin(100, 1200) → ReLU → Lin(1200, 1200) → ReLU → Lin(1200, 784) → Sigmoid → Output WELL KNOWN FOR BEING DELICATE AND UNSTABLE FOR TRAINING
  157. 157. REVIEW! LEARNING PROBABILISTIC MODELS • Integral Probability Metrics [Müller, 1997], [Sriperumbudur et al., 2010]: γ_F(P, Q) = sup_{f∈F} |∫ f dP − ∫ f dQ| (P: expectation, Q: expectation, structure in F); examples: energy statistic [Szekely, 1997], kernel MMD [Gretton et al., 2012], [Smola et al., 2007], Wasserstein distance [Cuturi, 2013], DISCO Nets [Bouchacourt et al., 2016] • Proper scoring rules [Gneiting and Raftery, 2007]: S(P, Q) = ∫ S(P, x) dQ(x) (P: distribution, Q: expectation); examples: log-likelihood [Fisher, 1922], [Good, 1952], quadratic score [Bernardo, 1979] • f-divergences [Ali and Silvey, 1966]: D_f(P ‖ Q) = ∫ q(x) f(p(x)/q(x)) dx (P: distribution, Q: distribution); examples: Kullback-Leibler divergence [Kullback and Leibler, 1951], Jensen-Shannon divergence, total variation, Pearson χ²
  158. 158. THEORETIC RESULTS
  159. 159. THEORETIC RESULTS Different distances Slide courtesy of Sungbin Lim, DeepBio, 2017
  160. 160. THEORETIC RESULTS Different distances Slide courtesy of Sungbin Lim, DeepBio, 2017
  161. 161. THEORETIC RESULTS Different distances Slide courtesy of Sungbin Lim, DeepBio, 2017
  162. 162. THEORETIC RESULTS Different distances Slide courtesy of Sungbin Lim, DeepBio, 2017
  163. 163. THEORETIC RESULTS Slide courtesy of Sungbin Lim, DeepBio, 2017
  164. 164. THEORETIC RESULTS Slide courtesy of Sungbin Lim, DeepBio, 2017
  165. 165. THEORETIC RESULTS Slide courtesy of Sungbin Lim, DeepBio, 2017
  166. 166. Slide courtesy of Sungbin Lim, DeepBio, 2017 THEORETIC RESULTS
  167. 167. THEORETIC RESULTS Slide courtesy of Sungbin Lim, DeepBio, 2017
  168. 168. WASSERSTEIN DISTANCE
  169. 169. Slide courtesy of Sungbin Lim, DeepBio, 2017 WASSERSTEIN DISTANCE
  170. 170. Slide courtesy of Sungbin Lim, DeepBio, 2017 WASSERSTEIN DISTANCE
  171. 171. THEORETIC RESULTS * Figure adopted from WGAN paper (link)
  172. 172. THEORETIC RESULTS Slide courtesy of Sungbin Lim, DeepBio, 2017
  173. 173. THEORETIC RESULTS Slide courtesy of Sungbin Lim, DeepBio, 2017
  174. 174. THEORETIC RESULTS Wasserstein distance is a continuous function of θ under mild assumptions!
  175. 175. THEORETIC RESULTS Wasserstein distance is the weakest one
  176. 176. THEORETIC RESULTS Highly intractable! (the infimum runs over all joint distributions…)
  177. 177. THEORETIC RESULTS Change the problem to its dual, which is tractable (somehow): the Kantorovich-Rubinstein duality gives W(Q, P_θ) = sup_{‖f‖_L ≤ 1} E_{x~Q}[f(x)] − E_{x~P_θ}[f(x)]
  178. 178. THEORETIC RESULTS Okay! We can use the neural network! (up to a constant)
  179. 179. IMPLEMENTATION
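The implementation slide's code did not survive this transcript; below is a minimal sketch of one WGAN training step following the paper's algorithm (several critic steps per generator step, weight clipping to keep the critic roughly 1-Lipschitz). The paper uses RMSprop with lr = 5e-5; the optimizers are passed in here, and the sizes are illustrative:

```python
import torch

def wgan_step(G, critic, opt_g, opt_c, real, z_dim=100, n_critic=5, clip=0.01):
    for _ in range(n_critic):
        z = torch.randn(real.size(0), z_dim)
        opt_c.zero_grad()
        # Critic ascends E[f(x_real)] - E[f(G(z))]  (no log, no sigmoid).
        loss_c = critic(G(z).detach()).mean() - critic(real).mean()
        loss_c.backward()
        opt_c.step()
        for p in critic.parameters():        # enforce Lipschitz constraint by clipping
            p.data.clamp_(-clip, clip)
    z = torch.randn(real.size(0), z_dim)
    opt_g.zero_grad()
    loss_g = -critic(G(z)).mean()            # generator maximizes the critic score
    loss_g.backward()
    opt_g.step()
    return -loss_c.item()                    # estimate of W(Q, P_theta), up to a constant
```

The returned critic value doubles as the learning curve the paper proposes for evaluation: unlike the standard GAN loss, it correlates with sample quality.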
  180. 180. RESULTS * Figure adopted from WGAN paper (link)
  181. 181. RESULTS * Figure adopted from WGAN paper (link)
  182. 182. RESULTS * Figure adopted from WGAN paper (link) Stable without Batch normalization!
  183. 183. RESULTS * Figure adopted from WGAN paper (link) Stable without DCGAN structure (generator)!
  184. 184. SUMMARY 1. Provide a comprehensive theoretical analysis of how the EM distance behaves in comparison to the others 2. Introduce Wasserstein-GAN, which minimizes a reasonable and efficient approximation of the EM distance 3. Empirically show that WGANs cure the main training problems of GANs (e.g. stability, power balance, mode collapse) 4. Evaluation criteria (learning curve)
  185. 185. IMPLEMENTATION THAT SIMPLE! (after all that mathematics…)
  186. 186. • A repo where almost every GAN variant is implemented in both PyTorch and TensorFlow versions: https://github.com/wiseodd/generative-models IMPLEMENTATION
  187. 187. THANK YOU  jaejun.yoo@kaist.ac.kr
  188. 188. MLE & KL DIVERGENCE
  189. 189. What is convergence in distribution?
