
Mastering GANs (Generative Adversarial Networks) in One Hour

Speaker: Yunjey Choi (M.S. student, Korea University)
Yunjey Choi received his B.S. in Computer Science from Korea University and is currently an M.S. student there studying Machine Learning. He enjoys coding and sharing what he has understood with others. He studied Deep Learning with TensorFlow for a year and is now studying Generative Adversarial Networks with PyTorch. He has implemented several papers in TensorFlow and has published a PyTorch tutorial on GitHub.

Abstract:
Generative Adversarial Networks (GANs), first proposed by Ian Goodfellow in 2014, are generative models that estimate the distribution of real data through adversarial training. GANs have recently become one of the most popular research topics, with numerous related papers appearing every day.
Finding it hard to read them all? That's fine: once you understand the basic GAN thoroughly, new papers become easy to follow.
In this talk I will share everything I know about GANs. It should suit those who are completely new to GANs, those curious about the theory behind them, and those wondering how GANs can be applied.

Talk video: https://youtu.be/odpjk7_tGY0



  1. Introduction: Generative Adversarial Networks. Yunjey Choi, Korea University, DAVIAN LAB
  2. Speaker Introduction: B.S. in Computer Science & Engineering at Korea University; M.S. student in Computer Science & Engineering at Korea University (current). Interests: Deep Learning, TensorFlow, PyTorch. GitHub: https://github.com/yunjey
  3. Referenced Slides:
     • Namju Kim, Generative Adversarial Networks (GAN): https://www.slideshare.net/ssuser77ee21/generative-adversarial-networks-70896091
     • Taehoon Kim, 지적 대화를 위한 깊고 넓은 딥러닝 (Deep and Wide Deep Learning for Intelligent Conversation): https://www.slideshare.net/carpedm20/ss-63116251
  4. Section 01: Introduction
  5. Branches of ML. Machine Learning divides into Supervised Learning (labeled data, direct feedback), Unsupervised Learning (no labeled data, no feedback; "find hidden structure"), Semi-supervised Learning, and Reinforcement Learning (no labeled data, delayed feedback, reward signal).
  6. Supervised Learning: Discriminative Model. The discriminative model learns how to classify an input to its class, e.g. mapping a 64x64x3 input image to the label "man" or "woman".
  7. Unsupervised Learning: Generative Model. The generative model learns the distribution of the training data, mapping a latent code (100-dim) to an image (64x64x3).
  8. Generative Model: Probability Basics (Review). A random variable X has a probability mass function P(X). For example, a biased die might have P(X=1)=1/6, P(X=2)=1/6, P(X=3)=1/6, P(X=4)=0, P(X=5)=1/6, P(X=6)=2/6; plotting p(x) against x gives the probability distribution.
  9. Generative Model: Probability Distribution. What if x is an actual image from the training data? Then x can be represented as a (for example) 64x64x3-dimensional vector.
  10. Generative Model: Probability Distribution. There is a probability density function p_data(x) that represents the distribution of actual images.
  11. Generative Model: Probability Distribution. Take a human face image dataset as an example. Our dataset may contain few images of men with glasses, so for x1, a 64x64x3 high-dimensional vector representing a man with glasses, the probability density value is low.
  12. Generative Model: Probability Distribution. Our dataset may contain many images of women with black hair, so for x2, a 64x64x3 high-dimensional vector representing a woman with black hair, the probability density value is high.
  13. Generative Model: Probability Distribution. Our dataset may contain very many images of women with blonde hair, so for x3, a 64x64x3 high-dimensional vector representing a woman with blonde hair, the probability density value is very high.
  14. Generative Model: Probability Distribution. Our dataset may not contain strange images, so for x4, a 64x64x3 high-dimensional vector representing a very strange image, the probability density value is almost 0.
  15. Generative Model: Probability Distribution. The goal of the generative model is to find a p_model(x), the distribution of images generated by the model, that approximates p_data(x), the distribution of actual images, well.
  16. Section 02: Generative Adversarial Networks
  17. Intuition in GAN. The discriminator D outputs D(x), the probability (0~1) that x came from the real data. Training with real images x: D(x) may be high. Training with fake images G(z), produced by the generator G from a latent code z: D(G(z)) may be low.
  18. Intuition in GAN. The discriminator (a neural network) should classify a real image (64x64x3) as real: D(x) should be close to 1.
  19. Intuition in GAN. The discriminator should classify a fake image generated by the generator (64x64x3) as fake: D(G(z)) should be close to 0.
  20. Intuition in GAN. The generator (a neural network) maps a latent code (100-dim) to a generated image (64x64x3) and should create an image that is indistinguishable from real, to deceive the discriminator: D(G(z)) should be close to 1.
  21. Objective Function of GAN. Training with real images and with fake images:

     $$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

     Sample x from the real data distribution and the latent code z from a Gaussian distribution. D should maximize V(D,G): the first term is maximal when D(x) = 1 (train D to classify real images as real), and the second term is maximal when D(G(z)) = 0 (train D to classify fake images as fake).
  22. Objective Function of GAN. G should minimize V(D,G). G is independent of the first term, and the second term is minimal when D(G(z)) = 1: train G to deceive D.
  23. PyTorch Implementation: training D with real images x and with fake images G(z).
  24. PyTorch Implementation. Define the discriminator: input size 784 (assume x is a flattened MNIST image, 784-dim), hidden size 128, output size 1 (the output probability).
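     A minimal sketch of this discriminator (the slide shows the code only as a screenshot, so the activation choices here, a LeakyReLU hidden layer and a final sigmoid, are assumptions):

        import torch.nn as nn

        # Discriminator: 784 -> 128 -> 1, ending in an output probability
        D = nn.Sequential(
            nn.Linear(784, 128),
            nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
            nn.Sigmoid())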
  25. PyTorch Implementation. Define the generator: input size 100 (the latent code), hidden size 128, output size 784 (the generated image).
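     A matching sketch of the generator (the activations are again assumptions; the final Tanh presumes images scaled to [-1, 1]):

        import torch.nn as nn

        # Generator: 100 -> 128 -> 784
        G = nn.Sequential(
            nn.Linear(100, 128),
            nn.ReLU(),
            nn.Linear(128, 784),
            nn.Tanh())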
  26. PyTorch Implementation. Binary cross entropy loss:

     $$\mathrm{BCE}(h(x), y) = -y \log h(x) - (1 - y) \log(1 - h(x))$$
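     In PyTorch this loss is available as nn.BCELoss; with target y = 1 it reduces to -log h(x), and with y = 0 to -log(1 - h(x)):

        import torch
        import torch.nn as nn

        criterion = nn.BCELoss()
        h = torch.tensor([0.9])
        print(criterion(h, torch.ones(1)))   # ~0.105 = -log(0.9)
        print(criterion(h, torch.zeros(1)))  # ~2.303 = -log(0.1)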
  27. PyTorch Implementation. Optimizers for D and G.
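     A sketch with one optimizer per network, so that D-steps and G-steps update disjoint parameters (lr = 0.0002 is an assumption carried over from the DCGAN defaults, not a value shown on this slide):

        import torch

        d_optimizer = torch.optim.Adam(D.parameters(), lr=0.0002)
        g_optimizer = torch.optim.Adam(G.parameters(), lr=0.0002)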
  28. PyTorch Implementation. x is a tensor of shape (batch_size, 784); z is a tensor of shape (batch_size, 100).
  29. PyTorch Implementation. Train the discriminator with real images (D(x) gets closer to 1) and with fake images (D(G(z)) gets closer to 0): forward pass, backward pass, and gradient descent.
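     One discriminator step, continuing the sketch above; x is the (batch_size, 784) batch of real images from slide 28, and the detach call (which keeps this step from backpropagating into G) is an idiomatic addition rather than something visible on the slide:

        import torch

        batch_size = x.size(0)
        real_labels = torch.ones(batch_size, 1)   # target 1 for real images
        fake_labels = torch.zeros(batch_size, 1)  # target 0 for fake images

        z = torch.randn(batch_size, 100)          # latent codes from a Gaussian
        fake_images = G(z).detach()               # do not backprop into G in this step
        d_loss = criterion(D(x), real_labels) + criterion(D(fake_images), fake_labels)

        d_optimizer.zero_grad()
        d_loss.backward()                         # backward
        d_optimizer.step()                        # gradient descent: D(x) -> 1, D(G(z)) -> 0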
  30. PyTorch Implementation. Train the generator to deceive the discriminator (D(G(z)) gets closer to 1): forward pass, backward pass, and gradient descent.
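     One generator step, continuing the sketch: binary cross entropy against the real labels pushes D(G(z)) toward 1 (the non-saturating form discussed on slide 33):

        import torch

        z = torch.randn(batch_size, 100)
        g_loss = criterion(D(G(z)), real_labels)  # forward

        g_optimizer.zero_grad()
        g_loss.backward()                         # backward
        g_optimizer.step()                        # gradient descent: D(G(z)) -> 1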
  31. PyTorch Implementation. The complete code can be found at https://github.com/yunjey/pytorch-tutorial
  32. Non-Saturating Game. Objective function of G:

     $$\min_G \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

     At the beginning of training, the discriminator can clearly classify the generated images as fake because their quality is very low, so D(G(z)) is almost zero in the early stages. On the curve y = log(1 - x), the gradient is relatively small near x = 0, so the generator learns slowly.
  33. Non-Saturating Game. Solution for the poor gradient: instead of minimizing E_{z~p_z(z)}[log(1 - D(G(z)))], maximize E_{z~p_z(z)}[log D(G(z))]; on the curve y = log(x), the gradient is very large near x = 0. This modification is heuristically motivated. In practical usage, apply the binary cross entropy loss to fake samples with the real label (y = 1):

     $$\min_G \mathbb{E}_{z \sim p_z(z)}[-y \log D(G(z)) - (1 - y) \log(1 - D(G(z)))] = \min_G \mathbb{E}_{z \sim p_z(z)}[-\log D(G(z))]$$
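     The two generator losses side by side, as a sketch (D, G, criterion and batch_size continue from the training loop above):

        import torch

        z = torch.randn(batch_size, 100)
        out = D(G(z))

        g_loss_saturating = torch.log(1 - out).mean()                 # small gradient when out is near 0
        g_loss_practical = criterion(out, torch.ones(batch_size, 1))  # mean of -log D(G(z))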
  34. Theory in GAN. Why do GANs work? Because optimizing the objective actually minimizes the distance between the real data distribution p_data(x) and the model distribution p_g(x): with the optimal discriminator, solving

     $$\min_G \max_D V(D,G)$$

     is the same as solving $\min_G JSD(p_{data} \| p_g)$, where the Jensen-Shannon divergence is defined through the KL divergence as

     $$JSD(P \| Q) = \tfrac{1}{2} KL(P \| M) + \tfrac{1}{2} KL(Q \| M), \quad M = \tfrac{1}{2}(P + Q).$$

     Please see the Appendix for details.
  35. Section 03: Variants of GAN
  36. DCGAN. Deep Convolutional GAN (DCGAN), 2015: the authors present a model architecture that is still highly preferred. Radford et al., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015.
  37. DCGAN. The discriminator uses (strided) convolutions with Leaky ReLU; the generator uses deconvolutions (transposed convolutions) with ReLU. No pooling layers (strided convolutions instead); batch normalization throughout; Adam optimizer (lr=0.0002, beta1=0.5, beta2=0.999).
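     A sketch of the two building blocks these choices imply (the channel sizes 3/64/128 are illustrative assumptions):

        import torch
        import torch.nn as nn

        # Generator block: deconvolution (transposed convolution) + batch norm + ReLU
        g_block = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),  # doubles H and W
            nn.BatchNorm2d(64),
            nn.ReLU())

        # Discriminator block: strided convolution (no pooling) + batch norm + Leaky ReLU
        d_block = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),  # halves H and W
            nn.BatchNorm2d(64),
            nn.LeakyReLU(0.2))

        # Adam settings from the slide
        opt = torch.optim.Adam(g_block.parameters(), lr=0.0002, betas=(0.5, 0.999))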
  38. DCGAN. Latent vector arithmetic. Radford et al., Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 2015.
  39. LSGAN. Least Squares GAN (LSGAN): proposes a GAN model that adopts the least squares loss function for the discriminator. Xudong Mao et al., Least Squares Generative Adversarial Networks, 2016.
  40. LSGAN vs. vanilla GAN: remove the sigmoid non-linearity in the discriminator's last layer.
  41. LSGAN vs. vanilla GAN: the generator is the same as the original.
  42. LSGAN vs. vanilla GAN: replace the cross entropy loss with a least squares (L2) loss for the discriminator; D(x) gets closer to 1 and D(G(z)) gets closer to 0, same as the original.
  43. LSGAN vs. vanilla GAN: replace the cross entropy loss with a least squares (L2) loss for the generator; D(G(z)) gets closer to 1, same as the original.
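     Both least squares losses as a sketch (D, G, x and z as in the vanilla GAN sketch, except that D has had its final sigmoid removed; the 0/1 target coding matches the slides, while the 0.5 scaling follows the paper's usual setup and is an assumption):

        d_loss = 0.5 * ((D(x) - 1) ** 2).mean() + 0.5 * (D(G(z)) ** 2).mean()  # D(x) -> 1, D(G(z)) -> 0
        g_loss = 0.5 * ((D(G(z)) - 1) ** 2).mean()                             # D(G(z)) -> 1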
  44. LSGAN. Results (LSUN dataset). Xudong Mao et al., Least Squares Generative Adversarial Networks, 2016.
  45. LSGAN. Results (CelebA).
  46. SGAN. Semi-Supervised GAN: the discriminator ends in an FC layer with softmax over 11 dimensions (10 classes + fake). Training with real images: for a real image of the digit 2, D should output the one-hot vector representing 2. Training with fake images: G maps a latent vector z (and a one-hot class vector, e.g. representing 5) to a fake image, and D should output the one-hot vector representing the fake label. Augustus Odena et al., Semi-Supervised Learning with Generative Adversarial Networks, 2016.
  47. SGAN. Results (game character): given one-hot vectors representing the class labels 1-5, the generator can create a character image that takes a certain pose. The code will be available soon: https://github.com/yunjey
  48. ACGAN. Auxiliary Classifier GAN (ACGAN), 2016: proposes a new method for improved training of GANs using class labels. Augustus Odena et al., Conditional Image Synthesis with Auxiliary Classifier GANs, 2016.
  49. ACGAN. How does it work? The discriminator is trained with multi-task learning through two heads: (1) an FC layer with sigmoid that predicts real or fake, and (2) an FC layer with softmax that predicts the class label. Training with real images: a real image of the digit 2 should be classified as real by (1) and as the one-hot vector representing 2 by (2). Training with fake images: for a fake image generated from the latent vector z and a one-hot class vector, (1) should say fake while (2) should still predict the class used to generate it.
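     A sketch of such a two-headed discriminator (the sizes assume flattened 28x28 inputs and 10 classes; this is an illustration, not the paper's architecture):

        import torch.nn as nn

        class ACDiscriminator(nn.Module):
            def __init__(self):
                super().__init__()
                self.body = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2))
                self.adv = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())  # (1) real or fake?
                self.cls = nn.Linear(128, 10)  # (2) class logits; softmax lives in the loss

            def forward(self, x):
                h = self.body(x)
                return self.adv(h), self.cls(h)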
  50. Section 04: Extensions of GAN
  51. CycleGAN: Unpaired Image-to-Image Translation. Presents a GAN model that transfers an image from a source domain A to a target domain B in the absence of paired examples. Jun-Yan Zhu et al., Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, 2017.
  52. CycleGAN. How does it work? The generator G_AB should generate a horse (a fake image in domain B) from the zebra (a real image in domain A) to deceive D_B, the discriminator for domain B, which judges real and fake domain-B images as "real or fake?".
  53. CycleGAN. How does it work? G_BA generates a reconstructed image of domain A from the fake domain-B image, and an L2 loss is applied between the original and the reconstruction. This forces the shape to be maintained when G_AB generates a horse image from the zebra.
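     A sketch of the resulting generator objective (G_AB, G_BA, D_B and the domain-A batch real_a are assumed to be defined elsewhere; the least squares adversarial term and the weight 10.0 are assumptions beyond what the slides show):

        fake_b = G_AB(real_a)                            # A -> B translation
        adv_loss = ((D_B(fake_b) - 1) ** 2).mean()       # fool the domain-B discriminator
        reconst_a = G_BA(fake_b)                         # B -> A reconstruction
        cycle_loss = ((reconst_a - real_a) ** 2).mean()  # the L2 loss from the slide
        g_loss = adv_loss + 10.0 * cycle_loss            # weighted cycle-consistency term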
  54. CycleGAN. Results. Jun-Yan Zhu et al., Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks, 2017.
  55. CycleGAN. Results (SVHN-to-MNIST and MNIST-to-SVHN): odd columns contain real images and even columns contain generated images. https://github.com/yunjey/mnist-svhn-transfer
  56. StackGAN: Text to Photo-realistic Image Synthesis. Han Zhang et al., StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks, 2016.
  57. StackGAN. Generating 128x128 from scratch: a single generator G maps z (100-dim) directly to a fake 128x128 image, and a discriminator for 128x128 images judges it against real 128x128 images as 'real' or 'fake'. Generating a 128x128 image from scratch does not guarantee a good result.
  58. StackGAN. Generating 128x128 from 64x64: G1 generates a 64x64 image from z (100-dim), judged 'real' or 'fake' by D1, the discriminator for 64x64 images; G2 then upscales the 64x64 image to 128x128 (easier than generating from scratch), judged by D2, the discriminator for 128x128 images.
  59. Latest Work. Visual Attribute Transfer: Jing Liao et al., Visual Attribute Transfer through Deep Image Analogy, 2017.
  60. Latest Work. User-Interactive Image Colorization: Richard Zhang et al., Real-Time User-Guided Image Colorization with Learned Deep Priors, 2017.
  61. Section 05: Future of GAN
  62. Convergence Measure. Boundary Equilibrium GAN (BEGAN): David Berthelot et al., BEGAN: Boundary Equilibrium Generative Adversarial Networks, 2017.
  63. Convergence Measure. Reconstruction loss: compare test images x with images G(z) reconstructed by the generator. Sitao Xiang, On the Effect of Batch Normalization and Weight Normalization in Generative Adversarial Networks, 2017.
  64. Better Upsampling. Deconvolution checkerboard artifacts: http://distill.pub/2016/deconv-checkerboard/
  65. Better Upsampling. Deconvolution vs. resize-convolution: http://distill.pub/2016/deconv-checkerboard/
  66. GAN in Supervised Learning. Machine Translation (Seq2Seq): a sequence-to-sequence model reads the source 'A B C D' and emits the target '<start> X Y Z <end>'. But should 'ABCD' really be translated to 'XYZ'? GANs can tackle this supervised learning problem.
  67. GAN in Supervised Learning. Machine Translation (GANs): the discriminator sees a sentence pair (A: English, B: Korean) and answers 'do A and B have the same meaning?'. Training with real sentence pairs, the discriminator should say 'yes'; training with fake pairs (A: English, B: fake Korean), it should say 'no'. The generator should generate a B that has the same meaning as A to deceive the discriminator. Lijun Wu et al., Adversarial Neural Machine Translation, 2017.
  68. Thank you
  69. Section 06: Appendix
  70. Theory in GAN. Objective function of GANs:

     $$\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

     where p_data is the real data distribution over high-dimensional vectors (e.g. 64x64) and p_z is a Gaussian distribution over low-dimensional vectors (e.g. 100).
  71. Theory in GAN. Fix G to make V a function of D alone, and get the optimal D where V(D) is maximal:

     $$D^*(x) = \arg\max_D V(D) = \arg\max_D \; \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

  72. Theory in GAN. Sample x from p_g, the model distribution over high-dimensional vectors (e.g. 64x64) induced by G, instead of sampling z (100-dim) from p_z:

     $$V(D) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{x \sim p_g(x)}[\log(1 - D(x))]$$

  73. Theory in GAN. By the definition of expectation, $\mathbb{E}_{x \sim p(x)}[f(x)] = \int_x p(x) f(x)\,dx$ (integrate over all possible x):

     $$V(D) = \int_x p_{data}(x) \log D(x)\,dx + \int_x p_g(x) \log(1 - D(x))\,dx$$

  74. Theory in GAN. By the basic property of integrals, combine the two integrals:

     $$V(D) = \int_x \big[ p_{data}(x) \log D(x) + p_g(x) \log(1 - D(x)) \big]\,dx$$

  75. Theory in GAN. Now we need to find the D(x) that makes the function inside the integral maximal (pointwise in x).
  76. Theoretical Results. Maximize the function inside the integral:

     $$D^*(x) = \arg\max_D \big[ p_{data}(x) \log D(x) + p_g(x) \log(1 - D(x)) \big]$$

  77. Theoretical Results. Substitute a = p_data(x), y = D(x), b = p_g(x), giving the function a log y + b log(1 - y) to maximize.

  78. Theoretical Results. Differentiate with respect to y using d/dx log f(x) = f'(x)/f(x); note that D(x) cannot affect p_data(x) or p_g(x):

     $$\frac{d}{dy}\big[a \log y + b \log(1 - y)\big] = \frac{a}{y} - \frac{b}{1 - y} = \frac{a - (a + b)y}{y(1 - y)}$$

  79. Theoretical Results. Find the point where the derivative is 0 (a local extreme): a - (a + b)y = 0, so the maximum is at y = a / (a + b). Note that this local maximum is the global maximum because there are no other local extremes.

  80. Theoretical Results. Substituting back a = p_data(x), y = D(x), b = p_g(x):

     $$D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_g(x)}$$
  81. Theory in GAN. Plug the optimal discriminator D* into the objective, so that $\min_G \max_D V(D,G) = \min_G V(D^*, G)$:

     $$\begin{aligned} V(D^*, G) &= \mathbb{E}_{x \sim p_{data}}[\log D^*(x)] + \mathbb{E}_{x \sim p_g}[\log(1 - D^*(x))] \\ &= \int_x p_{data}(x) \log \frac{p_{data}(x)}{p_{data}(x) + p_g(x)}\,dx + \int_x p_g(x) \log \frac{p_g(x)}{p_{data}(x) + p_g(x)}\,dx \\ &= -\log 4 + \int_x p_{data}(x) \log \frac{2\, p_{data}(x)}{p_{data}(x) + p_g(x)}\,dx + \int_x p_g(x) \log \frac{2\, p_g(x)}{p_{data}(x) + p_g(x)}\,dx \\ &= -\log 4 + KL\Big(p_{data} \,\Big\|\, \frac{p_{data} + p_g}{2}\Big) + KL\Big(p_g \,\Big\|\, \frac{p_{data} + p_g}{2}\Big) \\ &= -\log 4 + 2 \cdot JSD(p_{data} \| p_g) \end{aligned}$$

  82. Theory in GAN. G should minimize V(D*, G), and the only G-dependent term is the Jensen-Shannon divergence, so optimizing V(D,G) with the optimal D is the same as minimizing JSD(p_data || p_g).
