This document discusses the connections between generative adversarial networks (GANs) and energy-based models (EBMs). It shows that GAN training can be interpreted as approximate maximum likelihood training of an EBM, in which intractable sampling from the model distribution is replaced by sampling from a learned generator.
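To make this concrete, recall the standard maximum likelihood gradient for an EBM $p_\theta(x) \propto e^{-E_\theta(x)}$ (the notation here is conventional, not reproduced from this document); the GAN view swaps the intractable model expectation for one under the generator distribution $q_\phi$:

$$
\nabla_\theta\, \mathbb{E}_{x \sim p_{\text{data}}}\!\left[-\log p_\theta(x)\right]
= \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\nabla_\theta E_\theta(x)\right]
- \mathbb{E}_{x \sim p_\theta}\!\left[\nabla_\theta E_\theta(x)\right]
\;\approx\;
\mathbb{E}_{x \sim p_{\text{data}}}\!\left[\nabla_\theta E_\theta(x)\right]
- \mathbb{E}_{x \sim q_\phi}\!\left[\nabla_\theta E_\theta(x)\right].
$$

Specifically: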
1. GANs train the discriminator to estimate the energy function of an EBM, while the generator is trained to minimize the energy of its own samples (see the sketch after this list).
2. Conversely, EBM training can be approximated by alternately updating a generator and sampling from it, in a manner similar to contrastive divergence.
3. This perspective unifies GANs and EBMs and suggests ways to combine their training procedures to leverage the advantages of each.
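A minimal PyTorch-style sketch of this alternating procedure follows. The network architectures, names (`energy_net`, `generator`), dimensions, and learning rates are hypothetical stand-ins for illustration, not code from the document:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # hypothetical sizes

# Discriminator as an energy function: outputs a scalar E(x); lower = more data-like.
energy_net = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1))
# Generator: a tractable sampler standing in for the EBM's model distribution.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

opt_e = torch.optim.Adam(energy_net.parameters(), lr=1e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)

def train_step(real_x):
    # Energy (discriminator) update, mirroring the approximate MLE gradient above:
    # push energy down on data, up on generator samples (the negative phase).
    z = torch.randn(real_x.size(0), latent_dim)
    fake_x = generator(z).detach()  # no generator gradient during this phase
    e_loss = energy_net(real_x).mean() - energy_net(fake_x).mean()
    opt_e.zero_grad()
    e_loss.backward()
    opt_e.step()

    # Generator update: minimize the energy of its own samples, pulling the
    # generator distribution toward low-energy (high-density) regions.
    z = torch.randn(real_x.size(0), latent_dim)
    g_loss = energy_net(generator(z)).mean()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return e_loss.item(), g_loss.item()

# Example: one step on a random batch (placeholder for real data).
e_l, g_l = train_step(torch.randn(32, data_dim))
```

Note that this raw two-term energy loss is unregularized; in practice a stabilizer such as a gradient penalty or spectral normalization would typically be added to keep the energy function from diverging.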