This document surveys techniques for training generative adversarial networks (GANs). It begins with the original GAN framework and its convergence problems, then discusses improvements such as Wasserstein GANs (WGANs), which optimize the Wasserstein distance instead of the Jensen-Shannon divergence. WGANs require the critic (discriminator) to be Lipschitz continuous, which can be enforced through weight clipping or spectral normalization. Gradient-penalty methods such as WGAN-GP further regularize the critic by penalizing violations of the Lipschitz constraint. The document argues that regularizing the discriminator globally (e.g., spectral normalization) or locally (e.g., gradient penalties) stabilizes GAN training by providing smoother gradients to the generator. Open questions remain around the
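To make the WGAN-GP idea concrete, here is a minimal NumPy sketch of the gradient penalty: sample points on straight lines between real and fake batches, and penalize the squared deviation of the critic's input-gradient norm from 1. To keep the sketch free of autodiff, it assumes a hypothetical linear critic D(x) = w·x, whose input gradient is simply w everywhere; a real implementation would compute the gradient of a neural critic with an autodiff framework. The function name `wgan_gp_penalty` and the coefficient `lam` are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy linear critic D(x) = w . x, so grad_x D(x) = w for every x.
w = np.array([3.0, 4.0])


def wgan_gp_penalty(real, fake, w, lam=10.0):
    """Two-sided WGAN-GP penalty for a linear critic with weight vector w."""
    # Sample points uniformly on the segments between real and fake samples.
    eps = rng.uniform(size=(real.shape[0], 1))
    interp = eps * real + (1.0 - eps) * fake
    # For the linear critic, the input-gradient at every interpolate is w.
    grads = np.tile(w, (interp.shape[0], 1))
    norms = np.linalg.norm(grads, axis=1)
    # Penalize deviation of the gradient norm from 1 (the Lipschitz target).
    return lam * np.mean((norms - 1.0) ** 2)


real = rng.normal(size=(4, 2))
fake = rng.normal(size=(4, 2))
print(wgan_gp_penalty(real, fake, w))  # ||w|| = 5, so penalty = 10 * (5 - 1)^2 = 160.0
```

Because the critic here has gradient norm 5 everywhere, the penalty pushes its weights toward unit norm, which is exactly the 1-Lipschitz behavior WGAN-GP encourages locally along real-fake interpolation lines.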
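Spectral normalization, the global alternative mentioned above, can be sketched in a few lines of NumPy: estimate a layer's largest singular value with power iteration and divide the weight matrix by it, so the linear map becomes (approximately) 1-Lipschitz. The function name `spectral_normalize` and the iteration count are illustrative choices, not from the source; practical implementations (e.g., in deep learning frameworks) reuse the power-iteration vector across training steps instead of iterating to convergence.

```python
import numpy as np

rng = np.random.default_rng(0)


def spectral_normalize(W, n_iter=100):
    """Divide W by a power-iteration estimate of its largest singular value."""
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimated spectral norm of W
    return W / sigma


W = rng.normal(size=(8, 8))
W_sn = spectral_normalize(W)
# The normalized matrix has spectral norm close to 1, so the layer is ~1-Lipschitz.
print(np.linalg.norm(W_sn, 2))
```

Applying this to every weight matrix bounds the Lipschitz constant of the whole critic by the product of the per-layer constants, which is the global regularization the document contrasts with the local gradient penalty.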