This document summarizes two recent papers that utilize tag information with GANs: StarGAN and TD-GAN.

StarGAN proposes multi-domain image-to-image translation, treating images with different tags as belonging to different domains, and aims to translate images between any pair of those domains with a single model.

TD-GAN assumes that an image and its tags record the same object from two different perspectives. It extracts disentangled representations from both images and tags and enforces consistency between them by integrating a tag mapping network, and demonstrates applications in tasks such as novel view synthesis and illumination transformation.

The document also discusses conclusions and considerations around the papers' key assumptions, the different ways tag information can be utilized, and the trade-offs involved.
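To make the "single model, multiple domains" idea concrete: StarGAN conditions one generator on the target domain by spatially tiling a one-hot domain label and concatenating it with the input image as extra channels. A minimal NumPy sketch of that input construction follows; the function name and shapes are illustrative, not from either paper.

```python
import numpy as np

def tile_domain_label(image, target_domain, num_domains):
    """Concatenate a spatially tiled one-hot domain label onto an image.

    image: array of shape (C, H, W)
    target_domain: int index of the domain to translate into
    Returns an array of shape (C + num_domains, H, W), the generator input.
    """
    C, H, W = image.shape
    onehot = np.zeros(num_domains)
    onehot[target_domain] = 1.0
    # Broadcast the label vector to one constant-valued map per domain.
    label_maps = np.broadcast_to(onehot[:, None, None], (num_domains, H, W))
    return np.concatenate([image, label_maps], axis=0)

# Example: a 3-channel 8x8 image, translated toward domain 2 of 5.
x = np.random.rand(3, 8, 8)
g_in = tile_domain_label(x, target_domain=2, num_domains=5)
print(g_in.shape)  # (8, 8, 8)
```

Because the domain enters only through these label channels, the same generator weights serve every source/target domain pair, which is what lets StarGAN avoid training a separate model per pair.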