! Ian J. Goodfellow, et al., “Generative Adversarial Nets”. NIPS2014
! Alec Radford, et al., “Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks”. ICLR2016
! Naveen Kodali, et al., “On Convergence and Stability of GANs”. arXiv:1705.07215
! Xudong Mao, et al., “Least Squares Generative Adversarial Networks”. ICCV2017
! Takeru Miyato, et al., “Spectral Normalization for Generative Adversarial Networks”. ICLR2018
! Martin Heusel, et al., “GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium”. NIPS2017
! Lars Mescheder, et al., “Which Training Methods for GANs do actually Converge?”. ICML2018
! Alexia Jolicoeur-Martineau, “The Relativistic Discriminator: A Key Element Missing from Standard GAN”. ICLR2019
! Martin Arjovsky, et al., “Wasserstein GAN”. arXiv:1701.07875
! Ishaan Gulrajani, et al., “Improved Training of Wasserstein GANs”. NIPS2017
! Akash Srivastava, et al., “VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning”. NIPS2017
! Chang Xiao, et al., “BourGAN: Generative Networks with Metric Embeddings”. NIPS2018
! Tong Che, et al., “Mode Regularized Generative Adversarial Networks”. ICLR2017
! Luke Metz, et al., “Unrolled Generative Adversarial Networks”. ICLR2017
! Qi Mao, et al., “Mode Seeking Generative Adversarial Networks for Diverse Image Synthesis”. CVPR2019
! Han Zhang, et al., “StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks”. ICCV2017
! Han Zhang, et al., “StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks”. TPAMI2018
! Tero Karras, et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”. ICLR2018
! Andrew Brock, et al., “Large Scale GAN Training for High Fidelity Natural Image Synthesis”. ICLR2019
! Tero Karras, et al., “A Style-Based Generator Architecture for Generative Adversarial Networks”. CVPR2019
! Christian Ledig, et al., “Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network”. CVPR2017
! Xintao Wang, et al., “ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks”. ECCV2018
! Mengyu Chu, et al., “Temporally Coherent GANs for Video Super-Resolution (TecoGAN)”. arXiv:1811.09393
! Phillip Isola, et al., “Image-to-Image Translation with Conditional Adversarial Networks”. CVPR2017
! Jun-Yan Zhu, et al., “Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks”. ICCV2017
! Yunjey Choi, et al., “StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation”. CVPR2018
! Ming-Yu Liu, et al., “Few-Shot Unsupervised Image-to-Image Translation”. ICCV2019
! Sangwoo Mo, et al., “InstaGAN: Instance-aware Image-to-Image Translation”. ICLR2019
! Junho Kim, et al., “U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation”. arXiv:1907.10830
! Eric Tzeng, et al., “Adversarial Discriminative Domain Adaptation”. CVPR2017
! Issam Laradji, et al., “M-ADDA: Unsupervised Domain Adaptation with Deep Metric Learning”. ICML2018
! Judy Hoffman, et al., “CyCADA: Cycle-Consistent Adversarial Domain Adaptation”. ICML2018
! Ming-Yu Liu, et al., “Coupled Generative Adversarial Networks”. NIPS2016
! Carl Vondrick, et al., “Generating Videos with Scene Dynamics”. NIPS2016
! Masaki Saito, et al., “Temporal Generative Adversarial Nets with Singular Value Clipping”. ICCV2017
! Sergey Tulyakov, et al., “MoCoGAN: Decomposing Motion and Content for Video Generation”. CVPR2018
! Katsunori Ohnishi, et al., “Hierarchical Video Generation from Orthogonal Information: Optical Flow and Texture”. AAAI2018
! Aidan Clark, et al., “Adversarial Video Generation on Complex Datasets”. arXiv:1907.06571
! Jiajun Wu, et al., “Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling”. NIPS2016
! Ruihui Li, et al., “PU-GAN: a Point Cloud Upsampling Adversarial Network”. ICCV2019
! Shiyang Cheng, et al., “MeshGAN: Non-linear 3D Morphable Models of Faces”. arXiv:1903.10384
! Thomas Schlegl, et al., “Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery”. IPMI2017
! Houssam Zenati, et al., “Efficient GAN-Based Anomaly Detection”. ICLRW2018
! Dan Li, et al., “Anomaly Detection with Generative Adversarial Networks for Multivariate Time Series”. arXiv:1809.04758
! Pramuditha Perera, et al., “OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations”. CVPR2019
! Jesse Engel, et al., “GANSynth: Adversarial Neural Audio Synthesis”. ICLR2019
! Chris Donahue, et al., “Adversarial Audio Synthesis”. ICLR2019
! Andrés Marafioti, et al., “Adversarial Generation of Time-Frequency Features with application in audio synthesis”. ICML2019
! Santiago Pascual, et al., “SEGAN: Speech Enhancement Generative Adversarial Network”. INTERSPEECH2017
! Kou Tanaka, et al., “WaveCycleGAN: Synthetic-to-natural speech waveform conversion using cycle-consistent adversarial networks”. SLT2018
! Kou Tanaka, et al., “WaveCycleGAN2: Time-domain Neural Post-filter for Speech Waveform Generation”. arXiv:1904.02892
! Takuhiro Kaneko, et al., “CycleGAN-VC: Non-parallel Voice Conversion Using Cycle-Consistent Adversarial Networks”. EUSIPCO2018
! Takuhiro Kaneko, et al., “CycleGAN-VC2: Improved CycleGAN-based Non-parallel Voice Conversion”. ICASSP2019
! Hirokazu Kameoka, et al., “StarGAN-VC: Non-parallel many-to-many voice conversion with star generative adversarial networks”. arXiv:1806.02169
! Takuhiro Kaneko, et al., “StarGAN-VC2: Rethinking Conditional Methods for StarGAN-Based Voice Conversion”. INTERSPEECH2019
! “AdaGAN: Adaptive GAN for Many-to-Many Non-Parallel Voice Conversion”. ICLR2020 under review
! Yuki Saito, et al., “Statistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks”. IEEE/ACM Transactions on Audio, Speech, and Language Processing 2018
! Mikołaj Bińkowski, et al., “High Fidelity Speech Synthesis with Adversarial Networks”. arXiv:1909.11646
! Ju-chieh Chou, et al., “One-shot Voice Conversion by Separating Speaker and Content Representations with Instance Normalization”. INTERSPEECH2019