Abstract
Obake-GAN (Perturbative GAN), which replaces the convolution layers of existing convolutional GANs (DCGAN, WGAN-GP, BigGAN, etc.) with perturbation layers that add a fixed noise mask, is proposed. Compared with the convolutional GANs, Obake-GAN has fewer trainable parameters, converges faster during training, achieves a higher Inception Score on generated images, and reduces the overall training cost. Algorithmic generation of the noise masks is also proposed, with which both training and generation can be boosted by hardware acceleration. Obake-GAN is evaluated on standard datasets (CIFAR10, LSUN, ImageNet), both when a perturbation layer is adopted only in the Generator and when it is introduced in both the Generator and the Discriminator.
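The perturbation layer described in the abstract follows the idea of Perturbative Neural Networks [6]: instead of learned convolution kernels, each input channel is perturbed by fixed (non-trainable) additive noise masks, passed through a nonlinearity, and the results are combined by a learned 1x1 convolution. A minimal NumPy sketch of this forward pass, assuming the PNN-style formulation (the function name, shapes, and ReLU choice are illustrative, not the thesis's exact implementation):

```python
import numpy as np

def perturbation_layer(x, masks, weights):
    """Sketch of a PNN-style perturbation layer forward pass.

    x:       (C_in, H, W)  input feature maps
    masks:   (M, H, W)     fixed, non-trainable noise masks
    weights: (C_out, C_in * M)  learned 1x1-convolution weights
    """
    C_in, H, W = x.shape
    M = masks.shape[0]
    # Add each fixed noise mask to each input channel, then apply ReLU.
    perturbed = np.maximum(x[:, None] + masks[None, :], 0.0)  # (C_in, M, H, W)
    # A 1x1 convolution is a per-pixel linear combination over channels.
    flat = perturbed.reshape(C_in * M, H * W)
    out = weights @ flat
    return out.reshape(-1, H, W)  # (C_out, H, W)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 8))
masks = rng.standard_normal((3, 8, 8))
w = rng.standard_normal((4, 2 * 3))
y = perturbation_layer(x, masks, w)
```

Because only the 1x1 combination weights are trained, the parameter count drops sharply compared with a k×k convolution over the same channels, which is the source of the reductions reported below.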
Presentation slides for the master's thesis "Obake-GAN: GAN with Perturbation Layers"
Perturbation layers are introduced in place of the GAN's convolution layers, yielding:
・52% fewer trainable parameters in the Generator
・87% fewer trainable parameters in the Discriminator
・45% improvement in Inception Score on ImageNet
・Faster convergence of training
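The abstract also proposes generating the noise masks algorithmically rather than storing them, citing Feistel-inspired scrambling of linear congruential generators [8] as a hardware-friendly source of pseudorandom values. A minimal sketch of deterministic mask generation with a plain LCG (the constants are the common Numerical Recipes choice and the zero-centering is an assumption; the thesis's exact generator is not specified here):

```python
import numpy as np

def lcg_masks(n_masks, h, w, seed=1, a=1664525, c=1013904223, m=2**32):
    """Generate n_masks fixed noise masks of shape (h, w) from an LCG.

    The same seed always reproduces the same masks, so masks need not be
    stored: both training and generation can regenerate them on the fly,
    which is what makes hardware acceleration attractive.
    """
    state = seed
    vals = np.empty(n_masks * h * w)
    for i in range(vals.size):
        state = (a * state + c) % m       # LCG step
        vals[i] = state / m - 0.5         # roughly zero-centered uniform noise
    return vals.reshape(n_masks, h, w)
```

A Feistel-style scrambling stage, as in [8], would further decorrelate successive LCG outputs; the sketch above shows only the reproducibility property.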
[1] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2672-2680).
[2] Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196.
[3] Xu, T., Zhang, P., Huang, Q., Zhang, H., Gan, Z., Huang, X., & He, X. (2018). AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1316-1324).
[4] Brock, A., Donahue, J., & Simonyan, K. (2018). Large scale GAN training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096.
[5] Kishi, Y., Ikegami, T., O'uchi, S.-i., Takano, R., Nogami, W., & Kudoh, T. (2018). Quantization optimization for training of generative adversarial network. In 2018 Summer United Workshops on Parallel, Distributed and Cooperative Processing (SWoPP), 2018-ARC-23213, July 2018 (pp. 1-6).
[6] Juefei-Xu, F., Naresh Boddeti, V., & Savvides, M. (2018). Perturbative neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3310-3318).
[7] Farhadi, A. "Image Sampling." Retrieved January 27, 2019, from https://slideplayer.com/slide/6019094/
[8] Aljahdali, A., & Mascagni, M. (2017). Feistel-inspired scrambling improves the quality of linear congruential generators. Monte Carlo Methods and Applications, 23(2), 89-99.
[9] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C. (2017). Improved training of Wasserstein GANs. In Advances in Neural Information Processing Systems (pp. 5767-5777).