This document summarizes improved training methods for Wasserstein GANs (WGANs). It begins with an overview of GANs and their limitations, such as vanishing gradients. It then introduces WGANs, which replace the Jensen-Shannon divergence with the Wasserstein distance to provide meaningful gradients even when the data and model distributions have disjoint supports. However, the weight clipping WGANs use to enforce the Lipschitz constraint restricts the critic's function space and can cause optimization difficulties. The document proposes a gradient penalty in place of weight clipping, and additionally suggests sampling from an estimated optimal coupling, rather than from independently drawn real and generated samples, to better match the theory. Experimental results show that the gradient penalty improves the stability and performance of WGANs on image generation tasks.
4. Generative Adversarial Networks (GANs)
A generative model aims to learn a model distribution $p_\theta(x)$ that matches the target distribution $p(x)$.
Usually we assume $x \sim p_\theta(x)$ is given by a deterministic mapping $x = G_\theta(z)$ of a simple noise $z \sim p(z)$.
* Figure from the OpenAI blog.
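To make the mapping concrete, here is a minimal PyTorch sketch of a deterministic generator $x = G_\theta(z)$ applied to Gaussian noise; the architecture and layer sizes are illustrative assumptions, not taken from the slides.

```python
# Minimal sketch (architecture is an illustrative assumption):
# a deterministic generator x = G_theta(z) applied to simple noise z ~ N(0, I).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=128, x_dim=32 * 32 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, 512), nn.ReLU(),
            nn.Linear(512, x_dim), nn.Tanh(),  # pixel values scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

z = torch.randn(64, 128)     # z ~ p(z): simple, easy-to-sample noise
x_fake = Generator()(z)      # implicit samples from p_theta(x)
```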
5. Generative Adversarial Networks (GANs)
Q. How to train a generative model?
Explicit model: directly optimize the objective (e.g. MLE).
For example, PixelCNN maximizes
$$\log p_\theta(x) = \sum_{i=1}^{n} \log p_\theta(x_i \mid x_{1:i-1})$$
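As a sketch of the explicit (MLE) route, the chain-rule decomposition above can be evaluated by summing conditional log-probabilities; the `model` interface below is a hypothetical stand-in for a PixelCNN-style conditional.

```python
# Minimal sketch (the `model` interface is a hypothetical stand-in for a
# PixelCNN-style conditional): MLE maximizes sum_i log p(x_i | x_{1:i-1}).
import torch

def autoregressive_log_prob(model, x):
    """Return log p(x) = sum over i of log p(x_i | x_{1:i-1})."""
    log_prob = torch.tensor(0.0)
    for i in range(x.shape[0]):
        dist = model(x[:i])            # a torch.distributions.Distribution over x_i
        log_prob = log_prob + dist.log_prob(x[i])
    return log_prob                    # maximize this (or minimize its negative)
```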
6. Generative Adversarial Networks (GANs)
Q. How to train a generative model?
Explicit model: directly optimize the objective (e.g. MLE).
Implicit model¹: learning by comparison.
Idea of GAN:
- Train a discriminator D which compares $p(x)$ and $p_\theta(x)$
- Train a generator G using the signal from D
¹ We do not know $p_\theta(x)$; we can only sample from it.
7. Generative Adversarial Networks (GANs)
What is happening in GAN?
GAN plays a minimax game between G and D:
$$\min_G \max_D V(G, D) \quad \text{where} \quad V(G, D) = \mathbb{E}_{x \sim p(x)}[\log D(x)] + \mathbb{E}_{z \sim p(z)}[\log(1 - D(G(z)))]$$
For a given G, the optimal discriminator $D^*$ is
$$D^*(x) = \frac{p(x)}{p(x) + p_\theta(x)}$$
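A minibatch estimate of $V(G, D)$ might look like the following sketch; it assumes D outputs probabilities in (0, 1), e.g. via a final sigmoid.

```python
# Minimal sketch of the GAN value function
# V(G, D) = E_{x~p}[log D(x)] + E_{z~p(z)}[log(1 - D(G(z)))],
# estimated on a minibatch. Assumes D outputs probabilities in (0, 1).
import torch

def value_fn(D, G, x_real, z, eps=1e-8):
    d_real = D(x_real)                       # D(x) for x ~ p(x)
    d_fake = D(G(z))                         # D(G(z)) for z ~ p(z)
    return (torch.log(d_real + eps).mean()
            + torch.log(1.0 - d_fake + eps).mean())

# Alternate steps: D ascends V (gradient ascent), G descends V.
```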
8. Generative Adversarial Networks (GANs)
What is happening in GAN?
Putting $D^*$ into the objective, we have
$$C(G) = \max_D V(G, D) = \mathrm{KL}\Big(p \,\Big\|\, \frac{p + p_\theta}{2}\Big) + \mathrm{KL}\Big(p_\theta \,\Big\|\, \frac{p + p_\theta}{2}\Big) + \text{const} = 2 \cdot \mathrm{JSD}(p \,\|\, p_\theta) + \text{const}$$
Hence, GAN minimizes a lower bound of the JSD (in practice the inner maximization over D is only approximate, so the trained discriminator yields a lower bound).
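Spelling out the substitution step (a standard computation from Goodfellow et al., with const $= -\log 4$):

```latex
\begin{aligned}
C(G) &= \mathbb{E}_{x \sim p}\!\left[\log \frac{p(x)}{p(x) + p_\theta(x)}\right]
      + \mathbb{E}_{x \sim p_\theta}\!\left[\log \frac{p_\theta(x)}{p(x) + p_\theta(x)}\right] \\
     &= -\log 4
      + \mathrm{KL}\!\left(p \,\middle\|\, \frac{p + p_\theta}{2}\right)
      + \mathrm{KL}\!\left(p_\theta \,\middle\|\, \frac{p + p_\theta}{2}\right) \\
     &= -\log 4 + 2 \cdot \mathrm{JSD}(p \,\|\, p_\theta).
\end{aligned}
```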
9. Generative Adversarial Networks (GANs)
What is happening in GAN?
In practice, GAN suffers from vanishing gradients.
To avoid this problem, we minimize $-\log D(G(z))$ instead.
Putting in $D^*$, we have
$$C(G) = \mathrm{KL}(p_\theta \,\|\, p) - 2 \cdot \mathrm{JSD}(p \,\|\, p_\theta) + \text{const}$$
Hence, it minimizes a lower bound of the reverse KL.
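In code, the two generator losses differ only in the term being minimized; a minimal sketch:

```python
# Minimal sketch: saturating vs. non-saturating generator losses.
# Early in training D(G(z)) is near 0, so log(1 - D(G(z))) is flat
# (vanishing gradient), while -log D(G(z)) still gives a strong signal.
import torch

def g_loss_saturating(D, G, z, eps=1e-8):
    return torch.log(1.0 - D(G(z)) + eps).mean()   # minimize log(1 - D(G(z)))

def g_loss_nonsaturating(D, G, z, eps=1e-8):
    return -torch.log(D(G(z)) + eps).mean()        # minimize -log D(G(z))
```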
10. Wasserstein GANs (WGANs)
Why is GAN unstable?
The supports of $p(x)$ and $p_\theta(x)$ are disjoint¹ a.s. Then
$$\mathrm{JSD}(p \,\|\, p_\theta) = \log 2, \qquad \mathrm{KL}(p \,\|\, p_\theta) = \mathrm{KL}(p_\theta \,\|\, p) = +\infty$$
so the loss provides no useful information about how close $p_\theta$ is to $p$.
Solution:
1. Add noise to make the supports overlap
2. Use a better divergence
¹ Both lie on low-dimensional manifolds.
11. Wasserstein GANs (WGANs)
Toy example
Let $z \sim U[0, 1]$ and $x = (0, z) \sim p(x)$.
Let $G_\theta(z) = (\theta, z)$; hence $p_\theta(x) = p(x)$ iff $\theta = 0$.
* Figure from Lilian Weng's blog.
12. Wasserstein GANs (WGANs)
Toy example
Here, the Wasserstein distance is
$$W(p, p_\theta) = |\theta|$$
Unlike JSD and KL, it tells us how close $p_\theta$ is to $p$.
* Figure from the WGAN paper.
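Filling in the values for this toy example (the computations from the WGAN paper): for $\theta \neq 0$ the two distributions sit on disjoint parallel segments, so

```latex
\mathrm{JSD}(p \,\|\, p_\theta) =
  \begin{cases} \log 2 & \theta \neq 0 \\ 0 & \theta = 0, \end{cases}
\qquad
\mathrm{KL}(p \,\|\, p_\theta) = \mathrm{KL}(p_\theta \,\|\, p) =
  \begin{cases} +\infty & \theta \neq 0 \\ 0 & \theta = 0, \end{cases}
\qquad
W(p, p_\theta) = |\theta|.
```

Only $W$ varies continuously with $\theta$, so only it can guide gradient descent toward $\theta = 0$.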
13. Wasserstein GANs (WGANs)
Wasserstein distance
The Wasserstein-1 distance is
$$W(p, q) = \inf_{\gamma \in \Pi(p, q)} \mathbb{E}_{(x, y) \sim \gamma}[\|x - y\|]$$
Relation between divergences: convergence in $W$ is equivalent to convergence in distribution, which is weaker than convergence in JSD or TV (the two are topologically equivalent), which in turn is weaker than convergence in KL:
$$W \;(= \text{conv. in dist.}) \;<\; \mathrm{JSD} = \mathrm{TV} \;<\; \mathrm{KL}$$
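The primal form can be checked numerically in one dimension; a quick sketch using SciPy's empirical W1 (for 1-D samples, `scipy.stats.wasserstein_distance` computes it in closed form):

```python
# Quick numerical check of W1 in 1-D: shifting N(0,1) to N(2,1) moves
# every unit of mass by 2, so W1 should be close to 2.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
p_samples = rng.normal(0.0, 1.0, size=100_000)
q_samples = rng.normal(2.0, 1.0, size=100_000)
print(wasserstein_distance(p_samples, q_samples))  # ~2.0
```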
14. Wasserstein GANs (WGANs)
How to minimize the Wasserstein distance?
The Wasserstein-1 distance has a dual (Kantorovich-Rubinstein) form:
$$W(p, q) = \sup_{f \in \mathcal{F}} \mathbb{E}_{x \sim p(x)}[f(x)] - \mathbb{E}_{x \sim q(x)}[f(x)]$$
where $\mathcal{F}$ is the set of 1-Lipschitz functions.
Hence, the objective of WGAN is
$$\min_G \max_{D \in \mathcal{D}} \mathbb{E}_{x \sim p(x)}[D(x)] - \mathbb{E}_{z \sim p(z)}[D(G(z))]$$
where the critic D is constrained to be 1-Lipschitz. To enforce this Lipschitz constraint, WGAN uses weight clipping.
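A minimal sketch of one critic update under the original WGAN recipe (the clipping threshold c = 0.01 is the paper's default):

```python
# Minimal sketch of a WGAN critic step with weight clipping.
import torch

def critic_step(D, G, x_real, z, opt_D, c=0.01):
    # Ascend E_p[D(x)] - E_{p_theta}[D(G(z))] (minimize the negative).
    loss_D = -(D(x_real).mean() - D(G(z).detach()).mean())
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()
    with torch.no_grad():
        for p in D.parameters():
            p.clamp_(-c, c)      # crude Lipschitz enforcement via clipping
```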
17. Observation
Theorem 1
Let $(x, y) \sim \gamma^*$, where $\gamma^*$ is the optimal coupling, and let $f^*$ be the optimal dual function. Let $x_t = t y + (1 - t) x$ with $0 \le t \le 1$. Then
$$\mathbb{P}_{(x, y) \sim \gamma^*}\left[\nabla f^*(x_t) = \frac{y - x_t}{\|y - x_t\|}\right] = 1$$
Corollary 2
$f^*$ has gradient norm 1 a.e. on the line segments between coupled points $x$ and $y$.
18. Observation
Proof.
For $(x, y) \sim \gamma^*$, $f^*(y) - f^*(x) = \|y - x\|$ a.s.
Let $\psi(t) = f^*(x_t) - f^*(x)$. Then
$$|\psi(t) - \psi(t')| = |f^*(x_t) - f^*(x_{t'})| \le \|x_t - x_{t'}\| = \|x - y\| \, |t - t'|,$$
hence $\psi$ is $\|x - y\|$-Lipschitz. Using this,
$$\psi(1) - \psi(0) = (\psi(1) - \psi(t)) + (\psi(t) - \psi(0)) \le (1 - t)\|x - y\| + t\|x - y\| = \|x - y\|,$$
and equality holds throughout since
$$|\psi(1) - \psi(0)| = |f^*(y) - f^*(x)| = \|y - x\|.$$
19. Observation
Proof (continued).
Thus $\psi(t) - \psi(0) = t\|x - y\|$, and so $\psi(t) = t\|x - y\|$.
Hence $f^*(x_t) = f^*(x) + t\|y - x\|$.
Let $v = (y - x)/\|y - x\|$. Then
$$\frac{\partial}{\partial v} f^*(x_t) = \lim_{h \to 0} \frac{f^*(x_t + hv) - f^*(x_t)}{h} = 1.$$
Since $\|\nabla f^*(x_t)\| \le 1$ ($f^*$ is 1-Lipschitz) and the directional derivative along the unit vector $v$ attains 1, we conclude that $\nabla f^*(x_t) = v$. ∎
20. Gradient Penalty (WGAN-GP)
From this observation, we define the gradient penalty
$$\lambda \, \mathbb{E}_{\hat{x} \sim \hat{p}(\hat{x})}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]$$
where $\hat{x} \sim \hat{p}(\hat{x})$ is sampled uniformly on the line segment between $x \sim p$ and $y \sim p_\theta$.
No critic BN: the gradient norm is penalized independently per sample, so batch normalization (which couples samples within a batch) is not used in the critic.
Two-sided penalty: the authors also tried the one-sided penalty
$$\max(0, \|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2$$
but found empirically that it makes little difference.
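A minimal PyTorch sketch of the penalty term (λ = 10 is the paper's default); the per-sample gradient is obtained with `torch.autograd.grad`:

```python
# Minimal sketch of the WGAN-GP penalty:
# lambda * E_{x_hat}[(||grad_{x_hat} D(x_hat)||_2 - 1)^2],
# with x_hat sampled uniformly on the segment between real and fake samples.
import torch

def gradient_penalty(D, x_real, x_fake, lam=10.0):
    shape = [x_real.size(0)] + [1] * (x_real.dim() - 1)
    t = torch.rand(shape, device=x_real.device)       # one t per sample
    x_hat = (t * x_real + (1 - t) * x_fake).requires_grad_(True)
    # D(x_hat_i) depends only on x_hat_i, so summing gives per-sample grads.
    grads = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```

The `create_graph=True` flag keeps the penalty differentiable with respect to the critic's parameters, so it can be added to the critic loss and backpropagated.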
21. Possible Improvement
WGAN-GP does not sample $(x, y)$ from the optimal coupling $\gamma^*$;
instead, it samples $x \sim p(x)$ and $y \sim p_\theta(y)$ independently.
This does not match the theory (Theorem 1).
Idea: $(x, G(E(x)))$ would be a better approximation of $\gamma^*$, where
E is an additionally trained encoder $x \to z$, and
$G(E(x))$ is the projection of $x$ onto the manifold of G.
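A hedged sketch of this proposal (the slide's own idea, not part of the published WGAN-GP algorithm): pair each real $x$ with its reconstruction $G(E(x))$ and take the interpolates on that segment instead.

```python
# Sketch of the slide's proposed coupling (not the published algorithm):
# approximate gamma* with pairs (x, G(E(x))), E being a separately
# trained encoder, and draw x_hat on those segments for the GP term.
import torch

def coupled_interpolates(x_real, G, E):
    x_proj = G(E(x_real))        # projection of x onto the manifold of G
    shape = [x_real.size(0)] + [1] * (x_real.dim() - 1)
    t = torch.rand(shape, device=x_real.device)
    return t * x_real + (1 - t) * x_proj
```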
22. Experiments
WGAN-GP improves training stability:
number of successes¹ for GAN vs. WGAN-GP.
¹ A run counts as a success if its Inception score exceeds a threshold. Experiments on 32×32 ImageNet.
25. References
Goodfellow et al. Generative Adversarial Nets. NIPS 2014.
Arjovsky et al. Towards Principled Methods for Training GANs. ICLR 2017.
Arjovsky et al. Wasserstein GAN. ICML 2017.
Gulrajani et al. Improved Training of Wasserstein GANs. NIPS 2017.