Generative adversarial networks (GANs) are a class of unsupervised machine learning models used to generate new data with the same statistics as the training set. A GAN pits two neural networks, a generator and a discriminator, against each other: the generator tries to produce fake images that look real, while the discriminator tries to tell real images apart from fake ones. This adversarial process pushes the generator toward highly realistic images. The paper proposes GANs and introduces their objective function and training procedure for generating samples that match the training data distribution.
Generative adversarial networks
1. Generative Adversarial Networks
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza
Machine Learning Group, Université de Montréal
2014, cited by 10,479
Aug 01, 2019
SMC AI Research Center
Kyuri Kim
PR-001: Generative adversarial nets by Jaejun Yoo (2017/4/13)
2. Introduction
1.1 Supervised vs Unsupervised Learning
- Supervised Learning
Data: (x, y), where x is the data and y is the label
Learn a function to map x → y
- Unsupervised Learning
Data: (x), just data, no label
Examples: Variational Auto Encoder (VAE), GAN, Clustering, Dimension Reduction, etc.
[Figure: VAE, with encoder q_φ(z|x) and decoder g_θ(x|z)]
3. Introduction
1.1 Supervised vs Unsupervised Learning (continued)
[Figure: VAE, where the encoder outputs a mean and the latent code is drawn by 'sampling']
4. Introduction
1.1 Supervised vs Unsupervised Learning (continued)
[Figure from the paper "NIPS 2016 Tutorial: Generative Adversarial Networks"]
5. Introduction
1.2 Generative Adversarial Networks
- Generative Model, trained Adversarially
Generated image by a neural network (figure adopted from the BEGAN paper, released 31 Mar 2017, Google).
[Figure: counterfeiter analogy. The Generator prints fake money; the Discriminator judges "Real or Fake?" against real money.]
8. Generative Adversarial Networks
2.1 Intuition of GAN
D(x) is the probability that x came from the real data (0~1).
- For a real image x fed to the Discriminator D, D(x) should be high.
- For a fake image G(z) produced by the Generator G from a latent code z, D(G(z)) should be low.
12. Generative Adversarial Networks
2.2 Objective Function of GAN
Objective Function.
[Diagram: x → D → D(x); z → G → G(z) → D → D(G(z))]
13. Generative Adversarial Networks
2.2 Objective Function of GAN
[Same diagram as slide 12, shown with the objective function]
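The objective function the diagram refers to is the two-player minimax game from the original Goodfellow et al. (2014) paper: D is trained to assign the correct label to both real and generated samples, while G is trained to fool D.

```latex
\min_G \max_D V(D, G) =
\mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

The first term rewards D for scoring real samples high; the second rewards D for scoring fakes low, and G minimizes that term by making D(G(z)) large.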
14. Implementation
3.1 GAN Tensorflow code
1. Requirements and Dataset

mnist = input_data.read_data_sets("./mnist/data/", one_hot=True)

total_epoch = 100
batch_size = 50
learning_rate = 0.0001
n_hidden = 256
n_input = 28 * 28
n_noise = 128

X = tf.placeholder(tf.float32, [None, n_input])
Z = tf.placeholder(tf.float32, [None, n_noise])

2. Generator Network

G_W1 = tf.Variable(tf.random_normal([n_noise, n_hidden], stddev=0.01))
G_b1 = tf.Variable(tf.zeros([n_hidden]))
G_W2 = tf.Variable(tf.random_normal([n_hidden, n_input], stddev=0.01))
G_b2 = tf.Variable(tf.zeros([n_input]))

def generator(noise_z):
    hidden = tf.nn.relu(tf.matmul(noise_z, G_W1) + G_b1)
    output = tf.nn.sigmoid(tf.matmul(hidden, G_W2) + G_b2)
    return output

[Diagram: training with real image x → D → D(x); training with fake image z → G → G(z) → D → D(G(z)). Layer sizes: generator 128 → 256 → 784, discriminator 784 → 256 → 1.]
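The generator above is a two-layer MLP mapping a 128-dimensional noise vector to a 784-dimensional (28×28) image. As a shape check, the same forward pass can be sketched in plain NumPy; the weights here are random stand-ins for illustration, not trained values:

```python
import numpy as np

n_noise, n_hidden, n_input = 128, 256, 28 * 28

# Random stand-ins for G_W1 / G_b1 / G_W2 / G_b2
G_W1 = np.random.normal(0, 0.01, (n_noise, n_hidden))
G_b1 = np.zeros(n_hidden)
G_W2 = np.random.normal(0, 0.01, (n_hidden, n_input))
G_b2 = np.zeros(n_input)

def generator(noise_z):
    hidden = np.maximum(0, noise_z @ G_W1 + G_b1)        # ReLU
    return 1 / (1 + np.exp(-(hidden @ G_W2 + G_b2)))     # sigmoid

z = np.random.uniform(-1, 1, (50, n_noise))   # a batch of 50 latent codes
fake = generator(z)
assert fake.shape == (50, n_input)            # 50 generated 28x28 images, flattened
assert fake.min() >= 0 and fake.max() <= 1    # sigmoid keeps pixels in [0, 1]
```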
15. Implementation
3.1 GAN Tensorflow code
3. Discriminator Network

# First-layer weights (not shown on this slide, defined analogously to the generator's)
D_W1 = tf.Variable(tf.random_normal([n_input, n_hidden], stddev=0.01))
D_b1 = tf.Variable(tf.zeros([n_hidden]))
D_W2 = tf.Variable(tf.random_normal([n_hidden, 1], stddev=0.01))
D_b2 = tf.Variable(tf.zeros([1]))

def discriminator(inputs):
    hidden = tf.nn.relu(tf.matmul(inputs, D_W1) + D_b1)
    output = tf.nn.sigmoid(tf.matmul(hidden, D_W2) + D_b2)
    return output

4. Generate Fake Image, Loss & Optimizer

G = generator(Z)
D_gene = discriminator(G)
D_real = discriminator(X)

loss_D = tf.reduce_mean(tf.log(D_real) + tf.log(1 - D_gene))
loss_G = tf.reduce_mean(tf.log(D_gene))

# Variable lists (not shown on this slide)
D_var_list = [D_W1, D_b1, D_W2, D_b2]
G_var_list = [G_W1, G_b1, G_W2, G_b2]

train_D = tf.train.AdamOptimizer(learning_rate).minimize(-loss_D, var_list=D_var_list)
train_G = tf.train.AdamOptimizer(learning_rate).minimize(-loss_G, var_list=G_var_list)

[Diagram: training with real image x → D → D(x); training with fake image z → G → G(z) → D → D(G(z))]

CE(pred, y) = −y·log(pred) − (1−y)·log(1−pred)
D_loss = CE(D(x), 1) + CE(D(G(z)), 0) = −log(D(x)) − log(1 − D(G(z)))
G_loss = CE(D(G(z)), 1) = −log(D(G(z)))
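The cross-entropy identities above can be checked numerically; the discriminator outputs used here (0.9 for a real image, 0.2 for a fake) are hypothetical example values:

```python
import math

def ce(pred, y):
    # Binary cross-entropy for a single prediction pred against label y
    return -y * math.log(pred) - (1 - y) * math.log(1 - pred)

d_x, d_gz = 0.9, 0.2  # hypothetical D(x) and D(G(z))

d_loss = ce(d_x, 1) + ce(d_gz, 0)   # discriminator loss
g_loss = ce(d_gz, 1)                # generator loss (non-saturating form)

# Matches the closed forms on the slide
assert abs(d_loss - (-math.log(d_x) - math.log(1 - d_gz))) < 1e-12
assert abs(g_loss - (-math.log(d_gz))) < 1e-12
```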
16. Implementation
3.1 GAN Tensorflow code
5. Training & Testing

sess = tf.Session()
sess.run(tf.global_variables_initializer())

total_batch = int(mnist.train.num_examples / batch_size)
loss_val_D, loss_val_G = 0, 0

for epoch in range(total_epoch):
    for i in range(total_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        noise = get_noise(batch_size, n_noise)

        # Train the discriminator and the generator networks in turn
        _, loss_val_D = sess.run([train_D, loss_D], feed_dict={X: batch_xs, Z: noise})
        _, loss_val_G = sess.run([train_G, loss_G], feed_dict={Z: noise})

    print('Epoch:', '%04d' % epoch,
          'D loss: {:.4}'.format(loss_val_D),
          'G loss: {:.4}'.format(loss_val_G))

noise = get_noise(sample_size, n_noise)
samples = sess.run(G, feed_dict={Z: noise})

[Diagram: D(x) gets closer to 1; D(G(z)) gets closer to 0.]
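The helper get_noise is called in the loop above but never defined in the slides. A minimal implementation consistent with how it is called might look like this; the uniform prior over [-1, 1] is an assumption (a standard normal prior would work as well):

```python
import numpy as np

def get_noise(batch_size, n_noise):
    # Sample a batch of latent codes z from a uniform prior over [-1, 1].
    # NOTE: the [-1, 1] uniform prior is an assumption; the slides do not show it.
    return np.random.uniform(-1.0, 1.0, size=(batch_size, n_noise))
```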
17. Implementation
3.1 GAN Tensorflow code
5. Training & Testing (same code as the previous slide)
18. Implementation
3.1 GAN Tensorflow code
6. Generated Image Samples saved
[Samples of generated images after training]
19. Experiments
4.1 Non-Saturating Game
With the original generator loss y = log(1 − x), the gradient is relatively small at D(G(z)) = 0, exactly where an untrained generator starts out. With the non-saturating loss y = log(x), the gradient is very large at D(G(z)) = 0, giving a strong training signal early on.
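The gradient gap can be seen directly from the derivatives: d/dx log(1 − x) = −1/(1 − x), while d/dx log(x) = 1/x. A quick numeric sketch at an early-training value D(G(z)) = 0.01 (a hypothetical example value):

```python
def grad_saturating(d_gz):
    # d/dx of log(1 - x): gradient of the original minimax generator objective
    return -1.0 / (1.0 - d_gz)

def grad_non_saturating(d_gz):
    # d/dx of log(x): gradient of the non-saturating generator objective
    return 1.0 / d_gz

d_gz = 0.01  # early in training, the discriminator easily rejects fakes

assert abs(grad_saturating(d_gz)) < 1.1      # ~1.01: weak learning signal
assert abs(grad_non_saturating(d_gz)) > 99   # 100: strong learning signal
```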
20. Result & Conclusion
Yellow boxes mark the real data samples that are the nearest matches to the last column of fake images, showing that the generator did not merely memorize training examples.