1. University Institute of Science
Master of Science
(Data Science)
Presented by:
REETU
UID: 21MSM3053
TOPIC: Generative models in Bayesian theory
Submitted to:
Er. Gagninder Kaur
2. Topics to be covered
Generative Modelling
What are GANs
Applications of GAN
Difference between generator and discriminator
Generative models in Bayesian theory
3. Introduction
Generative modelling
Generative modelling is an unsupervised machine learning
approach. It uses artificial intelligence, statistics and
probability to produce a representation or abstraction of
observed phenomena or target variables that can be
calculated from observations.
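As a concrete illustration (not from the slides), the sketch below fits the simplest possible generative model, a one-dimensional Gaussian, to a handful of observations and then generates new samples from it; the data values and parameter choices are assumptions made purely for illustration.

```python
import numpy as np

# Toy "observed phenomena": a few one-dimensional measurements (made-up values).
observed = np.array([4.9, 5.1, 5.3, 4.8, 5.0, 5.2, 4.7, 5.4])

# Build a simple generative representation: estimate the parameters of a
# Gaussian that could have produced these observations.
mu, sigma = observed.mean(), observed.std(ddof=1)

# The fitted model can now generate new data that resembles the observations.
rng = np.random.default_rng(seed=0)
new_samples = rng.normal(loc=mu, scale=sigma, size=5)

print(f"estimated mu = {mu:.2f}, sigma = {sigma:.2f}")
print("generated samples:", np.round(new_samples, 2))
```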
4. What are GANs?
GAN stands for Generative Adversarial Network.
GANs are just one kind of generative model.
In a GAN, there is a generator and a discriminator.
The Generator generates fake samples of data
(an image, audio, etc.) and tries to fool the
Discriminator. The Discriminator, on the other
hand, tries to distinguish between real and
fake samples. The Generator and the
Discriminator are both neural networks, and they
compete with each other during the training phase.
These steps are repeated many times, and with each
repetition the Generator and the Discriminator get
better at their respective jobs.
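A minimal sketch of this alternating training game (not from the slides), written in PyTorch; the network sizes, learning rates and the toy "real" data distribution are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal GAN training loop on two-dimensional toy data. The architecture,
# hyperparameters and "real" data distribution are illustrative assumptions.
latent_dim, data_dim, batch = 8, 2, 64

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 3.0    # stand-in for real samples
    fake = generator(torch.randn(batch, latent_dim))   # generator's fake samples

    # Discriminator step (supervised): label real samples 1 and fake samples 0.
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: adjust G so the discriminator classifies its fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```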
5. Applications of GANs
• Generating new data from available data, most often images, but not limited to images.
• Music generation and voice cloning.
• Text-to-image generation.
• Creation of anime characters for game development and animation production.
• Image-to-image translation.
• Super-resolution: converting low-resolution images to high-resolution ones.
6. Difference between generator and discriminator
The generator and the discriminator are the two components of a GAN.
The generator follows an unsupervised approach.
The discriminator follows a supervised approach.
[Diagram: the Generator G takes an input X and produces fake data x'; the fake data and the real data are fed to the Discriminator D, which outputs true/false, and the error is backpropagated to train both networks.]
7. Generative models in Bayesian theory
Bayesian probability theory provides a mathematical framework for
performing inference, or reasoning, using probability. In Bayesian
probability theory, one event is the hypothesis, H, and the other is the
data, D, and we wish to judge the relative truth of the hypothesis given
the data. According to Bayes' rule, we do this via the relation

P(H|D) = P(D|H) P(H) / P(D)

where P(D), the evidence, is a normalising constant that makes the posterior a proper probability.
The term P(D|H) is called the likelihood function and it assesses the probability of the
observed data arising from the hypothesis. The term P(H) is called the prior, as it reflects
one’s prior knowledge before the data are considered. Finally, the term P(H|D) is known
as the posterior, and as its name suggests, reflects the probability of the hypothesis after
consideration of the data.
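To make the three terms concrete, here is a short worked example of Bayes' rule in Python; the scenario (a diagnostic test) and all the numbers are illustrative assumptions, not taken from the slides.

```python
# Worked example of Bayes' rule with made-up numbers:
# hypothesis H = "patient has the condition", data D = "diagnostic test is positive".
prior = 0.01            # P(H): prior probability of the hypothesis
likelihood = 0.95       # P(D|H): probability of a positive test if H is true
false_positive = 0.05   # P(D|not H): probability of a positive test if H is false

# P(D) expands over both possibilities for H (law of total probability).
evidence = likelihood * prior + false_positive * (1 - prior)

# Bayes' rule: posterior = likelihood * prior / evidence.
posterior = likelihood * prior / evidence
print(f"P(H|D) = {posterior:.3f}")  # ~0.161: the data raise P(H) from 1% to about 16%
```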