Chapter 11
Reviewer : Sunwoo Kim
Christopher M. Bishop
Pattern Recognition and Machine Learning
Yonsei University
Department of Applied Statistics
Chapter 11. Sampling Methods
What are we doing?
In this chapter, we are studying some approximation methods via sampling.
We know the weak law of large numbers (WLLN), which states that the sample mean converges in probability to the true expectation:

$\bar{x} \xrightarrow{p} \mu = E[x]$
This means that if we have enough samples generated from the true distribution, we can estimate the desired expectation or probability!
For example, suppose we are trying to evaluate the following expectation. However, we cannot guarantee the independence of the samples $z^{(l)}$!
Thus, we work with the full joint distribution. Consider a general graphical model.
(Figure: nodes of the graphical model sampled in topological order — 1st, 2nd, 2nd, 3rd sampling.)
We sample sequentially, each node after its ancestors. Thus this sampling strategy is called 'ancestral sampling.'
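To make this concrete, here is a minimal sketch of ancestral sampling, assuming a hypothetical three-node chain $x_1 \to x_2 \to x_3$ with made-up Bernoulli conditionals (the structure and probabilities are illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def ancestral_sample():
    # Hypothetical chain x1 -> x2 -> x3 with illustrative Bernoulli CPTs.
    x1 = rng.random() < 0.6                   # p(x1 = 1) = 0.6
    x2 = rng.random() < (0.7 if x1 else 0.2)  # p(x2 = 1 | x1)
    x3 = rng.random() < (0.9 if x2 else 0.1)  # p(x3 = 1 | x2)
    return int(x1), int(x2), int(x3)

# Each call draws one joint sample by sampling every parent before its child.
samples = [ancestral_sample() for _ in range(10)]
print(samples)
```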
Chapter 11.1. Basic Sampling Algorithms
Basic Monte Carlo
We have covered this strategy in theoretical statistics. It can be done by uniform i.i.d. random sampling:
Let $U \sim \mathrm{Uniform}(0,1)$, and let $F$ be a continuous cumulative distribution function of $x$ ($F(x) = P(X \le x)$).
Then $X = F^{-1}(U)$ has distribution $F$.
To implement this method, we need two assumptions.
1. We should be able to draw uniform random samples from [0, 1]. (This is not easy, but it can be done.)
2. We should be able to compute the inverse CDF of $x$.
In fact, the second condition is almost never satisfied.
Why? Because it would mean 'we cannot compute the integral, but we can compute the inverse of the CDF.'
This is quite nonsensical! So, we need alternative methods!
Still, we can use this algorithm to compute the value of $\pi$ and some other quantities!
$p(y) = \lambda \exp(-\lambda y), \qquad y = -\frac{1}{\lambda} \ln(1 - z)$

If $z$ follows a uniform distribution, then $y$ follows an exponential distribution!
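A quick sketch of this inverse-CDF trick for the exponential example above (the rate $\lambda = 2$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0                        # arbitrary rate parameter lambda

z = rng.random(100_000)          # z ~ Uniform(0, 1)
y = -np.log(1.0 - z) / lam       # y = -(1/lambda) ln(1 - z), the inverse CDF

print(y.mean())                  # close to E[y] = 1/lambda = 0.5
```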
Chapter 11.1. Basic Sampling Algorithms
Rejection sampling
This is a process of finding samples with some rejection rule!
However, the functional form of $p(z)$ is pretty complicated, and we cannot sample from it directly (typically we can only evaluate an unnormalized $\tilde{p}(z)$).
Thus, we approximate the overall procedure by using a 'proposal distribution' $q(z)$.
Note that $q(z)$ is a distribution with a relatively simple, tractable functional form! (e.g., a normal distribution)
1. Find a constant $k$ such that $kq(z)$ envelopes the target distribution, i.e. $kq(z) \ge \tilde{p}(z)$ for all $z$.
2. Draw a sample $z_0$ from the proposal distribution $q(z)$.
3. Generate a random number $u_0$ from $\mathrm{Uniform}(0, kq(z_0))$.
4. If $u_0 < \tilde{p}(z_0)$, accept $z_0$. Otherwise, reject.
That is, we reject the sample $z_0$ if the pair $(z_0, u_0)$ lies in the grey-shaded region between $\tilde{p}(z)$ and $kq(z)$.
The reason why this works: the accepted pairs are uniformly distributed under the curve of $\tilde{p}(z)$, so the accepted $z_0$ values are distributed according to $p(z)$.
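A minimal sketch of rejection sampling, assuming a hypothetical unnormalized target $\tilde{p}(z)$ and a Gaussian proposal; the constant $k = 12$ was picked by hand so that $kq(z) \ge \tilde{p}(z)$ holds here:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_tilde(z):
    # Hypothetical unnormalized target: a two-bump density.
    return np.exp(-0.5 * (z - 1) ** 2) + 0.5 * np.exp(-0.5 * (z + 2) ** 2)

def q_pdf(z, scale=3.0):
    # Simple, tractable proposal density q(z) = N(0, scale^2).
    return np.exp(-0.5 * (z / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

k = 12.0  # hand-picked so that k * q(z) >= p_tilde(z) for all z

samples = []
while len(samples) < 10_000:
    z0 = rng.normal(0.0, 3.0)              # draw z0 from the proposal q(z)
    u0 = rng.uniform(0.0, k * q_pdf(z0))   # u0 ~ Uniform(0, k q(z0))
    if u0 < p_tilde(z0):                   # accept if u0 falls under p_tilde
        samples.append(z0)
```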
Chapter 11.1. Basic Sampling Algorithms
Importance sampling
This method directly computes the expectation under a desired distribution.
Recall the sampling method we covered at the beginning.
However, as the dimension gets higher, the computation of this formula grows exponentially. Thus, we need another method that makes this computation feasible in high dimensions!
Similarly, we use a proposal distribution $q(z)$, and we generate samples $z^{(l)}$ from that proposal distribution.
Here, the importance weight $r_l = p(z^{(l)}) / q(z^{(l)})$ acts as the weight of each sample.
We can rewrite this equation using a normalizing constant ($Z_q$) when the distributions can only be evaluated up to normalization.
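A minimal sketch of (self-normalized) importance sampling, assuming a hypothetical unnormalized target and the expectation of $f(z) = z^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_tilde(z):
    # Hypothetical unnormalized target: N(1, 0.8^2) without its constant.
    return np.exp(-0.5 * ((z - 1.0) / 0.8) ** 2)

L = 100_000
z = rng.normal(0.0, 3.0, size=L)   # samples z^(l) from q(z) = N(0, 3^2)
q = np.exp(-0.5 * (z / 3.0) ** 2) / (3.0 * np.sqrt(2 * np.pi))

r = p_tilde(z) / q                 # importance weights r_l = p(z^(l)) / q(z^(l))
w = r / r.sum()                    # self-normalization absorbs the unknown Z_p
print(np.sum(w * z ** 2))          # E_p[z^2] = 1^2 + 0.8^2 = 1.64 (approximately)
```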
Chapter 11.1. Basic Sampling Algorithms
Sampling and the EM algorithm
The sampling strategy is pretty important in the Bayesian framework, but it also plays a significant role in various other computations!
Consider the expectation that the EM algorithm maximizes in its M-step: approximating it with samples gives the Monte Carlo EM algorithm.
Here, we generate samples of $Z$ from the posterior $p(Z \mid X, \theta^{old})$; with a single sample we get a hard assignment of the data points to specific clusters.
This extends to the IP (imputation-posterior) algorithm, which can be used in the data augmentation process.
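A skeletal sketch of the Monte Carlo approximation of the EM expectation; `log_joint` and `sample_posterior` are hypothetical helper functions standing in for a concrete model:

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_q(x, theta_old, log_joint, sample_posterior, L=100):
    """Approximate Q(theta, theta_old) = E_{Z ~ p(Z|X, theta_old)}[log p(X, Z | theta)].

    log_joint(x, z, theta) and sample_posterior(x, theta, rng) are hypothetical
    model-specific callables; the M-step then maximizes the returned function.
    """
    zs = [sample_posterior(x, theta_old, rng) for _ in range(L)]
    return lambda theta: np.mean([log_joint(x, z, theta) for z in zs])
```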
Chapter 11.2. Markov Chain Monte Carlo
Markov Chain
** I’ve got help from https://kaist.edwith.org/machinelearning1_17
Before discussing MCMC, it is beneficial to review the idea of a Markov chain.
We covered the basic concepts of Markov chains in stochastic processes, but it is worth revisiting some important facts.
First, there is a transition probability, which defines the probability of moving from one state to another.
A Markov chain can have the following properties.
1. Irreducible: we can move $i \leftrightarrow j$ (from any state to any other state, and back).
2. Recurrent: we eventually get back to state $j$ after some sequence of transitions.
3. Aperiodic: we are not stuck in a fixed cycle (not rotating $j \to p \to c \to j \to p \to \dots$).
A state which is recurrent and aperiodic is an ‘ergodic’ state.
With the help of these conditions, we can define ‘stationary distribution.’
‘If a Markov chain is irreducible and ergodic, we can define stationary distribution!’
If we perform transitions many times, we converge to the stationary distribution. That is,

$\pi T = \pi$

Note that each entry of $\pi$ is the inverse of the expected return time of its state, and $\pi$ is uniquely determined and is a probability distribution.
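We can check $\pi T = \pi$ numerically for a hypothetical two-state chain by applying the transition matrix repeatedly:

```python
import numpy as np

# Hypothetical 2-state transition matrix T (each row sums to 1).
T = np.array([[0.9, 0.1],
              [0.4, 0.6]])

pi = np.array([0.5, 0.5])   # arbitrary initial distribution
for _ in range(100):
    pi = pi @ T             # one transition per iteration

print(pi)                   # converges to the stationary distribution [0.8, 0.2]
print(pi @ T)               # and indeed satisfies pi T = pi
```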
Chapter 11.2. Markov Chain Monte Carlo
Markov Chain
So, we have covered some interesting characteristics of Markov chains. Now we have to connect the basic idea of a Markov chain to the sampling
methodology. First, we do not throw away previous samples. Instead, we re-use them in the next sampling step.
Let's look at the overall idea of MCMC.
1. Our goal is to generate samples from $p(z)$.
2. But that is not easy, so we borrow the idea of the Markov chain.
3. We regard $p(z)$ as the stationary distribution $\pi$, and we estimate the transition probability $P_{ij}$ that generates that $\pi_i$.
4. This is quite a reasonable approach: since we generate samples sequentially, we can view the process as a kind of transition.
5. Ultimately, the whole process achieves its goal only when it converges to $P$, the true distribution we are trying to estimate!
Summary.
We are trying to generate samples from $p(z) \approx \pi$.
That is, we already know the stationary distribution.
But sampling from it directly is intractable, so we approximate the transition probability $P_{ij}$ and generate samples from it.
After a large number of transitions, sampling from $P_{ij}$ becomes similar to sampling from $\pi_i$.
Chapter 11.2. Markov Chain Monte Carlo
Metropolis-Hastings Algorithm
However, in order to obtain the stationary distribution, the transitions should satisfy the detailed balance equation!
That is, $\pi_i P_{i,j} = \pi_j P_{j,i}$. Here, we approximate $P$ with a proposal distribution $q(z^* \mid z_t)$.
The setting is done. Here again we accept or reject samples according to an acceptance probability $\alpha$. Given the current sample $z_t$ and a suggested sample $z^*$:
1. Accept: $z_{t+1} = z^*$
2. Reject: $z_{t+1} = z_t$
The problem is: when should we accept, and when should we reject?
To satisfy $\pi_i P_{ij} = \pi_j P_{ji}$, we would need $q(z_t \mid z^*)\, p(z^*) = q(z^* \mid z_t)\, p(z_t)$.
But this does not hold automatically! So, we effectively adjust the transition probability $q(z^* \mid z_t)$ through the accept/reject step:
1. If $q(z^* \mid z_t)\, p(z_t) > q(z_t \mid z^*)\, p(z^*)$, the probability flow from the current state to the next state ($z_t \to z^*$) is too large. So, we should make it smaller by sometimes rejecting.
2. If $q(z_t \mid z^*)\, p(z^*) > q(z^* \mid z_t)\, p(z_t)$, the flow from the current state to the next state ($z_t \to z^*$) is too small. So, we should always accept!
Then consider the weight

$r(z^* \mid z_t) = \dfrac{q(z_t \mid z^*)\, p(z^*)}{q(z^* \mid z_t)\, p(z_t)}$

Its ideal value is one. Thus, the acceptance probability will be $\alpha(z^* \mid z_t) = \min\big(1,\, r(z^* \mid z_t)\big)$.
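A minimal sketch of Metropolis-Hastings for a hypothetical one-dimensional unnormalized target; the Gaussian random walk is a symmetric proposal, so the $q$ terms in $r$ cancel:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_tilde(z):
    # Hypothetical unnormalized target: N(1, 0.8^2) without its constant.
    return np.exp(-0.5 * ((z - 1.0) / 0.8) ** 2)

def mh_chain(n_steps, step=1.0, z0=0.0):
    z, chain = z0, []
    for _ in range(n_steps):
        z_star = z + step * rng.normal()   # symmetric random-walk proposal
        # q(z*|z_t) = q(z_t|z*) here, so r reduces to p(z*) / p(z_t).
        r = p_tilde(z_star) / p_tilde(z)
        if rng.random() < min(1.0, r):     # accept with probability alpha
            z = z_star                     # accept: z_{t+1} = z*
        chain.append(z)                    # on reject, z_{t+1} = z_t
    return np.array(chain)

samples = mh_chain(50_000)
print(samples.mean(), samples.std())       # roughly 1.0 and 0.8
```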
Chapter 11.2. Markov Chain Monte Carlo
Metropolis-Hastings Algorithm
Before moving on, let’s make a short summary!
1. We regard the desired distribution (the one to be sampled from) as a stationary distribution.
2. To obtain $\pi$, we need the transitions to satisfy the detailed balance equation.
3. The detailed balance equation states that $\pi_i P_{ij} = \pi_j P_{ji}$.
4. That means the probability flow from here to there equals the flow from there to here!
5. This can be expressed through the acceptance rate $\alpha$.
6. The ideal condition is for the ratio $\dfrac{q(z_t \mid z^*)\, p(z^*)}{q(z^* \mid z_t)\, p(z_t)}$ to be equal to 1.
7. Thus, we try to keep this ratio as close to 1 as possible by accepting or rejecting the proposed sample at each iteration.
However, here again we face the problem of choosing a good proposal $q(z)$.
Gibbs sampling suggests a good $q(z)$: why don't we simply use $p(z^* \mid z_t)$ itself?
Chapter 11.3. Gibbs Sampling
Ideation
Gibbs sampling is an MCMC method, and a special case of the Metropolis-Hastings algorithm!
The overall mechanism is the same as MH. The only difference is that instead of an approximate $q_k$, we use the full conditional $p(z_k^* \mid z_{-k})$.
The idea is simple: we cycle again and again through the individual components.
Then how can we guarantee that the aforementioned process yields samples from the desired distribution?
We can show it simply by taking $p(z_k \mid z_{-k}) = q(z^* \mid z_t)$.
Note that our goal was to obtain detailed balance.
By setting $q_k(z^* \mid z) = p(z_k^* \mid z_{-k})$, the Metropolis-Hastings acceptance probability is always 1, so this choice always satisfies the detailed balance condition!
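A minimal sketch of Gibbs sampling for a hypothetical standard bivariate normal with correlation $\rho$, whose full conditionals are one-dimensional normals:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8   # assumed correlation of the bivariate normal target

def gibbs(n_steps):
    z1, z2, chain = 0.0, 0.0, []
    for _ in range(n_steps):
        # Full conditionals: p(z1 | z2) = N(rho * z2, 1 - rho^2), and symmetrically.
        z1 = rng.normal(rho * z2, np.sqrt(1.0 - rho ** 2))
        z2 = rng.normal(rho * z1, np.sqrt(1.0 - rho ** 2))
        chain.append((z1, z2))
    return np.array(chain)

samples = gibbs(20_000)
print(np.corrcoef(samples.T))   # empirical correlation close to rho
```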
Chapter 11.4. Slice Sampling
Throwing away step-size
In this method, we do not tune a step size. Rather, we restrict the region from which the next sample is drawn!
1. From a given sample $z^{(\tau)}$, we sample $u$ uniformly from $0 \le u \le p(z^{(\tau)})$.
2. Define a 'slice': the horizontal region of the distribution where $p(z) > u$.
3. We generate the new sample from this slice region. But in reality, we cannot easily characterize this region.
4. Thus, we determine it empirically! Let the region have width $w$.
5. Evaluate the function at the endpoints, $p(z_{max})$ and $p(z_{min})$, and check whether each endpoint lies inside or outside the slice.
6. Depending on those endpoint values, we extend or shrink the region $w$.
We iteratively extend and shrink the sampling region $w$ according to the 'slice'.
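A minimal sketch of one slice-sampling update with the stepping-out and shrinkage procedure, assuming a hypothetical one-dimensional unnormalized target:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_tilde(z):
    # Hypothetical unnormalized target: N(1, 0.8^2) without its constant.
    return np.exp(-0.5 * ((z - 1.0) / 0.8) ** 2)

def slice_step(z, w=0.5):
    u = rng.uniform(0.0, p_tilde(z))   # slice height u ~ Uniform(0, p(z))
    # Stepping out: extend [z_min, z_max] in units of w until both ends exit the slice.
    z_min = z - w * rng.random()
    z_max = z_min + w
    while p_tilde(z_min) > u:
        z_min -= w
    while p_tilde(z_max) > u:
        z_max += w
    # Shrinkage: sample uniformly inside the bracket, shrinking it on each rejection.
    while True:
        z_new = rng.uniform(z_min, z_max)
        if p_tilde(z_new) > u:
            return z_new
        if z_new < z:
            z_min = z_new
        else:
            z_max = z_new

z, chain = 0.0, []
for _ in range(10_000):
    z = slice_step(z)
    chain.append(z)
```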
Chapter 11.5. The Hybrid Monte Carlo Algorithm
Hamiltonian Monte Carlo
** I’ve studied overall algorithm in https://www.youtube.com/watch?v=a-wydhEuAm0
Note that Metropolis-Hastings performs a random walk ('surfing') over the estimated distribution.
Just as gradient descent can be given momentum and speed, let's modify some characteristics of this 'surfing.'
The result is 'Hamiltonian Monte Carlo (HMC).'
HMC is a sampling method that comes from physics. Let's think in terms of energy.
(Figure: particles absorbing and emitting energy as they change states.)
Most individuals stay in low-energy states, so the proportion in high-energy states decreases exponentially! That is why we can model the distribution over states with an energy function as $e^{-E(x)}$.
This idea can be applied to our probability distribution. That is,

$p(x) \propto e^{-E(x)}$
However, we do not use a simple energy function. Rather, we use the joint energy of a potential term and a kinetic term. Thus, the total energy is the Hamiltonian $H(z, r) = E(z) + K(r)$, where $E(z)$ is the potential energy and $K(r)$ is the kinetic energy.
Chapter 11.5. The Hybrid Monte Carlo Algorithm
Hamiltonian Monte Carlo
Here, we can set potential energy to be negative log-posterior.
And for the kinetic, we can simply set
Please note that 𝐻 𝑧, 𝑟 = 𝑐𝑜𝑛𝑠𝑡. I couldn’t understand the explanation why this value is
almost fixed. Anyone understood..?
Thus, overall estimation can be…
𝐸 𝑧 = − log 𝑝(𝑧),
here, 𝑝(𝑧) is our desired distribution!
We can iteratively get samples by generating $r$, then $z$, again and again.
Note that $r$ and $z$ are independent in their functional form.
Generating $r$ is relatively simple since $p(r \mid z)$ is a standard normal!

$p(r \mid z) \propto e^{-\frac{1}{2}\sum_i r_i^2}, \qquad r_i \sim N(0, 1)$
Obviously, direct sampling from $p(z, r)$ is impossible. Why? Because if it were possible, we would not need the help of $r$ in the first place!
Thus, we use the leapfrog discretization, which approximates the Hamiltonian dynamics.
Here, we alternately update $z$ and $r$.
The interesting thing is that we move the momentum 'a half-step' at a time.
Chapter 11.5. The Hybrid Monte Carlo Algorithm
Leapfrog algorithm
That is, we are getting help from the 'momentum'!
This approximation is tractable since calculating the gradient is not that hard a task!
Here, $\epsilon$ works as the step size of the algorithm (a hyperparameter).
We can understand this movement intuitively with the figure on the right (figure from the previous YouTube link!): the momentum helps us stay within the high-probability region!
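A minimal sketch of one HMC update with the leapfrog integrator, assuming a hypothetical target $p(z) = N(1, 0.8^2)$ so that $E(z)$ and its gradient are analytic:

```python
import numpy as np

rng = np.random.default_rng(0)

def E(z):
    # Potential energy E(z) = -log p(z) (up to a constant) for N(1, 0.8^2).
    return 0.5 * ((z - 1.0) / 0.8) ** 2

def grad_E(z):
    return (z - 1.0) / 0.8 ** 2

def hmc_step(z, eps=0.1, n_leapfrog=20):
    r = rng.normal()                     # resample momentum r ~ N(0, 1)
    z_new, r_new = z, r
    r_new -= 0.5 * eps * grad_E(z_new)   # initial half-step for momentum
    for _ in range(n_leapfrog):
        z_new += eps * r_new             # full step for position
        r_new -= eps * grad_E(z_new)     # full step for momentum
    r_new += 0.5 * eps * grad_E(z_new)   # turn the final update into a half-step
    # Metropolis correction for the leapfrog discretization error in H = E + K.
    dH = (E(z_new) + 0.5 * r_new ** 2) - (E(z) + 0.5 * r ** 2)
    return z_new if rng.random() < np.exp(-dH) else z

z, chain = 0.0, []
for _ in range(5_000):
    z = hmc_step(z)
    chain.append(z)
print(np.mean(chain), np.std(chain))     # roughly 1.0 and 0.8
```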