Talk given by Donia Skanji at the "reading classics seminar" in Paris-Dauphine


- 1. Monte Carlo Sampling Methods Using Markov Chains and Their Applications (Hastings, University of Toronto). Reading seminar on classics (C. P. Robert), presented by Donia Skanji, December 3, 2012. Reading Seminar: MCMC.
- 2. Outline: 1. Introduction 2. Monte Carlo Principle 3. Markov Chain Theory 4. MCMC 5. Conclusion
- 3. Introduction to MCMC Methods
- 4. Introduction: Several numerical problems, such as computing integrals and finding maxima in high-dimensional spaces, are hard to solve directly. Monte Carlo methods are often applied to such integration and optimization problems. Markov chain Monte Carlo (MCMC) is one of the best-known Monte Carlo methods; it comprises a large class of sampling algorithms that have had a great influence on the development of science.
- 5. Study objective: To expose some relevant theory and techniques of application related to MCMC methods, and to present a generalization of the Metropolis sampling method.
- 10. Next steps. To introduce: the Monte Carlo principle, Markov chains, MCMC methods, and MCMC algorithms.
- 11. Monte Carlo Methods
- 12. Overview: The idea of Monte Carlo simulation is to draw an i.i.d. set of samples $\{x^{(i)}\}_{i=1}^N$ from a target density $\pi$. These $N$ samples can be used to approximate the target density with the empirical point-mass function $\pi_N(x) = \frac{1}{N}\sum_{i=1}^N \delta_{x^{(i)}}(x)$. For independent samples, by the law of large numbers, one can approximate the integral $I(f)$ with the tractable sum $I_N(f) = \frac{1}{N}\sum_{i=1}^N f(x^{(i)}) \to I(f) = \int f(x)\,\pi(x)\,dx$ almost surely.
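A quick numerical sketch of this estimator, in Python for illustration (the deck's own implementation language is R); the target $\pi = \text{Uniform}(0,1)$ and $f(x) = x^2$ are illustrative choices, with exact integral $1/3$:

```python
import random

random.seed(0)
N = 100_000
samples = [random.random() for _ in range(N)]   # i.i.d. draws from pi = Uniform(0,1)
# I_N(f) = (1/N) * sum f(x_i), which should approach I(f) = 1/3 for f(x) = x^2
I_N = sum(x * x for x in samples) / N
```

With 100,000 samples the estimate is typically within a few thousandths of 1/3.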
- 13. [Figure: $N$ samples $x^{(1)}, \dots, x^{(N)}$ drawn from $\pi$.] But independent sampling from $\pi$ may be difficult, especially in a high-dimensional space.
- 14. It turns out that $\frac{1}{N}\sum_{i=1}^N f(x^{(i)}) \to \int f(x)\,\pi(x)\,dx$ as $N \to \infty$ still applies if we generate the samples using a Markov chain (dependent samples). The idea of MCMC is to use Markov chain convergence properties to overcome the dimensionality problems met by regular Monte Carlo methods. But first, some revision of Markov chains on a discrete set $\chi$.
- 15. Markov Chain Theory
- 16. Definition (finite Markov chain): A Markov chain is a mathematical system that undergoes transitions from one state to another among a finite or countable number of possible states. It is a random process usually characterized as memoryless: $P(X^{(t+1)} \mid X^{(0)}, X^{(1)}, \dots, X^{(t)}) = P(X^{(t+1)} \mid X^{(t)})$.
- 17. Transition matrix: Let $P = \{P_{ij}\}$ be the transition matrix of a Markov chain with states $0, 1, \dots, S$. If $X^{(t)}$ denotes the state occupied by the process at time $t$, we have $\Pr(X^{(t+1)} = j \mid X^{(t)} = i) = P_{ij}$, and the (row-vector) distribution of the chain updates as $\mu^{(t+1)} = \mu^{(t)} P$, where $\mu^{(t)}$ denotes the distribution of $X^{(t)}$.
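The update $\mu^{(t+1)} = \mu^{(t)} P$ can be iterated numerically; a Python sketch with a made-up two-state matrix (iterating drives $\mu$ toward the stationary distribution, here $(5/6, 1/6)$ since $0.1\,\pi_0 = 0.5\,\pi_1$ at stationarity):

```python
# Illustrative 2-state transition matrix; rows sum to 1.
P = [[0.9, 0.1],
     [0.5, 0.5]]
mu = [1.0, 0.0]   # start with all mass in state 0
for _ in range(200):
    # mu <- mu P (row vector times matrix)
    mu = [sum(mu[i] * P[i][j] for i in range(2)) for j in range(2)]
```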
- 21. Properties (stationarity/irreducibility). Stationarity: as $t \to \infty$, the Markov chain converges to its stationary (invariant) distribution, $\pi = \pi P$. Irreducibility: any state can be reached from any other state in a finite number of moves (e.g., $p(i,j) > 0$ for every $i$ and $j$).
- 22. MCMC: The idea of the Markov chain Monte Carlo method is to choose the transition matrix $P$ so that $\pi$ (the target density, which is very difficult to sample from) is its unique stationary distribution. Assume the Markov chain has stationary distribution $\pi(X)$ and is irreducible and aperiodic. Then we have an ergodic theorem. Theorem (ergodic theorem): if the Markov chain $x_t$ is irreducible, aperiodic, and stationary, then for any function $h$ with $E|h| < \infty$, $\frac{1}{N}\sum_{i=1}^N h(x_i) \to \int h(x)\,d\pi(x)$ as $N \to \infty$.
- 23. Summary: Recall that our goal is to build a Markov chain $(X^t)$ using a transition matrix $P$ so that the limiting distribution of $(X^t)$ is the target density $\pi$, and integrals can then be approximated using the ergodic theorem.
- 24. Question: How do we construct a Markov chain whose stationary distribution is the target distribution $\pi$? Metropolis et al. (1953) showed how, and the method was generalized by Hastings (1970).
- 25. Construction of the transition matrix: In order to construct a Markov chain with $\pi$ as its stationary distribution, we consider a transition matrix $P$ that satisfies the reversibility condition for all $i$ and $j$: $\pi_i p(i \to j) = \pi_j p(j \to i)$, i.e. $\pi_i p_{ij} = \pi_j p_{ji}$. This property ensures that $\sum_i \pi_i p_{ij} = \pi_j$ (the definition of a stationary distribution) and hence that $\pi$ is a stationary distribution of $P$.
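The implication "reversibility implies stationarity" can be checked numerically. A Python sketch (the three-state target is illustrative; the reversible $P$ is built from a uniform proposal with Metropolis acceptance, one valid choice):

```python
pi = [0.2, 0.3, 0.5]           # illustrative target distribution
S = len(pi)
q = 1.0 / S                     # uniform proposal q_ij = 1/3

P = [[0.0] * S for _ in range(S)]
for i in range(S):
    for j in range(S):
        if i != j:
            P[i][j] = q * min(1.0, pi[j] / pi[i])   # accepted off-diagonal moves
    P[i][i] = 1.0 - sum(P[i][j] for j in range(S) if j != i)

# reversibility: pi_i P_ij == pi_j P_ji for all i, j
balanced = all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-12
               for i in range(S) for j in range(S))
# stationarity follows: sum_i pi_i P_ij == pi_j
piP = [sum(pi[i] * P[i][j] for i in range(S)) for j in range(S)]
```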
- 26. Construction of the transition matrix: How do we choose the transition matrix $P$ so that the reversibility condition $\pi_i P_{ij} = \pi_j P_{ji}$ is verified?
- 27. Overview: Suppose that we have a proposal matrix $Q$ with $\sum_j q_{ij} = 1$. If $Q$ itself happens to satisfy the reversibility condition $\pi_i q_{ij} = \pi_j q_{ji}$ for all $i$ and $j$, then our search is over; but most likely it will not. We might find, for example, that for some $i$ and $j$, $\pi_i q_{ij} > \pi_j q_{ji}$. A convenient way to correct this is to reduce the number of moves from $i$ to $j$ by introducing a probability $\alpha_{ij}$ that the move is made.
- 28. The choice of the transition matrix: We assume that the transition matrix $P$ has the form $P_{ij} = q_{ij}\alpha_{ij}$ if $i \neq j$ and $P_{ii} = 1 - \sum_{j \neq i} P_{ij}$, where $Q = \{q_{ij}\}$ is the proposal (jumping) matrix of an arbitrary Markov chain on the states $0, 1, \dots, S$, which suggests a new sample value $j$ given a sample value $i$, and $\alpha_{ij}$ is the acceptance probability of moving from state $i$ to state $j$.
- 29. In order to obtain the reversibility condition, we have to verify $\pi_i p_{ij} = \pi_j p_{ji}$, i.e. $\pi_i \alpha_{ij} q_{ij} = \pi_j \alpha_{ji} q_{ji}$ $(*)$. The probabilities $\alpha_{ij}$ and $\alpha_{ji}$ are introduced to ensure that the two sides of $(*)$ are in balance. In his paper, Hastings defined a generic form of the acceptance probability: $\alpha_{ij} = \dfrac{s_{ij}}{1 + \frac{\pi_i q_{ij}}{\pi_j q_{ji}}}$, where $s_{ij}$ is a symmetric function of $i$ and $j$ ($s_{ij} = s_{ji}$) chosen so that $0 \le \alpha_{ij} \le 1$ for all $i$ and $j$. With this form of $P_{ij}$ and $\alpha_{ij}$ suggested by Hastings, the reversibility condition is readily verified.
- 30. The acceptance probability $\alpha$: Recall that in this paper Hastings defined the acceptance probability as $\alpha_{ij} = \dfrac{s_{ij}}{1 + \frac{\pi_i q_{ij}}{\pi_j q_{ji}}}$. For specific choices of $s_{ij}$, we recognize the acceptance probabilities suggested by both Metropolis et al. (1953) and Barker (1965).
- 31. The choice of $s_{ij}$: Two choices of $s_{ij}$ are given for all $i$ and $j$ by $s_{ij}^{(M)} = 1 + \frac{\pi_i q_{ij}}{\pi_j q_{ji}}$ if $\frac{\pi_j q_{ji}}{\pi_i q_{ij}} \ge 1$ and $s_{ij}^{(M)} = 1 + \frac{\pi_j q_{ji}}{\pi_i q_{ij}}$ otherwise; and $s_{ij}^{(B)} = 1$. When $q_{ij} = q_{ji}$ and $s_{ij} = s_{ij}^{(M)}$, we recover the method devised by Metropolis et al., with $\alpha_{ij}^{(M)} = \min(1, \frac{\pi_j}{\pi_i})$. When $q_{ij} = q_{ji}$ and $s_{ij} = s_{ij}^{(B)} = 1$, we recover the method devised by Barker, with $\alpha_{ij}^{(B)} = \frac{\pi_j}{\pi_i + \pi_j}$.
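The two specializations can be checked numerically; a Python sketch with a symmetric proposal (so the $q$'s cancel) and illustrative values of $\pi_i$ and $\pi_j$:

```python
def alpha_hastings(pi_i, pi_j, s_ij):
    # Hastings' generic form, with q_ij = q_ji so the proposal terms cancel
    return s_ij / (1.0 + pi_i / pi_j)

def s_metropolis(pi_i, pi_j):
    # symmetric s_ij that yields the Metropolis acceptance
    if pi_j / pi_i >= 1.0:
        return 1.0 + pi_i / pi_j
    return 1.0 + pi_j / pi_i

pi_i, pi_j = 0.2, 0.6
a_M = alpha_hastings(pi_i, pi_j, s_metropolis(pi_i, pi_j))  # = min(1, pi_j/pi_i)
a_B = alpha_hastings(pi_i, pi_j, 1.0)                       # = pi_j/(pi_i + pi_j)
```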
- 32. Remark: In this paper, Hastings mentioned that little is known about the merits of these two choices $s_{ij}^{(M)}$ and $s_{ij}^{(B)}$.
- 33. The proposal matrix $Q$: It has been recognized that the choice of the proposal matrix/density is crucial to the success (rapid convergence) of an MCMC algorithm. The proposal matrix can be almost arbitrary, but in practice it should allow all states to be reached frequently and ensure a high acceptance rate.
- 40. Algorithm: (1) First, pick a proposal matrix $Q(i,j)$ of an arbitrary Markov chain on the states $0, 1, \dots, S$, which suggests a new sample value $j$ given a sample value $i$. (2) Start with some arbitrary point $i_0$ as the first sample. (3) Then, to return a new sample $j$ given the most recent sample $i$: (4) generate a proposed new sample value $j$ from the jumping distribution $Q(i \to j)$; (5) accept the proposal with probability $\alpha(i \to j)$: if the proposal is accepted, move to $j$, then return to step (4); repeat until a sample of the desired size is obtained.
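The steps above can be sketched as a minimal sampler (Python for illustration; the finite target and uniform proposal are toy choices, and Metropolis acceptance applies since that proposal is symmetric):

```python
import random

random.seed(1)
pi = [0.1, 0.2, 0.3, 0.4]      # toy target on states {0, 1, 2, 3}
S = len(pi)

def mh_step(i):
    j = random.randrange(S)                       # step 4: propose j from Q(i, .)
    alpha = min(1.0, pi[j] / pi[i])               # step 5: acceptance probability
    return j if random.random() <= alpha else i   # move, or stay at i

state = 0                       # step 2: arbitrary starting point
N = 200_000
counts = [0] * S
for _ in range(N):              # repeat until the desired sample size
    state = mh_step(state)
    counts[state] += 1
freq = [c / N for c in counts]  # empirical frequencies should approach pi
```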
- 41. Remarks: An empirical way of checking convergence is to run two or more different chains in parallel and see whether they concentrate on the same place. The calculation of $\alpha$ does not require knowledge of the normalizing constant of $\pi$, because it appears in both the numerator and the denominator. Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point lies in a region of low density; as a result, a burn-in period is typically necessary.
- 42. Example: the Poisson distribution as the target. Consider $\pi$ as the Poisson distribution with intensity $\lambda > 0$: $\pi_i = e^{-\lambda}\frac{\lambda^i}{i!}$, $i = 0, 1, 2, \dots$. Hastings (1970) suggests the proposal transition matrix with $q_{00} = q_{01} = \frac{1}{2}$ and, for $i \ge 1$, $q_{ij} = \frac{1}{2}$ if $j = i - 1$ or $j = i + 1$, and $q_{ij} = 0$ otherwise (a tridiagonal matrix with $\frac{1}{2}$ on the off-diagonals). $Q$ is in fact symmetric, and the algorithm reduces to that of Metropolis.
- 43. The resulting transition probabilities are $p_{ij} = q_{ij}\alpha_{ij}^{(M)}$: $p_{ij} = \frac{1}{2}\min(1, \frac{i}{\lambda})$ if $j = i - 1$; $p_{ij} = \frac{1}{2}\min(1, \frac{\lambda}{i+1})$ if $j = i + 1$; $p_{ii} = 1 - p_{i,i-1} - p_{i,i+1}$; and $p_{ij} = 0$ otherwise. For $i = 0$: $p_{01} = \frac{1}{2}\min(1, \lambda)$, $p_{00} = 1 - \frac{1}{2}\min(1, \lambda)$, and $p_{0j} = 0$ otherwise. This transition matrix is aperiodic and irreducible. In practice, if $\lambda$ is small, this choice of $Q$ seems to work fairly well and quickly approximates $\pi$.
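These transition probabilities can be verified numerically; a Python sketch (illustrative $\lambda = 1$, with the state space truncated at $M = 30$, where the Poisson tail mass is negligible):

```python
import math

lam, M = 1.0, 30
pi = [math.exp(-lam) * lam**i / math.factorial(i) for i in range(M + 1)]

P = [[0.0] * (M + 1) for _ in range(M + 1)]
for i in range(M + 1):
    if i >= 1:
        P[i][i - 1] = 0.5 * min(1.0, i / lam)          # down-move
    if i < M:
        P[i][i + 1] = 0.5 * min(1.0, lam / (i + 1))    # up-move
    P[i][i] = 1.0 - sum(P[i][j] for j in range(M + 1) if j != i)

# pi should be (numerically) stationary: (pi P)_j == pi_j
piP = [sum(pi[i] * P[i][j] for i in range(M + 1)) for j in range(M + 1)]
```

Detailed balance holds between each adjacent pair of states, so $\pi P = \pi$ on the truncated chain as well.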
- 52. Algorithm: Given a starting point $i$, we take $j = i + 1$ with probability $\frac{1}{2}$ or $j = i - 1$ with probability $\frac{1}{2}$, i.e. $q_{ij} = \frac{1}{2}\delta_{i-1}(j) + \frac{1}{2}\delta_{i+1}(j)$. We compute the Metropolis-Hastings ratio $\alpha_{ij} = \min\{1, \frac{\pi(j)}{\pi(i)}\} = \min\{1, \lambda^{j-i}\cdot\frac{i!}{j!}\}$, draw $u \sim U[0,1]$, and set $X_{k+1} = j$ if $u \le \alpha_{ij}$, else $X_{k+1} = X_k = i$.
- 53. R implementation:

    library(mcsm)
    fact <- function(n) gamma(n + 1)
    poissonf <- function(n, lambda, x0) {
      x <- x0
      xn <- x0
      for (i in 1:n) {
        if (xn != 0) y <- xn + (2 * rbinom(1, 1, 0.5) - 1)
        else y <- rbinom(1, 1, 0.5)
        alpha <- min(1, lambda^(y - xn) * fact(xn) / fact(y))
        if (runif(1) < alpha) xn <- y
        x <- c(x, xn)
      }
      x
    }
- 55. Multivariate target: If the distribution $\pi$ is $d$-dimensional and the simulated process is $X(t) = \{X_1(t), \dots, X_d(t)\}$, we may use the following techniques to construct the transition matrix $P$: (1) in the transition from $t$ to $t+1$, all coordinates of $X(t)$ may be changed; (2) only one coordinate of $X(t)$ may be changed, that coordinate being selected at random among the $d$ coordinates; (3) only one coordinate may change in each transition, the coordinates being selected in a fixed rather than a random sequence.
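Scheme (3), updating one coordinate per transition in a fixed order, can be sketched in Python (the two-coordinate product target and uniform per-coordinate proposal are illustrative):

```python
import random

random.seed(2)
w1 = [0.2, 0.3, 0.5]          # marginal weights for coordinate 1 (toy target)
w2 = [0.5, 0.25, 0.25]        # marginal weights for coordinate 2

def update_coord(x, k):
    # Metropolis update of coordinate k only, leaving the other coordinate fixed
    w = w1 if k == 0 else w2
    j = random.randrange(3)                       # uniform proposal for coordinate k
    if random.random() <= min(1.0, w[j] / w[x[k]]):
        x[k] = j

x = [0, 0]
N = 100_000
hits = 0
for _ in range(N):
    for k in (0, 1):          # fixed, not random, coordinate sequence: P = P1.P2
        update_coord(x, k)
    hits += (x[0] == 2)
freq = hits / N               # should approach w1[2] = 0.5
```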
- 56. Hastings' justification: Hastings transformed the $d$-dimensional problem into one-dimensional problems. The approach is based on updating one component at a time. The transition matrix is defined as $P = P_1 P_2 \cdots P_d$, where for each $k = 1, \dots, d$, $P_k$ is constructed so that $\pi P_k = \pi$. Then $\pi$ is a stationary distribution of $P$, since $\pi P = (\pi P_1) P_2 \cdots P_d = \pi P_2 \cdots P_d = \cdots = \pi$. (See also: random orthogonal matrices, below.)
- 57. Conclusion: In this paper, Hastings gives a generalization of the Metropolis et al. (1953) approach. He also introduced the Gibbs sampling strategy when he presented the multivariate target, and he treated the continuous case using a discretization analogy. On the downside, little information is given about the merits of the Metropolis and Barker acceptance forms.
- 58. Thank you for your attention.
- 59. Bibliography: [1] W. K. Hastings (1970). Monte Carlo Sampling Methods Using Markov Chains and Their Applications. [2] Christian P. Robert (2010). Introducing Monte Carlo Methods with R. [3] Kenneth Lange (2010). Numerical Analysis for Statisticians. [4] Siddhartha Chib (1995). Understanding the Metropolis-Hastings Algorithm. [5] Robert Gray (2001). Advanced Statistical Computing.
- 60. Random orthogonal matrices: Hastings suggests an interesting chain on the space of $n \times n$ orthogonal matrices ($H^\top H = I$, $\det(H) = 1$). The proposal stage of Hastings' algorithm consists of choosing at random two indices $i$ and $j$ and an angle $\theta \in [0, 2\pi]$. The proposed replacement for the current rotation matrix $H$ is then $H^* = E_{ij}(\theta)H$, where $E_{ij}(\theta)$ coincides with the identity matrix except for a few entries (a planar rotation by $\theta$ in the $(i,j)$ plane). Since $E_{ij}(\theta)^{-1} = E_{ij}(-\theta)$, the transition density is symmetric and the induced Markov chain is reversible.
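The proposal $H^* = E_{ij}(\theta)H$ can be sketched in plain Python for $n = 3$ (illustrative; only the proposal is shown, which already preserves orthogonality, since left-multiplying by a rotation leaves $H^\top H$ unchanged):

```python
import math
import random

random.seed(3)
n = 3
H = [[1.0 if a == b else 0.0 for b in range(n)] for a in range(n)]  # start at I

def apply_rotation(H, i, j, theta):
    # Left-multiply H by E_ij(theta): a planar rotation acting on rows i and j
    c, s = math.cos(theta), math.sin(theta)
    row_i = [c * H[i][k] - s * H[j][k] for k in range(n)]
    row_j = [s * H[i][k] + c * H[j][k] for k in range(n)]
    H[i], H[j] = row_i, row_j

for _ in range(50):                                # repeated proposal stage
    i, j = random.sample(range(n), 2)
    apply_rotation(H, i, j, random.uniform(0.0, 2.0 * math.pi))

# H stays orthogonal: H^T H = I up to rounding
HtH = [[sum(H[k][a] * H[k][b] for k in range(n)) for b in range(n)]
       for a in range(n)]
```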
- 61. Estimating $\pi$ using Monte Carlo methods (SAS output). Problem: estimate $\pi$ using Monte Carlo integration. Strategy: the equation of a circle with radius 1 is $x^2 + y^2 = 1$, which can be written $y = \sqrt{1 - x^2}$. The area of this circle is $\pi$, so its area in the first quadrant is $\pi/4$. Generate $U_x \sim \text{Uniform}(0,1)$ and $U_y \sim \text{Uniform}(0,1)$ and check whether $U_y \le \sqrt{1 - U_x^2}$. The proportion of generated points for which this condition is true is an estimate of $\pi/4$. Based on 10,000 simulated points using SAS: $\hat\pi$ (SE) $= 3.1056$ $(0.016)$.
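The same estimator in Python for illustration (the seed and sample size are arbitrary; the figure of 3.1056 above came from a SAS run of 10,000 points):

```python
import random

random.seed(4)
N = 100_000
# Count points (Ux, Uy) in the unit square falling under y = sqrt(1 - x^2),
# which is equivalent to Ux^2 + Uy^2 <= 1
inside = sum(random.random() ** 2 + random.random() ** 2 <= 1.0 for _ in range(N))
pi_hat = 4.0 * inside / N     # the proportion estimates pi/4
```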
