Presentation MCB seminar 09032011
1. SMC²: a sequential Monte Carlo algorithm with particle Markov chain Monte Carlo updates
N. Chopin (ENSAE-CREST), P.E. Jacob (CREST & Université Paris Dauphine, funded by AXA Research), & O. Papaspiliopoulos (Universitat Pompeu Fabra)
MCB seminar, March 9th, 2011
N. Chopin, P.E. Jacob, & O. Papaspiliopoulos. SMC². Slide 1/72.
2. Outline
1 Introduction and State Space Models
2 Reminder on some Monte Carlo methods
3 Particle Markov Chain Monte Carlo
4 SMC²
3. Section 1: Introduction and State Space Models
4. State Space Models: context
In these models:
- we observe some data $Y_{1:T} = (Y_1, \ldots, Y_T)$,
- we suppose that they depend on some hidden states $X_{1:T}$.
5. State Space Models: a system of equations
Hidden states: $p(x_1|\theta) = \mu_\theta(x_1)$, and for $t \geq 1$:
$$p(x_{t+1}|x_{1:t}, \theta) = p(x_{t+1}|x_t, \theta) = f_\theta(x_{t+1}|x_t)$$
Observations:
$$p(y_t|y_{1:t-1}, x_{1:t}, \theta) = p(y_t|x_t, \theta) = g_\theta(y_t|x_t)$$
Parameter: $\theta \in \Theta$, with prior $p(\theta)$.
6. State Space Models: some interesting distributions
Bayesian inference focuses on $p(\theta|y_{1:T})$.
Filtering (traditionally) focuses on $p_\theta(x_t|y_{1:t})$, for all $t \in [1, T]$.
Smoothing (traditionally) focuses on $p_\theta(x_t|y_{1:T})$, for all $t \in [1, T]$.
7. State Space Models: some interesting distributions [spoiler]
PMCMC methods provide a sample from $p(\theta, x_{1:T}|y_{1:T})$.
SMC² provides a sample from $p(\theta, x_{1:t}|y_{1:t})$, for all $t \in [1, T]$.
8. Examples: local level
$$y_t = x_t + \sigma_V \varepsilon_t, \quad \varepsilon_t \sim \mathcal{N}(0, 1)$$
$$x_{t+1} = x_t + \sigma_W \eta_t, \quad \eta_t \sim \mathcal{N}(0, 1)$$
$$x_0 \sim \mathcal{N}(0, 1)$$
Here $\theta = (\sigma_V, \sigma_W)$. The model is linear and Gaussian.
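To make the local level model concrete, here is a minimal simulation sketch in Python (NumPy only; the function name and the parameter values are illustrative assumptions, not from the slides):

```python
import numpy as np

def simulate_local_level(T, sigma_v, sigma_w, rng):
    """Simulate the local level model: x_0 ~ N(0,1), random-walk states,
    Gaussian observation noise. Names and settings are illustrative."""
    x = np.empty(T)
    x[0] = rng.standard_normal()                              # x_0 ~ N(0, 1)
    for t in range(1, T):
        x[t] = x[t - 1] + sigma_w * rng.standard_normal()     # state transition
    y = x + sigma_v * rng.standard_normal(T)                  # noisy observations
    return x, y

rng = np.random.default_rng(42)
x, y = simulate_local_level(T=200, sigma_v=0.5, sigma_w=0.1, rng=rng)
```

Because the model is linear and Gaussian, any filtering method applied to such simulated data can be checked against the exact Kalman filter.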
9. Examples: stochastic volatility (simple)
$$y_t | x_t \sim \mathcal{N}(0, e^{x_t})$$
$$x_t = \mu + \rho(x_{t-1} - \mu) + \sigma \varepsilon_t$$
$$x_0 = \mu_0$$
Here $\theta = (\mu, \rho, \sigma)$, or can include $\mu_0$.
10. Examples: population growth model
$$y_t = n_t + \sigma_W \varepsilon_t$$
$$\log n_{t+1} = \log n_t + b_0 + b_1 (n_t)^{b_2} + \sigma \eta_t$$
$$\log n_0 = \mu_0$$
Here $\theta = (b_0, b_1, b_2, \sigma, \sigma_W)$, or can include $\mu_0$.
11. Examples: stochastic volatility (sophisticated)
$$y_t = \mu + \beta v_t + v_t^{1/2} \epsilon_t, \quad t \geq 1$$
$$k \sim \mathrm{Poi}(\lambda \xi^2 / \omega^2), \quad c_{1:k} \overset{\mathrm{iid}}{\sim} \mathcal{U}(t, t+1), \quad e_{1:k} \overset{\mathrm{iid}}{\sim} \mathrm{Exp}(\xi / \omega^2)$$
$$z_{t+1} = e^{-\lambda} z_t + \sum_{j=1}^{k} e^{-\lambda(t+1-c_j)} e_j$$
$$v_{t+1} = \frac{1}{\lambda}\left[z_t - z_{t+1} + \sum_{j=1}^{k} e_j\right]$$
$$x_{t+1} = (v_{t+1}, z_{t+1})$$
12. Examples
Figure: The S&P 500 data from 03/01/2005 to 21/12/2007. (a) Observations; (b) squared observations.
13. Examples: athletics records model
$$g(y_{1:2,t}|\mu_t, \xi, \sigma) = \{1 - G(y_{2,t}|\mu_t, \xi, \sigma)\} \prod_{i=1}^{2} \frac{g(y_{i,t}|\mu_t, \xi, \sigma)}{1 - G(y_{i,t}|\mu_t, \xi, \sigma)}$$
$$x_t = (\mu_t, \dot{\mu}_t), \quad x_{t+1} \mid x_t, \nu \sim \mathcal{N}(F x_t, Q),$$
with
$$F = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad Q = \nu^2 \begin{pmatrix} 1/3 & 1/2 \\ 1/2 & 1 \end{pmatrix}$$
$$G(y|\mu, \xi, \sigma) = 1 - \exp\left[-\left\{1 - \xi\left(\frac{y - \mu}{\sigma}\right)\right\}_+^{-1/\xi}\right]$$
14. Examples
Figure: Best two times of each year, in women's 3000 metres events between 1976 and 2010 (times in seconds).
15. Why are those models challenging?
It's all about dimensions...
$$p_\theta(x_{1:T}|y_{1:T}) = \frac{p_\theta(y_{1:T}|x_{1:T}) \, p_\theta(x_{1:T})}{p_\theta(y_{1:T})} \propto p_\theta(y_{1:T}|x_{1:T}) \, p_\theta(x_{1:T})$$
...even if it's not obvious:
$$p(\theta|y_{1:T}) \propto p(y_{1:T}|\theta) \, p(\theta) = \left[\int_{\mathcal{X}^T} p(y_{1:T}|x_{1:T}, \theta) \, p(x_{1:T}|\theta) \, dx_{1:T}\right] p(\theta)$$
16. Section 2: Reminder on some Monte Carlo methods
17. Metropolis-Hastings algorithm
A popular method to sample from a distribution $\pi$.
Algorithm 1: Metropolis-Hastings
1: Set some $x^{(1)}$
2: for $i = 2$ to $N$ do
3: Propose $x^\star \sim q(\cdot|x^{(i-1)})$
4: Compute the ratio:
$$\alpha = \min\left(1, \frac{\pi(x^\star) \, q(x^{(i-1)}|x^\star)}{\pi(x^{(i-1)}) \, q(x^\star|x^{(i-1)})}\right)$$
5: Set $x^{(i)} = x^\star$ with probability $\alpha$; otherwise set $x^{(i)} = x^{(i-1)}$
6: end for
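Algorithm 1 can be sketched in a few lines of Python. This is a random-walk variant with a symmetric Gaussian proposal, so the $q$-ratio cancels in $\alpha$; the target, step size and iteration count are illustrative assumptions:

```python
import numpy as np

def metropolis_hastings(log_pi, x0, n_iters, step=1.0, rng=None):
    """Random-walk Metropolis-Hastings (Algorithm 1). The proposal
    q is a symmetric Gaussian, so its ratio cancels in alpha."""
    rng = np.random.default_rng() if rng is None else rng
    chain = np.empty(n_iters)
    chain[0] = x0
    for i in range(1, n_iters):
        x_star = chain[i - 1] + step * rng.standard_normal()
        # log acceptance ratio: log pi(x*) - log pi(x^{(i-1)})
        log_alpha = log_pi(x_star) - log_pi(chain[i - 1])
        if np.log(rng.uniform()) < log_alpha:
            chain[i] = x_star          # accept
        else:
            chain[i] = chain[i - 1]    # reject, keep previous value
    return chain

# Example: sample from a standard normal target (log density up to a constant)
rng = np.random.default_rng(0)
chain = metropolis_hastings(lambda x: -0.5 * x ** 2, x0=0.0, n_iters=20000, rng=rng)
```

Working with log densities, as here, is standard practice to avoid numerical underflow.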
18. Metropolis-Hastings: requirements
Requirements:
- $\pi$ can be evaluated point-wise, up to a multiplicative constant.
- $x$ is low-dimensional; otherwise designing $q$ gets tedious or even impossible.
Back to SSM:
- $p(\theta|y_{1:T})$ cannot be evaluated point-wise.
- $p_\theta(x_{1:T}|y_{1:T})$ and $p(x_{1:T}, \theta|y_{1:T})$ are high-dimensional, and cannot necessarily be computed point-wise either.
19. Gibbs sampling
Suppose the target distribution $\pi$ is defined on $\mathcal{X}^d$.
Algorithm 2: Gibbs sampling
1: Set some $x_{1:d}^{(1)}$
2: for $i = 2$ to $N$ do
3: for $j = 1$ to $d$ do
4: Draw $x_j^{(i)} \sim \pi(x_j \mid x_{1:j-1}^{(i)}, x_{j+1:d}^{(i-1)})$
5: end for
6: end for
It breaks a high-dimensional sampling problem into many low-dimensional sampling problems!
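A minimal sketch of Algorithm 2, assuming a bivariate normal target with correlation $\rho$ (chosen here because both full conditionals are Gaussian and available in closed form; the target itself is an illustrative assumption):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iters, rng):
    """Gibbs sampler (Algorithm 2) for a bivariate normal target with
    zero means, unit variances and correlation rho: each full
    conditional is N(rho * other, 1 - rho**2)."""
    x = np.zeros(2)
    samples = np.empty((n_iters, 2))
    sd = np.sqrt(1.0 - rho ** 2)
    for i in range(n_iters):
        x[0] = rho * x[1] + sd * rng.standard_normal()  # draw x_1 | x_2
        x[1] = rho * x[0] + sd * rng.standard_normal()  # draw x_2 | x_1
        samples[i] = x
    return samples

rng = np.random.default_rng(1)
samples = gibbs_bivariate_normal(rho=0.8, n_iters=20000, rng=rng)
```

As $\rho \to 1$ this chain mixes more and more slowly, which illustrates the "not too correlated" requirement on the next slide.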
20. Gibbs sampling: requirements
Requirements:
- The conditional distributions $\pi(x_j|x_{1:j-1}, x_{j+1:d})$ can be sampled from; otherwise, use MH within Gibbs.
- The components $x_j$ are not too correlated with one another.
Back to SSM:
- The hidden states $x_{1:T}$ are typically very correlated with one another.
- If the target is $p(\theta, x_{1:T}|y_{1:T})$, $\theta$ is also very correlated with $x_{1:T}$.
21. Sequential Monte Carlo for filtering: context
Suppose we are interested in $p_\theta(x_{1:T}|y_{1:T})$, with $\theta$ known. We want to get a sample $x_{1:T}^{(i)}$, $i \in [1, N]$, from it.
General idea: introduce the sequence of distributions
$$\{p_\theta(x_{1:t}|y_{1:t}),\ t \in [1, T]\}$$
and sample recursively from $p_\theta(x_{1:t}|y_{1:t})$ to $p_\theta(x_{1:t+1}|y_{1:t+1})$.
22. Sequential Monte Carlo for filtering: definition
A particle filter is just a collection of weighted points, called particles.
Particles: writing $(w^{(i)}, x^{(i)})_{i=1}^N \sim \pi$ means that the empirical distribution
$$\sum_{i=1}^N w^{(i)} \delta_{x^{(i)}}(dx)$$
converges towards $\pi$ when $N \to +\infty$.
23. Sequential Monte Carlo for filtering: importance sampling
Suppose $(w_1^{(i)}, x^{(i)})_{i=1}^N \sim \pi_1$, and define
$$w_2^{(i)} = w_1^{(i)} \times \frac{\pi_2(x^{(i)})}{\pi_1(x^{(i)})}.$$
Then $(w_2^{(i)}, x^{(i)})_{i=1}^N \sim \pi_2$, under some common-sense assumptions on $\pi_1$ and $\pi_2$.
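The reweighting identity above can be sketched directly: draw from $\pi_1 = \mathcal{N}(0,1)$, reweight towards $\pi_2 = \mathcal{N}(1,1)$, and estimate a $\pi_2$-expectation (the choice of densities is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.standard_normal(n)                     # draws from pi_1 = N(0, 1)
# log pi_2(x) - log pi_1(x) with pi_2 = N(1, 1); normalising constants cancel
log_w = -0.5 * (x - 1.0) ** 2 + 0.5 * x ** 2
w = np.exp(log_w - log_w.max())                # stabilise before exponentiating
w /= w.sum()                                   # self-normalised weights
est = float(np.sum(w * x))                     # estimate of E_{pi_2}[X] = 1
```

The "common-sense assumptions" show up in practice as the variance of the weights: the further apart $\pi_1$ and $\pi_2$ are, the fewer particles carry appreciable weight.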
24. Sequential Monte Carlo for filtering: from one time-step to the other
Suppose $(w_t^{(i)}, x_{1:t}^{(i)})_{i=1}^N \sim p_\theta(x_{1:t}|y_{1:t})$. We want $(w_{t+1}^{(i)}, x_{1:t+1}^{(i)})_{i=1}^N \sim p_\theta(x_{1:t+1}|y_{1:t+1})$.
Decomposition:
$$p_\theta(x_{1:t+1}|y_{1:t+1}) \propto p_\theta(y_{t+1}|x_{t+1}) \, p_\theta(x_{t+1}|x_t) \, p_\theta(x_{1:t}|y_{1:t}) \propto g_\theta(y_{t+1}|x_{t+1}) \, f_\theta(x_{t+1}|x_t) \, p_\theta(x_{1:t}|y_{1:t})$$
25. Sequential Monte Carlo for filtering: proposal
Propose $x_{t+1}^{(i)} \sim q_\theta(x_{t+1} \mid x_{1:t} = x_{1:t}^{(i)}, y_{1:t+1})$. Then:
$$\left(w_t^{(i)}, (x_{1:t}^{(i)}, x_{t+1}^{(i)})\right)_{i=1}^N \sim q_\theta(x_{t+1}|x_{1:t}, y_{1:t+1}) \, p_\theta(x_{1:t}|y_{1:t})$$
26. Sequential Monte Carlo for filtering: reweighting
$$w_{t+1}^{(i)} = w_t^{(i)} \times \frac{g_\theta(y_{t+1}|x_{t+1}^{(i)}) \, f_\theta(x_{t+1}^{(i)}|x_t^{(i)})}{q_\theta(x_{t+1}^{(i)}|x_{1:t}^{(i)}, y_{1:t+1})}$$
and finally we have
$$(w_{t+1}^{(i)}, x_{1:t+1}^{(i)})_{i=1}^N \sim p_\theta(x_{1:t+1}|y_{1:t+1}).$$
27. Sequential Monte Carlo for filtering: resampling
To fight weight degeneracy we introduce a resampling step.
Notation: a family of probability distributions on $\{1, \ldots, N\}^N$:
$$a \sim r(\cdot|w) \quad \text{for } w \in [0, 1]^N \text{ such that } \sum_{i=1}^N w^{(i)} = 1.$$
The variables $(a_{t-1}^{(i)})_{i=1}^N$ are the indices of the parents of $(x_{1:t}^{(i)})_{i=1}^N$.
28. Sequential Monte Carlo for filtering: the algorithm
Algorithm 3: Sequential Monte Carlo
1: Propose $x_1^{(i)} \sim \mu_\theta(\cdot)$
2: Compute the weights $w_1^{(i)}$
3: for $t = 2$ to $T$ do
4: Resample: draw $a_{t-1} \sim r(\cdot|w_{t-1})$
5: Propose $x_t^{(i)} \sim q_\theta(\cdot \mid x_{1:t-1}^{(a_{t-1}^{(i)})}, y_{1:t})$ and let $x_{1:t}^{(i)} = (x_{1:t-1}^{(a_{t-1}^{(i)})}, x_t^{(i)})$
6: Compute the new weights $w_t^{(i)}$
7: end for
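Algorithm 3 can be sketched for the local level model from the Examples section, using the bootstrap choice $q_\theta = f_\theta$ so that the incremental weight reduces to $g_\theta(y_t|x_t)$; this particular proposal, and multinomial resampling at every step, are simplifying assumptions of the sketch:

```python
import numpy as np

def bootstrap_filter(y, sigma_v, sigma_w, n_particles, rng):
    """Algorithm 3 for the local level model with the bootstrap proposal
    q = f: propose from the transition, reweight by g(y_t | x_t).
    Multinomial resampling at every step (a simplifying assumption)."""
    T = len(y)
    filt_means = np.empty(T)
    x = rng.standard_normal(n_particles)                  # x_0 ~ N(0, 1)
    w = np.full(n_particles, 1.0 / n_particles)
    for t in range(T):
        if t > 0:
            ancestors = rng.choice(n_particles, size=n_particles, p=w)
            x = x[ancestors] + sigma_w * rng.standard_normal(n_particles)
        logw = -0.5 * ((y[t] - x) / sigma_v) ** 2         # log g(y_t|x_t), up to a constant
        w = np.exp(logw - logw.max())
        w /= w.sum()
        filt_means[t] = float(np.sum(w * x))              # estimate of E[x_t | y_{1:t}]
    return filt_means

# Simulate data from the model, then filter it
rng = np.random.default_rng(3)
T, sv, sw = 50, 0.5, 0.3
x_true = np.empty(T)
x_true[0] = rng.standard_normal()
for t in range(1, T):
    x_true[t] = x_true[t - 1] + sw * rng.standard_normal()
y = x_true + sv * rng.standard_normal(T)
filt_means = bootstrap_filter(y, sv, sw, n_particles=2000, rng=rng)
```

In practice one would resample only when the effective sample size drops below a threshold, but resampling at every step keeps the sketch close to the pseudo-code above.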
29-31. Sequential Monte Carlo for filtering: illustrations
Figure: three weighted trajectories $x_{1:t}$ at time $t$.
Figure: three proposed trajectories $x_{1:t+1}$ at time $t+1$.
Figure: three reweighted trajectories $x_{1:t+1}$ at time $t+1$.
32. Sequential Monte Carlo for filtering: output
In the end we get particles
$$(w_T^{(i)}, x_{1:T}^{(i)})_{i=1}^N \sim p_\theta(x_{1:T}|y_{1:T}).$$
Requirements:
- Proposal kernels $q_\theta(\cdot|x_{1:t-1}, y_{1:t})$ from which we can sample.
- Weight functions that we can evaluate point-wise.
- These proposal kernels and weight functions must result in properly weighted samples.
33. Sequential Monte Carlo for filtering: marginal likelihood
A side effect of the SMC algorithm is that we can approximate the marginal likelihood $Z_T = p(y_{1:T}|\theta)$ with the following unbiased estimate:
$$\hat{Z}_T^N = \prod_{t=1}^{T} \left(\frac{1}{N} \sum_{i=1}^{N} w_t^{(i)}\right) \xrightarrow[N \to \infty]{P} Z_T$$
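This estimate can be checked numerically on the local level model, where the exact $Z_T$ is available from the Kalman filter since the model is linear and Gaussian. The sketch below compares $\log \hat{Z}_T^N$ from a bootstrap filter against the exact log-likelihood (the model settings are illustrative assumptions):

```python
import numpy as np

def kalman_loglik(y, sigma_v, sigma_w):
    """Exact log p(y_{1:T}) for the local level model (x_0 ~ N(0,1)),
    computed with the Kalman filter; used here as ground truth."""
    m, P = 0.0, 1.0                               # prior mean and variance of x_0
    ll = 0.0
    for t, yt in enumerate(y):
        if t > 0:
            P += sigma_w ** 2                     # predict step
        S = P + sigma_v ** 2                      # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (yt - m) ** 2 / S)
        K = P / S                                 # Kalman gain, then update
        m += K * (yt - m)
        P *= (1 - K)
    return ll

def smc_log_evidence(y, sigma_v, sigma_w, N, rng):
    """log Z_hat: sum over t of the log average unnormalised weight,
    i.e. the log of the product estimate on the slide (bootstrap proposal)."""
    x = rng.standard_normal(N)                    # x_0 ~ N(0, 1)
    w = np.full(N, 1.0 / N)
    log_Z = 0.0
    for t, yt in enumerate(y):
        if t > 0:
            x = x[rng.choice(N, size=N, p=w)] + sigma_w * rng.standard_normal(N)
        logw = -0.5 * np.log(2 * np.pi * sigma_v ** 2) - 0.5 * ((yt - x) / sigma_v) ** 2
        mx = logw.max()
        log_Z += mx + np.log(np.mean(np.exp(logw - mx)))  # log-sum-exp for stability
        w = np.exp(logw - mx)
        w /= w.sum()
    return log_Z

rng = np.random.default_rng(4)
T, sv, sw = 30, 0.5, 0.3
x = np.empty(T)
x[0] = rng.standard_normal()
for t in range(1, T):
    x[t] = x[t - 1] + sw * rng.standard_normal()
y = x + sv * rng.standard_normal(T)
exact = float(kalman_loglik(y, sv, sw))
estimate = float(smc_log_evidence(y, sv, sw, N=5000, rng=rng))
```

Note that unbiasedness holds for $\hat{Z}_T^N$ itself, not for its logarithm; the log estimate has a small downward bias that vanishes as $N$ grows.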
34. Section 3: Particle Markov Chain Monte Carlo
35. Particle MCMC: reference and motivation
Reference: "Particle Markov chain Monte Carlo methods", Andrieu, Doucet & Holenstein, JRSS B, 2010, 72(3):269-342.
Motivation: Bayesian inference in state space models, i.e. sampling from $p(\theta, x_{1:T}|y_{1:T})$.
36. Idealized Metropolis-Hastings for SSM
If only... we had $p(\theta|y_{1:T}) \propto p(\theta) \, p(y_{1:T}|\theta)$ up to a multiplicative constant, we could run a MH algorithm with acceptance ratio:
$$\alpha(\theta^{(i)}, \theta^\star) = \min\left(1, \frac{p(\theta^\star) \, p(y_{1:T}|\theta^\star)}{p(\theta^{(i)}) \, p(y_{1:T}|\theta^{(i)})} \, \frac{q(\theta^{(i)}|\theta^\star)}{q(\theta^\star|\theta^{(i)})}\right)$$
37. Valid Metropolis-Hastings for SSM?
Plug in estimates: we do have $\hat{Z}_T^N(\theta) \approx p(y_{1:T}|\theta)$ by running an SMC algorithm, and we can try to run a MH algorithm with acceptance ratio:
$$\alpha(\theta^{(i)}, \theta^\star) = \min\left(1, \frac{p(\theta^\star) \, \hat{Z}_T^N(\theta^\star)}{p(\theta^{(i)}) \, \hat{Z}_T^N(\theta^{(i)})} \, \frac{q(\theta^{(i)}|\theta^\star)}{q(\theta^\star|\theta^{(i)})}\right)$$
38. The beauty of Particle MCMC
"Exact approximation": it turns out this is a valid MH algorithm that targets exactly $p(\theta|y_{1:T})$, regardless of the number $N$ of particles used in the SMC algorithm that provides the estimates $\hat{Z}_T^N(\theta)$ at each iteration.
State estimation: in fact, PMCMC algorithms provide samples from $p(\theta, x_{1:T}|y_{1:T})$, not only from the posterior distribution of the parameters.
39. Introduction and State Space Models
Reminder on some Monte Carlo methods
Particle Markov Chain Monte Carlo
SMC2
Particle Metropolis-Hastings
Algorithm 4 Particle Metropolis–Hastings algorithm
1: Set some θ^{(1)}
2: Run an SMC algorithm; keep Ẑ_T^N(θ^{(1)}), draw a trajectory x_{1:T}^{(1)}
3: for i = 2 to I do
4:   Propose θ' ∼ q(·|θ^{(i−1)})
5:   Run an SMC algorithm; keep Ẑ_T^N(θ'), draw a trajectory x'_{1:T}
6:   Compute the ratio:
     α(θ^{(i−1)}, θ') = min{ 1, [p(θ') Ẑ_T^N(θ') q(θ^{(i−1)}|θ')] / [p(θ^{(i−1)}) Ẑ_T^N(θ^{(i−1)}) q(θ'|θ^{(i−1)})] }
7:   With probability α, set θ^{(i)} = θ' and x_{1:T}^{(i)} = x'_{1:T}; otherwise keep the previous values
8: end for
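The loop above can be sketched in a few lines of Python. The Gaussian random-walk proposal, the function names and the signatures are illustrative assumptions, not the talk's choices; the one essential point carried over from the algorithm is that the likelihood estimate of the current state is stored and reused, never recomputed.

```python
import numpy as np

def particle_mh(y, loglik, log_prior, theta0, n_iter=500, step=0.2, rng=None):
    """Particle Metropolis-Hastings with a Gaussian random-walk proposal.

    `loglik(y, theta)` must return an unbiased estimate of p(y | theta)
    on the log scale (e.g. a particle-filter estimate); the chain then
    targets p(theta | y) exactly, whatever the number of particles.
    """
    rng = np.random.default_rng(rng)
    theta, ll = theta0, loglik(y, theta0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()     # symmetric proposal
        ll_prop = loglik(y, prop)              # fresh SMC estimate
        log_alpha = (log_prior(prop) + ll_prop) - (log_prior(theta) + ll)
        if np.log(rng.uniform()) < log_alpha:
            theta, ll = prop, ll_prop          # accept: keep the estimate
        chain[i] = theta                       # reject: reuse stored ll
    return chain
```

With a symmetric proposal the q-ratio cancels, which is why it does not appear in `log_alpha` above.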
Why does it work?
Variables generated by SMC
∀t ∈ [1, T]: x_t = (x_t^{(1)}, …, x_t^{(N)})
∀t ∈ [1, T−1]: a_t = (a_t^{(1)}, …, a_t^{(N)})
Joint distribution
ψ_θ(x_1, …, x_T, a_1, …, a_{T−1}) = ∏_{i=1}^N q_θ(x_1^{(i)}) × ∏_{t=2}^T ∏_{i=1}^N r(a_{t−1}^{(i)} | w_{t−1}) q_θ(x_t^{(i)} | x_{1:t−1}^{a_{1:t−1}^{(i)}})
Why does it work?
Extended proposal distribution
The PMH proposes: a new parameter θ', a trajectory x_{1:T}^{k}, and the rest of the variables generated by the SMC.
q^N(θ', k, x_1, …, x_T, a_1, …, a_{T−1}) = q(θ'|θ^{(i)}) w_T^{k} ψ_{θ'}(x_1, …, x_T, a_1, …, a_{T−1})
Why does it work?
Extended target distribution
π^N(θ, k, x_1, …, x_T, a_1, …, a_{T−1}) = [p(θ, x_{1:T}^{k} | y_{1:T}) / N^T] × ψ_θ(x_1, …, x_T, a_1, …, a_{T−1}) / [ q_θ(x_1^{b_1^k}) ∏_{t=2}^T r(b_{t−1}^k | w_{t−1}) q_θ(x_t^{b_t^k} | x_{1:t−1}) ]
with b_{1:T}^k the index history of particle x_{1:T}^{(k)}.
Valid algorithm
From the explicit form of the extended distributions, showing that
PMH is a standard MH algorithm becomes straightforward.
Particle MCMC: conclusion
Remarks
It is exact regardless of N . . .
. . . however a sufficient number N of particles is required to
get decent acceptance rates.
SMC methods are considered expensive, but easy to
parallelize.
Applies to a broad class of models.
More sophisticated SMC and MCMC methods can be used,
and result in more sophisticated Particle MCMC methods.
Outline
1 Introduction and State Space Models
2 Reminder on some Monte Carlo methods
3 Particle Markov Chain Monte Carlo
4 SMC2
Our idea. . .
. . . was to use the same, very powerful "extended distribution" framework to build an SMC sampler instead of an MCMC algorithm.
Foreseen benefits
to sample more efficiently from the posterior distribution p(θ|y_{1:T}),
to sample sequentially from p(θ|y_1), p(θ|y_1, y_2), …, p(θ|y_{1:T}),
and, it turns out, it allows even a bit more.
Idealized SMC sampler for SSM
Algorithm 5 Iterated Batch Importance Sampling
1: Sample from the prior: θ^{(m)} ∼ p(·) for m ∈ [1, N_θ]
2: Set ω^{(m)} ← 1
3: for t = 1 to T do
4:   Compute u_t(θ^{(m)}) = p(y_t | y_{1:t−1}, θ^{(m)})
5:   Update ω^{(m)} ← ω^{(m)} × u_t(θ^{(m)})
6:   if some degeneracy criterion is met then
7:     Resample the particles, reset the weights ω^{(m)} ← 1
8:     Move the particles using a Markov kernel leaving the current distribution p(θ|y_{1:t}) invariant
9:   end if
10: end for
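The reweight/resample skeleton of Algorithm 5 can be sketched as follows. The function names, the vectorized interface and the ESS-based degeneracy criterion are illustrative assumptions; in particular the move step of line 8 is only indicated by a comment, and in the full SMC² algorithm `incr_loglik` would itself be a particle-filter estimate.

```python
import numpy as np

def ibis(y, incr_loglik, sample_prior, n_theta=100, ess_ratio=0.5, rng=None):
    """Iterated Batch Importance Sampling (sketch of Algorithm 5).

    `incr_loglik(thetas, y, t)` returns log p(y_t | y_{1:t-1}, theta)
    for a whole array of theta-particles at once.
    """
    rng = np.random.default_rng(rng)
    thetas = sample_prior(n_theta, rng)
    logw = np.zeros(n_theta)
    for t in range(len(y)):
        logw += incr_loglik(thetas, y, t)     # reweight by the increment
        w = np.exp(logw - logw.max())
        ess = w.sum() ** 2 / (w ** 2).sum()   # effective sample size
        if ess < ess_ratio * n_theta:         # degeneracy criterion
            idx = rng.choice(n_theta, n_theta, p=w / w.sum())
            thetas, logw = thetas[idx], np.zeros(n_theta)
            # a full implementation would now move the particles with a
            # (P)MCMC kernel leaving p(theta | y_{1:t}) invariant
    return thetas, logw
```

Without the move step the resampled particles are duplicates; the MCMC move is what restores diversity in the θ-cloud.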
Valid SMC sampler for SSM ??
Plug-in estimates
Similarly to PMCMC methods, we want to replace p(y_t | y_{1:t−1}, θ^{(m)}) with an unbiased estimate, and see what happens.
SMC everywhere
We associate Nx x-particles to each of the Nθ θ-particles.
Valid SMC sampler for SSM ??
Marginal likelihood
Remember, a side effect of the SMC algorithm is that we can
approximate the incremental likelihood:
(1/N_x) ∑_{i=1}^{N_x} w_t^{(i,m)} ≈ p(y_t | y_{1:t−1}, θ^{(m)})
Move steps
Instead of simple MH kernels, use PMH kernels.
Why does it work?
A simple idea. . .
. . . especially after the PMCMC article.
Still. . .
. . . some work had to be done to justify the validity of the
algorithm.
In short, it leads to a standard SMC sampler on a sequence of
extended distributions πt (proposition 1 of the article).
Why does it work?
Additional notations
hn denotes the index history of xtn , that is, hn (t) = n, and
t t
n
htn (s) = aht (s+1) recursively, for s = t − 1, . . . , 1.
s
xn denotes a state trajectory finishing in xtn , that is:
1:t
hn (s)
xn (s) = xs t
1:t , for s = 1, . . . , t.
Why does it work?
Here is what the distribution π_t looks like:
π_t(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x}) = p(θ|y_{1:t}) × (1/N_x^t) ∑_{n=1}^{N_x} [ p(x_{1:t}^n | θ, y_{1:t}) ∏_{i=1, i≠h_t^n(1)}^{N_x} q_{1,θ}(x_1^i) × ∏_{s=2}^{t} ∏_{i=1, i≠h_t^n(s)}^{N_x} W_{s−1,θ}^{a_{s−1}^i} q_{s,θ}(x_s^i | x_{s−1}^{a_{s−1}^i}) ]
Why does it work?
PMCMC move steps
These steps are valid because the PMCMC invariant distribution π̃_t, defined on
(θ, k, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x}),
is such that π_t is the marginal distribution of
(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x})
with respect to π̃_t.
(Sections 3.2, 3.3 of the article)
Benefits
Explicit form of the distribution
It makes it possible to prove the validity of the algorithm, but also:
to get samples from p(θ, x_{1:t}|y_{1:t}),
to validate an automatic calibration of N_x.
Benefits
Drawing trajectories
If, for every θ-particle θ^{(m)}, one draws an index n(m) on {1, …, N_x} with probability proportional to the x-weights W_t^{n,m}, then the weighted sample:
(ω^m, θ^m, x_{1:t}^{n(m),m})_{m ∈ 1:N_θ}
follows p(θ, x_{1:t}|y_{1:t}).
Memory cost
The x-trajectories need to be stored if one wants to make inference about x_{1:t} (smoothing).
If the interest lies only in parameter inference (θ), filtering (x_t) and prediction (y_{t+1}), there is no need to store the trajectories.
Benefits
Estimating functionals of the states
We have a test function h and want to estimate E [h(θ, x1:t )|y1:t ].
Estimator:
(1/∑_{m=1}^{N_θ} ω^m) ∑_{m=1}^{N_θ} ω^m h(θ^m, x_{1:t}^{n(m),m}).
Rao–Blackwellized estimator:
(1/∑_{m=1}^{N_θ} ω^m) ∑_{m=1}^{N_θ} ω^m { ∑_{n=1}^{N_x} W_{t,θ^m}^{n,m} h(θ^m, x_{1:t}^{n,m}) }.
(Section 3.4 of the article)
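The Rao–Blackwellized estimator amounts to a couple of weighted averages over already-computed quantities. A minimal sketch, assuming the weight arrays and the evaluations of h are available (the function name and array layout are my own):

```python
import numpy as np

def rb_estimate(omega, W, h_vals):
    """Rao-Blackwellized estimate of E[h(theta, x_{1:t}) | y_{1:t}].

    omega:  (N_theta,)       theta-particle weights
    W:      (N_theta, N_x)   normalized x-weights, each row sums to 1
    h_vals: (N_theta, N_x)   h evaluated at each (theta^m, x^{n,m}_{1:t})
    Averaging over all x-particles, instead of a single drawn trajectory
    per theta-particle, reduces variance at no extra simulation cost.
    """
    inner = (W * h_vals).sum(axis=1)   # inner sum over n, for each m
    return (omega * inner).sum() / omega.sum()
```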
Benefits
Evidence
The evidence of the data given the model is defined as:
p(y_{1:t}) = ∏_{s=1}^{t} p(y_s | y_{1:s−1})
And it can be used to compare models. SMC2 provides the following estimate:
L̂_t = (1/∑_{m=1}^{N_θ} ω^m) ∑_{m=1}^{N_θ} ω^m p̂(y_t | y_{1:t−1}, θ^m)
(Section 3.5 of the article)
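Multiplying the estimates L̂_t over time gives a running estimate of the evidence p(y_{1:t}). A small sketch on the log scale, assuming the per-step θ-weights and increment estimates have been collected (names are illustrative):

```python
import numpy as np

def log_evidence(omega_hist, incr_hist):
    """Accumulate log p-hat(y_{1:t}) from per-step SMC^2 output.

    omega_hist[t]: theta-weights just before the reweighting at time t
    incr_hist[t]:  estimates p-hat(y_t | y_{1:t-1}, theta^m), per particle
    Returns the running log-evidence for each t.
    """
    logL, out = 0.0, []
    for omega, incr in zip(omega_hist, incr_hist):
        logL += np.log(np.sum(omega * incr) / np.sum(omega))
        out.append(logL)
    return np.array(out)
```

Comparing two models then reduces to differencing their running log-evidences, as in the S&P500 illustration later in the talk.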
Benefits
Exchange importance sampling step
Launch a new SMC for each θ-particle, with Ñ_x x-particles. Joint distribution:
π_t(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x}) ψ_{t,θ}(x̃_{1:t}^{1:Ñ_x}, ã_{1:t−1}^{1:Ñ_x})
Retain the new x-particles and drop the old ones, updating the θ-weights with:
u_t^{exch}(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x}, x̃_{1:t}^{1:Ñ_x}, ã_{1:t−1}^{1:Ñ_x}) = Ẑ_t(θ, x̃_{1:t}^{1:Ñ_x}, ã_{1:t−1}^{1:Ñ_x}) / Ẑ_t(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x})
(Section 3.6 of the article)
Warning
Plug in estimates
Not every SMC sampler can be turned into an SMC2 algorithm by replacing the exact weights with estimates: the estimates have to be unbiased!
Warning
Example
For instance, if instead of using the sequence of distributions:
{p(θ|y_{1:t})}_{t=1}^{T}
one wants to use the "tempered" sequence:
{p(θ|y_{1:T})^{γ_k}}_{k=1}^{K}
with (γ_k) an increasing sequence from 0 to 1, then one should find unbiased estimates of p(θ|y_{1:T})^{γ_k − γ_{k−1}} to plug into the idealized SMC sampler.
Numerical illustrations
Stochastic Volatility (sophisticated)
y_t = μ + β v_t + v_t^{1/2} ε_t,  t ≥ 1
k ∼ Poi(λ ξ²/ω²),  c_{1:k} ∼ iid U(t, t+1),  e_{1:k} ∼ iid Exp(ξ/ω²)
z_{t+1} = e^{−λ} z_t + ∑_{j=1}^{k} e^{−λ(t+1−c_j)} e_j
v_{t+1} = (1/λ) [ z_t − z_{t+1} + ∑_{j=1}^{k} e_j ]
x_{t+1} = (v_{t+1}, z_{t+1})
Numerical illustrations
Figure: Squared observations (synthetic data set), acceptance rates, and illustration of the automatic increase of N_x.
Numerical illustrations
Figure: Concentration of the posterior distribution for parameter µ, for T = 250, 500, 750 and 1000.
Numerical illustrations
Multifactor model
y_t = μ + β v_t + v_t^{1/2} ε_t + ρ_1 ∑_{j=1}^{k_1} e_{1,j} + ρ_2 ∑_{j=1}^{k_2} e_{2,j} − ξ (w ρ_1 λ_1 + (1−w) ρ_2 λ_2)
Numerical illustrations
Evidence compared to the one factor model
Figure: S&P500 squared observations, and log-evidence comparison between models (multi-factor with and without leverage, relative to the one-factor model).
Numerical illustrations
Athletics records model
g(y_{1:2,t} | μ_t, ξ, σ) = {1 − G(y_{2,t} | μ_t, ξ, σ)} ∏_{i=1}^{2} g(y_{i,t} | μ_t, ξ, σ) / {1 − G(y_{i,t} | μ_t, ξ, σ)}
x_t = (μ_t, μ̇_t),  x_{t+1} | x_t, ν ∼ N(F x_t, Q),
with
F = [1 1; 0 1] and Q = ν² [1/3 1/2; 1/2 1]
G(y | μ, ξ, σ) = 1 − exp{ −[1 − ξ (y − μ)/σ]_+^{−1/ξ} }
Numerical illustrations
Figure: Best two times (in seconds) of each year, in women's 3000 metres events between 1976 and 2010.
Numerical illustrations
Motivating question
How unlikely is Wang Junxia’s record in 1993?
A smoothing problem
We want to estimate the likelihood of Wang Junxia’s record in
1993, given that we observe a better time than the previous world
record. We want to use all the observations from 1976 to 2010 to
answer the question.
Note
We exclude observations from the year 1993.
Numerical illustrations
Some probabilities of interest
p_t^y = P(y_t ≤ y | y_{1976:2010}) = ∫_Θ ∫_X G(y | μ_t, θ) p(μ_t | y_{1976:2010}, θ) p(θ | y_{1976:2010}) dμ_t dθ
The interest lies in p_{1993}^{486.11}, p_{1993}^{502.62} and p_t^{cond} := p_t^{486.11} / p_t^{502.62}.
Numerical illustrations
Figure: Estimates of the probabilities of interest, (top) p_t^{502.62}, (middle) p_t^{cond} and (bottom) p_t^{486.11}, obtained with the SMC2 algorithm. The y-axis is in log scale, and the dotted line indicates the year 1993 which motivated the study.
Conclusion
A powerful framework
The SMC2 framework makes it possible to obtain various quantities of interest in a quite generic, "black-box" way.
It extends the PMCMC framework introduced by Andrieu,
Doucet and Holenstein.
A package is available:
http://code.google.com/p/py-smc2/.
Acknowledgments
N. Chopin is supported by the ANR grant
ANR-008-BLAN-0218 “BigMC” of the French Ministry of
research.
P.E. Jacob is supported by a PhD fellowship from the AXA
Research Fund.
O. Papaspiliopoulos would like to acknowledge financial
support by the Spanish government through a “Ramon y
Cajal” fellowship and grant MTM2009-09063.
The authors are thankful to Arnaud Doucet (University of British
Columbia) and to Gareth W. Peters (University of New South
Wales) for useful comments.
Bibliography
SMC2 : A sequential Monte Carlo algorithm with particle Markov
chain Monte Carlo updates, N. Chopin, P.E. Jacob, O.
Papaspiliopoulos, submitted
Main references:
Particle Markov Chain Monte Carlo methods, C. Andrieu, A.
Doucet, R. Holenstein, JRSS B., 2010, 72(3):269–342
The pseudo-marginal approach for efficient computation, C.
Andrieu, G.O. Roberts, Ann. Statist., 2009, 37, 697–725
Random weight particle filtering of continuous time processes,
P. Fearnhead, O. Papaspiliopoulos, G.O. Roberts, A. Stuart,
JRSS B., 2010, 72:497–513
Feynman-Kac Formulae, P. Del Moral, Springer