Sequential Monte Carlo algorithms for
agent-based models of disease transmission
Jeremy Heng
ESSEC Business School
Joint work with Phyllis Ju and Pierre Jacob (Harvard)
Probability and Statistics Seminar
University of Kansas, Department of Mathematics
7 April 2021
JH Agent-based models 1/ 39
Outline
1 Agent-based models
2 Agent-based SIS model
3 Sequential Monte Carlo
JH Agent-based models 2/ 39
Agent-based models
• Agent-based models specify how a population of agents
interact and evolve over time
• Flexible, interpretable and widely employed in many fields
• Can render realistic macroscopic phenomena from simple
microscopic rules
Figure: SimCity by Electronic Arts
JH Agent-based models 3/ 39
Software for agent-based models
JH Agent-based models 4/ 39
Calibration of agent-based models
• These models are typically calibrated by matching key
features of simulated and actual data
• Can be computationally intensive and difficult to calibrate
Figure: Simulated and historical settlement patterns in Long House Valley
JH Agent-based models 5/ 39
Statistical inference for agent-based models
• Given occasional noisy measurements of the population, we
could consider statistical inference for such models
• Few works have addressed this important topic as
likelihood-based inference is computationally challenging
• We propose sequential Monte Carlo algorithms for some
classical agent-based models in epidemiology
• The general principle is to ‘open the black box’ of these models and exploit their inherent structure
JH Agent-based models 6/ 39
Compartmental models in epidemiology
• A population-level approach assigns the population to
compartments and models the number of people in each
compartment over time
SIR model: Susceptible → Infected → Recovered, with infection rate λ and recovery rate γ
SIS model: Susceptible → Infected → Susceptible, with infection rate λ and recovery rate γ
JH Agent-based models 7/ 39
Agent-based models in epidemiology
• The agent-based approach assumes agents can take these
states and models the state of each agent n over time
SIR model: Susceptible → Infected → Recovered, with agent-specific rates λ^n and γ^n
SIS model: Susceptible → Infected → Susceptible, with agent-specific rates λ^n and γ^n
JH Agent-based models 8/ 39
Why agent-based models?
• May be unrealistic to assume all agents interact
Figure: Fully connected network versus small world network
JH Agent-based models 9/ 39
Why agent-based models?
• May be unrealistic to assume agents are interchangeable
Figure: Gender-specific (left) and age-specific (right) distributions of
COVID-19 incubation period (Zhao et al. 2020, AoAS)
JH Agent-based models 10/ 39
Outline
1 Agent-based models
2 Agent-based SIS model
3 Sequential Monte Carlo
JH Agent-based models 10/ 39
Agent-based SIS model
• We consider the agent-based SIS model and encode
Susceptible = 0 and Infected = 1
• Let X_t = (X_t^n)_{n ∈ [1:N]} ∈ {0, 1}^N denote the state of a closed population of N agents at time t ∈ [0 : T]
• Initialization X_0 ∼ µ_θ is given by
X_0^n ∼ Ber(α_0^n), independently for n ∈ [1 : N]
• Markov transition X_t ∼ f_θ(·|X_{t−1}) at time t ∈ [1 : T] is given by
X_t^n ∼ Ber(α^n(X_{t−1})), independently for n ∈ [1 : N]
JH Agent-based models 11/ 39
Agent-based SIS model
• Transition probability specified as
α^n(X_{t−1}) = λ^n D(n)^{−1} Σ_{m ∈ N(n)} X_{t−1}^m if X_{t−1}^n = 0, and 1 − γ^n if X_{t−1}^n = 1
• Interactions specified by an undirected network: D(n) and N(n) denote the degree and neighbours of agent n
• Infection and recovery rates are modelled using agent-specific attributes
λ^n = (1 + exp(−β_λ^⊤ w^n))^{−1}, γ^n = (1 + exp(−β_γ^⊤ w^n))^{−1},
where β_λ, β_γ ∈ R^d are parameters and w^n ∈ R^d are the covariates of agent n (similarly α_0^n depends on β_0)
JH Agent-based models 12/ 39
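To make the mapping from covariates and network to transition probabilities concrete, here is a minimal sketch in Python/NumPy (not the authors' R package; all function and variable names are hypothetical) of how λ^n, γ^n and α^n(X_{t−1}) could be computed from an adjacency matrix and a covariate matrix.

```python
import numpy as np

def rates(beta, W):
    """Logistic map from agent covariates W (N x d) to rates in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-W @ beta))

def transition_probs(x_prev, adjacency, lam, gam):
    """alpha^n(x_{t-1}): probability that agent n is infected at time t.

    x_prev: previous states in {0,1}^N; adjacency: symmetric 0/1 matrix;
    lam, gam: agent-specific infection and recovery rates.
    """
    degree = np.maximum(adjacency.sum(axis=1), 1)     # D(n), guarded against isolated agents
    infected_neighbours = adjacency @ x_prev          # sum of x^m_{t-1} over m in N(n)
    prob_if_susceptible = lam * infected_neighbours / degree
    prob_if_infected = 1.0 - gam                      # infected agent stays infected w.p. 1 - gamma^n
    return np.where(x_prev == 0, prob_if_susceptible, prob_if_infected)

# Hypothetical usage: N = 4 agents, d = 2 covariates, a line network.
W = np.array([[1.0, 0.2], [1.0, -0.5], [1.0, 1.3], [1.0, 0.0]])
lam, gam = rates(np.array([-1.0, 2.0]), W), rates(np.array([0.5, -1.0]), W)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(transition_probs(np.array([0, 1, 0, 1]), A, lam, gam))
```

With a fully connected network and identical covariates this reduces to the homogeneous rates discussed on the next slide.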
Agent-based SIS model
• If the network is fully connected D(n) = N, N(n) = [1 : N] and the agents are homogeneous λ^n = λ, γ^n = γ
• We recover the classical SIS model of Kermack and
McKendrick (1927), which has a deterministic limit as
N → ∞
• These simpler models offer dimension reduction which
facilitates inference
• However, one cannot incorporate network information and
agent attributes
• We will use these simplifications to construct efficient SMC
proposal distributions for the agent-based model
JH Agent-based models 13/ 39
Agent-based SIS model
• Observations (Y_t)_{t ∈ [0:T]} are the number of infections reported over time
• Modelled as conditionally independent given (X_t)_{t ∈ [0:T]}, with
Y_t ∼ g_θ(·|X_t) = Bin(I(X_t), ρ)
• I(X_t) = Σ_{n=1}^N X_t^n is the number of infections and ρ ∈ (0, 1) is the reporting rate
• Parameters to be inferred: θ = (β_0, β_λ, β_γ, ρ)
JH Agent-based models 14/ 39
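The full data-generating process of the agent-based SIS model with binomial reporting can then be simulated as follows; this is an illustrative, self-contained sketch (hypothetical names, not the authors' package).

```python
import numpy as np

def simulate_sis(adjacency, alpha0, lam, gam, rho, T, seed=0):
    """Simulate agent states X_{0:T} and reported counts Y_{0:T} from the agent-based SIS model."""
    rng = np.random.default_rng(seed)
    N = len(alpha0)
    degree = np.maximum(adjacency.sum(axis=1), 1)
    x = (rng.random(N) < alpha0).astype(int)              # X_0^n ~ Ber(alpha_0^n)
    xs, ys = [x], [rng.binomial(x.sum(), rho)]            # Y_0 ~ Bin(I(X_0), rho)
    for _ in range(T):
        alpha = np.where(x == 0,
                         lam * (adjacency @ x) / degree,  # susceptible: infected by neighbours
                         1.0 - gam)                       # infected: stays infected w.p. 1 - gamma^n
        x = (rng.random(N) < alpha).astype(int)           # X_t^n ~ Ber(alpha^n(X_{t-1}))
        xs.append(x)
        ys.append(rng.binomial(x.sum(), rho))             # Y_t ~ Bin(I(X_t), rho)
    return np.array(xs), np.array(ys)

# Hypothetical example: 10 homogeneous agents on a fully connected network.
N = 10
A = np.ones((N, N), dtype=int) - np.eye(N, dtype=int)
states, counts = simulate_sis(A, alpha0=np.full(N, 0.2), lam=np.full(N, 0.7),
                              gam=np.full(N, 0.3), rho=0.8, T=20)
print(counts)
```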
Graphical model representation
Figure: T = 4 time steps, a fully connected network with N = 3 agents
JH Agent-based models 15/ 39
Likelihood of agent-based SIS model
• We have a standard hidden Markov model
p_θ(x_{0:T}, y_{0:T}) = µ_θ(x_0) Π_{t=1}^T f_θ(x_t|x_{t−1}) Π_{t=0}^T g_θ(y_t|x_t)
• The marginal likelihood is
p_θ(y_{0:T}) = Σ_{x_{0:T} ∈ {0,1}^{N×(T+1)}} p_θ(x_{0:T}, y_{0:T})
• Maximum likelihood estimation computes
arg max_θ p_θ(y_{0:T})
• Bayesian inference samples from
p(θ|y_{0:T}) ∝ p(θ) p_θ(y_{0:T})
JH Agent-based models 16/ 39
Likelihood of agent-based SIS model
• We have a hidden Markov model on a discrete state-space
• We can employ the forward algorithm to compute the marginal likelihood exactly
• The cost is of order
(no. of states)^2 × (no. of observations) = O(2^{2N} T)
• For large N, we have to rely on sequential Monte Carlo
(SMC) methods to approximate the marginal likelihood
JH Agent-based models 17/ 39
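For very small N the forward algorithm can be run exactly by enumerating all 2^N configurations; the brute-force sketch below (illustrative only, hypothetical names) makes the O(2^{2N} T) cost explicit through the 2^N × 2^N transition matrix.

```python
import itertools
import numpy as np
from scipy.stats import binom

def forward_loglik(ys, adjacency, alpha0, lam, gam, rho):
    """Exact log p_theta(y_{0:T}) by the forward algorithm over all 2^N agent configurations."""
    N = len(alpha0)
    states = np.array(list(itertools.product([0, 1], repeat=N)))      # all 2^N states
    degree = np.maximum(adjacency.sum(axis=1), 1)

    def trans_prob(x_prev, x_new):
        alpha = np.where(x_prev == 0, lam * (adjacency @ x_prev) / degree, 1.0 - gam)
        return np.prod(np.where(x_new == 1, alpha, 1.0 - alpha))

    # 2^N x 2^N transition matrix f_theta(x_t | x_{t-1}); the squared factor in the cost.
    K = np.array([[trans_prob(xp, xn) for xn in states] for xp in states])
    # Initialisation: mu_theta(x) g_theta(y_0 | x), then normalise and accumulate the log-normaliser.
    forward = np.array([np.prod(np.where(x == 1, alpha0, 1.0 - alpha0)) for x in states])
    forward = forward * binom.pmf(ys[0], states.sum(axis=1), rho)
    loglik = np.log(forward.sum()); forward = forward / forward.sum()
    for y in ys[1:]:
        forward = (forward @ K) * binom.pmf(y, states.sum(axis=1), rho)
        loglik += np.log(forward.sum()); forward = forward / forward.sum()
    return loglik
```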
Outline
1 Agent-based models
2 Agent-based SIS model
3 Sequential Monte Carlo
JH Agent-based models 17/ 39
Sequential Monte Carlo
• Sequential Monte Carlo (SMC) methods, aka particle filters, have become quite advanced and well-understood since their introduction in the 90s
• The idea is to recursively simulate an interacting particle system of size P
• For time t ∈ [0 : T], we have P states and ancestor indexes
(X_t^{(1)}, . . . , X_t^{(P)}), (A_t^{(1)}, . . . , A_t^{(P)})
JH Agent-based models 18/ 39
Sequential Monte Carlo
For time t = 0 and particle p ∈ [1 : P],
sample X_0^{(p)} ∼ q_0(x_0|θ)
JH Agent-based models 19/ 39
Sequential Monte Carlo
For time t = 0 and particle p ∈ [1 : P],
compute weight W_0^{(p)} ∝ w_0(X_0^{(p)})
JH Agent-based models 19/ 39
Sequential Monte Carlo
For time t = 0 and particle p ∈ [1 : P],
sample ancestor A_0^{(p)} ∼ R(W_0^{(1)}, . . . , W_0^{(P)}), resampled particle: X_0^{(A_0^{(p)})}
JH Agent-based models 19/ 39
Sequential Monte Carlo
For time t = 1 and particle p ∈ [1 : P],
sample X_1^{(p)} ∼ q_1(x_1|X_0^{(A_0^{(p)})}, θ)
JH Agent-based models 19/ 39
Sequential Monte Carlo
For time t = 1 and particle p ∈ [1 : P],
compute weight W_1^{(p)} ∝ w_1(X_0^{(A_0^{(p)})}, X_1^{(p)})
JH Agent-based models 19/ 39
Sequential Monte Carlo
For time t = 1 and particle p ∈ [1 : P],
sample ancestor A_1^{(p)} ∼ R(W_1^{(1)}, . . . , W_1^{(P)}), resampled particle: X_1^{(A_1^{(p)})}
JH Agent-based models 19/ 39
Sequential Monte Carlo
Repeat for time t ∈ [2 : T]. Note this is for a given θ!
JH Agent-based models 19/ 39
Likelihood estimation
• Weight functions (w_t)_{t ∈ [0:T]} and proposal distributions (q_t)_{t ∈ [0:T]} have to satisfy
w_0(x_0) Π_{t=1}^T w_t(x_{t−1}, x_t) = p_θ(x_{0:T}, y_{0:T}) / q(x_{0:T}|θ)
where q(x_{0:T}|θ) = q_0(x_0|θ) Π_{t=1}^T q_t(x_t|x_{t−1}, θ)
• We can compute a marginal likelihood estimator
p̂_θ(y_{0:T}) = ( (1/P) Σ_{p=1}^P w_0(X_0^{(p)}) ) × Π_{t=1}^T ( (1/P) Σ_{p=1}^P w_t(X_{t−1}^{(A_{t−1}^{(p)})}, X_t^{(p)}) )
• Unbiasedness and consistency as P → ∞ follow from Del Moral (2004)
JH Agent-based models 20/ 39
Bootstrap particle filter
• The bootstrap particle filter (BPF) of Gordon et al. (1993)
employs the proposal distributions
q_0(x_0|θ) = µ_θ(x_0), q_t(x_t|x_{t−1}, θ) = f_θ(x_t|x_{t−1})
and weight functions
w_t(x_t) = g_θ(y_t|x_t)
• BPF can be readily implemented as simulating the latent process is straightforward
• However, it suffers from the curse of dimensionality for large N
– need large P to control the variance of p̂_θ(y_{0:T})
– p̂_θ(y_{0:T}) can collapse to zero
JH Agent-based models 21/ 39
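A minimal bootstrap particle filter for this model, returning the log of the unbiased likelihood estimator of the previous slide; this sketch uses multinomial resampling at every step and hypothetical names, and is not the authors' implementation.

```python
import numpy as np
from scipy.stats import binom
from scipy.special import logsumexp

def bpf_loglik(ys, adjacency, alpha0, lam, gam, rho, P=1024, seed=1):
    """Bootstrap particle filter estimate of log p_theta(y_{0:T})."""
    rng = np.random.default_rng(seed)
    N = len(alpha0)
    degree = np.maximum(adjacency.sum(axis=1), 1)
    # t = 0: propose P particles from mu_theta and weight by g_theta(y_0 | x).
    x = (rng.random((P, N)) < alpha0).astype(int)
    logw = binom.logpmf(ys[0], x.sum(axis=1), rho)
    loglik = logsumexp(logw) - np.log(P)                     # log of (1/P) sum_p w_0(X_0^(p))
    for y in ys[1:]:
        # Multinomial resampling of ancestors with normalised weights.
        w = np.exp(logw - logsumexp(logw))
        w = w / w.sum()
        x = x[rng.choice(P, size=P, p=w)]
        # Propose from the model transition f_theta (the bootstrap choice).
        alpha = np.where(x == 0, lam * (x @ adjacency) / degree, 1.0 - gam)
        x = (rng.random((P, N)) < alpha).astype(int)
        # Weight by g_theta(y_t | x) and accumulate the likelihood factor.
        logw = binom.logpmf(y, x.sum(axis=1), rho)
        loglik += logsumexp(logw) - np.log(P)
    return loglik
```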
Likelihood estimation
• Efficiency of SMC crucially relies on the choice of proposal
distributions
• Poor performance of BPF is not surprising, as it does not use
any information from the observations
• We show how to implement the fully adapted auxiliary
particle filter that accounts for the next observation
• We propose a novel controlled SMC method that takes the
entire observation sequence into account
JH Agent-based models 22/ 39
Auxiliary particle filter
• The auxiliary particle filter (APF) was introduced in Pitt
and Shephard (1999) and Carpenter et al. (1999)
• It employs the proposal distributions
q_0(x_0|θ) = p_θ(x_0|y_0), q_t(x_t|x_{t−1}, θ) = p_θ(x_t|x_{t−1}, y_t)
and weight functions
w_t(x_{t−1}) = p_θ(y_t|x_{t−1})
• Sampling from these proposals and evaluating these weights
are not always tractable
JH Agent-based models 23/ 39
Auxiliary particle filter
• The predictive likelihood is
p_θ(y_t|x_{t−1}) = Σ_{x_t ∈ {0,1}^N} f_θ(x_t|x_{t−1}) g_θ(y_t|x_t)
= Σ_{x_t ∈ {0,1}^N} Π_{n=1}^N Ber(x_t^n; α^n(x_{t−1})) Bin(y_t; I(x_t), ρ)
= Σ_{i_t = y_t}^N PoiBin(i_t; α(x_{t−1})) Bin(y_t; i_t, ρ)
since the sum of independent Bernoullis with non-identical success probabilities follows a Poisson binomial distribution
• The Poisson binomial PMF costs O(N^2) to compute (Chen and Liu, 1997)
JH Agent-based models 24/ 39
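The Poisson binomial PMF can be evaluated exactly by a standard O(N^2) dynamic programme that adds one Bernoulli at a time; a small self-contained sketch (independent of the Chen and Liu implementation referenced above):

```python
import numpy as np
from scipy.stats import binom

def poisson_binomial_pmf(alphas):
    """PMF over {0, ..., N} of a sum of independent Bernoulli(alpha_n) variables.

    Standard dynamic programme: add one Bernoulli at a time, O(N^2) overall.
    """
    pmf = np.array([1.0])
    for a in alphas:
        pmf = np.concatenate([pmf * (1.0 - a), [0.0]]) + np.concatenate([[0.0], pmf * a])
    return pmf

# Sanity check: with equal probabilities it reduces to a Binomial PMF.
assert np.allclose(poisson_binomial_pmf(np.full(5, 0.3)), binom.pmf(np.arange(6), 5, 0.3))

# Predictive likelihood of the last display, for some alpha = alpha(x_{t-1}), y_t and rho:
# np.sum(poisson_binomial_pmf(alpha) * binom.pmf(y_t, np.arange(len(alpha) + 1), rho))
```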
Auxiliary particle filter
• To sample from p_θ(x_t|x_{t−1}, y_t), we augment I_t = I(X_t) as an auxiliary variable
p_θ(x_t, i_t|x_{t−1}, y_t) = p_θ(i_t|x_{t−1}, y_t) p_θ(x_t|x_{t−1}, i_t)
• Conditional distribution of the number of infections is
p_θ(i_t|x_{t−1}, y_t) = PoiBin(i_t; α(x_{t−1})) Bin(y_t; i_t, ρ) / p_θ(y_t|x_{t−1})
• Distribution of agent states conditioned on their sum is a conditioned Bernoulli
p_θ(x_t|x_{t−1}, i_t) = CondBer(x_t; α(x_{t−1}), i_t),
which costs O(N^2) to sample (Chen and Liu, 1997)
JH Agent-based models 25/ 39
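One way to draw from a conditioned Bernoulli distribution in O(N^2), sketched below with hypothetical names, is to precompute suffix Poisson binomial PMFs and sample the agents sequentially; this only illustrates the distribution, and is not necessarily the Chen and Liu (1997) or MCMC samplers used by the authors.

```python
import numpy as np

def suffix_poibin_pmfs(alphas):
    """suffix[n][k] = P(sum over m >= n of Bernoulli(alpha_m) equals k), for n = 0, ..., N."""
    N = len(alphas)
    suffix = [None] * (N + 1)
    suffix[N] = np.array([1.0])
    for n in range(N - 1, -1, -1):
        prev, a = suffix[n + 1], alphas[n]
        suffix[n] = np.concatenate([prev * (1.0 - a), [0.0]]) + np.concatenate([[0.0], prev * a])
    return suffix

def sample_conditioned_bernoulli(alphas, total, seed=2):
    """Draw x with x_n ~ Ber(alpha_n) independently, conditioned on sum(x) = total.

    Assumes 0 < alpha_n < 1 and 0 <= total <= N, so every conditional below is well defined.
    """
    rng = np.random.default_rng(seed)
    N = len(alphas)
    suffix = suffix_poibin_pmfs(alphas)
    x = np.zeros(N, dtype=int)
    remaining = total
    for n in range(N):
        if remaining == 0:
            break
        # P(x_n = 1 | agents n, ..., N-1 must contribute `remaining` successes)
        p1 = alphas[n] * suffix[n + 1][remaining - 1] / suffix[n][remaining]
        if rng.random() < p1:
            x[n] = 1
            remaining -= 1
    return x

print(sample_conditioned_bernoulli(np.array([0.1, 0.5, 0.9, 0.4]), total=2))
```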
Auxiliary particle filter
• Hence the overall cost of APF is O(N^2 T P)
• We can reduce the cost to O(N log(N) T P) using two ideas
• Reduce the cost of Poisson binomial PMF evaluation to O(N) using a translated Poisson approximation at a bias of O(N^{−1/2}) (Barbour and Ćekanavićius, 2002)
• Reduce the cost of conditioned Bernoulli sampling to O(N log(N)) using Markov chain Monte Carlo (Heng, Jacob and Ju, 2020)
JH Agent-based models 26/ 39
Controlled sequential Monte Carlo
• We introduce a novel implementation of the controlled SMC
(cSMC) proposed by Heng et al. (2020)
• The optimal proposal that gives a zero variance marginal
likelihood estimator is the smoothing distribution
p_θ(x_{0:T}|y_{0:T}) = p_θ(x_0|y_{0:T}) Π_{t=1}^T p_θ(x_t|x_{t−1}, y_{t:T})
• At time t ∈ [1 : T], the transition is
p_θ(x_t|x_{t−1}, y_{t:T}) = f_θ(x_t|x_{t−1}) ψ_t^⋆(x_t) / f_θ(ψ_t^⋆|x_{t−1})
• ψ_t^⋆(x_t) = p(y_{t:T}|x_t) is the backward information filter (BIF) and f_θ(ψ_t^⋆|x_{t−1}) = Σ_{x_t ∈ {0,1}^N} f_θ(x_t|x_{t−1}) ψ_t^⋆(x_t)
JH Agent-based models 27/ 39
Controlled sequential Monte Carlo
• BIF satisfies the backward recursion ψ_T^⋆(x_T) = g_θ(y_T|x_T),
ψ_t^⋆(x_t) = g_θ(y_t|x_t) f_θ(ψ_{t+1}^⋆|x_t), t ∈ [0 : T − 1]
• This costs O(2^{2N} T) to compute, so approximations are necessary when N is large
• Our approach is based on dimensionality reduction by coarse-graining the agent-based model
• We approximate the model with heterogeneous agents by a model with homogeneous agents whose individual infection and recovery rates are given by their population averages, i.e.
λ^n ≈ λ̄ = N^{−1} Σ_{n=1}^N λ^n and γ^n ≈ γ̄ = N^{−1} Σ_{n=1}^N γ^n
JH Agent-based models 28/ 39
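As an illustration of what such a coarse-grained backward recursion could look like, the sketch below additionally assumes (purely for this example; the slide does not spell this out) that the approximate model is fully mixed, so the infected count is a Markov chain whose transition from i infected adds Bin(N − i, λ̄ i/N) new infections to Bin(i, 1 − γ̄) remaining infections; in practice ψ would be normalised or kept on the log scale to avoid underflow. All names are hypothetical.

```python
import numpy as np
from scipy.stats import binom

def count_transition_matrix(N, lam_bar, gam_bar):
    """K[i, j] = P(I_t = j | I_{t-1} = i) for the fully mixed homogeneous SIS model.

    Row i is the convolution of Bin(N - i, lam_bar * i / N) and Bin(i, 1 - gam_bar):
    O(N^2) per row, O(N^3) overall.
    """
    K = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        new_infections = binom.pmf(np.arange(N - i + 1), N - i, lam_bar * i / N)
        still_infected = binom.pmf(np.arange(i + 1), i, 1.0 - gam_bar)
        K[i, :] = np.convolve(new_infections, still_infected)
    return K

def approximate_bif(ys, N, lam_bar, gam_bar, rho):
    """psi[t, i], an approximation of p(y_{t:T} | I_t = i) under the coarse-grained model."""
    T = len(ys) - 1
    K = count_transition_matrix(N, lam_bar, gam_bar)
    counts = np.arange(N + 1)
    psi = np.zeros((T + 1, N + 1))
    psi[T] = binom.pmf(ys[T], counts, rho)                          # psi_T(i) = g(y_T | i)
    for t in range(T - 1, -1, -1):
        psi[t] = binom.pmf(ys[t], counts, rho) * (K @ psi[t + 1])   # g(y_t | i) * f(psi_{t+1} | i)
    return psi
```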
Controlled sequential Monte Carlo
• The BIF of the approximate model, ψ_t(I(x_t)), can be computed exactly in O(N^3 T) cost, and approximately in O(N^2 T)
• We then define the SMC proposal transition as
q_t(x_t|x_{t−1}, θ) = f_θ(x_t|x_{t−1}) ψ_t(I(x_t)) / f_θ(ψ_t|x_{t−1}),
and employ the weight function
w_t(x_t) = g_θ(y_t|x_t) f_θ(ψ_{t+1}|x_t) / ψ_t(I(x_t))
• Sampling and weighting can be done in a similar manner as
APF
JH Agent-based models 29/ 39
Controlled sequential Monte Carlo
• The quality of the proposals depends on the coarse-graining approximation
• We establish a bound on the Kullback–Leibler divergence between q(x_{0:T}|θ) and p_θ(x_{0:T}|y_{0:T}) (Chatterjee and Diaconis, 2018)
• Finer-grained approximations can be obtained using clustering
of the infection and recovery rates, at the expense of
increased cost
JH Agent-based models 30/ 39
Better performance with more information, but higher cost
Figure: Effective sample size for N = 100 fully connected agents and
T = 90 time steps
JH Agent-based models 31/ 39
Informative observations
Figure: ESS% over time for BPF, APF and cSMC. Top panel: original observations; bottom panel: observations at t ∈ {25, 50, 75} are replaced by min(2y_t, N)
JH Agent-based models 32/ 39
Marginal likelihood estimation
Figure: Variance of log p̂_θ(y_{0:T}) at the data generating parameter
JH Agent-based models 33/ 39
Marginal likelihood estimation
Figure: Variance of log p̂_θ(y_{0:T}) at a less likely parameter
JH Agent-based models 34/ 39
Numerical illustration
• Estimated log-likelihood function as the number of
observations increases
Figure: Estimated log-likelihood over (β_λ^1, β_λ^2) for T = 10, 30, 90; MLE (black dot) and DGP (red dot)
JH Agent-based models 35/ 39
Numerical illustration
• Estimated log-likelihood function as the number of
observations increases
Figure: Estimated log-likelihood over (β_λ^2, β_γ^2) for T = 10, 30, 90; MLE (black dot) and DGP (red dot)
JH Agent-based models 36/ 39
Concluding remarks
• SMC methods can be readily deployed within particle
MCMC for parameter and state inference (Andrieu, Doucet
and Holenstein, 2010)
• We considered APF and cSMC for the agent-based SIR model
• A general alternative to SMC methods is to use MCMC algorithms to sample from the smoothing distribution
• Preprint https://arxiv.org/abs/2101.12156
• R package https://github.com/nianqiaoju/agents
• Slides https://sites.google.com/view/jeremyheng/
JH Agent-based models 37/ 39
References
C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo
methods. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 72(3):269–342, 2010.
A. Barbour and V. Ćekanavićius. Total variation asymptotics for sums of
independent integer random variables. The Annals of Probability,
30(2):509–545, 2002.
J. Carpenter, P. Clifford, and P. Fearnhead. Improved particle filter for nonlinear
problems. IEE Proceedings-Radar, Sonar and Navigation, 146(1):2–7, 1999.
S. Chatterjee and P. Diaconis. The sample size required in importance
sampling. The Annals of Applied Probability, 28(2):1099–1135, 2018.
S. Chen and J. Liu. Statistical applications of the Poisson-Binomial and
conditional Bernoulli distributions. Statistica Sinica, 875–892, 1997.
P. Del Moral. Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Springer-Verlag New York, 2004.
JH Agent-based models 38/ 39
References
N. Gordon, D. Salmond, and A. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. In IEE Proceedings F (Radar and Signal Processing), volume 140, pages 107–113. IET, 1993.
J. Heng, A. Bishop, G. Deligiannidis, and A. Doucet. Controlled sequential
Monte Carlo. Annals of Statistics, 48(5):2904–2929, 2020.
J. Heng, P. Jacob, and N. Ju. A simple Markov chain for independent Bernoulli
variables conditioned on their sum. arXiv preprint arXiv:2012.03103, 2020.
M. Pitt and N. Shephard. Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94(446):590–599, 1999.
JH Agent-based models 39/ 39

More Related Content

What's hot

MCMC and likelihood-free methods
MCMC and likelihood-free methodsMCMC and likelihood-free methods
MCMC and likelihood-free methodsChristian Robert
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big DataChristian Robert
 
Introduction to MCMC methods
Introduction to MCMC methodsIntroduction to MCMC methods
Introduction to MCMC methodsChristian Robert
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical MethodsChristian Robert
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical MethodsChristian Robert
 
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithmsRao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithmsChristian Robert
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsStefano Cabras
 
RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010Christian Robert
 
Sampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methodsSampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methodsStephane Senecal
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes FactorsChristian Robert
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsCaleb (Shiqiang) Jin
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical MethodsChristian Robert
 
Stochastic Differentiation
Stochastic DifferentiationStochastic Differentiation
Stochastic DifferentiationSSA KPI
 
Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Christian Robert
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerChristian Robert
 

What's hot (20)

MCMC and likelihood-free methods
MCMC and likelihood-free methodsMCMC and likelihood-free methods
MCMC and likelihood-free methods
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Introduction to MCMC methods
Introduction to MCMC methodsIntroduction to MCMC methods
Introduction to MCMC methods
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithmsRao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
 
RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Sampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methodsSampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methods
 
Richard Everitt's slides
Richard Everitt's slidesRichard Everitt's slides
Richard Everitt's slides
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear models
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Stochastic Differentiation
Stochastic DifferentiationStochastic Differentiation
Stochastic Differentiation
 
Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010
 
mcmc
mcmcmcmc
mcmc
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like sampler
 

Similar to Sequential Monte Carlo algorithms for agent-based models of disease transmission

Sequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
Sequential Monte Carlo Algorithms for Agent-based Models of Disease TransmissionSequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
Sequential Monte Carlo Algorithms for Agent-based Models of Disease TransmissionJeremyHeng10
 
A walk through the intersection between machine learning and mechanistic mode...
A walk through the intersection between machine learning and mechanistic mode...A walk through the intersection between machine learning and mechanistic mode...
A walk through the intersection between machine learning and mechanistic mode...JuanPabloCarbajal3
 
Lecture: Monte Carlo Methods
Lecture: Monte Carlo MethodsLecture: Monte Carlo Methods
Lecture: Monte Carlo MethodsFrank Kienle
 
Chapter-4 combined.pptx
Chapter-4 combined.pptxChapter-4 combined.pptx
Chapter-4 combined.pptxHamzaHaji6
 
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Bayesian inference for mixed-effects models driven by SDEs and other stochast...Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Bayesian inference for mixed-effects models driven by SDEs and other stochast...Umberto Picchini
 
Spatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in EpidemiologySpatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in EpidemiologyLilac Liu Xu
 
Bayesian Deep Learning
Bayesian Deep LearningBayesian Deep Learning
Bayesian Deep LearningRayKim51
 
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...Gota Morota
 
Presentacion limac-unc
Presentacion limac-uncPresentacion limac-unc
Presentacion limac-uncPucheta Julian
 
Talk in BayesComp 2018
Talk in BayesComp 2018Talk in BayesComp 2018
Talk in BayesComp 2018JeremyHeng10
 
Monte Carlo Berkeley.pptx
Monte Carlo Berkeley.pptxMonte Carlo Berkeley.pptx
Monte Carlo Berkeley.pptxHaibinSu2
 
2012 mdsp pr04 monte carlo
2012 mdsp pr04 monte carlo2012 mdsp pr04 monte carlo
2012 mdsp pr04 monte carlonozomuhamada
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Valentin De Bortoli
 

Similar to Sequential Monte Carlo algorithms for agent-based models of disease transmission (20)

Sequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
Sequential Monte Carlo Algorithms for Agent-based Models of Disease TransmissionSequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
Sequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
 
A walk through the intersection between machine learning and mechanistic mode...
A walk through the intersection between machine learning and mechanistic mode...A walk through the intersection between machine learning and mechanistic mode...
A walk through the intersection between machine learning and mechanistic mode...
 
PhysicsSIG2008-01-Seneviratne
PhysicsSIG2008-01-SeneviratnePhysicsSIG2008-01-Seneviratne
PhysicsSIG2008-01-Seneviratne
 
Lecture: Monte Carlo Methods
Lecture: Monte Carlo MethodsLecture: Monte Carlo Methods
Lecture: Monte Carlo Methods
 
2019 PMED Spring Course - SMARTs-Part II - Eric Laber, April 10, 2019
2019 PMED Spring Course - SMARTs-Part II - Eric Laber, April 10, 2019 2019 PMED Spring Course - SMARTs-Part II - Eric Laber, April 10, 2019
2019 PMED Spring Course - SMARTs-Part II - Eric Laber, April 10, 2019
 
Chapter-4 combined.pptx
Chapter-4 combined.pptxChapter-4 combined.pptx
Chapter-4 combined.pptx
 
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Bayesian inference for mixed-effects models driven by SDEs and other stochast...Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
 
Spatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in EpidemiologySpatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in Epidemiology
 
Bayesian Deep Learning
Bayesian Deep LearningBayesian Deep Learning
Bayesian Deep Learning
 
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
 
MUMS Opening Workshop - An Overview of Reduced-Order Models and Emulators (ED...
MUMS Opening Workshop - An Overview of Reduced-Order Models and Emulators (ED...MUMS Opening Workshop - An Overview of Reduced-Order Models and Emulators (ED...
MUMS Opening Workshop - An Overview of Reduced-Order Models and Emulators (ED...
 
Presentacion limac-unc
Presentacion limac-uncPresentacion limac-unc
Presentacion limac-unc
 
Talk in BayesComp 2018
Talk in BayesComp 2018Talk in BayesComp 2018
Talk in BayesComp 2018
 
Monte Carlo Berkeley.pptx
Monte Carlo Berkeley.pptxMonte Carlo Berkeley.pptx
Monte Carlo Berkeley.pptx
 
A bit about мcmc
A bit about мcmcA bit about мcmc
A bit about мcmc
 
Program on Mathematical and Statistical Methods for Climate and the Earth Sys...
Program on Mathematical and Statistical Methods for Climate and the Earth Sys...Program on Mathematical and Statistical Methods for Climate and the Earth Sys...
Program on Mathematical and Statistical Methods for Climate and the Earth Sys...
 
2012 mdsp pr04 monte carlo
2012 mdsp pr04 monte carlo2012 mdsp pr04 monte carlo
2012 mdsp pr04 monte carlo
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
 
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
 
Nonnegative Matrix Factorization with Side Information for Time Series Recove...
Nonnegative Matrix Factorization with Side Information for Time Series Recove...Nonnegative Matrix Factorization with Side Information for Time Series Recove...
Nonnegative Matrix Factorization with Side Information for Time Series Recove...
 

Recently uploaded

꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Callshivangimorya083
 
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort servicejennyeacort
 
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Jack DiGiovanna
 
RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998YohFuh
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一fhwihughh
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfSocial Samosa
 
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...Pooja Nehwal
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsappssapnasaifi408
 
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
INTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTDINTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTDRafezzaman
 
Data Science Jobs and Salaries Analysis.pptx
Data Science Jobs and Salaries Analysis.pptxData Science Jobs and Salaries Analysis.pptx
Data Science Jobs and Salaries Analysis.pptxFurkanTasci3
 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationshipsccctableauusergroup
 
9654467111 Call Girls In Munirka Hotel And Home Service
9654467111 Call Girls In Munirka Hotel And Home Service9654467111 Call Girls In Munirka Hotel And Home Service
9654467111 Call Girls In Munirka Hotel And Home ServiceSapana Sha
 
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样vhwb25kk
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...soniya singh
 
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...Suhani Kapoor
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfLars Albertsson
 

Recently uploaded (20)

Decoding Loan Approval: Predictive Modeling in Action
Decoding Loan Approval: Predictive Modeling in ActionDecoding Loan Approval: Predictive Modeling in Action
Decoding Loan Approval: Predictive Modeling in Action
 
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
꧁❤ Greater Noida Call Girls Delhi ❤꧂ 9711199171 ☎️ Hard And Sexy Vip Call
 
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
 
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
 
RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998RA-11058_IRR-COMPRESS Do 198 series of 1998
RA-11058_IRR-COMPRESS Do 198 series of 1998
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
 
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdfKantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
Kantar AI Summit- Under Embargo till Wednesday, 24th April 2024, 4 PM, IST.pdf
 
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...{Pooja:  9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
{Pooja: 9892124323 } Call Girl in Mumbai | Jas Kaur Rate 4500 Free Hotel Del...
 
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /WhatsappsBeautiful Sapna Vip  Call Girls Hauz Khas 9711199012 Call /Whatsapps
Beautiful Sapna Vip Call Girls Hauz Khas 9711199012 Call /Whatsapps
 
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Defence Colony Delhi 💯Call Us 🔝8264348440🔝
 
INTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTDINTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTD
 
Data Science Jobs and Salaries Analysis.pptx
Data Science Jobs and Salaries Analysis.pptxData Science Jobs and Salaries Analysis.pptx
Data Science Jobs and Salaries Analysis.pptx
 
04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships04242024_CCC TUG_Joins and Relationships
04242024_CCC TUG_Joins and Relationships
 
9654467111 Call Girls In Munirka Hotel And Home Service
9654467111 Call Girls In Munirka Hotel And Home Service9654467111 Call Girls In Munirka Hotel And Home Service
9654467111 Call Girls In Munirka Hotel And Home Service
 
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
1:1定制(UQ毕业证)昆士兰大学毕业证成绩单修改留信学历认证原版一模一样
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
 
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
VIP High Profile Call Girls Amravati Aarushi 8250192130 Independent Escort Se...
 
Schema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdfSchema on read is obsolete. Welcome metaprogramming..pdf
Schema on read is obsolete. Welcome metaprogramming..pdf
 
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 

Sequential Monte Carlo algorithms for agent-based models of disease transmission

  • 1. Sequential Monte Carlo algorithms for agent-based models of disease transmission Jeremy Heng ESSEC Business School Joint work with Phyllis Ju and Pierre Jacob (Harvard) Probability and Statistics Seminar University of Kansas, Department of Mathematics 7 April 2021 JH Agent-based models 1/ 39
  • 2. Outline 1 Agent-based models 2 Agent-based SIS model 3 Sequential Monte Carlo JH Agent-based models 2/ 39
  • 3. Outline 1 Agent-based models 2 Agent-based SIS model 3 Sequential Monte Carlo JH Agent-based models 2/ 39
  • 4. Agent-based models • Agent-based models specify how a population of agents interact and evolve over time • Flexible, interpretable and widely employed in many fields • Can render realistic macroscopic phenomena from simple microscopic rules Figure: SimCity by Electronic Arts JH Agent-based models 3/ 39
  • 5. Software for agent-based models JH Agent-based models 4/ 39
  • 6. Calibration of agent-based models • These models are typically calibrated by matching key features of simulated and actual data • Can be computationally intensive and difficult to calibrate 126 CHAPTER 5 Figure 5.3. Simulated and historical settlement patterns, in red, for Long House Valley in A.D. 1125. North is to the top of the page. of the 1270–1450 period could have supported a reduced but substantial population in small settlements dispersed across suitable farming habitats located primarily in areas of high potential crop production in the Figure: Simulated and historical settlement patterns in long house valley JH Agent-based models 5/ 39
  • 7. Statistical inference for agent-based models • Given occasional noisy measurements of the population, we could consider statistical inference for such models • Few works have addressed this important topic as likelihood-based inference is computationally challenging • We propose sequential Monte Carlo algorithms for some classical agent-based models in epidemiology • The general principle is to ‘open the black box’ nature of these models and exploit its inherent structure JH Agent-based models 6/ 39
  • 8. Compartmental models in epidemiology • A population-level approach assigns the population to compartments and models the number of people in each compartment over time SIR model Susceptible Infected Recovered λ γ SIS model Susceptible Infected λ γ JH Agent-based models 7/ 39
  • 9. Agent-based models in epidemiology • The agent-based approach assumes agents can take these states and models the state of each agent n over time SIR model Susceptible Infected Recovered λn γn SIS model Susceptible Infected λn γn JH Agent-based models 8/ 39
  • 10. Why agent-based models? • May be unrealistic to assume all agents interact 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Figure: Fully connected network versus small world network JH Agent-based models 9/ 39
  • 11. Why agent-based models? • May be unrealistic to assume agents are interchangeable 0.0 0.1 0.2 0.3 0.4 0.5 0 10 20 30 Incubation period Density (days) Gender Men Women 0.0 0.1 0.2 0.3 0 10 20 30 Incubation period Density (days) Age <50 >=50 Figure: Gender-specific (left) and age-specific (right) distributions of COVID-19 incubation period (Zhao et al. 2020, AoAS) JH Agent-based models 10/ 39
  • 12. Outline 1 Agent-based models 2 Agent-based SIS model 3 Sequential Monte Carlo JH Agent-based models 10/ 39
  • 13. Agent-based SIS model • We consider the agent-based SIS model and encode Susceptible = 0 and Infected = 1 • Let Xt = (Xn t )n∈[1:N] ∈ {0, 1}N denote the state of a closed population of N agents at time t ∈ [0 : T] • Initialization X0 ∼ µθ given by Xn 0 ∼ Ber(αn 0), independently for n ∈ [1 : N] • Markov transition Xt ∼ fθ(·|Xt−1) at time t ∈ [1 : T] is given by Xn t ∼ Ber(αn (Xt−1)), independently for n ∈ [1 : N] JH Agent-based models 11/ 39
  • 14. Agent-based SIS model • Transition probability specified as αn (Xt−1) = ( λnD(n)−1 P m∈N(n) Xm t−1, if Xn t−1 = 0 1 − γn, if Xn t−1 = 1 • Interactions specified by an undirected network: D(n) and N(n) denote the degree and neighbours of agent n • Infection and recovery rates are modelled using agent-specific attributes λn = (1 + exp(−β> λ wn ))−1 , γn = (1 + exp(−β> γ wn ))−1 , where βλ, βγ ∈ Rd are parameters and wn ∈ Rd are the covariates of agent n (similarly αn 0 depends on β0) JH Agent-based models 12/ 39
  • 15. Agent-based SIS model • If the network is fully connected (D(n) = N, N(n) = [1 : N]) and the agents are homogeneous (λ^n = λ, γ^n = γ) • We recover the classical SIS model of Kermack and McKendrick (1927), which has a deterministic limit as N → ∞ • These simpler models offer dimension reduction which facilitates inference • However, one cannot incorporate network information and agent attributes • We will use these simplifications to construct efficient SMC proposal distributions for the agent-based model JH Agent-based models 13/ 39
  • 16. Agent-based SIS model • Observations (Y_t)_{t∈[0:T]} are the numbers of infections reported over time • Modelled as conditionally independent given (X_t)_{t∈[0:T]}, with Y_t ∼ g_θ(·|X_t) = Bin(I(X_t), ρ) • I(X_t) = ∑_{n=1}^N X_t^n is the number of infections and ρ ∈ (0, 1) is the reporting rate • Parameters to be inferred: θ = (β_0, β_λ, β_γ, ρ) JH Agent-based models 14/ 39
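Continuing the sketch above, a hedged illustration of the observation model: observations are Binomial thinnings of the infection counts. The helper names are assumptions for this sketch.

    import numpy as np
    from scipy.stats import binom

    def simulate_observations(x_path, rho, rng):
        # Y_t ~ Bin(I(X_t), rho) independently over t, for a (T+1, N) array of latent states
        infections = x_path.sum(axis=1)                    # I(X_t) for each t
        return rng.binomial(infections, rho)

    def log_g(y_t, x_t, rho):
        # log g_theta(y_t | x_t) = log Bin(y_t; I(x_t), rho)
        return binom.logpmf(y_t, x_t.sum(), rho)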
  • 17. Graphical model representation Figure: T = 4 time steps, a fully connected network with N = 3 agents; latent agent states X_t^n, observations Y_t and parameters β_0, β_λ, β_γ, ρ JH Agent-based models 15/ 39
  • 18. Likelihood of agent-based SIS model • We have a standard hidden Markov model: p_θ(x_{0:T}, y_{0:T}) = µ_θ(x_0) ∏_{t=1}^T f_θ(x_t|x_{t−1}) ∏_{t=0}^T g_θ(y_t|x_t) • The marginal likelihood is p_θ(y_{0:T}) = ∑_{x_{0:T} ∈ {0,1}^{N×(T+1)}} p_θ(x_{0:T}, y_{0:T}) • Maximum likelihood estimation computes argmax_θ p_θ(y_{0:T}) • Bayesian inference samples from p(θ|y_{0:T}) ∝ p(θ) p_θ(y_{0:T}) JH Agent-based models 16/ 39
  • 19. Likelihood of agent-based SIS model • We have a hidden Markov model on a discrete state-space • We can employ the forward algorithm to compute the marginal likelihood exactly • The cost is of order (no. of states)^2 × (no. of observations) = O(2^{2N} T) • For large N, we have to rely on sequential Monte Carlo (SMC) methods to approximate the marginal likelihood JH Agent-based models 17/ 39
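For small populations the exact computation is easy to write down. Below is a sketch of the scaled forward algorithm that enumerates all 2^N agent configurations (feasible only for small N, say N ≤ 10), reusing the model ingredients introduced above; function and variable names are illustrative.

    import itertools
    import numpy as np
    from scipy.stats import binom

    def forward_log_likelihood(y, adj, lam, gam, alpha0, rho):
        # Exact log p_theta(y_{0:T}) via the forward recursion over all 2^N states
        y = np.asarray(y)
        N = len(lam)
        states = np.array(list(itertools.product([0, 1], repeat=N)))   # (2^N, N)
        deg = adj.sum(axis=1)
        # alpha[i, n] = P(agent n infected at t | previous configuration states[i])
        alpha = np.where(states == 0, lam * (states @ adj.T) / deg, 1.0 - gam)
        # F[i, j] = f_theta(states[j] | states[i]), a product of Bernoulli terms
        F = np.where(states[None, :, :] == 1,
                     alpha[:, None, :], 1.0 - alpha[:, None, :]).prod(axis=2)
        g = binom.pmf(y[:, None], states.sum(axis=1)[None, :], rho)    # (T+1, 2^N)
        mu0 = np.where(states == 1, alpha0, 1.0 - alpha0).prod(axis=1)
        fwd, loglik = mu0 * g[0], 0.0
        for t in range(1, len(y)):
            c = fwd.sum(); loglik += np.log(c); fwd = fwd / c          # normalize for stability
            fwd = (fwd @ F) * g[t]
        return loglik + np.log(fwd.sum())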
  • 20. Outline 1 Agent-based models 2 Agent-based SIS model 3 Sequential Monte Carlo JH Agent-based models 17/ 39
  • 21. Sequential Monte Carlo • Sequential Monte Carlo (SMC) methods, aka particle filters, are now quite advanced and well understood since their introduction in the 1990s • The idea is to recursively simulate an interacting particle system of size P • For time t ∈ [0 : T], we have P states and ancestor indexes (X_t^{(1)}, …, X_t^{(P)}) and (A_t^{(1)}, …, A_t^{(P)}) JH Agent-based models 18/ 39
  • 22. Sequential Monte Carlo For time t = 0 and particle p ∈ [1 : P], sample X_0^{(p)} ∼ q_0(x_0|θ) JH Agent-based models 19/ 39
  • 23. Sequential Monte Carlo For time t = 0 and particle p ∈ [1 : P], weight W_0^{(p)} ∝ w_0(X_0^{(p)}) JH Agent-based models 19/ 39
  • 24. Sequential Monte Carlo For time t = 0 and particle p ∈ [1 : P], sample ancestor A_0^{(p)} ∼ R(W_0^{(1)}, …, W_0^{(P)}); resampled particle: X_0^{A_0^{(p)}} JH Agent-based models 19/ 39
  • 25. Sequential Monte Carlo For time t = 1 and particle p ∈ [1 : P], sample X_1^{(p)} ∼ q_1(x_1|X_0^{A_0^{(p)}}, θ) JH Agent-based models 19/ 39
  • 26. Sequential Monte Carlo For time t = 1 and particle p ∈ [1 : P], weight W_1^{(p)} ∝ w_1(X_0^{A_0^{(p)}}, X_1^{(p)}) JH Agent-based models 19/ 39
  • 27. Sequential Monte Carlo For time t = 1 and particle p ∈ [1 : P], sample ancestor A_1^{(p)} ∼ R(W_1^{(1)}, …, W_1^{(P)}); resampled particle: X_1^{A_1^{(p)}} JH Agent-based models 19/ 39
  • 28. Sequential Monte Carlo Repeat for time t ∈ [2 : T]. JH Agent-based models 19/ 39
  • 29. Sequential Monte Carlo Repeat for time t ∈ [2 : T]. Note this is for a given θ! JH Agent-based models 19/ 39
  • 30. Likelihood estimation • Weight functions (w_t)_{t∈[0:T]} and proposal distributions (q_t)_{t∈[0:T]} have to satisfy w_0(x_0) ∏_{t=1}^T w_t(x_{t−1}, x_t) = p_θ(x_{0:T}, y_{0:T}) / q(x_{0:T}|θ), where q(x_{0:T}|θ) = q_0(x_0|θ) ∏_{t=1}^T q_t(x_t|x_{t−1}, θ) • We can compute a marginal likelihood estimator p̂_θ(y_{0:T}) = { P^{−1} ∑_{p=1}^P w_0(X_0^{(p)}) } ∏_{t=1}^T { P^{−1} ∑_{p=1}^P w_t(X_{t−1}^{(A_{t−1}^{(p)})}, X_t^{(p)}) } • Unbiasedness and consistency as P → ∞ follow from Del Moral (2004) JH Agent-based models 20/ 39
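This estimator translates into a short generic routine in which only the proposal samplers and weight functions change across BPF, APF and cSMC. A sketch with multinomial resampling at every step; the argument names and calling convention are illustrative assumptions.

    import numpy as np
    from scipy.special import logsumexp

    def smc_log_likelihood(T, sample_q0, sample_qt, log_w0, log_wt, P, rng):
        # Generic SMC estimate of log p_theta(y_{0:T}); the data y enters through
        # the user-supplied proposal samplers and log-weight functions.
        x = sample_q0(P, rng)                                  # particles X_0^{(p)}
        logw = log_w0(x)
        loglik = logsumexp(logw) - np.log(P)                   # log of P^{-1} sum_p w_0
        for t in range(1, T + 1):
            w = np.exp(logw - logsumexp(logw)); w = w / w.sum()
            anc = rng.choice(P, size=P, p=w)                   # ancestors A_{t-1}^{(p)}
            xprev = x[anc]
            x = sample_qt(t, xprev, rng)                       # X_t^{(p)} ~ q_t(.|xprev, theta)
            logw = log_wt(t, xprev, x)
            loglik += logsumexp(logw) - np.log(P)
        return loglik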
  • 31. Bootstrap particle filter • The bootstrap particle filter (BPF) of Gordon et al. (1993) employs the proposal distributions q_0(x_0|θ) = µ_θ(x_0), q_t(x_t|x_{t−1}, θ) = f_θ(x_t|x_{t−1}) and weight functions w_t(x_t) = g_θ(y_t|x_t) • BPF can be readily implemented as simulating the latent process is straightforward • However, it suffers from the curse of dimensionality for large N: a large P is needed to control the variance of p̂_θ(y_{0:T}), and p̂_θ(y_{0:T}) can collapse to zero JH Agent-based models 21/ 39
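A self-contained sketch of the BPF for this model (vectorized over particles); as before, the data layout and names are assumptions for illustration, and no effort is made to guard against all weights collapsing to zero.

    import numpy as np
    from scipy.stats import binom
    from scipy.special import logsumexp

    def bpf_log_likelihood(y, adj, lam, gam, alpha0, rho, P, rng):
        # Bootstrap particle filter: propose from mu_theta / f_theta, weight by g_theta
        N, deg = len(lam), adj.sum(axis=1)
        x = (rng.uniform(size=(P, N)) < alpha0).astype(int)      # X_0^{(p)} ~ mu_theta
        logw = binom.logpmf(y[0], x.sum(axis=1), rho)            # w_0 = g_theta(y_0 | x_0)
        loglik = logsumexp(logw) - np.log(P)
        for t in range(1, len(y)):
            w = np.exp(logw - logsumexp(logw)); w = w / w.sum()
            xprev = x[rng.choice(P, size=P, p=w)]                # multinomial resampling
            alpha = np.where(xprev == 0, lam * (xprev @ adj.T) / deg, 1.0 - gam)
            x = (rng.uniform(size=(P, N)) < alpha).astype(int)   # propose from f_theta
            logw = binom.logpmf(y[t], x.sum(axis=1), rho)        # weight by g_theta
            loglik += logsumexp(logw) - np.log(P)
        return loglik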
  • 32. Likelihood estimation • Efficiency of SMC crucially relies on the choice of proposal distributions • Poor performance of BPF is not surprising, as it does not use any information from the observations • We show how to implement the fully adapted auxiliary particle filter that accounts for the next observation • We propose a novel controlled SMC method that takes the entire observation sequence into account JH Agent-based models 22/ 39
  • 33. Auxiliary particle filter • The auxiliary particle filter (APF) was introduced in Pitt and Shephard (1999) and Carpenter et al. (1999) • It employs the proposal distributions q_0(x_0|θ) = p_θ(x_0|y_0), q_t(x_t|x_{t−1}, θ) = p_θ(x_t|x_{t−1}, y_t) and weight functions w_t(x_{t−1}) = p_θ(y_t|x_{t−1}) • Sampling from these proposals and evaluating these weights are not always tractable JH Agent-based models 23/ 39
  • 34. Auxiliary particle filter • The predictive likelihood is p_θ(y_t|x_{t−1}) = ∑_{x_t ∈ {0,1}^N} f_θ(x_t|x_{t−1}) g_θ(y_t|x_t) JH Agent-based models 24/ 39
  • 35. Auxiliary particle filter • The predictive likelihood is p_θ(y_t|x_{t−1}) = ∑_{x_t ∈ {0,1}^N} f_θ(x_t|x_{t−1}) g_θ(y_t|x_t) = ∑_{x_t ∈ {0,1}^N} ∏_{n=1}^N Ber(x_t^n; α^n(x_{t−1})) Bin(y_t; I(x_t), ρ) JH Agent-based models 24/ 39
  • 36. Auxiliary particle filter • The predictive likelihood is p_θ(y_t|x_{t−1}) = ∑_{x_t ∈ {0,1}^N} f_θ(x_t|x_{t−1}) g_θ(y_t|x_t) = ∑_{x_t ∈ {0,1}^N} ∏_{n=1}^N Ber(x_t^n; α^n(x_{t−1})) Bin(y_t; I(x_t), ρ) = ∑_{i_t = y_t}^{N} PoiBin(i_t; α(x_{t−1})) Bin(y_t; i_t, ρ), since the sum of independent Bernoulli variables with non-identical success probabilities follows a Poisson binomial distribution JH Agent-based models 24/ 39
  • 37. Auxiliary particle filter • The predictive likelihood is p_θ(y_t|x_{t−1}) = ∑_{x_t ∈ {0,1}^N} f_θ(x_t|x_{t−1}) g_θ(y_t|x_t) = ∑_{x_t ∈ {0,1}^N} ∏_{n=1}^N Ber(x_t^n; α^n(x_{t−1})) Bin(y_t; I(x_t), ρ) = ∑_{i_t = y_t}^{N} PoiBin(i_t; α(x_{t−1})) Bin(y_t; i_t, ρ), since the sum of independent Bernoulli variables with non-identical success probabilities follows a Poisson binomial distribution • The Poisson binomial PMF costs O(N^2) to compute (Chen and Liu, 1997) JH Agent-based models 24/ 39
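A sketch of the O(N^2) Poisson binomial recursion and of the resulting predictive likelihood; the function names are illustrative and this is not the authors' code.

    import numpy as np
    from scipy.stats import binom

    def poisson_binomial_pmf(p):
        # PMF of sum_n Bernoulli(p_n) via the O(N^2) recursion of Chen and Liu (1997)
        q = np.zeros(len(p) + 1)
        q[0] = 1.0
        for n, pn in enumerate(p, start=1):
            q[1:n + 1] = q[1:n + 1] * (1.0 - pn) + q[0:n] * pn
            q[0] *= 1.0 - pn
        return q

    def predictive_likelihood(y_t, alpha_prev, rho):
        # p_theta(y_t | x_{t-1}) = sum_{i = y_t}^{N} PoiBin(i; alpha(x_{t-1})) Bin(y_t; i, rho)
        N = len(alpha_prev)
        pb = poisson_binomial_pmf(alpha_prev)
        i = np.arange(y_t, N + 1)
        return float(np.sum(pb[i] * binom.pmf(y_t, i, rho)))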
  • 38. Auxiliary particle filter • To sample from p_θ(x_t|x_{t−1}, y_t), we augment I_t = I(X_t) as an auxiliary variable: p_θ(x_t, i_t|x_{t−1}, y_t) = p_θ(i_t|x_{t−1}, y_t) p_θ(x_t|x_{t−1}, i_t) • Conditional distribution of the number of infections is p_θ(i_t|x_{t−1}, y_t) = PoiBin(i_t; α(x_{t−1})) Bin(y_t; i_t, ρ) / p_θ(y_t|x_{t−1}) • Distribution of agent states conditioned on their sum is a conditioned Bernoulli: p_θ(x_t|x_{t−1}, i_t) = CondBer(x_t; α(x_{t−1}), i_t), which costs O(N^2) to sample (Chen and Liu, 1997) JH Agent-based models 25/ 39
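As a concrete (if naive) illustration of exact conditioned Bernoulli sampling, the sketch below builds a backward table of suffix Poisson binomial probabilities and then draws agents sequentially; this is the O(N^2) route mentioned above, not the faster MCMC scheme of Heng, Jacob and Ju (2020), and the names are illustrative.

    import numpy as np

    def conditional_bernoulli_sample(alpha, total, rng):
        # Draw x in {0,1}^N with independent Bernoulli(alpha_n) entries,
        # conditioned on sum(x) = total (assumed to have positive probability)
        N = len(alpha)
        # R[n, k] = P(X_n + ... + X_{N-1} = k), backward recursion, agents 0-indexed
        R = np.zeros((N + 1, total + 1))
        R[N, 0] = 1.0
        for n in range(N - 1, -1, -1):
            R[n, 0] = (1.0 - alpha[n]) * R[n + 1, 0]
            R[n, 1:] = (1.0 - alpha[n]) * R[n + 1, 1:] + alpha[n] * R[n + 1, :-1]
        x = np.zeros(N, dtype=int)
        k = total
        for n in range(N):
            if k == 0:
                break                                          # remaining agents stay at 0
            prob_one = alpha[n] * R[n + 1, k - 1] / R[n, k]
            if rng.uniform() < prob_one:
                x[n], k = 1, k - 1
        return x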
  • 39. Auxiliary particle filter • Hence the overall cost of the APF is O(N^2 T P) • We can reduce the cost to O(N log(N) T P) using two ideas • Reduce the cost of Poisson binomial PMF evaluation to O(N) using a translated Poisson approximation, at a bias of O(N^{−1/2}) (Barbour and Čekanavičius, 2002) • Reduce the cost of conditioned Bernoulli sampling to O(N log(N)) using Markov chain Monte Carlo (Heng, Jacob and Ju, 2020) JH Agent-based models 26/ 39
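A sketch of the translated Poisson approximation to the Poisson binomial PMF, which costs O(N) per evaluation; the form below is the standard construction and is offered as an assumption-laden illustration rather than the exact approximation used in the paper.

    import numpy as np
    from scipy.stats import poisson

    def translated_poisson_logpmf(k, alpha):
        # Approximate log PoiBin(k; alpha) by a shifted Poisson: with mu = sum(alpha)
        # and sigma2 = sum(alpha (1 - alpha)), the count is approximated by
        # s + Poisson(mu - s), where s = floor(mu - sigma2)
        mu = float(np.sum(alpha))
        sigma2 = float(np.sum(alpha * (1.0 - alpha)))
        s = int(np.floor(mu - sigma2))
        return poisson.logpmf(k - s, mu - s)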
  • 40. Controlled sequential Monte Carlo • We introduce a novel implementation of the controlled SMC (cSMC) method proposed by Heng et al. (2020) • The optimal proposal that gives a zero-variance marginal likelihood estimator is the smoothing distribution p_θ(x_{0:T}|y_{0:T}) = p_θ(x_0|y_{0:T}) ∏_{t=1}^T p_θ(x_t|x_{t−1}, y_{t:T}) • At time t ∈ [1 : T], the transition is p_θ(x_t|x_{t−1}, y_{t:T}) = f_θ(x_t|x_{t−1}) ψ_t^*(x_t) / f_θ(ψ_t^*|x_{t−1}) • ψ_t^*(x_t) = p_θ(y_{t:T}|x_t) is the backward information filter (BIF) and f_θ(ψ_t^*|x_{t−1}) = ∑_{x_t ∈ {0,1}^N} f_θ(x_t|x_{t−1}) ψ_t^*(x_t) JH Agent-based models 27/ 39
  • 41. Controlled sequential Monte Carlo • BIF satisfies the backward recursion ψ_T^*(x_T) = g_θ(y_T|x_T), ψ_t^*(x_t) = g_θ(y_t|x_t) f_θ(ψ_{t+1}^*|x_t), t ∈ [0 : T − 1] • This costs O(2^{2N} T) to compute, so approximations are necessary when N is large • Our approach is based on dimensionality reduction by coarse-graining the agent-based model • We approximate the model with heterogeneous agents by a model with homogeneous agents whose individual infection and recovery rates are given by the population averages, i.e. λ^n ≈ λ̄ = N^{−1} ∑_{n=1}^N λ^n and γ^n ≈ γ̄ = N^{−1} ∑_{n=1}^N γ^n JH Agent-based models 28/ 39
  • 42. Controlled sequential Monte Carlo • The BIF of the approximate model, ψ_t(I(x_t)), can be computed exactly in O(N^3 T) cost, and approximately in O(N^2 T) • We then define the SMC proposal transition as q_t(x_t|x_{t−1}, θ) = f_θ(x_t|x_{t−1}) ψ_t(I(x_t)) / f_θ(ψ_t|x_{t−1}), and employ the weight function w_t(x_t) = g_θ(y_t|x_t) f_θ(ψ_{t+1}|x_t) / ψ_t(I(x_t)) • Sampling and weighting can be done in a similar manner as for the APF JH Agent-based models 29/ 39
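A sketch of a coarse-grained BIF over infection counts, under the stated homogeneous approximation: given i infections, the count moves as new infections Bin(N − i, λ̄ i/N) plus retained infections Bin(i, 1 − γ̄). This is an illustrative reading of the construction with hypothetical function names, not the authors' implementation; a practical version would work on the log scale to avoid underflow.

    import numpy as np
    from scipy.stats import binom

    def coarse_grained_bif(y, lam_bar, gam_bar, rho, N):
        # psi_t(i), i = 0..N: BIF of the homogeneous model defined on the count I(x_t)
        counts = np.arange(N + 1)
        F = np.zeros((N + 1, N + 1))                        # transition matrix of the count chain
        for i in counts:
            new_inf = binom.pmf(np.arange(N - i + 1), N - i, lam_bar * i / N)
            kept_inf = binom.pmf(np.arange(i + 1), i, 1.0 - gam_bar)
            F[i, :] = np.convolve(new_inf, kept_inf)        # convolution has length N + 1
        T = len(y) - 1
        psi = np.zeros((T + 1, N + 1))
        psi[T] = binom.pmf(y[T], counts, rho)               # psi_T = g_theta(y_T | .)
        for t in range(T - 1, -1, -1):
            psi[t] = binom.pmf(y[t], counts, rho) * (F @ psi[t + 1])
        return psi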
  • 43. Controlled sequential Monte Carlo • Quality of the proposals depends on the coarse-graining approximation • We establish a bound on the Kullback–Leibler divergence between q(x_{0:T}|θ) and p_θ(x_{0:T}|y_{0:T}) (Chatterjee and Diaconis, 2018) • Finer-grained approximations can be obtained by clustering the infection and recovery rates, at the expense of increased cost JH Agent-based models 30/ 39
  • 44. Better performance with more information, but higher cost Figure: Effective sample size (ESS%) over time for BPF, APF and cSMC, with N = 100 fully connected agents and T = 90 time steps JH Agent-based models 31/ 39
  • 45. Informative observations Figure: Effective sample size (ESS%) over time for BPF, APF and cSMC on the original data (top) and with outliers (bottom), where observations at t ∈ {25, 50, 75} are replaced by min(2y_t, N) JH Agent-based models 32/ 39
  • 46. Marginal likelihood estimation Figure: Variance of log p̂_θ(y_{0:T}) at the data generating parameter (N = 100, T = 90) JH Agent-based models 33/ 39
  • 47. Marginal likelihood estimation Figure: Variance of log p̂_θ(y_{0:T}) at a less likely parameter value, with β_λ different from the data generating value JH Agent-based models 34/ 39
  • 48. Numerical illustration • Estimated log-likelihood function over (β_λ^1, β_λ^2) as the number of observations increases Figure: Log-likelihood surfaces for T = 10, 30, 90, with MLE (black dot) and DGP (red dot) JH Agent-based models 35/ 39
  • 49. Numerical illustration • Estimated log-likelihood function over (β_λ^2, β_γ^2) as the number of observations increases Figure: Log-likelihood surfaces for T = 10, 30, 90, with MLE (black dot) and DGP (red dot) JH Agent-based models 36/ 39
  • 50. Concluding remarks • SMC methods can be readily deployed within particle MCMC for parameter and state inference (Andrieu, Doucet and Holenstein, 2010) • We considered the APF and cSMC for the agent-based SIR model • A general alternative to SMC methods is to use MCMC algorithms that sample from the smoothing distribution • Preprint https://arxiv.org/abs/2101.12156 • R package https://github.com/nianqiaoju/agents • Slides https://sites.google.com/view/jeremyheng/ JH Agent-based models 37/ 39
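For completeness, a minimal particle marginal Metropolis–Hastings sketch, where any of the likelihood estimators sketched above (e.g. a bpf_log_likelihood-style function) can be plugged in as log_lik_hat; the random-walk proposal and all names are assumptions for illustration.

    import numpy as np

    def pmmh(log_prior, log_lik_hat, theta0, n_iters, step_size, rng):
        # Particle marginal Metropolis-Hastings: the exact likelihood is replaced by
        # an unbiased SMC estimate, refreshed whenever a new theta is proposed
        theta = np.asarray(theta0, dtype=float)
        ll = log_lik_hat(theta, rng)
        chain = []
        for _ in range(n_iters):
            prop = theta + step_size * rng.standard_normal(theta.shape)
            ll_prop = log_lik_hat(prop, rng)
            log_ratio = ll_prop + log_prior(prop) - ll - log_prior(theta)
            if np.log(rng.uniform()) < log_ratio:
                theta, ll = prop, ll_prop
            chain.append(theta.copy())
        return np.array(chain)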
  • 51. References C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010. A. Barbour and V. Čekanavičius. Total variation asymptotics for sums of independent integer random variables. The Annals of Probability, 30(2):509–545, 2002. J. Carpenter, P. Clifford, and P. Fearnhead. Improved particle filter for nonlinear problems. IEE Proceedings - Radar, Sonar and Navigation, 146(1):2–7, 1999. S. Chatterjee and P. Diaconis. The sample size required in importance sampling. The Annals of Applied Probability, 28(2):1099–1135, 2018. S. Chen and J. Liu. Statistical applications of the Poisson-binomial and conditional Bernoulli distributions. Statistica Sinica, 875–892, 1997. P. Del Moral. Feynman-Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Springer-Verlag, New York, 2004. JH Agent-based models 38/ 39
  • 52. References N. Gordon, D. Salmond, and A. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F (Radar and Signal Processing), 140(2):107–113, 1993. J. Heng, A. Bishop, G. Deligiannidis, and A. Doucet. Controlled sequential Monte Carlo. Annals of Statistics, 48(5):2904–2929, 2020. J. Heng, P. Jacob, and N. Ju. A simple Markov chain for independent Bernoulli variables conditioned on their sum. arXiv preprint arXiv:2012.03103, 2020. M. Pitt and N. Shephard. Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94(446):590–599, 1999. JH Agent-based models 39/ 39