SlideShare a Scribd company logo
Sequential Monte Carlo algorithms for
agent-based models of disease transmission
Jeremy Heng
ESSEC Business School
Joint work with Phyllis Ju (Purdue) and Pierre Jacob (ESSEC)
KAUST
1 September 2021
JH Agent-based models 1/ 42
Outline
1 Agent-based models
2 Agent-based SIS model
3 Sequential Monte Carlo
JH Agent-based models 2/ 42
Outline
1 Agent-based models
2 Agent-based SIS model
3 Sequential Monte Carlo
JH Agent-based models 2/ 42
Agent-based models
• Agent-based models specify how a population of agents
interact and evolve over time
• Flexible, interpretable and widely employed in many fields
(e.g. ecology, epidemiology, transportation)
• Can render realistic macroscopic phenomena from simple
microscopic rules
Figure: SimCity by Electronic Arts
JH Agent-based models 3/ 42
A brief history of agent-based models
• Dates back to work by von Neumann and Ulam in 1940s on
cellular automata
• Popularized in many disciplines during the 1990s for various
reasons
• Growing computational power made it possible to simulate
such models
• Low levels of mathematical sophistication required to build
such models
JH Agent-based models 4/ 42
Software for agent-based models
JH Agent-based models 5/ 42
Calibration of agent-based models
• These models are typically calibrated by matching key
features of simulated and actual data
• Can be computationally intensive and difficult to calibrate
126 CHAPTER 5
Figure 5.3. Simulated and historical settlement patterns, in red, for Long House
Valley in A.D. 1125. North is to the top of the page.
of the 1270–1450 period could have supported a reduced but substantial
population in small settlements dispersed across suitable farming habitats
located primarily in areas of high potential crop production in the
Figure: Simulated and historical settlement patterns in long house valley
JH Agent-based models 6/ 42
Statistical inference for agent-based models
• Given occasional noisy measurements of the population, we
could consider statistical inference for such models
• Few works have addressed this important topic as
likelihood-based inference is computationally challenging
• We propose sequential Monte Carlo algorithms for some
classical agent-based models in epidemiology
• The general principle is to ‘open the black box’ nature of
these models and exploit its inherent structure
JH Agent-based models 7/ 42
Compartmental models in epidemiology
• A population-level approach assigns the population to
compartments and models the number of people in each
compartment over time
SIR model
Susceptible
Infected
Recovered
λ
γ
SIS model
Susceptible
Infected
λ
γ
JH Agent-based models 8/ 42
Agent-based models in epidemiology
• The agent-based approach assumes agents can take these
states and models the state of each agent n over time
SIR model
Susceptible
Infected
Recovered
λn
γn
SIS model
Susceptible
Infected
λn
γn
JH Agent-based models 9/ 42
Why agent-based models?
• May be unrealistic to assume agents are interchangeable
0.0
0.1
0.2
0.3
0.4
0.5
0 10 20 30
Incubation period
Density
(days)
Gender Men Women
0.0
0.1
0.2
0.3
0 10 20 30
Incubation period
Density
(days)
Age <50 >=50
Figure: Gender-specific (left) and age-specific (right) distributions of
COVID-19 incubation period (Zhao et al. 2020, AoAS)
JH Agent-based models 10/ 42
Why agent-based models?
• May be unrealistic to assume all agents interact
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
Figure: Fully connected network versus small world network
JH Agent-based models 11/ 42
Outline
1 Agent-based models
2 Agent-based SIS model
3 Sequential Monte Carlo
JH Agent-based models 11/ 42
Agent-based SIS model
• We consider the agent-based SIS model and encode
Susceptible = 0 and Infected = 1
• Let Xt = (Xn
t )n∈[1:N] ∈ {0, 1}N denote the state of a closed
population of N agents at time t ∈ [0 : T]
• Initialization X0 ∼ µθ given by
Xn
0 ∼ Ber(αn
0), independently for n ∈ [1 : N]
• Markov transition Xt ∼ fθ(·|Xt−1) at time t ∈ [1 : T] is
given by
Xn
t ∼ Ber(αn
(Xt−1)), independently for n ∈ [1 : N]
JH Agent-based models 12/ 42
Agent-based SIS model
• Transition probability specified as
αn
(Xt−1) =
(
λnD(n)−1
P
m∈N(n) Xm
t−1, if Xn
t−1 = 0
1 − γn, if Xn
t−1 = 1
• Interactions specified by an undirected network: D(n) and
N(n) denote the degree and neighbours of agent n
• Infection and recovery rates are modelled using
agent-specific attributes
λn
= (1 + exp(−β>
λ wn
))−1
, γn
= (1 + exp(−β>
γ wn
))−1
,
where βλ, βγ ∈ Rd are parameters and wn ∈ Rd are the
covariates of agent n (similarly αn
0 depends on β0)
JH Agent-based models 13/ 42
Agent-based SIS model
• If the network is fully connected D(n) = N, N(n) = [1 : N]
and the agents are homogeneous λn = λ, γn = γ
• We recover the classical SIS model of Kermack and
McKendrick (1927), which has a deterministic limit as
N → ∞
• These simpler models offer dimension reduction which
facilitates inference
• However, one cannot incorporate network information and
agent attributes
• We will use these simplifications to construct efficient SMC
proposal distributions for the agent-based model
JH Agent-based models 14/ 42
Agent-based SIS model
• Observations (Yt)t∈[0:T] are the number of infections
reported over time
• Modelled as conditionally independent given (Xt)t∈[0:T], and
Yt ∼ gθ(·|Xt) = Bin(I(Xt), ρ)
• I(Xt) =
PN
n=1 Xn
t is the number of infections and ρ ∈ (0, 1) is
the reporting rate
• Parameters to be inferred θ = (β0, βλ, βγ, ρ)
JH Agent-based models 15/ 42
Graphical model representation
0
1
0
2
0
3
0
1
1
1
2
1
3
1
2
1
2
2
2
3
2
3
1
3
2
3
3
3
4
1
4
2
4
3
4
ρ
βγ
βλ
β0
Figure: T = 4 time steps, a fully connected network with N = 3 agents
JH Agent-based models 16/ 42
Likelihood of agent-based SIS model
• We have a standard hidden Markov model
pθ(x0:T , y0:T ) = µθ(x0)
T
Y
t=1
fθ(xt|xt−1)
T
Y
t=0
gθ(yt|xt)
• The marginal likelihood is
pθ(y0:T ) =
X
x0:T ∈{0,1}N×(T+1)
pθ(x0:T , y0:T ),
• Maximum likelihood estimation computes
arg max
θ
pθ(y0:T )
• Bayesian inference samples from
p(θ|y0:T ) ∝ p(θ)pθ(y0:T )
JH Agent-based models 17/ 42
Likelihood of agent-based SIS model
• We have a hidden Markov model on a discrete state-space
• We can employ forward algorithm to compute the marginal
likelihood exactly
• The cost is of order
(no. of states)2
× (no. of observations) = O(22N
T)
• For large N, we have to rely on sequential Monte Carlo
(SMC) methods to approximate the marginal likelihood
JH Agent-based models 18/ 42
Outline
1 Agent-based models
2 Agent-based SIS model
3 Sequential Monte Carlo
JH Agent-based models 18/ 42
Sequential Monte Carlo
• Sequential Monte Carlo (SMC) methods, aka particle
filters, are now quite advanced and well-understood since its
introduction in the 90s
• The idea is to recursively simulate an interacting particle
system of size P
• For time t ∈ [0 : T], we have P states and ancestor indexes
(X
(1)
t , . . . , X
(P)
t ), (A
(1)
t , . . . , A
(P)
t )
JH Agent-based models 19/ 42
Sequential Monte Carlo
…
X0 X1 XT
Y0 Y1 YT
…
✓
For time t = 0 and particle p ∈ [1 : P]
sample X
(p)
0 ∼ q0(x0|θ)
JH Agent-based models 20/ 42
Sequential Monte Carlo
…
X0 X1 XT
Y0 Y1 YT
…
✓
For time t = 0 and particle p ∈ [1 : P]
weight W
(p)
0 ∝ w0(X
(p)
0 )
JH Agent-based models 20/ 42
Sequential Monte Carlo
…
X0 X1 XT
Y0 Y1 YT
…
✓
X
X
X
For time t = 0 and particle p ∈ [1 : P]
sample ancestor A
(p)
0 ∼ R

W
(1)
0 , . . . , W
(P)
0

, resampled particle: X
A
(p)
0
0
JH Agent-based models 20/ 42
Sequential Monte Carlo
…
X0 X1 XT
Y0 Y1 YT
…
✓
X
X
X
For time t = 1 and particle p ∈ [1 : P]
sample X
(p)
1 ∼ q1(x1|X
A
(p)
0
0 , θ)
JH Agent-based models 20/ 42
Sequential Monte Carlo
…
X0 X1 XT
Y0 Y1 YT
…
✓
X
X
X
For time t = 1 and particle p ∈ [1 : P]
weight W n
1 ∝ w1(X
A
(p)
0
0 , X
(p)
1 )
JH Agent-based models 20/ 42
Sequential Monte Carlo
…
X0 X1 XT
Y0 Y1 YT
…
✓
X
X
X X
For time t = 1 and particle p ∈ [1 : P]
sample ancestor A
(p)
1 ∼ R

W
(1)
1 , . . . , W
(P)
1

, resampled particle: X
A
(p)
1
1
JH Agent-based models 20/ 42
Sequential Monte Carlo
…
X0 X1 XT
Y0 Y1 YT
…
✓
X
X
X X
X
X
X
Repeat for time t ∈ [2 : T].
JH Agent-based models 20/ 42
Sequential Monte Carlo
…
X0 X1 XT
Y0 Y1 YT
…
✓
X
X
X X
X
X
X
Repeat for time t ∈ [2 : T]. Note this is for a given θ!
JH Agent-based models 20/ 42
Likelihood estimation
• Weight functions (wt)t∈[0:T] and proposals distributions
(qt)t∈[0:T] have to satisfy
w0(x0)
T
Y
t=1
wt(xt−1, xt) =
pθ(x0:T , y0:T )
q(x0:T |θ)
where q(x0:T |θ) = q0(x0|θ)
QT
t=1 qt(xt|xt−1, θ)
• We can compute a marginal likelihood estimator
p̂θ(y0:T ) =



1
P
P
X
p=1
w0(X
(p)
0 )






T
Y
t=1
1
P
P
X
p=1
wt(X
(A
(p)
t−1)
t−1 , X
(p)
t )



• Unbiasedness and consistency as P → ∞ follow from Del
Moral (2004)
JH Agent-based models 21/ 42
Bootstrap particle filter
• The bootstrap particle filter (BPF) of Gordon et al. (1993)
employs the proposal distributions
q0(x0|θ) = µθ(x0), qt(xt|xt−1, θ) = fθ(xt|xt−1)
and weight functions
wt(xt) = gθ(yt|xt)
• BPF can be readily implemented as simulating the latent
process is straightforward
• However, it suffers from curse of dimensionality for large N
– need large P to control variance of p̂θ(y0:T )
– p̂θ(y0:T ) can collapse to zero
JH Agent-based models 22/ 42
Likelihood estimation
• Efficiency of SMC crucially relies on the choice of proposal
distributions
• Poor performance of BPF is not surprising, as it does not use
any information from the observations
• We show how to implement the fully adapted auxiliary
particle filter that accounts for the next observation
• We propose a novel controlled SMC method that takes the
entire observation sequence into account
JH Agent-based models 23/ 42
Auxiliary particle filter
• The auxiliary particle filter (APF) was introduced in Pitt
and Shephard (1999) and Carpenter et al. (1999)
• It employs the proposal distributions
q0(x0|θ) = pθ(x0|y0), qt(xt|xt−1, θ) = pθ(xt|xt−1, yt)
and weight functions
wt(xt−1) = pθ(yt|xt−1)
• Sampling from these proposals and evaluating these weights
are not always tractable
JH Agent-based models 24/ 42
Auxiliary particle filter
• The predictive likelihood is
pθ(yt|xt−1) =
X
xt ∈{0,1}N
fθ(xt|xt−1)gθ(yt|xt)
JH Agent-based models 25/ 42
Auxiliary particle filter
• The predictive likelihood is
pθ(yt|xt−1) =
X
xt ∈{0,1}N
fθ(xt|xt−1)gθ(yt|xt)
=
X
xt ∈{0,1}N
N
Y
n=1
Ber(xn
t ; αn
(xt−1))Bin(yt; I(xt), ρ)
JH Agent-based models 25/ 42
Auxiliary particle filter
• The predictive likelihood is
pθ(yt|xt−1) =
X
xt ∈{0,1}N
fθ(xt|xt−1)gθ(yt|xt)
=
X
xt ∈{0,1}N
N
Y
n=1
Ber(xn
t ; αn
(xt−1))Bin(yt; I(xt), ρ)
=
N
X
it =yt
PoiBin(it; αn
(xt−1))Bin(yt; it, ρ)
since the sum of independent Bernoulli with non-identical
success probabilities follows a Poisson binomial distribution
JH Agent-based models 25/ 42
Auxiliary particle filter
• The predictive likelihood is
pθ(yt|xt−1) =
X
xt ∈{0,1}N
fθ(xt|xt−1)gθ(yt|xt)
=
X
xt ∈{0,1}N
N
Y
n=1
Ber(xn
t ; αn
(xt−1))Bin(yt; I(xt), ρ)
=
N
X
it =yt
PoiBin(it; αn
(xt−1))Bin(yt; it, ρ)
since the sum of independent Bernoulli with non-identical
success probabilities follows a Poisson binomial distribution
• Poisson binomial PMF costs O(N2) to compute (Chen and
Liu, 1997)
JH Agent-based models 25/ 42
Auxiliary particle filter
• To sample from pθ(xt|xt−1, yt), we augment It = I(Xt) as an
auxiliary variable
pθ(xt, it|xt−1, yt) = pθ(it|xt−1, yt)pθ(xt|xt−1, it)
• Conditional distribution of the number of infections is
pθ(it|xt−1, yt) =
PoiBin(it; αn(xt−1))Bin(yt; it, ρ)
pθ(yt|xt−1)
• Distribution of agent states conditioned on their sum is a
conditioned Bernoulli
pθ(xt|xt−1, it) = CondBer(xt; α(xt−1), it),
which costs O(N2) to sample (Chen and Liu, 1997)
JH Agent-based models 26/ 42
Auxiliary particle filter
• Hence the overall cost of APF is O(N2TP)
• We can reduce the cost to O(N log(N)TP) using two ideas
• Reduce cost of Poisson binomial PMF evaluation to O(N)
using translated Poisson approximation at a bias of
O(N−1/2) (Barbour and Ćekanavićius, 2002)
• Reduce cost of conditioned Bernoulli sampling to
O(N log(N)) using Markov chain Monte Carlo (Heng,
Jacob and Ju, 2020)
JH Agent-based models 27/ 42
Controlled sequential Monte Carlo
• We introduce a novel implementation of the controlled SMC
(cSMC) proposed by Heng et al. (2020)
• The optimal proposal that gives a zero variance marginal
likelihood estimator is the smoothing distribution
pθ(x0:T |y0:T ) = pθ(x0|y0:T )
T
Y
t=1
pθ(xt|xt−1, yt:T )
• At time t ∈ [1 : T], the transition is
pθ(xt|xt−1, yt:T ) =
fθ(xt|xt−1)ψ?
t (xt)
fθ(ψ?
t |xt−1)
• ψ?
t (xt) = p(yt:T |xt) is the backward information filter (BIF)
and fθ(ψ?
t |xt−1) =
P
xt ∈{0,1}N fθ(xt|xt−1)ψ?
t (xt)
JH Agent-based models 28/ 42
Controlled sequential Monte Carlo
• BIF satisfies the backward recursion ψ?
T (xT ) = gθ(yT |xT ),
ψ?
t (xt) = gθ(yt|xt)fθ(ψ?
t+1|xt), t ∈ [0 : T − 1]
• This costs O(22NT) to compute, so approximations are
necessary when N is large
• Our approach is based on dimensionality reduction by
coarse-graining the agent-based model
• We approximate the model with heterogenous agents by a
model with homogenous agents whose individual infection and
recovery rates given by their population averages, i.e.
λn ≈ λ̄ = N−1
PN
n=1 λn and γn ≈ γ̄ = N−1
PN
n=1 γn
JH Agent-based models 29/ 42
Controlled sequential Monte Carlo
• BIF of the approximate model ψt(I(xt)) can be computed
exactly in O(N3T) cost, and approximately in O(N2T)
• We then define the SMC proposal transition as
qt(xt|xt−1, θ) =
fθ(xt|xt−1)ψt(I(xt))
fθ(ψt|xt−1)
,
and employ the weight function
wt(xt) =
gθ(yt|xt)fθ(ψt+1|xt)
ψt(I(xt))
• Sampling and weighting can be done in a similar manner as
APF
JH Agent-based models 30/ 42
Controlled sequential Monte Carlo
• Quality of proposals depend on the coarse-graining
approximation
• We establish a bound on the Kullback–Leibler divergence from
q(x0:T |θ) and pθ(x0:T |y0:T ) (Chatterjee and Diaconis, 2018)
• Finer-grained approximations can be obtained using clustering
of the infection and recovery rates, at the expense of
increased cost
JH Agent-based models 31/ 42
Better performance with more information, but higher cost
method ● ● ●
BPF APF cSMC
0:
O( )
O( 2 )
O( 2 ) + O( 3 )
●
● ●
●
●
●
●
●
●
●
●
●
●
●
● ● ●
●
●
●
●
●
● ●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
● ●
●
●
●
●
●
●
● ● ●
● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
● ●
●
●
●
● ●
● ●
● ●
● ●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
● ● ● ●
●
● ●
●
●
●
●
●
●
● ● ●
●
●
●
●
● ●
●
●
●
●
●
● ● ● ● ● ●
●
●
●
●
●
●
● ● ● ● ●
●
●
● ● ●
●
●
● ● ● ●
●
● ● ● ● ● ● ● ●
● ●
● ● ● ● ● ●
● ● ● ● ● ● ● ● ● ●
● ● ●
●
● ●
●
● ● ● ● ● ● ● ●
● ●
● ● ● ● ● ● ● ● ● ● ● ● ● ●
● ● ● ● ● ● ● ● ● ● ● ●
● ● ● ● ●
25
50
75
100
0 25 50 75
time
ESS%
method ● ● ●
BPF APF cSMC
Figure: Effective sample size for N = 100 fully connected agents and
T = 90 time steps
JH Agent-based models 32/ 42
Informative observations
●
● ●
●
●
●
●
●
●
●
●
●
●
●
● ● ●
●
●
●
● ●
● ●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
● ●
●
●
●
●
●
●
● ● ●
● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
● ●
●
●
●
● ●
● ●
● ●
● ●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
● ● ● ●
● ● ●
●
●
●
●
●
●
● ● ●
●
●
●
●
● ●
●
●
●
●
●
● ● ● ● ● ●
●
●
●
●
●
●
● ● ● ● ●
●
●
● ● ●
●
●
● ● ● ●
●
● ● ● ● ● ● ● ●
● ●
● ● ● ● ● ●
● ● ● ● ● ● ● ● ● ●
● ● ●
●
● ●
●
● ● ● ● ● ● ● ●
● ●
● ● ● ● ● ● ● ● ● ● ● ● ● ●
● ● ● ● ● ● ● ● ● ● ● ●
● ● ● ● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●
●
● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
● ●
● ● ●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
●
●
● ●
●
●
●
●
●
●
●
● ●
● ●
●
●
●
●
●
●
●
● ● ●
●
●
●
●
● ●
●
●
●
●
●
● ● ● ● ● ●
●
●
●
●
●
●
● ● ● ● ●
●
●
● ● ●
●
●
● ● ● ●
●
● ● ● ● ● ● ●
●
●
●
● ● ● ● ● ●
● ● ● ● ● ● ● ● ● ● ● ● ●
● ●
●
●
● ● ● ● ● ● ● ●
● ●
● ● ● ● ● ● ● ● ● ● ● ● ● ●
●
●
● ● ● ● ● ● ● ● ● ● ● ● ● ● ●
original
outliers
0 25 50 75 90
0
25
50
75
100
0
25
50
75
100
time
ESS%
method ● ● ●
BPF APF cSMC
∈ {25, 50, 75} 2
Figure: Bottom panel: observations at t ∈ {25, 50, 75} are replaced by
min(2yt, N)
JH Agent-based models 33/ 42
Marginal likelihood estimation
ˆθ( 0: ) θ
= 100, = 90 = 2
Figure: Variance of log p̂θ(y0:T ) at data generating parameter
JH Agent-based models 34/ 42
Marginal likelihood estimation
ˆθ( 0: ) βλ != β#
λ
β!
λ = (−1, 2) βλ = (−3
= 90 = 2
Figure: Variance of log p̂θ(y0:T ) at a less likely parameter
JH Agent-based models 35/ 42
Numerical illustration
• Estimated log-likelihood function as the number of
observations increases
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
T = 10 T = 30 T = 90
−4 0 4 −4 0 4 −4 0 4
−4
0
4
βλ
1
β
λ
2
−100 −20 −10−5 0
log−likelihood
Figure: MLE (black dot) and DGP (red dot)
JH Agent-based models 36/ 42
Numerical illustration
• Estimated log-likelihood function as the number of
observations increases
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
T = 10 T = 30 T = 90
−4 0 4 −4 0 4 −4 0 4
−4
0
4
βλ
2
β
γ
2
−100 −20 −10−5 0
log−likelihood
Figure: MLE (black dot) and DGP (red dot)
JH Agent-based models 37/ 42
Influenza outbreak in a boarding school
• SMC methods can be readily deployed within particle
MCMC for parameter and state inference (Andrieu, Doucet
and Holenstein, 2010)
JH Agent-based models 38/ 42
Influenza outbreak in a boarding school
• SMC methods can be readily deployed within particle
MCMC for parameter and state inference (Andrieu, Doucet
and Holenstein, 2010)
• Choice of network can impact inference results
0.0
0.2
0.4
0.6
0 2 4 6 8
λ γ
Density
Network Erdos−Renyi Full Small−world D=4 Small−world D=2
JH Agent-based models 38/ 42
Smallpox outbreak in a church community
• We considered APF and cSMC for the agent-based SIR model
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
0
10
20
30
0 30 60 89
date
removal
JH Agent-based models 39/ 42
Smallpox outbreak in a church community
• We considered APF and cSMC for the agent-based SIR model
• We can analyze vaccine efficacy in this framework
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
●
0
10
20
30
0 30 60 89
date
removal
0.000
0.025
0.050
0.075
0.100
0 1 2 3 4 5 6
R0
proportion
vaccinated
Yes
No
JH Agent-based models 39/ 42
Concluding remarks
• A general alternative to SMC methods is MCMC algorithms
to sample from the smoothing distribution
• Preprint https://arxiv.org/abs/2101.12156
• R package https://github.com/nianqiaoju/agents
• Slides https://sites.google.com/view/jeremyheng/
JH Agent-based models 40/ 42
References
C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo
methods. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 72(3):269–342, 2010.
A. Barbour and V. Ćekanavićius. Total variation asymptotics for sums of
independent integer random variables. The Annals of Probability,
30(2):509–545, 2002.
J. Carpenter, P. Clifford, and P. Fearnhead. Improved particle filter for nonlinear
problems. IEE Proceedings-Radar, Sonar and Navigation, 146(1):2–7, 1999.
S. Chatterjee and P. Diaconis. The sample size required in importance
sampling. The Annals of Applied Probability, 28(2):1099–1135, 2018.
S. Chen and J. Liu. Statistical applications of the Poisson-Binomial and
conditional Bernoulli distributions. Statistica Sinica, 875–892, 1997.
P. Del Moral. Feynman-kac formulae: Genealogical and Interacting Particle
Systems with Applications. Springer-Verlag New York, 2004.
JH Agent-based models 41/ 42
References
N. Gordon, D. Salmond, and A. Smith. Novel approach to
nonlinear/non-gaussian Bayesian state estimation. In IEE proceedings F (radar
and signal processing), volume 140, pages 107–113. IET, 1993.
J. Heng, A. Bishop, G. Deligiannidis, and A. Doucet. Controlled sequential
Monte Carlo. Annals of Statistics, 48(5):2904–2929, 2020.
J. Heng, P. Jacob, and N. Ju. A simple Markov chain for independent Bernoulli
variables conditioned on their sum. arXiv preprint arXiv:2012.03103, 2020.
M. Pitt and N. Shephard. Filtering via simulation: Auxiliary particle filters.
Journal of the American statistical association, 94(446):590–599, 1999.
JH Agent-based models 42/ 42

More Related Content

What's hot

MCMC and likelihood-free methods
MCMC and likelihood-free methodsMCMC and likelihood-free methods
MCMC and likelihood-free methods
Christian Robert
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
Pierre Jacob
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
The Statistical and Applied Mathematical Sciences Institute
 
Talk in BayesComp 2018
Talk in BayesComp 2018Talk in BayesComp 2018
Talk in BayesComp 2018
JeremyHeng10
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
Christian Robert
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
Christian Robert
 
Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010
Christian Robert
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
Christian Robert
 
Stochastic Differentiation
Stochastic DifferentiationStochastic Differentiation
Stochastic Differentiation
SSA KPI
 
Particle filtering
Particle filteringParticle filtering
Particle filtering
Wei Wang
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
The Statistical and Applied Mathematical Sciences Institute
 
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithmsRao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Christian Robert
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
Christian Robert
 
Introduction to MCMC methods
Introduction to MCMC methodsIntroduction to MCMC methods
Introduction to MCMC methods
Christian Robert
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
Stefano Cabras
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
Christian Robert
 
ABC in Venezia
ABC in VeneziaABC in Venezia
ABC in Venezia
Christian Robert
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear models
Caleb (Shiqiang) Jin
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
The Statistical and Applied Mathematical Sciences Institute
 
Sampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methodsSampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methods
Stephane Senecal
 

What's hot (20)

MCMC and likelihood-free methods
MCMC and likelihood-free methodsMCMC and likelihood-free methods
MCMC and likelihood-free methods
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Talk in BayesComp 2018
Talk in BayesComp 2018Talk in BayesComp 2018
Talk in BayesComp 2018
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
 
Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
 
Stochastic Differentiation
Stochastic DifferentiationStochastic Differentiation
Stochastic Differentiation
 
Particle filtering
Particle filteringParticle filtering
Particle filtering
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithmsRao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Introduction to MCMC methods
Introduction to MCMC methodsIntroduction to MCMC methods
Introduction to MCMC methods
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
ABC in Venezia
ABC in VeneziaABC in Venezia
ABC in Venezia
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear models
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Sampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methodsSampling strategies for Sequential Monte Carlo (SMC) methods
Sampling strategies for Sequential Monte Carlo (SMC) methods
 

Similar to Sequential Monte Carlo algorithms for agent-based models of disease transmission

Sequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
Sequential Monte Carlo Algorithms for Agent-based Models of Disease TransmissionSequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
Sequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
JeremyHeng10
 
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Gota Morota
 
MUMS Opening Workshop - Model Uncertainty in Data Fusion for Remote Sensing -...
MUMS Opening Workshop - Model Uncertainty in Data Fusion for Remote Sensing -...MUMS Opening Workshop - Model Uncertainty in Data Fusion for Remote Sensing -...
MUMS Opening Workshop - Model Uncertainty in Data Fusion for Remote Sensing -...
The Statistical and Applied Mathematical Sciences Institute
 
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Bayesian inference for mixed-effects models driven by SDEs and other stochast...Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Umberto Picchini
 
KAUST_talk_short.pdf
KAUST_talk_short.pdfKAUST_talk_short.pdf
KAUST_talk_short.pdf
Chiheb Ben Hammouda
 
Dependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian NonparametricsDependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian Nonparametrics
Julyan Arbel
 
Looking Inside Mechanistic Models of Carcinogenesis
Looking Inside Mechanistic Models of CarcinogenesisLooking Inside Mechanistic Models of Carcinogenesis
Looking Inside Mechanistic Models of Carcinogenesis
Sascha Zöllner
 
Chapter-4 combined.pptx
Chapter-4 combined.pptxChapter-4 combined.pptx
Chapter-4 combined.pptx
HamzaHaji6
 
PhysicsSIG2008-01-Seneviratne
PhysicsSIG2008-01-SeneviratnePhysicsSIG2008-01-Seneviratne
PhysicsSIG2008-01-Seneviratne
Sarath Senevirtatne
 
Spatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in EpidemiologySpatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in Epidemiology
Lilac Liu Xu
 
ppt0320defenseday
ppt0320defensedayppt0320defenseday
ppt0320defenseday
Xi (Shay) Zhang, PhD
 
2012 mdsp pr05 particle filter
2012 mdsp pr05 particle filter2012 mdsp pr05 particle filter
2012 mdsp pr05 particle filter
nozomuhamada
 
Lecture: Monte Carlo Methods
Lecture: Monte Carlo MethodsLecture: Monte Carlo Methods
Lecture: Monte Carlo Methods
Frank Kienle
 
Frequency14.pptx
Frequency14.pptxFrequency14.pptx
Frequency14.pptx
MewadaHiren
 
Leiden_VU_Delft_seminar short.pdf
Leiden_VU_Delft_seminar short.pdfLeiden_VU_Delft_seminar short.pdf
Leiden_VU_Delft_seminar short.pdf
Chiheb Ben Hammouda
 
PhD defense talk slides
PhD  defense talk slidesPhD  defense talk slides
PhD defense talk slides
Chiheb Ben Hammouda
 
Dynamics of structures with uncertainties
Dynamics of structures with uncertaintiesDynamics of structures with uncertainties
Dynamics of structures with uncertainties
University of Glasgow
 
The tau-leap method for simulating stochastic kinetic models
The tau-leap method for simulating stochastic kinetic modelsThe tau-leap method for simulating stochastic kinetic models
The tau-leap method for simulating stochastic kinetic models
Colin Gillespie
 
A brief introduction to mutual information and its application
A brief introduction to mutual information and its applicationA brief introduction to mutual information and its application
A brief introduction to mutual information and its application
Hyun-hwan Jeong
 
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...
SYRTO Project
 

Similar to Sequential Monte Carlo algorithms for agent-based models of disease transmission (20)

Sequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
Sequential Monte Carlo Algorithms for Agent-based Models of Disease TransmissionSequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
Sequential Monte Carlo Algorithms for Agent-based Models of Disease Transmission
 
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
Allele Frequencies as Stochastic Processes: Mathematical & Statistical Approa...
 
MUMS Opening Workshop - Model Uncertainty in Data Fusion for Remote Sensing -...
MUMS Opening Workshop - Model Uncertainty in Data Fusion for Remote Sensing -...MUMS Opening Workshop - Model Uncertainty in Data Fusion for Remote Sensing -...
MUMS Opening Workshop - Model Uncertainty in Data Fusion for Remote Sensing -...
 
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Bayesian inference for mixed-effects models driven by SDEs and other stochast...Bayesian inference for mixed-effects models driven by SDEs and other stochast...
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
 
KAUST_talk_short.pdf
KAUST_talk_short.pdfKAUST_talk_short.pdf
KAUST_talk_short.pdf
 
Dependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian NonparametricsDependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian Nonparametrics
 
Looking Inside Mechanistic Models of Carcinogenesis
Looking Inside Mechanistic Models of CarcinogenesisLooking Inside Mechanistic Models of Carcinogenesis
Looking Inside Mechanistic Models of Carcinogenesis
 
Chapter-4 combined.pptx
Chapter-4 combined.pptxChapter-4 combined.pptx
Chapter-4 combined.pptx
 
PhysicsSIG2008-01-Seneviratne
PhysicsSIG2008-01-SeneviratnePhysicsSIG2008-01-Seneviratne
PhysicsSIG2008-01-Seneviratne
 
Spatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in EpidemiologySpatial Point Processes and Their Applications in Epidemiology
Spatial Point Processes and Their Applications in Epidemiology
 
ppt0320defenseday
ppt0320defensedayppt0320defenseday
ppt0320defenseday
 
2012 mdsp pr05 particle filter
2012 mdsp pr05 particle filter2012 mdsp pr05 particle filter
2012 mdsp pr05 particle filter
 
Lecture: Monte Carlo Methods
Lecture: Monte Carlo MethodsLecture: Monte Carlo Methods
Lecture: Monte Carlo Methods
 
Frequency14.pptx
Frequency14.pptxFrequency14.pptx
Frequency14.pptx
 
Leiden_VU_Delft_seminar short.pdf
Leiden_VU_Delft_seminar short.pdfLeiden_VU_Delft_seminar short.pdf
Leiden_VU_Delft_seminar short.pdf
 
PhD defense talk slides
PhD  defense talk slidesPhD  defense talk slides
PhD defense talk slides
 
Dynamics of structures with uncertainties
Dynamics of structures with uncertaintiesDynamics of structures with uncertainties
Dynamics of structures with uncertainties
 
The tau-leap method for simulating stochastic kinetic models
The tau-leap method for simulating stochastic kinetic modelsThe tau-leap method for simulating stochastic kinetic models
The tau-leap method for simulating stochastic kinetic models
 
A brief introduction to mutual information and its application
A brief introduction to mutual information and its applicationA brief introduction to mutual information and its application
A brief introduction to mutual information and its application
 
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...
 

Recently uploaded

一比一原版英属哥伦比亚大学毕业证(UBC毕业证书)学历如何办理
一比一原版英属哥伦比亚大学毕业证(UBC毕业证书)学历如何办理一比一原版英属哥伦比亚大学毕业证(UBC毕业证书)学历如何办理
一比一原版英属哥伦比亚大学毕业证(UBC毕业证书)学历如何办理
z6osjkqvd
 
一比一原版卡尔加里大学毕业证(uc毕业证)如何办理
一比一原版卡尔加里大学毕业证(uc毕业证)如何办理一比一原版卡尔加里大学毕业证(uc毕业证)如何办理
一比一原版卡尔加里大学毕业证(uc毕业证)如何办理
oaxefes
 
A gentle exploration of Retrieval Augmented Generation
A gentle exploration of Retrieval Augmented GenerationA gentle exploration of Retrieval Augmented Generation
A gentle exploration of Retrieval Augmented Generation
dataschool1
 
How To Control IO Usage using Resource Manager
How To Control IO Usage using Resource ManagerHow To Control IO Usage using Resource Manager
How To Control IO Usage using Resource Manager
Alireza Kamrani
 
06-12-2024-BudapestDataForum-BuildingReal-timePipelineswithFLaNK AIM
06-12-2024-BudapestDataForum-BuildingReal-timePipelineswithFLaNK AIM06-12-2024-BudapestDataForum-BuildingReal-timePipelineswithFLaNK AIM
06-12-2024-BudapestDataForum-BuildingReal-timePipelineswithFLaNK AIM
Timothy Spann
 
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
ihavuls
 
一比一原版南昆士兰大学毕业证如何办理
一比一原版南昆士兰大学毕业证如何办理一比一原版南昆士兰大学毕业证如何办理
一比一原版南昆士兰大学毕业证如何办理
ugydym
 
Econ3060_Screen Time and Success_ final_GroupProject.pdf
Econ3060_Screen Time and Success_ final_GroupProject.pdfEcon3060_Screen Time and Success_ final_GroupProject.pdf
Econ3060_Screen Time and Success_ final_GroupProject.pdf
blueshagoo1
 
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
aguty
 
Build applications with generative AI on Google Cloud
Build applications with generative AI on Google CloudBuild applications with generative AI on Google Cloud
Build applications with generative AI on Google Cloud
Márton Kodok
 
一比一原版悉尼大学毕业证如何办理
一比一原版悉尼大学毕业证如何办理一比一原版悉尼大学毕业证如何办理
一比一原版悉尼大学毕业证如何办理
keesa2
 
一比一原版爱尔兰都柏林大学毕业证(本硕)ucd学位证书如何办理
一比一原版爱尔兰都柏林大学毕业证(本硕)ucd学位证书如何办理一比一原版爱尔兰都柏林大学毕业证(本硕)ucd学位证书如何办理
一比一原版爱尔兰都柏林大学毕业证(本硕)ucd学位证书如何办理
hqfek
 
一比一原版加拿大麦吉尔大学毕业证(mcgill毕业证书)如何办理
一比一原版加拿大麦吉尔大学毕业证(mcgill毕业证书)如何办理一比一原版加拿大麦吉尔大学毕业证(mcgill毕业证书)如何办理
一比一原版加拿大麦吉尔大学毕业证(mcgill毕业证书)如何办理
agdhot
 
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...
Marlon Dumas
 
Template xxxxxxxx ssssssssssss Sertifikat.pptx
Template xxxxxxxx ssssssssssss Sertifikat.pptxTemplate xxxxxxxx ssssssssssss Sertifikat.pptx
Template xxxxxxxx ssssssssssss Sertifikat.pptx
TeukuEriSyahputra
 
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
Vietnam Cotton & Spinning Association
 
一比一原版(uob毕业证书)伯明翰大学毕业证如何办理
一比一原版(uob毕业证书)伯明翰大学毕业证如何办理一比一原版(uob毕业证书)伯明翰大学毕业证如何办理
一比一原版(uob毕业证书)伯明翰大学毕业证如何办理
9gr6pty
 
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
eoxhsaa
 
DATA COMMS-NETWORKS YR2 lecture 08 NAT & CLOUD.docx
DATA COMMS-NETWORKS YR2 lecture 08 NAT & CLOUD.docxDATA COMMS-NETWORKS YR2 lecture 08 NAT & CLOUD.docx
DATA COMMS-NETWORKS YR2 lecture 08 NAT & CLOUD.docx
SaffaIbrahim1
 
Sample Devops SRE Product Companies .pdf
Sample Devops SRE  Product Companies .pdfSample Devops SRE  Product Companies .pdf
Sample Devops SRE Product Companies .pdf
Vineet
 

Recently uploaded (20)

一比一原版英属哥伦比亚大学毕业证(UBC毕业证书)学历如何办理
一比一原版英属哥伦比亚大学毕业证(UBC毕业证书)学历如何办理一比一原版英属哥伦比亚大学毕业证(UBC毕业证书)学历如何办理
一比一原版英属哥伦比亚大学毕业证(UBC毕业证书)学历如何办理
 
一比一原版卡尔加里大学毕业证(uc毕业证)如何办理
一比一原版卡尔加里大学毕业证(uc毕业证)如何办理一比一原版卡尔加里大学毕业证(uc毕业证)如何办理
一比一原版卡尔加里大学毕业证(uc毕业证)如何办理
 
A gentle exploration of Retrieval Augmented Generation
A gentle exploration of Retrieval Augmented GenerationA gentle exploration of Retrieval Augmented Generation
A gentle exploration of Retrieval Augmented Generation
 
How To Control IO Usage using Resource Manager
How To Control IO Usage using Resource ManagerHow To Control IO Usage using Resource Manager
How To Control IO Usage using Resource Manager
 
06-12-2024-BudapestDataForum-BuildingReal-timePipelineswithFLaNK AIM
06-12-2024-BudapestDataForum-BuildingReal-timePipelineswithFLaNK AIM06-12-2024-BudapestDataForum-BuildingReal-timePipelineswithFLaNK AIM
06-12-2024-BudapestDataForum-BuildingReal-timePipelineswithFLaNK AIM
 
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
原版制作(unimelb毕业证书)墨尔本大学毕业证Offer一模一样
 
一比一原版南昆士兰大学毕业证如何办理
一比一原版南昆士兰大学毕业证如何办理一比一原版南昆士兰大学毕业证如何办理
一比一原版南昆士兰大学毕业证如何办理
 
Econ3060_Screen Time and Success_ final_GroupProject.pdf
Econ3060_Screen Time and Success_ final_GroupProject.pdfEcon3060_Screen Time and Success_ final_GroupProject.pdf
Econ3060_Screen Time and Success_ final_GroupProject.pdf
 
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
一比一原版澳洲西澳大学毕业证(uwa毕业证书)如何办理
 
Build applications with generative AI on Google Cloud
Build applications with generative AI on Google CloudBuild applications with generative AI on Google Cloud
Build applications with generative AI on Google Cloud
 
一比一原版悉尼大学毕业证如何办理
一比一原版悉尼大学毕业证如何办理一比一原版悉尼大学毕业证如何办理
一比一原版悉尼大学毕业证如何办理
 
一比一原版爱尔兰都柏林大学毕业证(本硕)ucd学位证书如何办理
一比一原版爱尔兰都柏林大学毕业证(本硕)ucd学位证书如何办理一比一原版爱尔兰都柏林大学毕业证(本硕)ucd学位证书如何办理
一比一原版爱尔兰都柏林大学毕业证(本硕)ucd学位证书如何办理
 
一比一原版加拿大麦吉尔大学毕业证(mcgill毕业证书)如何办理
一比一原版加拿大麦吉尔大学毕业证(mcgill毕业证书)如何办理一比一原版加拿大麦吉尔大学毕业证(mcgill毕业证书)如何办理
一比一原版加拿大麦吉尔大学毕业证(mcgill毕业证书)如何办理
 
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...
Discovering Digital Process Twins for What-if Analysis: a Process Mining Appr...
 
Template xxxxxxxx ssssssssssss Sertifikat.pptx
Template xxxxxxxx ssssssssssss Sertifikat.pptxTemplate xxxxxxxx ssssssssssss Sertifikat.pptx
Template xxxxxxxx ssssssssssss Sertifikat.pptx
 
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
[VCOSA] Monthly Report - Cotton & Yarn Statistics March 2024
 
一比一原版(uob毕业证书)伯明翰大学毕业证如何办理
一比一原版(uob毕业证书)伯明翰大学毕业证如何办理一比一原版(uob毕业证书)伯明翰大学毕业证如何办理
一比一原版(uob毕业证书)伯明翰大学毕业证如何办理
 
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
一比一原版多伦多大学毕业证(UofT毕业证书)学历如何办理
 
DATA COMMS-NETWORKS YR2 lecture 08 NAT & CLOUD.docx
DATA COMMS-NETWORKS YR2 lecture 08 NAT & CLOUD.docxDATA COMMS-NETWORKS YR2 lecture 08 NAT & CLOUD.docx
DATA COMMS-NETWORKS YR2 lecture 08 NAT & CLOUD.docx
 
Sample Devops SRE Product Companies .pdf
Sample Devops SRE  Product Companies .pdfSample Devops SRE  Product Companies .pdf
Sample Devops SRE Product Companies .pdf
 

Sequential Monte Carlo algorithms for agent-based models of disease transmission

  • 1. Sequential Monte Carlo algorithms for agent-based models of disease transmission Jeremy Heng ESSEC Business School Joint work with Phyllis Ju (Purdue) and Pierre Jacob (ESSEC) KAUST 1 September 2021 JH Agent-based models 1/ 42
  • 2. Outline 1 Agent-based models 2 Agent-based SIS model 3 Sequential Monte Carlo JH Agent-based models 2/ 42
  • 3. Outline 1 Agent-based models 2 Agent-based SIS model 3 Sequential Monte Carlo JH Agent-based models 2/ 42
  • 4. Agent-based models • Agent-based models specify how a population of agents interact and evolve over time • Flexible, interpretable and widely employed in many fields (e.g. ecology, epidemiology, transportation) • Can render realistic macroscopic phenomena from simple microscopic rules Figure: SimCity by Electronic Arts JH Agent-based models 3/ 42
  • 5. A brief history of agent-based models • Dates back to work by von Neumann and Ulam in 1940s on cellular automata • Popularized in many disciplines during the 1990s for various reasons • Growing computational power made it possible to simulate such models • Low levels of mathematical sophistication required to build such models JH Agent-based models 4/ 42
  • 6. Software for agent-based models JH Agent-based models 5/ 42
  • 7. Calibration of agent-based models • These models are typically calibrated by matching key features of simulated and actual data • Can be computationally intensive and difficult to calibrate 126 CHAPTER 5 Figure 5.3. Simulated and historical settlement patterns, in red, for Long House Valley in A.D. 1125. North is to the top of the page. of the 1270–1450 period could have supported a reduced but substantial population in small settlements dispersed across suitable farming habitats located primarily in areas of high potential crop production in the Figure: Simulated and historical settlement patterns in long house valley JH Agent-based models 6/ 42
  • 8. Statistical inference for agent-based models • Given occasional noisy measurements of the population, we could consider statistical inference for such models • Few works have addressed this important topic as likelihood-based inference is computationally challenging • We propose sequential Monte Carlo algorithms for some classical agent-based models in epidemiology • The general principle is to ‘open the black box’ nature of these models and exploit its inherent structure JH Agent-based models 7/ 42
  • 9. Compartmental models in epidemiology • A population-level approach assigns the population to compartments and models the number of people in each compartment over time SIR model Susceptible Infected Recovered λ γ SIS model Susceptible Infected λ γ JH Agent-based models 8/ 42
  • 10. Agent-based models in epidemiology • The agent-based approach assumes agents can take these states and models the state of each agent n over time SIR model Susceptible Infected Recovered λn γn SIS model Susceptible Infected λn γn JH Agent-based models 9/ 42
  • 11. Why agent-based models? • May be unrealistic to assume agents are interchangeable 0.0 0.1 0.2 0.3 0.4 0.5 0 10 20 30 Incubation period Density (days) Gender Men Women 0.0 0.1 0.2 0.3 0 10 20 30 Incubation period Density (days) Age <50 >=50 Figure: Gender-specific (left) and age-specific (right) distributions of COVID-19 incubation period (Zhao et al. 2020, AoAS) JH Agent-based models 10/ 42
  • 12. Why agent-based models? • May be unrealistic to assume all agents interact 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Figure: Fully connected network versus small world network JH Agent-based models 11/ 42
  • 13. Outline 1 Agent-based models 2 Agent-based SIS model 3 Sequential Monte Carlo JH Agent-based models 11/ 42
  • 14. Agent-based SIS model • We consider the agent-based SIS model and encode Susceptible = 0 and Infected = 1 • Let Xt = (Xn t )n∈[1:N] ∈ {0, 1}N denote the state of a closed population of N agents at time t ∈ [0 : T] • Initialization X0 ∼ µθ given by Xn 0 ∼ Ber(αn 0), independently for n ∈ [1 : N] • Markov transition Xt ∼ fθ(·|Xt−1) at time t ∈ [1 : T] is given by Xn t ∼ Ber(αn (Xt−1)), independently for n ∈ [1 : N] JH Agent-based models 12/ 42
  • 15. Agent-based SIS model • Transition probability specified as αn (Xt−1) = ( λnD(n)−1 P m∈N(n) Xm t−1, if Xn t−1 = 0 1 − γn, if Xn t−1 = 1 • Interactions specified by an undirected network: D(n) and N(n) denote the degree and neighbours of agent n • Infection and recovery rates are modelled using agent-specific attributes λn = (1 + exp(−β> λ wn ))−1 , γn = (1 + exp(−β> γ wn ))−1 , where βλ, βγ ∈ Rd are parameters and wn ∈ Rd are the covariates of agent n (similarly αn 0 depends on β0) JH Agent-based models 13/ 42
  • 16. Agent-based SIS model • If the network is fully connected D(n) = N, N(n) = [1 : N] and the agents are homogeneous λn = λ, γn = γ • We recover the classical SIS model of Kermack and McKendrick (1927), which has a deterministic limit as N → ∞ • These simpler models offer dimension reduction which facilitates inference • However, one cannot incorporate network information and agent attributes • We will use these simplifications to construct efficient SMC proposal distributions for the agent-based model JH Agent-based models 14/ 42
  • 17. Agent-based SIS model • Observations (Yt)t∈[0:T] are the number of infections reported over time • Modelled as conditionally independent given (Xt)t∈[0:T], and Yt ∼ gθ(·|Xt) = Bin(I(Xt), ρ) • I(Xt) = PN n=1 Xn t is the number of infections and ρ ∈ (0, 1) is the reporting rate • Parameters to be inferred θ = (β0, βλ, βγ, ρ) JH Agent-based models 15/ 42
  • 18. Graphical model representation 0 1 0 2 0 3 0 1 1 1 2 1 3 1 2 1 2 2 2 3 2 3 1 3 2 3 3 3 4 1 4 2 4 3 4 ρ βγ βλ β0 Figure: T = 4 time steps, a fully connected network with N = 3 agents JH Agent-based models 16/ 42
  • 19. Likelihood of agent-based SIS model • We have a standard hidden Markov model pθ(x0:T , y0:T ) = µθ(x0) T Y t=1 fθ(xt|xt−1) T Y t=0 gθ(yt|xt) • The marginal likelihood is pθ(y0:T ) = X x0:T ∈{0,1}N×(T+1) pθ(x0:T , y0:T ), • Maximum likelihood estimation computes arg max θ pθ(y0:T ) • Bayesian inference samples from p(θ|y0:T ) ∝ p(θ)pθ(y0:T ) JH Agent-based models 17/ 42
  • 20. Likelihood of agent-based SIS model • We have a hidden Markov model on a discrete state-space • We can employ forward algorithm to compute the marginal likelihood exactly • The cost is of order (no. of states)2 × (no. of observations) = O(22N T) • For large N, we have to rely on sequential Monte Carlo (SMC) methods to approximate the marginal likelihood JH Agent-based models 18/ 42
  • 21. Outline 1 Agent-based models 2 Agent-based SIS model 3 Sequential Monte Carlo JH Agent-based models 18/ 42
  • 22. Sequential Monte Carlo • Sequential Monte Carlo (SMC) methods, aka particle filters, are now quite advanced and well-understood since its introduction in the 90s • The idea is to recursively simulate an interacting particle system of size P • For time t ∈ [0 : T], we have P states and ancestor indexes (X (1) t , . . . , X (P) t ), (A (1) t , . . . , A (P) t ) JH Agent-based models 19/ 42
  • 23. Sequential Monte Carlo … X0 X1 XT Y0 Y1 YT … ✓ For time t = 0 and particle p ∈ [1 : P] sample X (p) 0 ∼ q0(x0|θ) JH Agent-based models 20/ 42
  • 24. Sequential Monte Carlo … X0 X1 XT Y0 Y1 YT … ✓ For time t = 0 and particle p ∈ [1 : P] weight W (p) 0 ∝ w0(X (p) 0 ) JH Agent-based models 20/ 42
  • 25. Sequential Monte Carlo … X0 X1 XT Y0 Y1 YT … ✓ X X X For time t = 0 and particle p ∈ [1 : P] sample ancestor A (p) 0 ∼ R W (1) 0 , . . . , W (P) 0 , resampled particle: X A (p) 0 0 JH Agent-based models 20/ 42
  • 26. Sequential Monte Carlo … X0 X1 XT Y0 Y1 YT … ✓ X X X For time t = 1 and particle p ∈ [1 : P] sample X (p) 1 ∼ q1(x1|X A (p) 0 0 , θ) JH Agent-based models 20/ 42
  • 27. Sequential Monte Carlo … X0 X1 XT Y0 Y1 YT … ✓ X X X For time t = 1 and particle p ∈ [1 : P] weight W n 1 ∝ w1(X A (p) 0 0 , X (p) 1 ) JH Agent-based models 20/ 42
  • 28. Sequential Monte Carlo … X0 X1 XT Y0 Y1 YT … ✓ X X X X For time t = 1 and particle p ∈ [1 : P] sample ancestor A (p) 1 ∼ R W (1) 1 , . . . , W (P) 1 , resampled particle: X A (p) 1 1 JH Agent-based models 20/ 42
  • 29. Sequential Monte Carlo … X0 X1 XT Y0 Y1 YT … ✓ X X X X X X X Repeat for time t ∈ [2 : T]. JH Agent-based models 20/ 42
  • 30. Sequential Monte Carlo … X0 X1 XT Y0 Y1 YT … ✓ X X X X X X X Repeat for time t ∈ [2 : T]. Note this is for a given θ! JH Agent-based models 20/ 42
  • 31. Likelihood estimation • Weight functions (wt)t∈[0:T] and proposals distributions (qt)t∈[0:T] have to satisfy w0(x0) T Y t=1 wt(xt−1, xt) = pθ(x0:T , y0:T ) q(x0:T |θ) where q(x0:T |θ) = q0(x0|θ) QT t=1 qt(xt|xt−1, θ) • We can compute a marginal likelihood estimator p̂θ(y0:T ) =    1 P P X p=1 w0(X (p) 0 )       T Y t=1 1 P P X p=1 wt(X (A (p) t−1) t−1 , X (p) t )    • Unbiasedness and consistency as P → ∞ follow from Del Moral (2004) JH Agent-based models 21/ 42
  • 32. Bootstrap particle filter • The bootstrap particle filter (BPF) of Gordon et al. (1993) employs the proposal distributions q0(x0|θ) = µθ(x0), qt(xt|xt−1, θ) = fθ(xt|xt−1) and weight functions wt(xt) = gθ(yt|xt) • BPF can be readily implemented as simulating the latent process is straightforward • However, it suffers from curse of dimensionality for large N – need large P to control variance of p̂θ(y0:T ) – p̂θ(y0:T ) can collapse to zero JH Agent-based models 22/ 42
  • 33. Likelihood estimation • Efficiency of SMC crucially relies on the choice of proposal distributions • Poor performance of BPF is not surprising, as it does not use any information from the observations • We show how to implement the fully adapted auxiliary particle filter that accounts for the next observation • We propose a novel controlled SMC method that takes the entire observation sequence into account JH Agent-based models 23/ 42
  • 34. Auxiliary particle filter • The auxiliary particle filter (APF) was introduced in Pitt and Shephard (1999) and Carpenter et al. (1999) • It employs the proposal distributions q0(x0|θ) = pθ(x0|y0), qt(xt|xt−1, θ) = pθ(xt|xt−1, yt) and weight functions wt(xt−1) = pθ(yt|xt−1) • Sampling from these proposals and evaluating these weights are not always tractable JH Agent-based models 24/ 42
  • 35. Auxiliary particle filter • The predictive likelihood is pθ(yt|xt−1) = X xt ∈{0,1}N fθ(xt|xt−1)gθ(yt|xt) JH Agent-based models 25/ 42
  • 36. Auxiliary particle filter • The predictive likelihood is pθ(yt|xt−1) = X xt ∈{0,1}N fθ(xt|xt−1)gθ(yt|xt) = X xt ∈{0,1}N N Y n=1 Ber(xn t ; αn (xt−1))Bin(yt; I(xt), ρ) JH Agent-based models 25/ 42
  • 37. Auxiliary particle filter • The predictive likelihood is pθ(yt|xt−1) = X xt ∈{0,1}N fθ(xt|xt−1)gθ(yt|xt) = X xt ∈{0,1}N N Y n=1 Ber(xn t ; αn (xt−1))Bin(yt; I(xt), ρ) = N X it =yt PoiBin(it; αn (xt−1))Bin(yt; it, ρ) since the sum of independent Bernoulli with non-identical success probabilities follows a Poisson binomial distribution JH Agent-based models 25/ 42
  • 38. Auxiliary particle filter • The predictive likelihood is pθ(yt|xt−1) = X xt ∈{0,1}N fθ(xt|xt−1)gθ(yt|xt) = X xt ∈{0,1}N N Y n=1 Ber(xn t ; αn (xt−1))Bin(yt; I(xt), ρ) = N X it =yt PoiBin(it; αn (xt−1))Bin(yt; it, ρ) since the sum of independent Bernoulli with non-identical success probabilities follows a Poisson binomial distribution • Poisson binomial PMF costs O(N2) to compute (Chen and Liu, 1997) JH Agent-based models 25/ 42
  • 39. Auxiliary particle filter • To sample from pθ(xt|xt−1, yt), we augment It = I(Xt) as an auxiliary variable pθ(xt, it|xt−1, yt) = pθ(it|xt−1, yt)pθ(xt|xt−1, it) • Conditional distribution of the number of infections is pθ(it|xt−1, yt) = PoiBin(it; αn(xt−1))Bin(yt; it, ρ) pθ(yt|xt−1) • Distribution of agent states conditioned on their sum is a conditioned Bernoulli pθ(xt|xt−1, it) = CondBer(xt; α(xt−1), it), which costs O(N2) to sample (Chen and Liu, 1997) JH Agent-based models 26/ 42
  • 40. Auxiliary particle filter • Hence the overall cost of APF is O(N2TP) • We can reduce the cost to O(N log(N)TP) using two ideas • Reduce cost of Poisson binomial PMF evaluation to O(N) using translated Poisson approximation at a bias of O(N−1/2) (Barbour and Ćekanavićius, 2002) • Reduce cost of conditioned Bernoulli sampling to O(N log(N)) using Markov chain Monte Carlo (Heng, Jacob and Ju, 2020) JH Agent-based models 27/ 42
  • 41. Controlled sequential Monte Carlo • We introduce a novel implementation of the controlled SMC (cSMC) proposed by Heng et al. (2020) • The optimal proposal that gives a zero variance marginal likelihood estimator is the smoothing distribution pθ(x0:T |y0:T ) = pθ(x0|y0:T ) T Y t=1 pθ(xt|xt−1, yt:T ) • At time t ∈ [1 : T], the transition is pθ(xt|xt−1, yt:T ) = fθ(xt|xt−1)ψ? t (xt) fθ(ψ? t |xt−1) • ψ? t (xt) = p(yt:T |xt) is the backward information filter (BIF) and fθ(ψ? t |xt−1) = P xt ∈{0,1}N fθ(xt|xt−1)ψ? t (xt) JH Agent-based models 28/ 42
  • 42. Controlled sequential Monte Carlo • BIF satisfies the backward recursion ψ? T (xT ) = gθ(yT |xT ), ψ? t (xt) = gθ(yt|xt)fθ(ψ? t+1|xt), t ∈ [0 : T − 1] • This costs O(22NT) to compute, so approximations are necessary when N is large • Our approach is based on dimensionality reduction by coarse-graining the agent-based model • We approximate the model with heterogenous agents by a model with homogenous agents whose individual infection and recovery rates given by their population averages, i.e. λn ≈ λ̄ = N−1 PN n=1 λn and γn ≈ γ̄ = N−1 PN n=1 γn JH Agent-based models 29/ 42
  • 43. Controlled sequential Monte Carlo • BIF of the approximate model ψt(I(xt)) can be computed exactly in O(N3T) cost, and approximately in O(N2T) • We then define the SMC proposal transition as qt(xt|xt−1, θ) = fθ(xt|xt−1)ψt(I(xt)) fθ(ψt|xt−1) , and employ the weight function wt(xt) = gθ(yt|xt)fθ(ψt+1|xt) ψt(I(xt)) • Sampling and weighting can be done in a similar manner as APF JH Agent-based models 30/ 42
  • 44. Controlled sequential Monte Carlo • Quality of proposals depend on the coarse-graining approximation • We establish a bound on the Kullback–Leibler divergence from q(x0:T |θ) and pθ(x0:T |y0:T ) (Chatterjee and Diaconis, 2018) • Finer-grained approximations can be obtained using clustering of the infection and recovery rates, at the expense of increased cost JH Agent-based models 31/ 42
  • 45. Better performance with more information, but higher cost method ● ● ● BPF APF cSMC 0: O( ) O( 2 ) O( 2 ) + O( 3 ) ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 25 50 75 100 0 25 50 75 time ESS% method ● ● ● BPF APF cSMC Figure: Effective sample size for N = 100 fully connected agents and T = 90 time steps JH Agent-based models 32/ 42
  • 46. Informative observations ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● original outliers 0 25 50 75 90 0 25 50 75 100 0 25 50 75 100 time ESS% method ● ● ● BPF APF cSMC ∈ {25, 50, 75} 2 Figure: Bottom panel: observations at t ∈ {25, 50, 75} are replaced by min(2yt, N) JH Agent-based models 33/ 42
  • 47. Marginal likelihood estimation ˆθ( 0: ) θ = 100, = 90 = 2 Figure: Variance of log p̂θ(y0:T ) at data generating parameter JH Agent-based models 34/ 42
  • 48. Marginal likelihood estimation ˆθ( 0: ) βλ != β# λ β! λ = (−1, 2) βλ = (−3 = 90 = 2 Figure: Variance of log p̂θ(y0:T ) at a less likely parameter JH Agent-based models 35/ 42
  • 49. Numerical illustration • Estimated log-likelihood function as the number of observations increases ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● T = 10 T = 30 T = 90 −4 0 4 −4 0 4 −4 0 4 −4 0 4 βλ 1 β λ 2 −100 −20 −10−5 0 log−likelihood Figure: MLE (black dot) and DGP (red dot) JH Agent-based models 36/ 42
  • 50. Numerical illustration • Estimated log-likelihood function as the number of observations increases ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● T = 10 T = 30 T = 90 −4 0 4 −4 0 4 −4 0 4 −4 0 4 βλ 2 β γ 2 −100 −20 −10−5 0 log−likelihood Figure: MLE (black dot) and DGP (red dot) JH Agent-based models 37/ 42
  • 51. Influenza outbreak in a boarding school • SMC methods can be readily deployed within particle MCMC for parameter and state inference (Andrieu, Doucet and Holenstein, 2010) JH Agent-based models 38/ 42
  • 52. Influenza outbreak in a boarding school • SMC methods can be readily deployed within particle MCMC for parameter and state inference (Andrieu, Doucet and Holenstein, 2010) • Choice of network can impact inference results 0.0 0.2 0.4 0.6 0 2 4 6 8 λ γ Density Network Erdos−Renyi Full Small−world D=4 Small−world D=2 JH Agent-based models 38/ 42
  • 53. Smallpox outbreak in a church community • We considered APF and cSMC for the agent-based SIR model ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0 10 20 30 0 30 60 89 date removal JH Agent-based models 39/ 42
  • 54. Smallpox outbreak in a church community • We considered APF and cSMC for the agent-based SIR model • We can analyze vaccine efficacy in this framework ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● 0 10 20 30 0 30 60 89 date removal 0.000 0.025 0.050 0.075 0.100 0 1 2 3 4 5 6 R0 proportion vaccinated Yes No JH Agent-based models 39/ 42
  • 55. Concluding remarks • A general alternative to SMC methods is MCMC algorithms to sample from the smoothing distribution • Preprint https://arxiv.org/abs/2101.12156 • R package https://github.com/nianqiaoju/agents • Slides https://sites.google.com/view/jeremyheng/ JH Agent-based models 40/ 42
  • 56. References C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(3):269–342, 2010. A. Barbour and V. Ćekanavićius. Total variation asymptotics for sums of independent integer random variables. The Annals of Probability, 30(2):509–545, 2002. J. Carpenter, P. Clifford, and P. Fearnhead. Improved particle filter for nonlinear problems. IEE Proceedings-Radar, Sonar and Navigation, 146(1):2–7, 1999. S. Chatterjee and P. Diaconis. The sample size required in importance sampling. The Annals of Applied Probability, 28(2):1099–1135, 2018. S. Chen and J. Liu. Statistical applications of the Poisson-Binomial and conditional Bernoulli distributions. Statistica Sinica, 875–892, 1997. P. Del Moral. Feynman-kac formulae: Genealogical and Interacting Particle Systems with Applications. Springer-Verlag New York, 2004. JH Agent-based models 41/ 42
  • 57. References N. Gordon, D. Salmond, and A. Smith. Novel approach to nonlinear/non-gaussian Bayesian state estimation. In IEE proceedings F (radar and signal processing), volume 140, pages 107–113. IET, 1993. J. Heng, A. Bishop, G. Deligiannidis, and A. Doucet. Controlled sequential Monte Carlo. Annals of Statistics, 48(5):2904–2929, 2020. J. Heng, P. Jacob, and N. Ju. A simple Markov chain for independent Bernoulli variables conditioned on their sum. arXiv preprint arXiv:2012.03103, 2020. M. Pitt and N. Shephard. Filtering via simulation: Auxiliary particle filters. Journal of the American statistical association, 94(446):590–599, 1999. JH Agent-based models 42/ 42