The document proposes a delayed acceptance method for accelerating Metropolis-Hastings algorithms. It begins with a motivating example of non-informative inference for mixture models, where computing the prior density is costly. It then introduces the delayed acceptance approach, which splits the acceptance probability into factors that are tested sequentially, so that the full acceptance ratio need not be computed at every iteration. It establishes that the delayed acceptance chain is reversible and provides bounds on its spectral gap and asymptotic variance relative to the original chain. Finally, it discusses optimising delayed acceptance by balancing expected square jump distance against cost per iteration to maximise efficiency.
Delayed acceptance for Metropolis-Hastings algorithms
1. Delayed acceptance for accelerating Metropolis-Hastings
Christian P. Robert
Université Paris-Dauphine, Paris & University of Warwick, Coventry
Joint with M. Banterle, C. Grazian, & A. Lee
2. The next MCMSkv meeting:
• Computational Bayes section of ISBA major meeting:
• MCMSki V in Lenzerheide, Switzerland, Jan. 5-7, 2016
• MCMC, pMCMC, SMC², HMC, ABC, (ultra-) high-dimensional computation, BNP, QMC, deep learning, &tc
• Plenary speakers: S. Scott, S. Fienberg, D. Dunson, M. Jordan, K. Latuszynski, T. Lelièvre
• Call for contributed posters and “breaking news” session opened
• “Switzerland in January, where else...?!”
5. Non-informative inference for mixture models
Standard mixture of distributions model
$$\sum_{i=1}^{k} w_i\, f(x\mid\theta_i)\,, \quad\text{with}\quad \sum_{i=1}^{k} w_i = 1\,. \qquad (1)$$
[Titterington et al., 1985; Frühwirth-Schnatter (2006)]
Jeffreys' prior for mixtures not available, for computational reasons: it has not been tested so far
[Jeffreys, 1939]
Warning: Jeffreys' prior improper in some settings
[Grazian & Robert, 2015]
6. Non-informative inference for mixture models
Grazian & Robert (2015) consider genuine Jeffreys' prior for the complete set of parameters in (1), deduced from Fisher's information matrix
Computation of prior density costly, relying on many integrals like
$$\int_{\mathcal{X}} \frac{\partial^2 \log\left(\sum_{i=1}^{k} w_i f(x\mid\theta_i)\right)}{\partial\theta_h\,\partial\theta_j}\;\left(\sum_{i=1}^{k} w_i f(x\mid\theta_i)\right)\mathrm{d}x$$
Integrals with no analytical expression, hence involving numerical or Monte Carlo (costly) integration
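To make the cost concrete, here is a minimal Monte Carlo sketch of one such entry, for a toy two-component Gaussian mixture in the means only; the function names and the finite-difference shortcut are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mix_logpdf(x, w, mu):
    """log of sum_i w_i N(x | mu_i, 1): toy Gaussian mixture, unit variances."""
    return np.log(sum(wi * norm.pdf(x, mi, 1.0) for wi, mi in zip(w, mu)))

def integral_entry_mc(w, mu, h, j, n=100_000, eps=1e-3):
    """Monte Carlo estimate of the (h, j) integral above, i.e.
    E_p[d^2 log p(x) / dmu_h dmu_j] with x ~ p (the Fisher entry is its
    negative); the mixed partial uses a central finite-difference stencil."""
    z = rng.choice(len(w), size=n, p=np.asarray(w))    # component labels
    x = rng.normal(np.asarray(mu, float)[z], 1.0)      # draws from the mixture
    def lp(dh, dj):
        m = np.asarray(mu, float).copy()
        m[h] += dh
        m[j] += dj
        return mix_logpdf(x, w, m)
    d2 = (lp(eps, eps) - lp(eps, -eps) - lp(-eps, eps) + lp(-eps, -eps)) / (4 * eps**2)
    return d2.mean()

# e.g. the entry for the first mean of an equal-weight two-component mixture
print(integral_entry_mc([0.5, 0.5], [0.0, 3.0], 0, 0))
```

One such estimate per matrix entry, at every prior evaluation, is what makes the prior the expensive factor below.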
7. Non-informative inference for mixture models
When building a Metropolis-Hastings proposal over the (w_i, θ_i)'s, prior ratio more expensive than likelihood and proposal ratios
Suggestion: split the acceptance rule
$$\alpha(x, y) := 1 \wedge r(x, y), \qquad r(x, y) := \frac{\pi(y\mid D)\, q(y, x)}{\pi(x\mid D)\, q(x, y)}$$
into
$$\tilde\alpha(x, y) := \left\{1 \wedge \frac{f(D\mid y)\, q(y, x)}{f(D\mid x)\, q(x, y)}\right\} \times \left\{1 \wedge \frac{\pi(y)}{\pi(x)}\right\}$$
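In code, this split amounts to an early-exit test on the cheap likelihood/proposal factor before the expensive prior factor is ever touched; a minimal sketch, assuming user-supplied `loglik`, `logprior`, `propose` and `logq` functions (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def da_step_two_stage(x, loglik, logprior, propose, logq):
    """One delayed-acceptance step: test the likelihood/proposal ratio first,
    and evaluate the expensive prior ratio only if the first stage survives."""
    y = propose(x)
    # Stage 1: 1 ∧ f(D|y)q(y,x) / f(D|x)q(x,y), on the log scale
    log_rho1 = loglik(y) - loglik(x) + logq(y, x) - logq(x, y)
    if np.log(rng.uniform()) >= log_rho1:
        return x                       # early rejection: prior never computed
    # Stage 2: 1 ∧ π(y)/π(x), reached only with probability 1 ∧ ρ1
    log_rho2 = logprior(y) - logprior(x)
    if np.log(rng.uniform()) >= log_rho2:
        return x
    return y
```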
8. The “Big Data” plague
Simulation from posterior distribution with large sample size n
• Computing time at least of order O(n)
• solutions using likelihood decomposition
$$\prod_{i=1}^{n} \ell(\theta \mid x_i)$$
and handling subsets on different processors (CPU), graphical units (GPU), or computers
[Korattikara et al. (2013), Scott et al. (2013)]
• no consensus on method of choice, with instabilities from removing most prior input and uncalibrated approximations
[Neiswanger et al. (2013), Wang and Dunson (2013)]
9. Proposed solution
“There is no problem an absence of decision cannot solve.” Anonymous
1 Motivating example
2 Proposed solution
3 Validation of the method
4 Optimizing DA
10. Delayed acceptance
Given α(x, y) := 1 ∧ r(x, y), factorise
$$r(x, y) = \prod_{k=1}^{d} \rho_k(x, y)$$
under the constraint ρ_k(x, y) = ρ_k(y, x)⁻¹
Delayed Acceptance Markov kernel given by
$$\tilde P(x, A) := \int_A q(x, y)\,\tilde\alpha(x, y)\,\mathrm{d}y + \left(1 - \int_{\mathcal{X}} q(x, y)\,\tilde\alpha(x, y)\,\mathrm{d}y\right) \mathbf{1}_A(x)$$
where
$$\tilde\alpha(x, y) := \prod_{k=1}^{d} \{1 \wedge \rho_k(x, y)\}.$$
11. Delayed acceptance
Algorithm 1 Delayed Acceptance
To sample from P̃(x, ·):
1 Sample y ∼ Q(x, ·).
2 For k = 1, . . . , d:
• with probability 1 ∧ ρ_k(x, y) continue
• otherwise stop and output x
3 Output y
Arrange terms in the product so that the most computationally intensive ones are calculated ‘at the end’, hence least often
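A direct transcription of Algorithm 1, assuming the factorisation is given as a list of functions returning log ρ_k(x, y), ordered so the cheapest come first:

```python
import numpy as np

rng = np.random.default_rng(1)

def delayed_acceptance_step(x, propose, log_rhos):
    """One draw from P~(x, .): sample y ~ Q(x, .), then test the factors
    1 ∧ ρ_k(x, y) sequentially, stopping (and keeping x) at the first failure."""
    y = propose(x)
    for log_rho in log_rhos:           # cheapest factors first, costliest last
        if np.log(rng.uniform()) >= log_rho(x, y):
            return x                   # early stop: later factors never evaluated
    return y
```

Since each stage is an independent uniform test, the overall acceptance probability is the product of the 1 ∧ ρ_k(x, y), matching the definition of α̃ above.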
12. Delayed acceptance
Algorithm 1 Delayed Acceptance
To sample from P̃(x, ·):
1 Sample y ∼ Q(x, ·).
2 For k = 1, . . . , d:
• with probability 1 ∧ ρ_k(x, y) continue
• otherwise stop and output x
3 Output y
Generalisation of Fox and Nicholls (1997) and Christen and Fox (2005), where testing for acceptance with an approximation before computing the exact likelihood was first suggested
More recent occurrences in the literature
[Golightly et al. (2014), Shestopaloff and Neal (2013)]
13. Potential drawbacks
• Delayed Acceptance efficiently reduces computing cost only when the approximation π̃ is “good enough” or “flat enough”
• Probability of acceptance always smaller than in the original Metropolis–Hastings scheme
• Decomposition of the original data into likelihood bits may however lead to deterioration of algorithmic properties without impacting computational efficiency...
• ...e.g., case of a term explosive at x = 0 and computed by itself: leaving x = 0 near impossible
14. Potential drawbacks
Figure: (left) Fit of delayed Metropolis–Hastings algorithm on a Beta-binomial posterior p|x ∼ Be(x + a, N + b − x) when N = 100, x = 32, a = 7.5 and b = .5. Binomial B(N, p) likelihood replaced with a product of 100 Bernoulli terms. Histogram based on 10⁵ iterations, with overall acceptance rate of 9%; (centre) raw sequence of p's in the Markov chain; (right) autocorrelogram of the above sequence.
15. The “Big Data” plague
Delayed Acceptance intended for likelihoods or priors, but not a clear solution for “Big Data” problems
1 all product terms must be computed
2 all terms previously computed either stored for future comparison or recomputed
3 sequential approach limits parallel gains...
4 ...unless a prefetching scheme is added to the delays
[Angelino et al. (2014), Strid (2010)]
16. Validation of the method
1 Motivating example
2 Proposed solution
3 Validation of the method
4 Optimizing DA
17. Validation of the method
Lemma (1)
For any Markov chain with transition kernel Π of the form
$$\Pi(x, A) = \int_A q(x, y)\, a(x, y)\,\mathrm{d}y + \left(1 - \int_{\mathcal{X}} q(x, y)\, a(x, y)\,\mathrm{d}y\right) \mathbf{1}_A(x),$$
and satisfying detailed balance, the function a(·) satisfies (for π-a.e. x, y)
$$\frac{a(x, y)}{a(y, x)} = r(x, y).$$
18. Validation of the method
Lemma (2)
(X̃_n)_{n≥1}, the Markov chain associated with P̃, is a π-reversible Markov chain.
Proof.
From Lemma 1 we just need to check that
$$\frac{\tilde\alpha(x, y)}{\tilde\alpha(y, x)} = \prod_{k=1}^{d} \frac{1 \wedge \rho_k(x, y)}{1 \wedge \rho_k(y, x)} = \prod_{k=1}^{d} \rho_k(x, y) = r(x, y),$$
since ρ_k(y, x) = ρ_k(x, y)⁻¹ and (1 ∧ a)/(1 ∧ a⁻¹) = a
19. Comparisons of P and P̃
The acceptance probability ordering
$$\tilde\alpha(x, y) = \prod_{k=1}^{d} \{1 \wedge \rho_k(x, y)\} \le 1 \wedge \prod_{k=1}^{d} \rho_k(x, y) = 1 \wedge r(x, y) = \alpha(x, y),$$
follows from (1 ∧ a)(1 ∧ b) ≤ (1 ∧ ab) for a, b ∈ ℝ₊.
Remark
By construction of P̃,
$$\mathrm{var}(f, P) \le \mathrm{var}(f, \tilde P)$$
for any f ∈ L²(X, π), using the Peskun ordering (Peskun, 1973; Tierney, 1998), since α̃(x, y) ≤ α(x, y) for any (x, y) ∈ X².
20. Comparisons of P and P̃
Condition (1)
Defining A := {(x, y) ∈ X² : r(x, y) ≥ 1}, there exists c such that
$$\inf_{(x,y)\in A}\; \min_{k\in\{1,\ldots,d\}} \rho_k(x, y) \ge c.$$
Ensures that when α(x, y) = 1 the acceptance probability α̃(x, y) is uniformly lower-bounded by a positive constant.
Reversibility implies α̃(x, y) uniformly lower-bounded by a constant multiple of α(x, y) for all x, y ∈ X.
21. Comparisons of P and P̃
Condition (1)
Defining A := {(x, y) ∈ X² : r(x, y) ≥ 1}, there exists c such that
$$\inf_{(x,y)\in A}\; \min_{k\in\{1,\ldots,d\}} \rho_k(x, y) \ge c.$$
Proposition (1)
Under Condition (1), Lemma 34 in Andrieu et al. (2013) implies
$$\mathrm{Gap}(\tilde P) \ge \epsilon\, \mathrm{Gap}(P)$$
and
$$\mathrm{var}(f, \tilde P) \le (\epsilon^{-1} - 1)\,\mathrm{var}_\pi(f) + \epsilon^{-1}\,\mathrm{var}(f, P)$$
with f ∈ L²₀(E, π), ε = c^{d−1}.
22. Comparisons of P and P̃
Proposition (1)
Under Condition (1), Lemma 34 in Andrieu et al. (2013) implies
$$\mathrm{Gap}(\tilde P) \ge \epsilon\, \mathrm{Gap}(P)$$
and
$$\mathrm{var}(f, \tilde P) \le (\epsilon^{-1} - 1)\,\mathrm{var}_\pi(f) + \epsilon^{-1}\,\mathrm{var}(f, P)$$
with f ∈ L²₀(E, π), ε = c^{d−1}.
Hence if P has a right spectral gap, then so does P̃.
Plus, quantitative bounds on the asymptotic variance of MCMC estimates using (X̃_n)_{n≥1} in relation to those using (X_n)_{n≥1} are available
23. Comparisons of P and P̃
Easiest use of the above: modify any candidate factorisation
Given a factorisation of r
$$r(x, y) = \prod_{k=1}^{d} \rho_k(x, y)\,,$$
satisfying the balance condition, define a sequence of functions ρ̃_k such that both r(x, y) = ∏_{k=1}^{d} ρ̃_k(x, y) and Condition 1 hold.
24. Comparisons of P and P̃
Take c ∈ (0, 1], define b = c^{1/(d−1)} and set
$$\tilde\rho_k(x, y) := \min\left\{\frac{1}{b},\; \max\{b,\, \rho_k(x, y)\}\right\}, \qquad k \in \{1, \ldots, d - 1\},$$
and
$$\tilde\rho_d(x, y) := \frac{r(x, y)}{\prod_{k=1}^{d-1} \tilde\rho_k(x, y)}\,.$$
Then:
Proposition (2)
Under this scheme, the previous proposition holds with ε = c² = b^{2(d−1)}.
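The capping scheme is mechanical to implement on the log scale, where min{1/b, max{b, ρ}} becomes a clip to [log b, −log b]; a sketch, assuming the original factors come as log ρ_k functions (and d ≥ 2):

```python
import numpy as np

def capped_factors(log_rhos, c):
    """Return the modified factors of Proposition 2: the first d-1 log-factors
    are clipped to [log b, -log b] with b = c**(1/(d-1)), and the last one
    absorbs the remainder so the product is still r(x, y)."""
    d = len(log_rhos)                 # requires d >= 2
    log_b = np.log(c) / (d - 1)       # log b <= 0 since c in (0, 1]

    def clipped(log_rho):
        return lambda x, y: np.clip(log_rho(x, y), log_b, -log_b)

    tilde = [clipped(lr) for lr in log_rhos[:-1]]

    def last(x, y):
        # log rho~_d = log r - sum of the clipped log-factors
        log_r = sum(lr(x, y) for lr in log_rhos)
        return log_r - sum(t(x, y) for t in tilde)

    return tilde + [last]
```

The returned list can be fed directly to the `delayed_acceptance_step` sketch above, now with the guaranteed spectral-gap bound.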
25. Proof of Proposition 1
For any f ∈ L²(E, µ) define the Dirichlet form associated with a µ-reversible Markov kernel Π : E × B(E) as
$$\mathcal{E}_\Pi(f) := \frac{1}{2} \int \mu(\mathrm{d}x)\,\Pi(x, \mathrm{d}y)\,[f(x) - f(y)]^2\,.$$
The (right) spectral gap of a generic µ-reversible Markov kernel has the following variational representation
$$\mathrm{Gap}(\Pi) := \inf_{f \in L^2_0(E, \mu)} \frac{\mathcal{E}_\Pi(f)}{\langle f, f\rangle_\mu}\,.$$
26. Proof of Proposition 1
Lemma (Andrieu et al., 2013, Lemma 34)
Let Π₁ and Π₂ be µ-reversible Markov transition kernels of µ-irreducible and aperiodic Markov chains, and assume that there exists ε > 0 such that for any f ∈ L²₀(E, µ)
$$\mathcal{E}_{\Pi_2}(f) \ge \epsilon\, \mathcal{E}_{\Pi_1}(f)\,,$$
then
$$\mathrm{Gap}(\Pi_2) \ge \epsilon\, \mathrm{Gap}(\Pi_1)$$
and
$$\mathrm{var}(f, \Pi_2) \le (\epsilon^{-1} - 1)\,\mathrm{var}_\mu(f) + \epsilon^{-1}\,\mathrm{var}(f, \Pi_1), \qquad f \in L^2_0(E, \mu).$$
28. Proposal Optimisation
Explorative performance of random-walk MCMC strongly dependent on the proposal distribution
Finding the optimal scale parameter leads to efficient ‘jumps’ in the state space, as measured by
1 expected square jump distance (ESJD)
2 overall acceptance rate (α)
3 asymptotic variance of the ergodic average var(f, K)
[Roberts et al. (1997), Sherlock and Roberts (2009)]
Provides practitioners with an ‘auto-tune’ version of the resulting random-walk MCMC algorithm
29. Proposal Optimisation
Quest for optimisation focussing on two main cases:
1 d → ∞: Roberts et al. (1997) give conditions under which each marginal chain converges toward a Langevin diffusion
Maximising the speed of that diffusion implies minimisation of the ACT, with τ moreover free of the functional f
Remember var(f, K) = τ_f × var_π(f) where
$$\tau_f = 1 + 2 \sum_{i=1}^{\infty} \mathrm{Cor}(f(X_0), f(X_i))$$
30. Proposal Optimisation
Quest for optimisation focussing on two main cases:
2 finite d: Sherlock and Roberts (2009) consider unimodal elliptically symmetric targets and show that a proxy for the ACT is the Expected Square Jumping Distance (ESJD), defined as
$$\mathbb{E}\,\|X' - X\|_\beta^2 = \mathbb{E}\left[\sum_{i=1}^{d} \beta_i^{-2}\, (X_i' - X_i)^2\right]$$
As d → ∞, ESJD converges to the speed of the diffusion process described in Roberts et al. (1997) [close to asymptotia: d ≥ 5]
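Given a stored chain, the ESJD is a one-line empirical average; a minimal sketch, with the β_i scalings optional:

```python
import numpy as np

def esjd(chain, beta=None):
    """Empirical expected square jump distance of a stored chain
    (n_iter x d array); beta optionally gives the coordinate scalings."""
    diffs = np.diff(chain, axis=0)               # X^{(t+1)} - X^{(t)}
    if beta is not None:
        diffs = diffs / np.asarray(beta)         # beta_i^{-1} (X'_i - X_i)
    return np.mean(np.sum(diffs**2, axis=1))     # mean of sum_i beta_i^{-2} D_i^2
```

Note that rejected moves contribute zero jumps, so ESJD penalises both over-cautious and over-ambitious scalings.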
31. Proposal Optimisation
“For a moment, nothing happened. Then, after a second or so, nothing continued to happen.” — D. Adams, THGG
When considering efficiency for Delayed Acceptance, focus on execution time as well:
$$\mathrm{Eff} := \frac{\mathrm{ESJD}}{\text{cost per iteration}}$$
similar to Sherlock et al. (2013) for pseudo-marginal MCMC
32. Delayed Acceptance optimisation
Set of assumptions:
(H1) Assume [for simplicity's sake] that Delayed Acceptance operates on two factors only, i.e.,
$$r(x, y) = \rho_1(x, y) \times \rho_2(x, y), \qquad \tilde\alpha(x, y) = \prod_{i=1}^{2} (1 \wedge \rho_i(x, y))$$
Restriction also considers the ideal setting where a computationally cheap approximation f̃(·) is available and precise enough so that
$$\rho_2(x, y) = r(x, y)/\rho_1(x, y) = \frac{\pi(y)}{\pi(x)} \times \frac{\tilde f(x)}{\tilde f(y)} \approx 1\,.$$
33. Delayed Acceptance optimisation
(H2) Assume that the target distribution satisfies (A1) and (A2) in Roberts et al. (1997), which are regularity conditions on π and its first and second derivatives, and that
$$\pi(x) = \prod_{i=1}^{n} f(x_i)\,.$$
(H3) Consider only a random walk proposal
$$y = x + \sqrt{\ell^2/d}\; Z \qquad\text{where}\qquad Z \sim \mathcal{N}(0, I_d)$$
34. Delayed Acceptance optimisation
(H4) Assume that the cost of computing f̃(·), c say, is proportional to the cost of computing π(·), C say, with c = δC.
Normalising by C = 1, the average total cost per iteration of the DA chain is
$$\delta + \mathbb{E}[\tilde\alpha]$$
and the efficiency of the proposed method under the above conditions is
$$\mathrm{Eff}(\delta, \ell) = \frac{\mathrm{ESJD}}{\delta + \mathbb{E}[\tilde\alpha]}$$
35. Delayed Acceptance optimisation
Lemma
Under conditions (H1)–(H4) on π(·), q(·, ·) and on α̃(·, ·) = (1 ∧ ρ₁(x, y)):
As d → ∞
$$\mathrm{Eff}(\delta, \ell) \approx \frac{h(\ell)}{\delta + \mathbb{E}[\tilde\alpha]} = \frac{2\,\ell^2\, \Phi(-\ell\sqrt{I}/2)}{\delta + 2\,\Phi(-\ell\sqrt{I}/2)}$$
$$a(\ell) \approx \mathbb{E}[\tilde\alpha] = 2\,\Phi(-\ell\sqrt{I}/2)$$
where $I := \mathbb{E}\left[\left(\pi'(x)/\pi(x)\right)^2\right]$ as in Roberts et al. (1997).
36. Delayed Acceptance optimisation
Proposition (3)
Under the conditions of Lemma 3, the optimal average acceptance rate α*(δ) is independent of I.
Proof.
Consider Eff(δ, ℓ) in terms of (δ, a(ℓ)):
$$a = g(\ell) = 2\,\Phi\left(-\frac{\ell\sqrt{I}}{2}\right) \quad\Longleftrightarrow\quad \ell = g^{-1}(a) = -\Phi^{-1}\left(\frac{a}{2}\right)\frac{2}{\sqrt{I}}$$
$$\mathrm{Eff}(\delta, a) = \frac{4}{I}\,\Phi^{-1}\left(\frac{a}{2}\right)^2 \frac{a}{\delta + a} = \frac{4}{I} \times \frac{1}{\delta + a}\,\Phi^{-1}\left(\frac{a}{2}\right)^2 a$$
Since I only enters through the multiplicative constant 4/I, the maximiser in a is the same for every I.
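The independence claim is easy to check numerically: drop the constant 4/I and maximise a ↦ a Φ⁻¹(a/2)²/(δ + a) over (0, 1). A quick sketch with scipy:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def neg_eff(a, delta):
    """Negative of Eff(delta, a) up to the constant 4/I."""
    return -(a * norm.ppf(a / 2) ** 2) / (delta + a)

for delta in (0.01, 0.1, 0.3, 1.0):
    res = minimize_scalar(neg_eff, bounds=(1e-6, 1 - 1e-6),
                          args=(delta,), method="bounded")
    print(f"delta = {delta:5.2f}  ->  optimal acceptance a* = {res.x:.3f}")
```

As δ grows, the optimum drifts back towards the familiar 0.234 of standard random-walk Metropolis, consistent with the figure on the next slide.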
37. Delayed Acceptance optimisation
Figure: top panels: ℓ*(δ) and α*(δ) as the relative cost varies. For δ ≫ 1 the optimal values converge to the values computed for the standard M-H (red, dashed). Bottom panels: close-up of the interesting region 0 < δ < 1.
38. Robustness wrt (H1)–(H4)
Figure: Efficiency of various DA wrt acceptance rate of the chain; colours represent the scaling factor in the variance of ρ₁ wrt π; δ = 0.3.
39. Practical optimisation
If computing cost comparable for all terms in
$$r(x, y) = \prod_{i=1}^{K} \xi_i(x, y)$$
• rank entries according to the success rates observed on a preliminary run
• start with the ratios with highest variances
• rank factors by correlation with the full Metropolis–Hastings ratio (see the sketch below)
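One way to realise the correlation ranking from a pilot run, assuming the factors are available as log ξ_i functions and the pilot proposals are stored as (x, y) pairs (a sketch, not the authors' code):

```python
import numpy as np

def rank_factors(pairs, log_xis):
    """Rank factor functions xi_i by correlation of their log value with the
    full log Metropolis-Hastings ratio, over (x, y) pairs from a pilot run."""
    vals = np.array([[lx(x, y) for (x, y) in pairs] for lx in log_xis])
    full = vals.sum(axis=0)                        # log r = sum_i log xi_i
    corr = [np.corrcoef(v, full)[0, 1] for v in vals]
    order = np.argsort(corr)[::-1]                 # most correlated first
    return [log_xis[i] for i in order]
```

Testing the most correlated factors first makes the early stages behave most like the full acceptance test, so rejections happen as early (and as cheaply) as possible.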
40. Illustrations
Logistic regression:
• 10⁶ simulated observations with a 100-dimensional parameter space
• optimised Metropolis–Hastings with α = 0.234
• DA optimised via empirical correlation
• split the data into subsamples of 10 elements
• include the smallest number of subsamples needed to achieve 0.85 correlation
• optimise Σ against the acceptance rate

algo            ESS (av.)   ESJD (av.)
DA-MH over MH   5.47        56.18
41. Illustrations
geometric MALA:
• proposal
$$\theta' = \theta^{(i-1)} + \varepsilon^2 A^{\mathsf T}\! A\, \nabla_\theta \log \pi(\theta^{(i-1)} \mid y)/2 + \varepsilon A \upsilon$$
with position-specific A
[Girolami and Calderhead (2011), Roberts and Stramer (2002)]
• computational bottleneck in computing the 3rd derivative of π in the proposal
• G-MALA variance set to $\sigma_d^2 = \ell^2/d^{1/3}$
• 10² simulated observations with a 10-dimensional parameter space
• DA optimised via acceptance rate
$$\mathrm{Eff}(\delta, a) = -(2/K)^{2/3}\, \frac{a\, \Phi^{-1}(a/2)^{2/3}}{\delta + a(1 - \delta)}\,.$$

algo       accept   ESS/time (av.)   ESJD/time (av.)
MALA       0.661    0.04             0.03
DA-MALA    0.09     0.35             0.31
42. Illustrations
Jeffreys for mixtures:
• numerical integration for each term in the Fisher information matrix
• split between likelihood (cheap) and prior (expensive) unstable
• saving 5% of the sample for the second step
• MH and DA optimised via acceptance rate
• actual averaged gain (ESS_DA/ESS_MH)/(time_DA/time_MH) of 9.58

Algorithm   ESS (aver.)   ESJD (aver.)   time (aver.)
MH          1575.963      0.226          513.95
MH + DA     628.767       0.215          42.22