We examine the effectiveness of randomized quasi-Monte Carlo (RQMC) in improving the convergence rate of the mean integrated squared error, compared with crude Monte Carlo (MC), when estimating the density of a random variable X defined as a function over the s-dimensional unit cube (0,1)^s. We consider histograms and kernel density estimators, and we show both theoretically and empirically that RQMC estimators can achieve faster convergence rates in some situations.
This is joint work with Amal Ben Abdellah, Art B. Owen, and Florian Puchhammer.
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms (Christian Robert)
An aggregate of three papers on Rao-Blackwellisation, from Casella & Robert (1996) through Douc & Robert (2010) to Banterle et al. (2015), presented during an OxWaSP workshop on MCMC methods, Warwick, November 20, 2015.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing in which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993, Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing another gene, lin-14, at the appropriate time in the worm's development.
Two small transcripts of lin-4 (22 nt and 61 nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that cause the silencing, through RNA-RNA interactions.
Types of RNAi (non-coding RNA)
miRNA
Length: 23-25 nt
Trans-acting
Binds the target mRNA with mismatches
Translation inhibition
siRNA
Length: 21 nt
Cis-acting
Binds the target mRNA with a perfectly complementary sequence
piRNA (Piwi-interacting RNA)
Length: 25 to 36 nt
Expressed in germ cells
Regulates transposon activity
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is a large (>500 kDa) multi-protein RNA-binding complex that triggers degradation of the target mRNA.
Unwinding of the double-stranded siRNA by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: endonuclease (RNase Family III)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN :
1. PAZ (Piwi/Argonaute/Zwille): recognition of the target mRNA.
2. PIWI (P-element induced wimpy testis): breaks the phosphodiester bond of the mRNA (RNase H activity).
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... (Sérgio Sacani)
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest
imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters
spanning 0.4−0.9 µm) and novel JWST images with 14 filters spanning 0.8−5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3−31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5−15. These objects show compact half-light radii of R1/2 ∼ 50−200 pc, stellar masses of M⋆ ∼ 10^7−10^8 M⊙, and star-formation rates of SFR ∼ 0.1−1 M⊙ yr^−1. Our search finds no candidates
at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to
infer the properties of the evolving luminosity function without binning in redshift or luminosity that
marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the
impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results,
and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5
from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical
models for evolution of the dark matter halo mass function.
(May 29th, 2024) Advancements in Intravital Microscopy - Insights for Preclini... (Scintica Instrumentation)
Intravital microscopy (IVM) is a powerful tool used to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been achieved using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed-tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, responses to treatments, and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system's unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interactions, vascularization, and tumor metastasis in exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interactions between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic interventions in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
Brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adapt... (Sérgio Sacani)
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes
on Io’s surface have been monitored from both spacecraft and ground-based telescopes.
Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images
show that a plume deposit from a powerful eruption at Pillan Patera has covered part
of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive
optics at visible wavelengths.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
Richard's entangled adventures in wonderland (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Inference in generative models using the Wasserstein distance [INI]
1. Inference in generative models using the Wasserstein distance
Christian P. Robert (Paris Dauphine PSL & Warwick U.)
joint work with E. Bernton (Harvard), P.E. Jacob (Harvard), and M. Gerber (Bristol)
INI, July 2017
2. Outline
1 ABC and distance between samples
2 Wasserstein distance and computational aspects
3 Asymptotics
4 Handling time series
3. Outline
1 ABC and distance between samples
2 Wasserstein distance and computational aspects
3 Asymptotics
4 Handling time series
4. Generative model
Assumption of a data-generating distribution µ⋆^(n) for data y1:n = (y1, . . . , yn) ∈ Y^n
Parametric generative model M = {µ_θ^(n) : θ ∈ H}, such that sampling z1:n from µ_θ^(n) is feasible
Prior distribution π(θ) available as a density and as a generative model
Goal: inference on the parameters θ given the observations y1:n
5. Generic ABC
Basic (summary-less) ABC posterior with density
π_ε(θ, z1:n) ∝ π(θ) µ_θ^(n)(z1:n) 1(‖y1:n − z1:n‖ < ε) / ∫_{Y^n} 1(‖y1:n − z1:n‖ < ε) dz1:n
and ABC marginal
q_ε(θ) = ∫_{Y^n} ∏_{i=1}^n µ(dz_i|θ) 1(‖y1:n − z1:n‖ < ε) / ∫_{Y^n} 1(‖y1:n − z1:n‖ < ε) dz1:n
unbiasedly estimated by π(θ) 1(‖y1:n − z1:n‖ < ε), where z1:n ∼ µ_θ^(n)
Reminder: the ABC-posterior goes to the posterior as ε → 0
6. Summary statistics
Since the random variable ‖y1:n − z1:n‖ may have large variance, the event {‖y1:n − z1:n‖ < ε} gets rare as ε → 0
When using ‖η(y1:n) − η(z1:n)‖ < ε, based on an (insufficient) summary statistic η, variance and dimension decrease, but the q-likelihood differs from the likelihood
Arbitrariness and impact of summaries
[X et al., 2011; Fearnhead & Prangle, 2012; Li & Fearnhead, 2016]
7. Distances between samples
Aim: use alternative distances D between the samples y1:n and z1:n such that D(y1:n, z1:n) has smaller variance than ‖y1:n − z1:n‖, while the resulting ABC-posterior still converges to the posterior as ε → 0
8. ABC with order statistics
For univariate i.i.d. observations, order statistics are sufficient
1. sort the observed and generated samples y1:n and z1:n
2. compute
‖y_σy(1:n) − z_σz(1:n)‖_p = (Σ_{i=1}^n |y_(i) − z_(i)|^p)^{1/p}
for order p (e.g. 1 or 2) instead of
‖y1:n − z1:n‖_p = (Σ_{i=1}^n |y_i − z_i|^p)^{1/p}
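The two-step recipe above takes a few lines of numpy; a minimal sketch (function names are mine). By the rearrangement inequality, the sorted distance is never larger than the raw vector distance for p ≥ 1:

```python
import numpy as np

def sorted_lp_distance(y, z, p=2):
    """l_p distance between the order statistics y_(i) and z_(i)."""
    y, z = np.sort(y), np.sort(z)
    return float((np.abs(y - z) ** p).sum() ** (1.0 / p))

def raw_lp_distance(y, z, p=2):
    """l_p distance between the samples taken as vectors, in given order."""
    return float((np.abs(np.asarray(y) - np.asarray(z)) ** p).sum() ** (1.0 / p))

rng = np.random.default_rng(0)
y = rng.normal(size=1000)
z = rng.normal(size=1000)
# Sorting removes the arbitrary pairing of simulated to observed points,
# so the distance is smaller and far less variable across simulations.
print(sorted_lp_distance(y, z), raw_lp_distance(y, z))
```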
9. Toy example
Data-generating process given by Y1:1000 ∼ Gamma(10, 5)
Hypothesised model: M = {N(µ, σ²) : (µ, σ) ∈ R × R⁺}
Prior: µ ∼ N(0, 1) and σ ∼ Gamma(2, 1)
ABC-rejection sampling: 10⁵ draws, using the Euclidean distance on sorted vs. unsorted samples, and keeping the 10² draws with smallest distances
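The toy experiment above can be sketched as a plain ABC-rejection sampler with the sorted Euclidean distance; a scaled-down illustration (draw counts reduced from the slide's 10⁵ for speed, variable names mine):

```python
import numpy as np

rng = np.random.default_rng(1)
# Gamma(10, 5) = shape 10, rate 5 (numpy takes a scale, so scale = 1/5)
y = rng.gamma(shape=10.0, scale=1.0 / 5.0, size=1000)  # mean 2, sd ~ 0.63
y_sorted = np.sort(y)

n_draws, n_keep = 2000, 20
mu = rng.normal(0.0, 1.0, size=n_draws)        # prior mu ~ N(0, 1)
sigma = rng.gamma(2.0, 1.0, size=n_draws)      # prior sigma ~ Gamma(2, 1)

dist = np.empty(n_draws)
for k in range(n_draws):
    z = rng.normal(mu[k], sigma[k], size=y.size)
    dist[k] = np.linalg.norm(y_sorted - np.sort(z))  # sorted L2 distance

keep = np.argsort(dist)[:n_keep]               # keep smallest distances
print(mu[keep].mean(), sigma[keep].mean())     # rough ABC posterior means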
10. Toy example
[Figure: ABC-posterior densities of µ and σ, comparing the L2 distance on unsorted vs. sorted samples]
11. ABC with transport distances
The distance
D(y1:n, z1:n) = ((1/n) Σ_{i=1}^n |y_(i) − z_(i)|^p)^{1/p}
is the p-Wasserstein distance between the empirical distributions
µ̂n(dy) = (1/n) Σ_{i=1}^n δ_{yi}(dy) and ν̂n(dy) = (1/n) Σ_{i=1}^n δ_{zi}(dy)
Rather than comparing the samples as vectors, alternative representation as empirical distributions
⇒ novel ABC methods, which do not require summary statistics, available with multivariate or dependent data
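As a quick sanity check on this identity (not from the talk), the order-statistics formula for p = 1 can be compared against scipy's one-dimensional Wasserstein distance:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(6)
y, z = rng.normal(0.0, 1.0, 500), rng.normal(0.5, 1.2, 500)
# (1/n) sum_i |y_(i) - z_(i)|: the sorted-sample formula for W_1
w1_sorted = np.abs(np.sort(y) - np.sort(z)).mean()
print(w1_sorted, wasserstein_distance(y, z))  # the two values agree
```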
12. Outline
1 ABC and distance between samples
2 Wasserstein distance and computational aspects
3 Asymptotics
4 Handling time series
13. Wasserstein distance
A ground distance ρ : (x, y) → ρ(x, y) on Y, along with an order p ≥ 1, leads to the Wasserstein distance between µ, ν ∈ P_p(Y):
W_p(µ, ν) = (inf_{γ∈Γ(µ,ν)} ∫_{Y×Y} ρ(x, y)^p dγ(x, y))^{1/p}
where Γ(µ, ν) is the set of joint distributions with marginals µ, ν, and P_p(Y) the set of distributions µ for which E_µ[ρ(Y, y0)^p] < ∞ for some y0
14. Wasserstein distance: univariate case
Two empirical distributions on R with 3 atoms each:
(1/3) Σ_{i=1}^3 δ_{yi} and (1/3) Σ_{j=1}^3 δ_{zj}
Matrix of pair-wise costs:
[ ρ(y1, z1)^p  ρ(y1, z2)^p  ρ(y1, z3)^p ]
[ ρ(y2, z1)^p  ρ(y2, z2)^p  ρ(y2, z3)^p ]
[ ρ(y3, z1)^p  ρ(y3, z2)^p  ρ(y3, z3)^p ]
15. Wasserstein distance: univariate case
A joint distribution
γ = [ γ1,1 γ1,2 γ1,3 ; γ2,1 γ2,2 γ2,3 ; γ3,1 γ3,2 γ3,3 ],
with marginals (1/3, 1/3, 1/3), corresponds to a transport cost of
Σ_{i,j=1}^3 γi,j ρ(yi, zj)^p
16. Wasserstein distance: univariate case
Optimal assignment:
y1 ←→ z3
y2 ←→ z1
y3 ←→ z2
corresponds to the choice of joint distribution
γ = [ 0 0 1/3 ; 1/3 0 0 ; 0 1/3 0 ],
with marginals (1/3, 1/3, 1/3) and cost (1/3) Σ_{i=1}^3 ρ(y_(i), z_(i))^p
17. Wasserstein distance: bivariate case
There exists a joint distribution γ minimizing the cost
Σ_{i,j=1}^3 γi,j ρ(yi, zj)^p
with various algorithms to compute/approximate it
18. Wasserstein distance
also called Earth Mover, Gini, Mallows, Kantorovich, &tc.
can be defined between arbitrary distributions
an actual distance
statistically sound:
θ̂n = arginf_{θ∈H} W_p((1/n) Σ_{i=1}^n δ_{yi}, µθ) → θ⋆ = arginf_{θ∈H} W_p(µ⋆, µθ),
at rate √n, plus asymptotic distribution
[Bassetti & al., 2006]
19. Computing Wasserstein distances
when Y = R, computing W_p(µ̂n, ν̂n) costs O(n log n)
when Y = R^d, exact calculation is O(n³ log n)
For entropic regularization, with δ > 0,
W_{p,δ}(µ̂n, ν̂n)^p = inf_{γ∈Γ(µ̂n,ν̂n)} ∫_{Y×Y} ρ(x, y)^p dγ(x, y) − δ H(γ),
where H(γ) = −Σ_{ij} γij log γij is the entropy of γ; Sinkhorn's algorithm yields cost O(n²)
[Genevay et al., 2016]
20. Computing Wasserstein distances
other approximations, like Ye et al. (2016) using simulated annealing
the regularized Wasserstein is not a distance, but as δ goes to zero, W_{p,δ}(µ̂n, ν̂n) → W_p(µ̂n, ν̂n)
for δ small enough, W_{p,δ}(µ̂n, ν̂n) = W_p(µ̂n, ν̂n) (exact)
in practice, δ ≈ 5% of median(ρ(yi, zj)^p)_{i,j}
[Cuturi, 2013]
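A didactic Sinkhorn sketch for the entropic regularization above: alternately rescale the Gibbs kernel K = exp(−C/δ) until both marginal constraints hold, with δ set near the 5%-of-median-cost rule from the slide. Function names are mine; this is illustrative, not an optimized implementation:

```python
import numpy as np

def sinkhorn_plan(C, delta, n_iter=2000):
    """Approximate optimal transport plan between two uniform empiricals."""
    n, m = C.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-C / delta)          # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)           # rescale to match column marginals
        u = a / (K @ v)             # rescale to match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(2)
y, z = rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)
C = np.abs(y[:, None] - z[None, :])            # ground costs, p = 1
gamma = sinkhorn_plan(C, 0.05 * np.median(C))  # delta ~ 5% of median cost
w_reg = (gamma * C).sum()                      # regularized transport cost
w_exact = np.abs(np.sort(y) - np.sort(z)).mean()  # exact W_1 in 1-d
print(w_reg, w_exact)  # close, with a small entropic bias
```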
21. Computing Wasserstein distances
cost linear in the dimension of the observations
distance calculations model-independent
other transport distances calculated in only O(n log n), based on different generalizations of “sorting”
[illustration source: Wikipedia]
23. Transport distance via Hilbert curve
Sort multivariate data via space-filling curves, like the Hilbert space-filling curve
H : [0, 1] → [0, 1]^d
a continuous mapping, with pseudo-inverse h : [0, 1]^d → [0, 1]
Compute the orders σy, σz ∈ S of the projected points, and compute
h_p(y1:n, z1:n) = ((1/n) Σ_{i=1}^n ρ(y_{σy(i)}, z_{σz(i)})^p)^{1/p},
called the Hilbert ordering transport distance
[Gerber & Chopin, 2015]
24. Transport distance via Hilbert curve
Fact: h_p(y1:n, z1:n) is a distance between empirical distributions with n atoms, for all p ≥ 1
Hence h_p(y1:n, z1:n) = 0 if and only if y1:n = z_σ(1:n) for some permutation σ, with the hope of retrieving the posterior as ε → 0
Cost O(n log n) per calculation, but the encompassing sampler might be more costly than with regularized or exact Wasserstein distances
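The "sort along a space-filling curve, then match" idea can be sketched concretely. As a minimal stand-in, the sketch below uses the simpler Morton (Z-order) curve instead of the Hilbert curve of the slides: the same "generalized sorting" principle, easier to code; all names are illustrative:

```python
import numpy as np

def morton_key(x, y, order=10):
    """Interleave the bits of integer coordinates (x, y): Z-order index."""
    key = 0
    for b in range(order):
        key |= ((x >> b) & 1) << (2 * b + 1)
        key |= ((y >> b) & 1) << (2 * b)
    return key

def zorder_permutation(pts, order=10):
    """Ordering of 2-d points along a discretized Z-order curve."""
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    grid = ((pts - lo) / (hi - lo + 1e-12) * ((1 << order) - 1)).astype(int)
    keys = [morton_key(int(x), int(y), order) for x, y in grid]
    return np.argsort(keys, kind="stable")

def zorder_transport_distance(y, z, p=2):
    """h_p-style distance: match curve-sorted points, average rho^p."""
    sy, sz = zorder_permutation(y), zorder_permutation(z)
    gaps = np.linalg.norm(y[sy] - z[sz], axis=1)  # Euclidean ground distance
    return float((gaps ** p).mean() ** (1.0 / p))

rng = np.random.default_rng(8)
y = rng.normal(size=(200, 2))
z = rng.normal(size=(200, 2)) + 1.0
print(zorder_transport_distance(y, y), zorder_transport_distance(y, z))
```

Like h_p, this vanishes exactly when the two point clouds coincide up to a permutation, and each evaluation costs O(n log n) after the curve keys are computed.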
25. Adaptive SMC with r-hit moves
Start with ε0 = ∞
1. ∀k ∈ 1:N, sample θ0^k ∼ π(θ) (prior)
2. ∀k ∈ 1:N, sample z1:n^k from µ_{θ0^k}^(n)
3. ∀k ∈ 1:N, compute the distance d0^k = D(y1:n, z1:n^k)
4. based on (θ0^k)_{k=1}^N and (d0^k)_{k=1}^N, compute ε1 s.t. the resampled particles have at least 50% unique values
At step t ≥ 1, weight w_t^k ∝ 1(d_{t−1}^k ≤ εt), resample, and perform r-hit MCMC with adaptive independent proposals
[Lee, 2012]
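A much-simplified sketch of the initialization (steps 1-4) on the Gamma toy model: prior particles, simulated data sets, distances, then an adaptive first threshold. Here ε1 is just the median distance, so that half the particles survive; the slide's actual rule tunes ε1 to the 50%-unique-values criterion after resampling, and the r-hit MCMC moves of the later steps are omitted entirely. Names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
N, n = 500, 200
y_sorted = np.sort(rng.gamma(10.0, 1.0 / 5.0, size=n))  # observed data

# Steps 1-3: prior draws, simulated data sets, sorted-L2 distances
mu = rng.normal(0.0, 1.0, size=N)
sigma = rng.gamma(2.0, 1.0, size=N)
d = np.array([np.linalg.norm(y_sorted - np.sort(rng.normal(mu[k], sigma[k], n)))
              for k in range(N)])

# Step 4 (simplified): adapt the first threshold, weight, and resample
eps1 = np.median(d)                      # crude stand-in for the 50% rule
alive = np.flatnonzero(d <= eps1)        # particles with positive weight
idx = rng.choice(alive, size=N)          # multinomial resampling
print(eps1, alive.size, np.unique(idx).size)
```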
26. Toy example: bivariate Normal
100 observations from a bivariate Normal with variance 1 and covariance 0.55
Compare WABC with ABC versions based on the raw Euclidean distance and on the Euclidean distance between (sufficient) sample means, on 10⁶ model simulations.
27. Toy example: multivariate Normal
[Figure: ABC-posterior densities of θ6, θ7, θ8 under the Hilbert (H2) and Wasserstein (W2) distances, after 5 × 10⁵ distance calculations; posterior densities in dashed lines]
28. Outline
1 ABC and distance between samples
2 Wasserstein distance and computational aspects
3 Asymptotics
4 Handling time series
29. Minimum Wasserstein estimator
Under some assumptions, θ̂n exists and
lim sup_{n→∞} argmin_{θ∈H} W_p(µ̂n, µθ) ⊂ argmin_{θ∈H} W_p(µ⋆, µθ), almost surely
In particular, if θ⋆ = argmin_{θ∈H} W_p(µ⋆, µθ) is unique, then θ̂n → θ⋆ a.s.
30. Minimum Wasserstein estimator
Under stronger assumptions, incl. well-specification, dim(Y) = 1, and p = 1,
√n (θ̂n − θ⋆) →_w argmin_{u∈H} ∫_R |G(t) − ⟨u, D(t)⟩| dt,
where G is a µ⋆-Brownian bridge, and D ∈ (L¹(R))^{dθ} satisfies
∫_R |Fθ(t) − Fθ⋆(t) − ⟨θ − θ⋆, D(t)⟩| dt = o(‖θ − θ⋆‖_H)
[Pollard, 1980; del Barrio et al., 1999, 2005]
Hard to use for confidence intervals, but the bootstrap is an interesting alternative.
31. Toy Gamma example
Data-generating process: y1:n i.i.d. ∼ Gamma(10, 5)
Model: M = {N(µ, σ²) : (µ, σ) ∈ R × R⁺}
The MLE converges to argmin_{θ∈H} KL(µ⋆, µθ)
[Figure: MLE top, MWE bottom]
32. Toy Gamma example
[Figure: sampling distributions of the MLE (dashed) and MWE (solid) of µ and σ, for n = 10,000]
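The MWE in this toy example can be sketched via the one-dimensional quantile representation of W_p: match the order statistics of the data against the model quantiles and minimize over (µ, σ). The optimizer and function names are my illustrative choices, not from the talk:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
y = np.sort(rng.gamma(10.0, 1.0 / 5.0, size=2000))  # mean 2, sd ~ 0.63
u = (np.arange(y.size) + 0.5) / y.size              # midpoint quantile levels

def w2_to_model(params):
    """Approximate W_2(empirical, N(mu, sigma^2)) via matched quantiles."""
    mu, log_sigma = params
    q = stats.norm.ppf(u, loc=mu, scale=np.exp(log_sigma))
    return np.sqrt(np.mean((y - q) ** 2))

res = optimize.minimize(w2_to_model, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # near the Gamma mean 2 and sd ~ 0.63
```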
33. Asymptotics of WABC-posterior
For fixed n and ε → 0, for i.i.d. data, assuming sup_{y,θ} µθ(y) < ∞ and y → µθ(y) continuous, the Wasserstein ABC-posterior converges to the posterior irrespective of the choice of ρ and p
Concentration as both n → ∞ and ε → ε⋆ = inf_θ W_p(µ⋆, µθ)
[Frazier et al., 2016]
Concentration on neighbourhoods of θ⋆ = arginf_θ W_p(µ⋆, µθ), whereas the posterior concentrates on arginf_θ KL(µ⋆, µθ)
34. Asymptotics of WABC-posterior
The rate of posterior concentration (and the choice of εn) relates to the rate of convergence of the distance, e.g.
µ_θ^(n)( W_p(µθ, (1/n) Σ_{i=1}^n δ_{zi}) > u ) ≤ c(θ) f_n(u),
[Fournier & Guillin, 2015]
The rate of convergence decays with the dimension of Y, fast or slow, depending on the moments of µθ and the choice of p
36. Toy example: univariate
Data-generating process: Y1:n i.i.d. ∼ Gamma(10, 5), n = 100, with mean 2 and standard deviation ≈ 0.63
[Figure: evolution of εt against t, the step index in the adaptive SMC sampler]
37. Toy example: univariate
Theoretical model: M = {N(µ, σ²) : (µ, σ) ∈ R × R⁺}
Prior: µ ∼ N(0, 1) and σ ∼ Gamma(2, 1)
[Figure: WABC-posterior density of σ, showing convergence to the posterior]
38. Toy example: multivariate
Observation space: Y = R^10
Model: Yi ∼ N10(θ, S), for i ∈ 1:100, where Skj = 0.5^|k−j| for k, j ∈ 1:10
Data generated with θ⋆ defined as a 10-vector, chosen by drawing standard Normal variables
Prior: θi ∼ N(0, 1) for all i ∈ 1:10
39. Toy example: multivariate
[Figure: evolution of the number of distances calculated up to t, the step index in the adaptive SMC sampler]
40. Toy example: multivariate
[Figure: bivariate marginal of (θ3, θ7) approximated by the SMC sampler; posterior contours in yellow, θ⋆ indicated by black lines]
41. Outline
1 ABC and distance between samples
2 Wasserstein distance and computational aspects
3 Asymptotics
4 Handling time series
42. Method 1 (0?): ignoring dependencies
Consider only the marginal distribution
AR(1) example:
y0 ∼ N(0, σ²/(1 − φ²)), y_{t+1} ∼ N(φ yt, σ²)
Marginally, yt ∼ N(0, σ²/(1 − φ²)), which identifies σ²/(1 − φ²) but not (φ, σ)
Produces a region of plausible parameters
[Figure: for n = 1,000, generated with φ = 0.7 and log σ = 0.9]
43. Method 2: delay reconstruction
Introduce ỹt = (yt, yt−1, . . . , yt−k) for a lag k, and treat the ỹt as data
AR(1) example: ỹt = (yt, yt−1), with marginal distribution
N( (0, 0), σ²/(1 − φ²) [ 1 φ ; φ 1 ] ),
which identifies both φ and σ
Related to Takens' theorem in the dynamical systems literature
[Figure: for n = 1,000, generated with φ = 0.7 and log σ = 0.9]
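Delay reconstruction is a one-liner in practice: stack lagged copies of the series into vectors and compare those point clouds with a transport distance. A minimal sketch on the slide's AR(1) example (function names mine):

```python
import numpy as np

def delay_embed(y, k):
    """Rows (y_t, y_{t-1}, ..., y_{t-k}) for t = k..n-1: an (n-k) x (k+1) array."""
    y = np.asarray(y)
    return np.column_stack([y[k - j: len(y) - j] for j in range(k + 1)])

# AR(1) from the slide: y_0 ~ N(0, sigma^2/(1-phi^2)), y_{t+1} ~ N(phi*y_t, sigma^2)
rng = np.random.default_rng(4)
phi, sigma, n = 0.7, np.exp(0.9), 1000
y = np.empty(n)
y[0] = rng.normal(0.0, sigma / np.sqrt(1 - phi ** 2))
for t in range(n - 1):
    y[t + 1] = rng.normal(phi * y[t], sigma)

pairs = delay_embed(y, k=1)   # rows are (y_t, y_{t-1})
# The lag-1 correlation identifies phi; the marginal variance alone only
# gives sigma^2/(1 - phi^2), which confounds phi and sigma.
print(pairs.shape, np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1])
```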
44. Method 3: residual reconstruction
Time series y1:n a deterministic transform of θ and w1:n
Given y1:n and θ, reconstruct w1:n
Cosine example:
yt = A cos(2πωt + φ) + σwt, with wt ∼ N(0, 1)
wt = (yt − A cos(2πωt + φ))/σ
and calculate the distance between the reconstructed w1:n and a Normal sample
[Mengersen et al., 2013]
45. Method 3: residual reconstruction
Cosine example: n = 500 observations with ω = 1/80, φ = π/4, σ = 1, A = 2, under priors U[0, 0.1] and U[0, 2π] for ω and φ, and N(0, 1) for log σ and log A
[Figure: the observed series yt against time]
46. Cosine example with delay reconstruction, k = 3
[Figure: WABC-posterior densities of ω, φ, log(σ), and log(A)]
47. and with residual and delay reconstructions, k = 1
[Figure: WABC-posterior densities of ω, φ, log(σ), and log(A)]
48. Method 4: curve matching
Define ỹt = (t, yt) for all t ∈ 1:n
Define a metric on {1, . . . , T} × Y, e.g.
ρ((t, yt), (s, zs)) = λ|t − s| + |yt − zs|, for some λ
Use the distance D to compare ỹ1:n = ((t, yt))_{t=1}^n and z̃1:n = ((s, zs))_{s=1}^n
If λ ≫ 1, optimal transport will associate each (t, yt) with (t, zt): we get back the “vector” norm ‖y1:n − z1:n‖
If λ = 0, the time indices are ignored: identical to Method 1
For any λ > 0, there is hope of retrieving the posterior as ε → 0
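The two limiting regimes above can be checked numerically with an exact assignment solver (scipy's Hungarian algorithm; p = 1 matching cost, names mine): a huge λ forces the time-respecting matching, recovering the vector norm, while λ = 0 reduces to the sorted (marginal) matching of Method 1:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def curve_matching_cost(y, z, lam):
    """Optimal matching cost under rho((t,y_t),(s,z_s)) = lam*|t-s| + |y_t-z_s|."""
    t = np.arange(len(y))
    cost = lam * np.abs(t[:, None] - t[None, :]) + np.abs(y[:, None] - z[None, :])
    rows, cols = linear_sum_assignment(cost)   # exact 1-to-1 matching
    return cost[rows, cols].sum()

rng = np.random.default_rng(5)
y, z = rng.normal(size=50), rng.normal(size=50)
big_lam = curve_matching_cost(y, z, lam=1e6)   # forces (t, y_t) <-> (t, z_t)
zero_lam = curve_matching_cost(y, z, lam=0.0)  # ignores time: sorted matching
print(big_lam, np.abs(y - z).sum())
print(zero_lam, np.abs(np.sort(y) - np.sort(z)).sum())
```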
50. Discussion
Transport metrics can be used to compare samples
Various complexities, from n³ log n to n² to n log n
Asymptotic guarantees as ε → 0 for fixed n, and as n → ∞ and ε → ε⋆
Various ways of applying these ideas to time series, and maybe spatial data, maps, images. . . ?