Sparse-Bayesian Approach to Inverse Problems with
Partial Differential Equations
Topics:
Bayesian identification of sound sources with the Helmholtz equation.
Optimal control and Bayesian inversion for the wave equation with BV-functions in time.
29.08.18
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 1 / 64
Contents
1 Motivation
2 Bayesian Inversion
3 Sparse-Bayesian Approach - Helmholtz Equation
4 Sparse-Bayesian Approach - Wave Equation
5 Outlook
Motivation
Bayesian inversion (Coutant, 2)
Models from physics, economics, biology, medicine, engineering, and other fields can include inherent errors!
In general, parameters cannot be measured directly, and other, perhaps less meaningful, data have to be used for the validation of the required model parameters!
Data can include inherent measurement errors!
How to deal with that?
How to infer the transmission of error from the data to the parameters?
Thinking probabilistically enables us to overcome these difficulties!
Motivation
Bayesian inversion (Coutant, 2)
What if we have a priori knowledge about the parameters and the noise in the data?
The Bayesian approach gives a formalism:
To introduce a priori knowledge on the parameters (admissible set).
To define a description of data and model errors.
To deal with a non-deterministic or non-exact model.
Bayesian Inversion in Finite Dimension
Consider the following problem:
(P)   y_d = G(u) + η,   u ∈ R^n,   y_d, η ∈ R^J,
where y_d is the observed data, u is unknown, and η is noise.
Example:
1) u := some unknown parameters.
2) G(u) := some quantifiable properties depending on u (the model!).
3) y_d := collected data.
4) η := measurement error.
If the parameters u or the noise η are random variables with values in an infinite-dimensional space, we are in the infinite-dimensional Bayesian inversion setting.
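As a minimal numerical illustration of (P), the sketch below generates synthetic data y_d = G(u) + η for a made-up forward map G and noise level (both are illustrative assumptions, not the talk's model):

```python
import math
import random

# Hypothetical forward map G : R -> R^J with J = 3; purely illustrative.
def G(u):
    return [math.sin(j * u) for j in range(1, 4)]

random.seed(0)
u_true = 0.7                                        # unknown parameter u
eta = [random.gauss(0.0, 0.1) for _ in range(3)]    # noise eta ~ N(0, 0.1^2)
yd = [g + e for g, e in zip(G(u_true), eta)]        # observed data y_d
```

The inverse problem is to recover information about u_true from yd alone, which the Bayesian approach answers with a distribution rather than a single point value.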
Bayesian Inversion in Finite Dimension
Assumptions
1) Prior distribution density: P(u ∈ A) = ∫_A p_0(u) du
≈ admissible set with a specific distribution.
2) Noise independence: u ⊥ η and P(η ∈ A) = ∫_A p(η) dη.
3) Likelihood: (1), (2) and (P) imply P(y_d ∈ B | u) = ∫_B p(y_d − G(u)) dy_d.
4) (u, y_d) as a random variable: (1) and (3) imply
P((u, y_d) ∈ A × B) = ∫_A ∫_B p(y_d − G(u)) p_0(u) dy_d du.
Bayesian Inversion in Finite Dimension
Bayes' Theorem:
Bayes: P(u | y_d) = P(y_d | u) P(u) / P(y_d).
5) u | y_d = solution of the inverse problem (P) given data y_d (posterior).
P(u ∈ A | y_d) = P(y_d | u ∈ A) P(u ∈ A) / P(y_d) = (1/Z) ∫_A p(y_d − G(u)) p_0(u) du
with Z := ∫_{R^n} p(y_d − G(u)) p_0(u) du.
Negative log-likelihood:
P(u ∈ A | y_d) = (1/Z) ∫_A exp(−Φ(u; y_d)) p_0(u) du
with Φ(u; y_d) := −log(p(y_d − G(u))) (the potential).
exp(−Φ(u; y_d)) changes and concentrates the prior distribution on the most important areas in the support of the prior.
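For a scalar toy problem, the posterior formula above can be evaluated directly on a grid; G, ε, y_d, and the prior below are illustrative choices:

```python
import math

# Illustrative setup: y_d = G(u) + eta, eta ~ N(0, eps^2), prior p0 = N(0, 1).
eps = 0.2
G = lambda u: u ** 2
yd = 0.5

def phi(u):
    # potential Phi(u; y_d) = -log p(y_d - G(u)), dropping additive constants
    return (yd - G(u)) ** 2 / (2 * eps ** 2)

def p0(u):
    # standard normal prior density
    return math.exp(-u ** 2 / 2) / math.sqrt(2 * math.pi)

du = 0.01
grid = [i * du for i in range(-300, 301)]
w = [math.exp(-phi(u)) * p0(u) for u in grid]   # exp(-Phi) times the prior
Z = sum(w) * du                                 # quadrature value of Z
post = [v / Z for v in w]                       # posterior density values
```

Note how exp(−Φ) reweights the prior: `post` concentrates near the two preimages u ≈ ±√0.5 of the datum.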
Bayesian Inversion Examples and Optimal Control
Finite-dimensional example - Gaussian Prior:
Example 1:
Prior: u ∼ N(µ, σ²)
Noise: η ∼ N(0, ε²)
y_d = G(u) + η with G : R → R.
dP_{u|y_d}(u) ∝ exp(−(1/(2ε²)) |G(u) − y_d|²) dP_u(u).
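For a linear forward map G(u) = a·u, Example 1 is conjugate and the posterior is again Gaussian; the sketch below checks the closed-form posterior mean against a direct quadrature of the Bayes formula (a, ε, σ, y_d are illustrative numbers):

```python
import math

a, eps = 2.0, 0.3            # illustrative slope and noise scale
mu, sigma = 0.0, 1.0         # prior N(mu, sigma^2)
yd = 1.0                     # observed datum

# conjugate Gaussian posterior N(m, s2) for G(u) = a*u
s2 = 1.0 / (a ** 2 / eps ** 2 + 1.0 / sigma ** 2)
m = s2 * (a * yd / eps ** 2 + mu / sigma ** 2)

# cross-check the mean by quadrature of exp(-Phi) times the prior density
du = 0.001
grid = [i * du for i in range(-4000, 4001)]
w = [math.exp(-(yd - a * u) ** 2 / (2 * eps ** 2)
              - (u - mu) ** 2 / (2 * sigma ** 2)) for u in grid]
mean_grid = sum(u * v for u, v in zip(grid, w)) / sum(w)
```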
Bayesian Inversion Examples and Optimal Control
Infinite-dimensional example - Gaussian Prior:
Example 2, Dashti, Stuart:
−Δ : H¹₀(D) ∩ H²(D) ⊆ L²(D) → L²(D),
D ⊂ R^n, n = 1, 2, 3, open, bounded with C²-boundary.
Let (µ_i, ψ_i)_{i=1}^∞ be the eigenvalues and eigenfunctions of
(−Δ)^{−1} : L²(D) → H¹₀(D) ∩ H²(D) ↪↪ L²(D) (compact embedding).
Prior: u = Σ_{i=1}^∞ ξ_i ψ_i with independent ξ_i ∼ N(0, µ_i^α) and α ≥ 1 ⇒ u ∼ N(0, (−Δ)^{−α}).
Realizations of u are almost surely in C(D)!
Noise: η ∼ N(0, Σ) with covariance matrix Σ ∈ R^{m×m}.
y_d = G(u) + η with G : H¹([0, 1]²) → R^m.
Posterior:
dP_{u|y_d}(u) ∝ exp(−Φ(u; y_d)) dP_u(u) with
Φ(u; y_d) = ½ ‖Σ^{−1/2}(G(u) − y_d)‖²_{R^m} =: ½ ‖G(u) − y_d‖²_Σ.
Bayesian Inversion Examples and Optimal Control
Infinite-dimensional example - TV-Gaussian Prior:
Example 3, Yao, Hu, Li:
Let µ_pr be a Gaussian measure with mean 0, support in H¹(0, T)^m, and covariance operator λ · C_Y with λ > 0.
Consider the prior measure
dP_u(u) = (1/Λ_u) exp(−Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(I)}) dµ_pr(u).
y_d = G(u) + η with G : H¹(0, T)^m → R^m.
Posterior measure:
dP_{u|y_d}(u) = (1/Λ_{y_d}) exp(−½ ‖G(u) − y_d‖²_Σ − Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)}) dP_u(u).
Bayesian Inversion Examples and Optimal Control
Denote by E the Cameron-Martin space of a covariance operator C : H → H:
E := {u = C^{1/2} x | x ∈ H}, with ‖u‖²_C := ⟨C^{−1/2}u, C^{−1/2}u⟩_H.
Maximum a Posteriori estimator (MAP) and Optimal Control:
MAP example 1:
Solve min_{x ∈ R} (1/(2ε²)) |G(x) − y_d|² + (1/(2σ²)) |x|².
MAP example 2:
Solve min_{u ∈ E} ½ ‖G(u) − y_d‖²_Σ + ½ ‖u‖²_E.
MAP example 3:
Solve min_{u ∈ E} ½ ‖G(u) − y_d‖²_Σ + Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)} + (1/(2λ)) ‖u‖²_E.
Example: E = H¹₀(0, T)^m,
C = diag((−Δ)^{−1}, …, (−Δ)^{−1}) : L²(0, T)^m → (H²(0, T) ∩ H¹₀(0, T))^m ⊂ L²(0, T)^m.
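MAP example 1 is a finite-dimensional Tikhonov-type problem; here is a gradient-descent sketch with an illustrative scalar map G(x) = x³ (not the talk's operator):

```python
import math

eps, sigma, yd = 0.5, 1.0, 2.0     # illustrative scales and datum
G = lambda x: x ** 3
dG = lambda x: 3 * x ** 2

def grad(x):
    # gradient of J(x) = |G(x)-yd|^2 / (2 eps^2) + |x|^2 / (2 sigma^2)
    return (G(x) - yd) * dG(x) / eps ** 2 + x / sigma ** 2

x = 1.0
for _ in range(5000):
    x -= 1e-3 * grad(x)            # fixed small step; enough for a sketch
x_map = x                          # approximate MAP point
```

The prior (regularization) term pulls x_map slightly below the plain least-squares solution 2^{1/3} ≈ 1.26.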
Sparse-Bayesian Approach - Helmholtz Equation
Bayesian Identification of Sound Sources with the Helmholtz
Equation
Engel, S., Hafemeyer, H., Münch, C., Schaden, D.
Sparse-Bayesian Approach - Helmholtz Equation
Propagation of Acoustic Waves:
(1/c²) ∂_tt y(t, x) − Δy(t, x) = F(t, x) in D ⊂ R^n
y := pressure fluctuation, c := sound speed, F := inner source (loudspeakers).
Primary noise source acting on a part Γ_N of the boundary ∂D:
∂y(t, x)/∂ν = G(t, x) on Γ_N
Assume that G and F are harmonic sound sources with the same angular frequency ω, i.e.
G(t, x) = Re[g(x) e^{−iωt}], F(t, x) = Re[f(x) e^{−iωt}],
f(x), g(x) ∈ C; f(x), g(x) := amplitudes.
Sparse-Bayesian Approach - Helmholtz Equation
Helmholtz Equation ⇒ C-valued pressure amplitude:
Solution of the acoustic wave equation:
y(t, x) = Re[H(x) e^{−iωt}]
Helmholtz equation:
−ΔH − (ω/c)² H = f in D,
∂_ν H − i (ωρ/γ(ω)) H = 0 on Γ_Z := ∂D \ Γ_N,
∂_ν H = g on Γ_N.
γ(ω) := wall impedance of the wall Γ_Z, ρ := density of the fluid.
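To make the Helmholtz problem concrete, here is a 1-D analogue, −H'' − k²H = f on (0, 1) with homogeneous Dirichlet conditions (a deliberate simplification of the impedance/Neumann boundary conditions above), solved by central finite differences and the Thomas algorithm; n, k, and f are illustrative:

```python
import math

# 1-D Helmholtz analogue: -H'' - k^2 H = f on (0,1), H(0) = H(1) = 0.
n, k = 200, 1.0
h = 1.0 / n
x = [i * h for i in range(n + 1)]
f = [math.sin(math.pi * xi) for xi in x]      # smooth stand-in source

# tridiagonal system: (2/h^2 - k^2) on the diagonal, -1/h^2 off-diagonal
a = [-1.0 / h ** 2] * (n - 1)                 # sub-diagonal
b = [2.0 / h ** 2 - k ** 2] * (n - 1)         # diagonal
c = [-1.0 / h ** 2] * (n - 1)                 # super-diagonal
d = f[1:n]                                    # right-hand side at interior nodes

for i in range(1, n - 1):                     # forward elimination
    m = a[i] / b[i - 1]
    b[i] -= m * c[i - 1]
    d[i] -= m * d[i - 1]
H = [0.0] * (n - 1)
H[-1] = d[-1] / b[-1]
for i in range(n - 3, -1, -1):                # back substitution
    H[i] = (d[i] - c[i] * H[i + 1]) / b[i]
```

For this source the exact solution is H(x) = sin(πx)/(π² − k²), which the finite-difference solution matches to O(h²).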
Sparse-Bayesian Approach - Helmholtz Equation
Gaussian Prior for the Helmholtz problem:
We want to identify a probability distribution for the positions and amplitudes of an unknown number of sound sources.
We need a prior for f which models sound sources.
Problem sparsity:
A Gaussian prior of the form N(0, (−Δ)^{−α}) is often used in practice.
Advantage:
Advanced theory is available for this kind of distribution!
Disadvantage:
Sparse sound sources: samples of N(0, (−Δ)^{−α}) are not sparse → Markov kernels with rejection sampling, as used in SMC or MH, exhibit high rejection rates (low performance, Cotter et al.).
Karhunen-Loève expansion: it can be expensive to sample from the prior!
We need a clear relationship between the number, positions and amplitudes of the sound sources ⇒ non-Gaussian prior!
Sparse-Bayesian Approach - Helmholtz Equation
Desired prior: f should be a finite linear combination of Dirac delta measures:
f = Σ_{i=1}^k α_i δ_{x_i} with α_i ∈ C and x_i inside D.
f := loudspeakers as acoustic monopoles.
Sparse-Bayesian Approach - Helmholtz Equation
Bayesian Inversion - Helmholtz Equation
y_d := (y_{d,j})_{j=1}^m = G(u) + η ∈ C^m
η = η₁ + i · η₂ with η_j ∼ N(0, Σ_j).
η ∼ CN(0, Γ_η, C_η) with covariance matrix Γ_η = Σ₁ + Σ₂ and relation matrix C_η = Σ₁ − Σ₂.
Observation operator:
G : D → C^m,  u ↦ (H_u(z_j))_{j=1}^m,
where D is the space of finite linear combinations of Diracs with support inside D, and z_j ∈ D are the measurement points.
Green's function solves the Helmholtz equation ⇒ assumption on the prior:
We expect no sound sources in a small neighborhood of the microphones (H²-regularity; pointwise measurements!).
Sparse-Bayesian Approach - Helmholtz Equation
Sparse prior
Formally, we consider a prior of the form:
u = Σ_{j=1}^k α_j^k δ_{x_j^k} ∈ D
k := random variable with values in N,
α_j^k := random variable with values in C,
x_j^k := random variable with values in the Helmholtz domain D.
Sparse-Bayesian Approach - Helmholtz Equation
Sparse prior - Example
k ∼ Poi(E_k) with E_k = expected number of sources.
α_j^k = α_{j1}^k + i · α_{j2}^k ∈ C with α_{jr}^k ∼ N(µ_r, σ_r) i.i.d., r = 1, 2.
x_j^k ∼ Unif(D_κ) i.i.d., with Z := (z_i)_{i=1}^m ⊂ D the measurement positions, D_κ ⊂ D open, and dist(D_κ, ∂D ∪ Z) > κ > 0.
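One draw from such a sparse prior can be sketched as follows; the domain D_κ is taken to be the unit square, and E_k, µ_r, σ_r are illustrative values:

```python
import math
import random

random.seed(1)
Ek, mu, sigma = 3.0, (0.0, 0.0), (1.0, 1.0)   # illustrative prior parameters

def poisson(lam):
    # Knuth's multiplicative method for Poisson sampling
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

k = poisson(Ek)                                        # number of sources
alphas = [complex(random.gauss(mu[0], sigma[0]),       # complex amplitudes
                  random.gauss(mu[1], sigma[1])) for _ in range(k)]
xs = [(random.random(), random.random()) for _ in range(k)]  # positions in D_kappa
# the sample u = sum_j alphas[j] * delta_{xs[j]} stored as (amplitude, position) pairs
u = list(zip(alphas, xs))
```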
Sparse-Bayesian Approach - Helmholtz Equation
Negative Log-Likelihood:
Φ : D × C^m → R,
(u, y_d) ↦ ½ ‖y_d − G(u)‖²_{Γ_η,C_η}
with
‖y_d − G(u)‖²_{Γ_η,C_η} := z₁ [ Γ_η  C_η ; C_η  Γ_η ]^{−1} z₂,
z₁ := ( conj(y_d − G(u))ᵀ, (y_d − G(u))ᵀ ),  z₂ := ( y_d − G(u) ; conj(y_d − G(u)) ).
Sparse-Bayesian Approach - Helmholtz Equation
Posterior
Solution of the inverse problem: u | y_d (posterior distribution)
P_{u|y_d}(A) = (1/Λ(y_d)) ∫_A exp(−Φ(u, y_d)) dP_u(u),
where A ⊂ D and Λ(y_d) is the normalization constant
Λ(y_d) = ∫_D exp(−Φ(u, y_d)) dP_u(u).
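Since Λ(y_d) is a prior expectation of exp(−Φ), it can be estimated by plain Monte Carlo over prior samples; the potential and the prior below are illustrative stand-ins:

```python
import math
import random

random.seed(3)
phi = lambda u: (u - 0.5) ** 2                 # stand-in potential Phi(u, y_d)
samples = [random.gauss(0.0, 1.0) for _ in range(50000)]   # prior draws
# Monte Carlo estimate of Lambda(y_d) = E_prior[exp(-Phi)]
Lam = sum(math.exp(-phi(u)) for u in samples) / len(samples)
# exact value for this toy choice: exp(-1/12) / sqrt(3) ≈ 0.531
```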
Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Stability Results
Assume that the prior measure has a 2nd moment. Then for all r > 0 there exists C > 0 s.t.
d_Hell(P_{u|y₁}, P_{u|y₂}) ≤ C ‖y₁ − y₂‖_Σ, ∀ y₁, y₂ ∈ B_r(0).
Assume that the prior measure has a 2nd moment. Let r > 0 and f be an X-valued function which is square integrable with respect to P_{u|y} for all y ∈ B_r(0) ⊂ C^m. Then there is a C > 0 such that
‖E_{P_{u|y₁}}(f) − E_{P_{u|y₂}}(f)‖_X ≤ C ‖y₁ − y₂‖_Σ, ∀ y₁, y₂ ∈ B_r(0).
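The local Lipschitz stability in the data can be observed numerically for a 1-D grid posterior; G, ε, and the prior are illustrative (a sketch, not the talk's Helmholtz setup):

```python
import math

eps, G = 0.2, (lambda u: u ** 2)               # illustrative noise scale and map
grid = [i / 100 for i in range(-300, 301)]
prior = [math.exp(-u ** 2 / 2) for u in grid]  # unnormalized N(0,1) prior

def posterior(yd):
    # discrete posterior probabilities on the grid, normalized to sum 1
    w = [math.exp(-(yd - G(u)) ** 2 / (2 * eps ** 2)) * p
         for u, p in zip(grid, prior)]
    s = sum(w)
    return [v / s for v in w]

def hellinger(p, q):
    # Hellinger distance between two discrete distributions
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))

d1 = hellinger(posterior(0.5), posterior(0.51))   # |y1 - y2| = 0.01
d2 = hellinger(posterior(0.5), posterior(0.52))   # |y1 - y2| = 0.02
```

Doubling the data perturbation roughly doubles the Hellinger distance, consistent with the Lipschitz bound.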
Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Finite Element Approximation
We approximate the very weak solution of the Helmholtz equation by a finite element method ⇒
‖H(u) − H_h(u)‖_{H²(D_κ)} ∈ O(|ln(h)| h²)
Posterior - finite element approximation:
P_{u|y_d,h}(A) = (1/Λ_h(y_d)) ∫_A exp(−Φ_h(u, y_d)) dP_u(u),
with Φ_h(u, y_d) = ½ ‖y_d − G_h(u)‖²_{Γ_η,C_η} and G_h(u) = (H_{u,h}(z_j))_{j=1}^m.
Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Stability Results
Assume that the prior measure has a 4th moment. For y_d ∈ C^m it holds that
d_Hell(P_{u|y_d,h}, P_{u|y_d}) ∈ O(|ln(h)| h²).
Assume that the prior measure P_u has a 4th moment. Let r > 0 and f be an X-valued function which has a second moment with respect to P_{u|y_d} and P_{u|y_d,h}. Then
‖E_{P_{u|y_d,h}}(f) − E_{P_{u|y_d}}(f)‖_X ∈ O(|ln(h)| h²).
Sparse-Bayesian Approach - Helmholtz Equation
[Figure: Hellinger distance between $\mathbb{P}_{u|y_d,h}$ and $\mathbb{P}_{u|y_d}$ vs. mesh size $h$]
Left figure: "o-o" = $d_{\mathrm{Hell}}(\mathbb{P}_{u|y_d,h}, \mathbb{P}_{u|y_d})$, averaged over 50 runs; "- -" = $O(|\ln h|\, h^2)$ reference line.
Right figure: variance of $d_{\mathrm{Hell}}(\mathbb{P}_{u|y_d,h}, \mathbb{P}_{u|y_d})$ over the 50 runs for different $h$.
Mesh sizes: $h = \sqrt{2} \cdot 2^{-k}$ with $k = 2, \dots, 6$.
SMC method with fixed $N = 5 \cdot 10^5$ prior particles.
Reference measure computed with $k_{\mathrm{ref}} = 7$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 26 / 64
Sparse-Bayesian Approach - Helmholtz Equation
[Figure: expectation error and variance vs. mesh size $h$]
Left figure: $\| \mathbb{E}_{\mathbb{P}_{u|y_d}}(f) - \mathbb{E}_{\mathbb{P}_{u|y_d,h}}(f) \|_X$ for $f_1, \dots, f_5$, averaged over 50 runs.
Right figure: corresponding variance for different $h$ from the 50 runs.
Functions:
$f_1(u) := \|u\|_1$: first moment.
$f_2(u) := 1_{\{\text{two sources}\}}(u)$: probability of two sources (bounded!).
$f_3(u) := |y_u(z_0)|$: expected pressure amplitude at $z_0$.
$f_4(u) := \mathrm{Var}(|y_u(z_0)|)$: variance of the pressure amplitude at $z_0$.
$f_5(u) := 10 \log_{10}(\max(1, |\mathrm{Re}(y_u(z_0)\exp(-i\zeta t))|))$: decibel function at $z_0$.
Computation: see Hellinger distance.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 27 / 64
Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Idea
Approximating a probability measure $\mu$ with Dirac measures:
\[
\mu \approx \frac{1}{N} \sum_{i=1}^{N} \delta_{u_i}, \quad \text{for } (u_i)_{i=1}^{N} \subset \operatorname{supp}(\mu).
\]
Approximate the posterior measure with intermediate measures:
\[
d\mu_j(u) := \frac{1}{\Lambda_j} \exp\big(-\beta_j \Phi(u, y_d)\big)\, d\mathbb{P}_u(u),
\]
with $0 = \beta_0 < \beta_1 < \cdots < \beta_J = 1$.
Sequential updates:
\[
\mu_0 \to \mu_1 \to \cdots \to \mu_{J-1} \to \mu_J,
\]
with additional redraw steps for each $\mu_j$ ($\mu_j$-invariant Markov kernel).
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 28 / 64
Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Sequential Update
1. Let $\mu_0^N = \mu_0$ and set $j = 0$.
2. Resample $u_j^{(n)} \sim \mu_j^N$, $n = 1, \dots, N$.
3. Set $w_j^{(n)} = \frac{1}{N}$, $n = 1, \dots, N$, and define $\mu_j^N = \sum_{n=1}^{N} w_j^{(n)} \delta_{u_j^{(n)}}$.
4. Apply the Markov kernel $u_{j+1}^{(n)} \sim P_j(u_j^{(n)}, \cdot)$ (redraw step: speed-up possible!).
5. Define $w_{j+1}^{(n)} = \hat{w}_{j+1}^{(n)} \big/ \sum_{\tilde{n}=1}^{N} \hat{w}_{j+1}^{(\tilde{n})}$, with $\hat{w}_{j+1}^{(n)} = \exp\big((\beta_j - \beta_{j+1})\, \Phi(u_{j+1}^{(n)}, y)\big)\, w_j^{(n)}$, and $\mu_{j+1}^N := \sum_{n=1}^{N} w_{j+1}^{(n)} \delta_{u_{j+1}^{(n)}}$.
6. Set $j \leftarrow j + 1$ and go to 2.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 29 / 64
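The update steps above can be sketched in a few lines of NumPy. The following is a minimal illustration on a toy one-dimensional target (standard normal prior, Gaussian misfit), not the Helmholtz setting; the identity "kernel" is a placeholder for the $\mu_j$-invariant Markov kernel of step 4, which a real run would replace by a Metropolis-Hastings sweep:

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_tempering(prior_sample, misfit, betas, N, markov_kernel):
    """SMC with tempering 0 = beta_0 < ... < beta_J = 1 (steps 1-6 above)."""
    particles = prior_sample(N)                 # step 1: mu_0^N from the prior
    weights = np.full(N, 1.0 / N)
    for b_old, b_new in zip(betas[:-1], betas[1:]):
        idx = rng.choice(N, size=N, p=weights)  # step 2: resample
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)           # step 3: reset to uniform weights
        particles = np.array([markov_kernel(u, b_old) for u in particles])  # step 4
        logw = (b_old - b_new) * np.array([misfit(u) for u in particles])   # step 5
        w = np.exp(logw - logw.max()) * weights
        weights = w / w.sum()                   # step 6 is the loop itself
    return particles, weights

# toy example: prior N(0,1), misfit Phi(u) = (u - 1)^2 / (2 * 0.25);
# the exact posterior is N(0.8, 1/5), so the weighted mean should be near 0.8
parts, w = smc_tempering(lambda n: rng.normal(0.0, 1.0, size=n),
                         lambda u: (u - 1.0) ** 2 / 0.5,
                         [0.0, 0.25, 0.5, 1.0], 20000,
                         lambda u, beta: u)     # placeholder kernel
posterior_mean = float(np.sum(w * parts))
```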
Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Mean Square Error
Theorem
For every measurable and bounded function $f$, the measure $\mu_J^N$ satisfies
\[
\mathbb{E}_{\mathrm{SMC}}\Big[ \big| \mathbb{E}_{\mu_J^N}[f] - \mathbb{E}_{\mathbb{P}_{u|y}}[f] \big|^2 \Big]
\le \Big( \sum_{j=1}^{J} 2\, \Lambda_j^{-1} \Big)^2 \frac{\|f\|_\infty^2}{N},
\]
where $\mathbb{E}_{\mathrm{SMC}}$ is the expectation with respect to the randomness in the SMC algorithm.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 30 / 64
Sparse-Bayesian Approach - Helmholtz Equation
[Figure: SMC mean-square error and variance vs. number of particles $N$, for $N = 200, 800, 3200, 12800, 51200$]
Left figure: $\mathbb{E}_{\mathrm{SMC}}[|\mathbb{E}_{\mu_J^N}[f] - \mathbb{E}_{\mathbb{P}_{u|y}}[f]|^2]$ for $f_1, \dots, f_5$; 100 runs for each $N$.
Right figure: corresponding variance.
Fixed mesh size $h = \sqrt{2} \cdot 2^{-7}$.
Reference measure $\mathbb{P}_{u|y_d}$ computed with $N_{\mathrm{ref}} = 10^7$.
$f_2(u) := 1_{\{\text{two sources}\}}(u)$: probability of two sources (bounded!).
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 31 / 64
Sparse-Bayesian Approach - Helmholtz Equation
Experimental Parameters - Helmholtz Equation:
Helmholtz domain:
D = [0, 1]2.
Exact sources:
$(10 + 10i) \cdot \delta_{(0.25, 0.75)} + (10 + 10i) \cdot \delta_{(0.75, 0.75)}$.
Helmholtz boundary conditions:
$\Gamma_Z = \partial D$.
Helmholtz parameters:
$\rho = 1$ (fluid density), $\zeta = 30$ (frequency), $c = 5$ (sound speed), and $(\alpha(\zeta), \beta(\zeta)) = (1, \frac{1}{30})$ (isolating material in $\partial D$).
Microphone positions:
$z_1 = (0.1, 0.5)$, $z_2 = (0.5, 0.5)$, $z_3 = (0.9, 0.5)$.
Measurement:
$y_d = G_h(u_{\mathrm{exact}}) + \tilde{\eta} \in \mathbb{C}^3$, where $\tilde{\eta}_i = \tilde{\eta}_{i,\mathrm{re}} + \tilde{\eta}_{i,\mathrm{im}} \cdot i$ with i.i.d. $\tilde{\eta}_{i,\mathrm{re}}, \tilde{\eta}_{i,\mathrm{im}} \sim N(0, 0.05)$ for $i = 1, 2, 3$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 32 / 64
Sparse-Bayesian Approach - Helmholtz Equation
Experimental Parameters - Prior Measure:
\[
\sum_{i=1}^{k(\omega)} (\alpha^k_{i,\mathrm{re}} + \alpha^k_{i,\mathrm{im}} \cdot i) \cdot \delta_{x^k_i}
\]
$k \sim \mathrm{Pois}(2)$.
$\alpha^k_{i,\mathrm{re}}, \alpha^k_{i,\mathrm{im}}$ i.i.d. $N(10, 1)$-distributed, for $i = 1, \dots, k$.
$x^k_i$ i.i.d. $\mathrm{Unif}(D_\kappa)$-distributed with $D_\kappa = [0.1, 0.9] \times [0.6, 0.9]$.
We assume the independence of all random variables.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 33 / 64
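A draw from this prior is straightforward to simulate. A minimal sketch (NumPy; the function name is illustrative, and $N(10, 1)$ is read as mean 10, standard deviation 1):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prior():
    """One draw: k ~ Pois(2) point sources, complex amplitudes with
    re/im parts ~ N(10, 1), positions uniform in D_kappa = [0.1,0.9] x [0.6,0.9]."""
    k = rng.poisson(2)
    amplitudes = rng.normal(10.0, 1.0, size=k) + 1j * rng.normal(10.0, 1.0, size=k)
    positions = np.column_stack([rng.uniform(0.1, 0.9, size=k),
                                 rng.uniform(0.6, 0.9, size=k)])
    return amplitudes, positions

amps, xs = sample_prior()   # one source configuration u = sum_i a_i * delta_{x_i}
```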
Sparse-Bayesian Approach - Helmholtz Equation
Experimental Parameters - SMC Algorithm:
Non-uniform tempering steps:
β0 = 0, β1 = 0.03, β2 = 1.
Number of approximating Diracs:
N = 107
Redraw step parameters:
$m_\alpha = 10 + 10i$, $\gamma_\alpha = 0.4$, $\gamma_x = 0.1$,
$\xi_i = \xi_{i,\mathrm{re}} + \xi_{i,\mathrm{im}} \cdot i$ with i.i.d. $\xi_{i,\mathrm{re}}, \xi_{i,\mathrm{im}} \sim N(0, 1)$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 34 / 64
Sparse-Bayesian Approach - Helmholtz Equation
a) Smoothed posterior distribution $\mathbb{P}_{u|y_d}(\cdot)$ restricted to the source positions.
b) Smoothed distribution, restricted to the source positions, of $\mathbb{P}_{u|y_d}(\cdot \,|\, \text{at least one sound source in } [0.2, 0.3] \times [0.7, 0.8])$.
c) Smoothed distribution, restricted to the source positions, of $\mathbb{P}_{u|y_d}(\cdot \,|\, \text{at least one sound source in } [0.7, 0.8] \times [0.7, 0.8])$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 35 / 64
Sparse-Bayesian Approach - Helmholtz Equation
a) Smoothed distribution of $\mathbb{P}_{u|y_d}(\cdot \,|\, k = 2)$ in $[0, 1]^2$.
b) Smoothed distribution of $\mathbb{P}_{u|y_d}(\cdot \,|\, k = 3)$ in $[0, 1]^2$.
We observe significant differences!
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 36 / 64
Sparse-Bayesian Approach - Helmholtz Equation
No. k | $P_{\mu_J^N}(k)$ | $x^{(n_k^{\mathrm{MAP}})}$ | $\alpha^{(n_k^{\mathrm{MAP}})}$ | $w^{(n_k^{\mathrm{MAP}})}$
2 | 77.8 % | (0.37, 0.62), (0.82, 0.69) | 10.04 + 11.03i, 8.317 + 9.77i | $1.09 \cdot 10^{-6}$
3 | 21.9 % | (0.17, 0.85), (0.90, 0.72), (0.46, 0.75) | 7.98 + 9.44i, 8.13 + 10.04i, 8.81 + 9.49i | $1.11 \cdot 10^{-6}$
Empirical MAP Estimator
This example shows that the empirical MAP estimator is not always the best choice!
Two sources are most likely.
The global empirical MAP estimator, however, leads to three sources!
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 37 / 64
Sparse-Bayesian Approach - Helmholtz Equation
Top left figure: True solution |Huexact,h|.
Top right figure: Posterior mean EPu|yd
[|Hu|]
Lower left/right figure: Conditioned MAP estimator for k = 3 & 2.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 38 / 64
Sparse-Bayesian Approach - Wave Equation
Optimal control and Bayesian inversion for the wave equation
with BV −functions in time
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 39 / 64
Sparse-Bayesian Approach - Wave Equation
Optimal Control - Wave Equation - BV-functions in time
Let us introduce the optimal control problem $(P^\Pi)$ for the linear wave equation with homogeneous Dirichlet boundary conditions:
\[
(P^\Pi) \quad \min_{u \in BV(0,T)^m} \Big\{ \frac{\beta}{2} \big\| \Pi(y_u(\vec{t}\,)) - y_d \big\|_{\mathbb{R}^\ell}^2 + \sum_{j=1}^{m} \alpha_j \| D_t u_j \|_{M(0,T)} \Big\} =: J^\Pi(y, u)
\]
subject to the weak solution of
\[
\begin{aligned}
\partial_{tt} y_u - \Delta y_u &= \sum_{j=1}^{m} u_j g_j && \text{in } (0, T) \times \Omega, \\
y_u &= 0 && \text{on } (0, T) \times \partial\Omega, \\
(y_u, \partial_t y_u) &= (y_0, y_1) && \text{in } \{0\} \times \Omega.
\end{aligned}
\]
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 40 / 64
Sparse-Bayesian Approach - Wave Equation
Optimal Control - Wave Equation - BV-functions in time
$y_u(\vec{t}\,) := (y_u(t_i))_{i=1}^{r} \subset (H_0^1)^r$ with $0 < t_1 < \cdots < t_r \le T$.
$\Pi : L^2(\Omega)^r \to \mathbb{R}^\ell$ linear and bounded.
$(g_j)_{j=1}^{m} \subset L^\infty(\Omega) \setminus \{0\}$ with pairwise disjoint supports.
Assume that $(\Pi(y_{g_j}(\vec{t}\,)))_{j=1}^{m}$ are linearly independent in $\mathbb{R}^\ell$.
Example for $\Pi$ - patch measurements:
\[
\Pi_2 : L^2(\Omega)^r \to \mathbb{R}^{k \cdot r}, \qquad
(\varphi_i)_{i=1}^{r} \mapsto \bigg( \Big( \frac{1}{|\Omega_l|} \int_{\Omega_l} \varphi_i \, dx \Big)_{l=1}^{k} \bigg)_{i=1}^{r}.
\]
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 41 / 64
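On a mesh with equal-area cells, $\Pi_2$ reduces to taking cell averages over each patch. A minimal sketch (the array layout and function name are illustrative; `phi` holds the $r$ state snapshots as rows, one per observation time):

```python
import numpy as np

def patch_measurements(phi, patch_masks):
    """Pi_2 on an equal-area cell decomposition: for each snapshot phi_i
    (i = 1..r) and each patch Omega_l (l = 1..k), return the cell average,
    which approximates (1/|Omega_l|) * integral over Omega_l of phi_i dx."""
    return np.array([phi_i[mask].mean()
                     for phi_i in phi            # observation times i = 1..r
                     for mask in patch_masks])   # patches l = 1..k

# r = 2 snapshots on 10 cells, k = 2 patches -> measurement vector in R^{k*r} = R^4
phi = np.vstack([np.arange(10.0), np.ones(10)])
masks = np.array([[True] * 5 + [False] * 5,
                  [False] * 5 + [True] * 5])
y = patch_measurements(phi, masks)
```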
Sparse-Bayesian Approach - Wave Equation
Optimal Control - Solution
Problem $(P^\Pi)$ has a solution in $BV(I)^m$.
Numerics: regularization of $(P^\Pi)$:
\[
(P^\Pi_\gamma) \quad \min_{u \in H^1(0,T)^m} J^\Pi(y, u) + \frac{\gamma}{2} \sum_{j=1}^{m} \| \partial_t u_j \|_{L^2(0,T)}^2 =: J^\Pi_\gamma(y, u)
\]
Optimality conditions, a discussion of sparsity, and the asymptotic behavior of $(P^\Pi)$ and $(P^\Pi_\gamma)$ can be found in my PhD thesis.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 42 / 64
Sparse-Bayesian Approach - Wave Equation
Experimental Parameters - Optimal Control:
Wave equation parameters:
$D = [0, 1]^2$, $T = 1$, $(y_0, y_1) = (0, 0)$, $\tau = 2^{-8}$, $h \approx 2^{-6}$.
Optimal control parameters:
$m = 1$, $g(x, y) = 500 \cdot 1_{B_{0.05}(x_0)}$, $x_0 = (0.5, 0.5)$,
$\vec{t} = (0.0977, 0.1992, 0.2969, 0.3477, 0.3984, 1)$.
Operator $\Pi = \Pi_2$:
\[
\Pi_2 : L^2(\Omega)^6 \to \mathbb{R}^{30}, \qquad
(\varphi_i)_{i=1}^{6} \mapsto \bigg( \Big( \frac{1}{|\Omega_l|} \int_{\Omega_l} \varphi_i \, dx \Big)_{l=1}^{5} \bigg)_{i=1}^{6}
\]
$\Omega_1 = (1/9, 2/9) \times (1/9, 2/9)$, $\Omega_2 = (1/9, 2/9) \times (7/9, 8/9)$, $\Omega_3 = (7/9, 8/9) \times (1/9, 2/9)$, $\Omega_4 = (7/9, 8/9) \times (7/9, 8/9)$, $\Omega_5 = (4/9, 5/9) \times (4/9, 5/9)$.
Desired value:
$y_d = \Pi(y_{\check{u}}) + \eta$ with $\check{u} = 5 \cdot 1_{(0.1328, 0.2656)}$ and $\eta \sim N(0, \mathrm{id}_{\mathbb{R}^{30}})$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 43 / 64
Sparse-Bayesian Approach - Wave Equation
By a BV-path-following method, we solved the corresponding problems $(\tilde{P}^\Pi_\gamma)$ for $\gamma \to 0$ with a semi-smooth Newton method.
From left to right, we see the optimal controls for the TV-regularization parameters $\alpha = 0.6$, $\alpha = 0.18$, and $\alpha = 0.006$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 44 / 64
Sparse-Bayesian Approach - Wave Equation
Sparse-Bayesian Approach to (PΠ):
Consider the Bayesian inverse problem
\[
y_d = \Pi(y_u(\vec{t}\,)) + \eta,
\]
with noise $\eta \sim N(0, \frac{1}{\beta} \mathrm{id}_{\mathbb{R}^\ell})$ and $\Sigma := \frac{1}{\beta} \mathrm{id}_{\mathbb{R}^\ell}$.
We consider a prior of the form
\[
\bigg( \sum_{j_i=1}^{k_i} \alpha^{k_i}_{j_i} \mathbf{1}_{(t^{k_i}_{j_i}, T]} + c_i \bigg)_{i=1}^{m}
\]
where:
1) $k_i$ is an $\mathbb{N}$-valued random variable for $i = 1, \dots, m$,
2) $\alpha^{k_i}_{j_i}$ is an $\mathbb{R}$-valued random variable for $i = 1, \dots, m$ and $j_i = 1, \dots, k_i$,
3) $t^{k_i}_{j_i}$ is a $(0, T)$-valued random variable for $i = 1, \dots, m$ and $j_i = 1, \dots, k_i$,
4) $c_i$ is an $\mathbb{R}$-valued random variable for $i = 1, \dots, m$.
This kind of prior can have dense support in $BV(0, T)^m$ with respect to the strict BV topology.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 45 / 64
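A draw from this prior (for m = 1) is a piecewise constant function and can be simulated directly. A minimal sketch (NumPy; the Pois(2)/Unif/Gaussian hyperparameters follow the experiment slide later in this section, and $N(m, s)$ is read as mean/standard deviation):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_bv_prior(T=1.0):
    """One draw u(t) = sum_j alpha_j * 1_{(t_j, T]}(t) + c:
    k ~ Pois(2) jumps, t_j ~ Unif(0, T), alpha_j ~ N(0, 5), c ~ N(0, 0.1)."""
    k = rng.poisson(2)
    t_jump = rng.uniform(0.0, T, size=k)
    alpha = rng.normal(0.0, 5.0, size=k)
    c = rng.normal(0.0, 0.1)

    def u(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        # sum of jump heights alpha_j over all active indicators 1_{(t_j, T]}
        return (alpha[None, :] * (t[:, None] > t_jump[None, :])).sum(axis=1) + c

    return u, t_jump, alpha, c

u, t_jump, alpha, c = sample_bv_prior()
```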
Sparse-Bayesian Approach - Wave Equation
Posterior
\[
d\mathbb{P}_{u|y_d}(u) = \frac{1}{\Lambda_{y_d}} \exp\Big( -\frac{\beta}{2} \big\| \Pi(y_u(\vec{t}\,)) - y_d \big\|_{\mathbb{R}^\ell}^2 \Big)\, d\mathbb{P}_u(u).
\]
Posterior - Stability Results
Under the same assumptions as for the Helmholtz equation, it holds that:
\[
d_{\mathrm{Hell}}(\mathbb{P}_{u|y_1}, \mathbb{P}_{u|y_2}) \le C \|y_1 - y_2\|_\Sigma, \quad \forall y_1, y_2 \in B_r(0).
\]
\[
\| \mathbb{E}_{\mathbb{P}_{u|y_1}}(f) - \mathbb{E}_{\mathbb{P}_{u|y_2}}(f) \|_X \le C \|y_1 - y_2\|_\Sigma, \quad \forall y_1, y_2 \in B_r(0).
\]
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 46 / 64
Sparse-Bayesian Approach - Wave Equation
Posterior - Finite Element Approximation
We approximate the weak solution of the wave equation by a finite element method ⇒
\[
\| y_{u,\tau,h} - y_u \|_{C(0,T;L^2(\Omega))} \in O(\tau^2 + h^2),
\]
with $(g, y_0, y_1) \in C^2(\Omega) \times C_0^3(\Omega) \times C_0^2(\Omega)$.
Under the same assumptions as for the Helmholtz equation, it holds that:
\[
d_{\mathrm{Hell}}(\mathbb{P}_{u|y_d,\tau,h}, \mathbb{P}_{u|y_d}) \in O(\tau^2 + h^2).
\]
\[
\| \mathbb{E}_{\mathbb{P}_{u|y_d,\tau,h}}(f) - \mathbb{E}_{\mathbb{P}_{u|y_d}}(f) \|_X \in O(\tau^2 + h^2).
\]
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 47 / 64
Sparse-Bayesian Approach - Wave Equation
Experimental Parameters - Sparse-Bayesian Approach:
Parameters of the optimal control problem:
All parameters with respect to the wave equation, the operator $\Pi = \Pi_2$, $g$, $\vec{t}$, and $y_d$ are the same.
Prior parameters:
\[
\sum_{j_1=1}^{k_1} \alpha^{k_1}_{j_1} \mathbf{1}_{(t^{k_1}_{j_1}, T]} + c_1
\]
$k_1 \sim \mathrm{Pois}(2)$,
$t^{k_1}_{j_1} \sim \mathrm{Unif}(0, T)$ i.i.d.,
$\alpha^{k_1}_{j_1} \sim N(0, 5)$ i.i.d.,
$c_1 \sim N(0, 0.1)$ (all random variables are independent of each other).
Algorithm:
As for the Helmholtz problem, we used an SMC method with an additional invariant Metropolis-Hastings kernel and obtained similar convergence results.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 48 / 64
Sparse-Bayesian Approach - Wave Equation
N = 5000.
The left figure represents the support of the approximated posterior.
In the right figure, we compare the empirical MAP estimator with the optimal control for the TV-regularization parameter $\alpha = 0.18$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 49 / 64
Sparse-Bayesian Approach - Wave Equation
Mean:
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 50 / 64
Sparse-Bayesian Approach - Wave Equation
Experimental observations on the TV-Gaussian Prior:
Example 3 with MAP (noise $\Sigma = \beta \cdot \mathrm{Id}_{\mathbb{R}^{\tilde{k}r}}$, $C = (-\Delta)^{-1}$):
\[
\min_{u \in (H_0^1(0,T))^m} \frac{\beta}{2} \| G(u) - y_d \|_\Sigma^2 + \sum_{i=1}^{m} \alpha_i \| \partial_t u_i \|_{M(0,T)} + \frac{1}{2\lambda} \| u \|_{(H_0^1(0,T))^m}^2.
\]
Experiment parameters: $m = 1$, $\alpha = 500$, $\lambda = 1$.
Splitting pCN algorithm:
1) MCMC for the Gaussian prior with a TV-dependent rejection function (10 steps).
2) A fitting-term-dependent rejection function is used in a second step.
We used a truncated Karhunen-Loève expansion ⇒ it is computationally expensive for a high truncation index.
Strong dependence on the initial value (not needed in the SMC).
Weak predictions with randomized initial values: (9) actually used $10^6$ to $10^8$ samples for their experiments!
Our initial value: we took the 'true function', projected it on the mesh, and perturbed it with $N(0, \beta)$ noise on each node (no information in (9)).
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 51 / 64
Sparse-Bayesian Approach - Wave Equation
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 52 / 64
Outlook
Identification of moving acoustic sound sources with prior measures that have support in $L^2(0, T; M(D))$.
Analytical expression of the MAP estimator for our Sparse-Bayesian
approach.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 53 / 64
Literature
(1) S.L. Cotter, G.O. Roberts, A.M. Stuart, D. White. MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster. Statist. Sci., 28(3):424-446, 2013.
(2) O. Coutant. Bayesian inversion. Lecture notes, Joint Inversion in Geophysics summer school, Barcelonnette (France), 2015.
(3) A. Bermudez, P. Gamallo, R. Rodriguez. Finite element methods in local active control of sound. SIAM J. Control Optim., 43(2):437-465, 2004.
(4) M. Dashti and A. M. Stuart. The Bayesian Approach to Inverse Problems. ArXiv e-prints, February 2013.
(5) R. Dautray and J. L. Lions. Mathematical analysis and numerical methods for science and technology. Springer-Verlag, Berlin, 1984-1985.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 54 / 64
Literature
(6) K. Pieper, B. Tang, P. Trautmann, and D. Walter. Inverse point source location with the Helmholtz equation. Not published yet, TBA.
(7) A. Schatz. An observation concerning Ritz-Galerkin methods with indefinite bilinear forms. Mathematics of Computation, 28(128):959-962, 1974.
(8) A. M. Stuart. Inverse problems: A Bayesian perspective. Acta Numerica, 19:451-559, doi:10.1017/S0962492910000061, 2010.
(9) Z. Yao, Z. Hu, J. Li. A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations. Inverse Problems, 34, 2018.
(10) A. A. Zlotnik. Convergence rate estimates of finite-element methods for second-order hyperbolic equations. In: Guri I. Marchuk (ed.), Numerical Methods and Applications, p. 153 et seq., CRC Press, 1994.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 55 / 64
Thank you.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 56 / 64
Appendix - Helmholtz Equation - Prior
Sparse prior - Definition
\[
\mathbb{P}_u(\cdot) = \sum_{n \in \mathbb{N}_0} q(n)\, \mu_n^0(\cdot).
\]
Let $(q(n))_{n \in \mathbb{N}_0}$ be a sequence in $[0, 1]$ with $\sum_{n \in \mathbb{N}_0} q(n) = 1$.
We can identify $u = \sum_{j=1}^{k} \alpha_j^k \delta_{x_j^k} \in D$ with $(\alpha_j^k, x_j^k)_{j=1}^{k} \in (\mathbb{C} \times D_\kappa)^k$.
W.l.o.g. assume $0 \in D_\kappa$ with $\mathrm{dist}(D_\kappa, \partial D \cup \vec{Z}) > \kappa > 0$.
Replace $D$ with $\ell^1(\mathbb{C}, D) \subset \ell^1(\mathbb{C}, \mathbb{R}^d) =: \ell^1$ with norm
\[
\|(\alpha_n, x_n)_{n \in \mathbb{N}_0}\|_{\ell^1} := \sum_{n \in \mathbb{N}_0} |\alpha_n|_{\mathbb{C}} + |x_n|_{\mathbb{R}^d}.
\]
Define the probability measure $\mu_n^0$ on $\ell^1_\kappa := \ell^1(\mathbb{C}, D_\kappa)$ (open) with the Borel $\sigma$-algebra and support in
$\ell^1_{n,\kappa} := \{\text{sequences in } \ell^1_\kappa \text{ that are zero after index } n\}$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 57 / 64
Appendix - Helmholtz Equation - SMC
Sequential Monte Carlo Method - Redraw Step - Metropolis-Hastings
A sample $u$ is characterized by
$k$ = number of sources, $\alpha^k_i$ = amplitudes of sources, $x^k_i$ = positions of sources.
Redraw sample $u$ by $u'$ with the following rule:
1. $k' = k$,
2. $x'^k_i = x^k_i + \gamma_x \eta_i$ if $x^k_i + \gamma_x \eta_i \in D_\kappa$, otherwise $x'^k_i = x^k_i$,
3. $(\alpha'^k_i)_{i=1}^{k} = (1-\gamma_\alpha^2)^{1/2}\big((\alpha^k_i)_{i=1}^{k} - m_\alpha\big) + m_\alpha + \gamma_\alpha \xi$.
All random variables are independent of each other,
with $\gamma_x \ge 0$, $\gamma_\alpha \in [0,1]$, $m_\alpha \in \mathbb{C}^k$, $\xi \sim N(0, \Gamma, \mathbb{C})$, and $\eta \sim N(0, \operatorname{id}_{\mathbb{R}^k})$.
Accept $u'$ with probability $\min\{1, \exp(\beta_j(\Phi(u; y_d) - \Phi(u'; y_d)))\}$.
This redraw step is $\mu_j$-invariant!
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 58 / 64
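A minimal sketch of this redraw step, assuming a box-shaped $D_\kappa$, i.i.d. standard complex Gaussian $\xi$ (i.e. $\Gamma = \operatorname{id}$), and a toy misfit $\Phi$; the function names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def redraw(x, alpha, phi, beta_j, gamma_x=0.05, gamma_a=0.3,
           m_alpha=0.0, box=(0.0, 1.0)):
    """One Metropolis-Hastings redraw step: k is kept fixed, each position
    gets a Gaussian move (kept only if it stays inside D_kappa), amplitudes
    get a pCN-type move, and the proposal u' is accepted with probability
    min{1, exp(beta_j * (Phi(u) - Phi(u')))}."""
    # 2. propose new positions, reverting sources that leave the box
    x_new = x + gamma_x * rng.standard_normal(x.shape)
    inside = np.all((x_new >= box[0]) & (x_new <= box[1]), axis=1)
    x_new[~inside] = x[~inside]
    # 3. pCN move for the complex amplitudes
    xi = (rng.standard_normal(alpha.shape)
          + 1j * rng.standard_normal(alpha.shape)) / np.sqrt(2)
    a_new = np.sqrt(1 - gamma_a**2) * (alpha - m_alpha) + m_alpha + gamma_a * xi
    # accept/reject on the tempered misfit
    accept_prob = min(1.0, np.exp(beta_j * (phi(x, alpha) - phi(x_new, a_new))))
    if rng.uniform() < accept_prob:
        return x_new, a_new
    return x, alpha

# toy misfit: sources should sit near the box center with unit amplitude
phi = lambda x, a: np.sum((x - 0.5)**2) + np.sum(np.abs(a - 1.0)**2)
x = rng.uniform(0, 1, size=(3, 2))
a = np.ones(3, dtype=complex)
for _ in range(200):
    x, a = redraw(x, a, phi, beta_j=5.0)
```

The dimension-preserving move (rule 1) keeps $k$ fixed; birth/death moves across different $k$ are handled elsewhere in the sampler.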
Appendix
The function $G : (X, \|\cdot\|_X) \to \mathbb{R}^m$ fulfills the following assumptions in the examples above:
Assumptions on G:
i) $(X, \|\cdot\|_X)$ is a separable Banach space.
ii) For every $\varepsilon > 0$ there is $M = M(\varepsilon) \in \mathbb{R}$ such that, for all $u \in X$,
$$\|G(u)\|_\Sigma \le \exp(\varepsilon \|u\|_X^2 + M),$$
iii) for every $r > 0$, there is $K = K(r) > 0$ such that, for all $u_1, u_2 \in X$ with $\max\{\|u_1\|_X, \|u_2\|_X\} < r$,
$$\|G(u_1) - G(u_2)\|_\Sigma \le K \|u_1 - u_2\|_X.$$
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 59 / 64
Appendix
Why not choose $X$ as a non-separable Banach space?
For a non-separable Banach space $X$, it is not clear what a "natural" $\sigma$-algebra on $X$ is.
One candidate: the cylindrical $\sigma$-algebra = the smallest $\sigma$-algebra such that all $\ell \in X^*$ are measurable.
In separable Banach spaces it holds: Borel $\sigma$-algebra = cylindrical $\sigma$-algebra.
In general, this does not hold in non-separable Banach spaces.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 60 / 64
Appendix
Let $B$ be a separable Banach space.
In general, we lose the following properties for a non-separable Banach space $X$:
1) Fernique's Theorem: Let $\mu$ be a Gaussian measure on $B$ $\Rightarrow$ $\exists \alpha > 0$ such that
$$\int_B \exp(\alpha \|u\|^2)\,d\mu(u) < \infty.$$
2) Fernique's Theorem implies:
Every Gaussian measure $\mu$ admits a compact covariance operator (2nd moment $\int_B \|u\|^2\,d\mu(u) < \infty$).
3) In a separable Hilbert space $B$, one can characterize $\mu$ completely by its mean $m$ and covariance operator $C$, i.e. $\mu = N(m, C)$ with $m \in H_C$ and $C : B \to B$.
$C$ is a trace-class symmetric operator.
Cameron-Martin space $H_C = \{u \in B \mid C^{1/2} x = u,\ x \in B\}$ of $C$.
$\operatorname{Tr}(C) = \int_B \|u\|^2\,d\mu(u)$ (for mean-zero $\mu$).
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 61 / 64
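A finite-dimensional sanity check of the second-moment identity: for a centered Gaussian $N(0, C)$ on $\mathbb{R}^3$ (trivially trace-class), $\operatorname{Tr}(C)$ should match a Monte Carlo estimate of $\int \|u\|^2\,d\mu(u)$. The covariance below is a random placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)

# A centered Gaussian N(0, C) on R^3 with a symmetric positive
# semi-definite covariance C (illustrative choice).
A = rng.standard_normal((3, 3))
C = A @ A.T
L = np.linalg.cholesky(C + 1e-12 * np.eye(3))

# Monte Carlo estimate of the second moment E||u||^2 = int ||u||^2 dmu(u)
u = (L @ rng.standard_normal((3, 200_000))).T
second_moment = np.mean(np.sum(u**2, axis=1))

print(np.trace(C), second_moment)  # the two agree up to Monte Carlo error
```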
Appendix
Definition (Maximum a Posteriori Estimator)
Let $z_\delta = \arg\max_{z \in H} J^\delta(z)$, with $J^\delta(z) := P_{u|y_d}(B_\delta(z))$, where $B_\delta(z) \subset H$ is a ball centered at $z \in H$ with radius $\delta > 0$. Any point $\tilde{z} \in H$ satisfying $\lim_{\delta \to 0} J^\delta(\tilde{z})/J^\delta(z_\delta) = 1$ is a MAP estimator for the posterior measure $P_{u|y_d}$ on $H$.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 62 / 64
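The small-ball limit can be made concrete in one dimension: for a standard Gaussian posterior on $\mathbb{R}$, $J^\delta(z)$ is the mass of $[z-\delta, z+\delta]$, and the ratio $J^\delta(z)/J^\delta(z_\delta)$ tends to $1$ only at the mode $z = 0$, which is therefore the MAP estimator. For any other $z$ the ratio tends to the density ratio instead.

```python
from math import erf, sqrt

# Posterior N(0, 1): J_delta(z) = Phi(z + delta) - Phi(z - delta),
# maximized over z at the mode z_delta = 0. For z = 1 the ratio
# J_delta(1)/J_delta(0) tends to the density ratio exp(-1/2) < 1,
# so z = 1 is not a MAP estimator; only z = 0 attains the limit 1.
Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
J = lambda z, d: Phi(z + d) - Phi(z - d)

for d in (1.0, 0.1, 0.01):
    print(d, J(1.0, d) / J(0.0, d))  # approaches exp(-0.5) ~ 0.6065
```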
Appendix
Definition (Hellinger Distance)
Let $\mu_1, \mu_2, \nu$ be measures on $X$ such that $\mu_1, \mu_2$ have a Radon-Nikodym derivative with respect to $\nu$. Then the Hellinger distance between $\mu_1$ and $\mu_2$ is defined as
$$d_{\mathrm{Hell}}(\mu_1, \mu_2) = \left( \frac{1}{2} \int_X \left( \left(\frac{d\mu_1}{d\nu}\right)^{1/2} - \left(\frac{d\mu_2}{d\nu}\right)^{1/2} \right)^2 d\nu \right)^{1/2}.$$
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 63 / 64
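A discretized sketch of this distance, taking $\nu$ as Lebesgue measure on a grid and two unit-variance Gaussians as test densities, for which the closed form $d_{\mathrm{Hell}}^2 = 1 - \exp(-(m_1-m_2)^2/8)$ is known.

```python
import numpy as np

def hellinger(p1, p2, dx):
    """Hellinger distance between two densities p1, p2 w.r.t. a common
    dominating measure nu, here Lebesgue measure discretized on a grid:
    d_H^2 = (1/2) * integral (sqrt(p1) - sqrt(p2))^2 dnu."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p1) - np.sqrt(p2))**2) * dx)

x = np.linspace(-10, 10, 20_001)
dx = x[1] - x[0]
gauss = lambda m, s: np.exp(-(x - m)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

# N(0,1) vs N(1,1): closed form d_H = sqrt(1 - exp(-1/8))
d_num = hellinger(gauss(0.0, 1.0), gauss(1.0, 1.0), dx)
d_exact = np.sqrt(1 - np.exp(-1.0 / 8.0))
print(d_num, d_exact)
```

With this normalization $d_{\mathrm{Hell}}$ takes values in $[0, 1]$, with $0$ for equal measures and $1$ for mutually singular ones.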
Appendix
Theorem
Let the prior given $k \in \mathbb{N}_0$ sources satisfy
$$\mu^0(du|k) = \mu^0(d\alpha, dx|k) = \mu^0_\alpha(d\alpha|k)\,\mu^0_x(dx|k), \qquad \mu^0_x(\cdot|k) = U(D_\kappa^k), \quad \mu^0_\alpha(\cdot|k) = N(m_\alpha, \Gamma, \mathbb{C}).$$
Let $q(u, \cdot)$ be the proposal distribution associated with the redraw step and define the acceptance probability as follows
$$a_j(u, u') = \min\Big\{1, \exp\Big(\tfrac{j}{J}\big(\Psi(u) - \Psi(u')\big)\Big)\Big\}.$$
Then the redraw algorithm is $\mu^y_j$-invariant.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 64 / 64
Pulmonary drug delivery system M.pharm -2nd sem P'ceutics
 
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
 
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSpermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
 

Sparse-Bayesian Approach to Inverse Problems with Partial Differential Equations, KFU 2018

  • 1. Sparse-Bayesian Approach to Inverse Problems with Partial Differential Equations Topics: Bayesian identification of sound sources with the Helmholtz equation. Optimal control and Bayesian inversion for the wave equation with BV-functions in time. 29.08.18 (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 1 / 64
  • 2. Contents
  1 Motivation
  2 Bayesian Inversion
  3 Sparse-Bayesian Approach - Helmholtz Equation
  4 Sparse-Bayesian Approach - Wave Equation
  5 Outlook
  • 3.–4. Motivation
  Bayesian inversion (Coutant, 2)
  Models from physics, economics, biology, medicine, engineering, and other fields can contain inherent errors!
  In general, parameters cannot be measured directly, and other, perhaps less meaningful, data have to be used for the validation of the required model parameters!
  The data can contain inherent measurement errors!
  How do we deal with that?
  How do we infer the transmission of error from the data to the parameters?
  Thinking probabilistically enables us to overcome these difficulties!
  • 5. Motivation
  Bayesian inversion (Coutant, 2)
  What if we have a priori knowledge about the parameters and the noise in the data?
  The Bayesian approach provides a formalism:
  to introduce a priori knowledge on the parameters (admissible set);
  to define a description of data and model errors;
  to deal with a non-deterministic or non-exact model.
  • 6.–10. Bayesian Inversion in Finite Dimension
  Consider the following problem:
  (P)  y_d = G(u) + η,  u ∈ R^n,  y_d, η ∈ R^J,
  where y_d is the observed data, G(u) the unknown model output, and η the noise.
  Example:
  1) u := some unknown parameters.
  2) G(u) := some quantifiable properties depending on u (the model!)
  3) y_d := collected data.
  4) η := measurement error.
  If the parameters u or the noise η are random variables with values in an infinite-dimensional space, we are in the infinite-dimensional Bayesian inversion setting.
  • 11.–16. Bayesian Inversion in Finite Dimension
  Assumptions
  1) Prior distribution density: P(u ∈ A) = ∫_A p_0(u) du
     ≈ admissible set with a specific distribution.
  2) Noise independence: u ⊥ η and P(η ∈ A) = ∫_A p(η) dη.
  3) Likelihood: (1), (2) and (P) imply P(y_d ∈ A | u) = ∫_A p(y_d − G(u)) dy_d.
  4) (u, y_d) as a random variable: (1) and (3) imply
     P((u, y_d) ∈ A × B) = ∫_A ∫_B p(y_d − G(u)) p_0(u) dy_d du.
  • 17.–25. Bayesian Inversion in Finite Dimension
  Bayes' Theorem:
  Bayes: P(u | y_d) = P(y_d | u) P(u) / P(y_d).
  5) u | y_d = solution of the inverse problem (P) given data y_d (posterior):
     P(u ∈ A | y_d) = (1/Z) ∫_A p(y_d − G(u)) p_0(u) du
  with Z := ∫_{R^n} p(y_d − G(u)) p_0(u) du.
  Negative log-likelihood:
     P(u ∈ A | y_d) = (1/Z) ∫_A exp(−Φ(u; y_d)) p_0(u) du
  with Φ(u; y_d) := −log(p(y_d − G(u))) (the potential).
  The factor exp(−Φ(u; y_d)) reweights the prior distribution and concentrates it on the most important areas in the support of the prior.
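The reweighting of the prior by exp(−Φ(u; y_d)) can be illustrated with a self-normalized importance-sampling sketch in one dimension. All concrete choices below (the forward map G(u) = u², the noise level ε = 0.1, the standard normal prior, the synthetic datum) are hypothetical and only serve to show the mechanism: prior samples are weighted by exp(−Φ), which concentrates the mass where the data misfit is small.

```python
import math
import random

random.seed(0)

# Hypothetical setup: G(u) = u^2, Gaussian noise with std eps, prior N(0, 1).
def G(u):
    return u * u

eps = 0.1
y_d = G(0.8) + 0.05          # synthetic datum, compatible with u = +/-0.8

def Phi(u):
    # negative log-likelihood (potential), up to an additive constant
    return (y_d - G(u)) ** 2 / (2.0 * eps ** 2)

# Draw prior samples and reweight them by exp(-Phi): weights * prior ~ posterior.
samples = [random.gauss(0.0, 1.0) for _ in range(20000)]
weights = [math.exp(-Phi(u)) for u in samples]
Z = sum(weights)                                   # estimates the normalization constant
post_mean_abs = sum(w * abs(u) for u, w in zip(samples, weights)) / Z
```

Because G is even, the posterior has two symmetric modes near ±√y_d, so the mean of |u| is the informative summary here.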
  • 26.–31. Bayesian Inversion: Examples and Optimal Control
  Finite-dimensional example - Gaussian prior:
  Example 1:
  Prior: u ∼ N(µ, σ²)
  Noise: η ∼ N(0, ε²)
  y_d = G(u) + η with G : R → R.
  dP_{u|y_d}(u) ∝ exp(−(1/(2ε²)) |G(u) − y_d|²) dP_u(u).
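When the forward map happens to be linear, Example 1 has a closed-form Gaussian posterior, which gives a convenient sanity check. The sketch below assumes a hypothetical G(u) = a·u with made-up numbers; the conjugate formulas s² = (a²/ε² + 1/σ²)⁻¹ and m = s²(a y_d/ε² + µ/σ²) are compared against a brute-force grid normalization of exp(−Φ) times the prior density.

```python
import math

# Hypothetical linear forward map G(u) = a*u; then the posterior is again Gaussian.
a, eps, mu, sigma, y_d = 2.0, 0.5, 0.0, 1.0, 1.5

s2 = 1.0 / (a ** 2 / eps ** 2 + 1.0 / sigma ** 2)   # posterior variance
m = s2 * (a * y_d / eps ** 2 + mu / sigma ** 2)     # posterior mean

# Cross-check: normalize exp(-Phi) * prior density numerically on a fine grid.
h = 1e-3
grid = [-5.0 + h * i for i in range(10001)]
dens = [math.exp(-(y_d - a * u) ** 2 / (2 * eps ** 2)
                - (u - mu) ** 2 / (2 * sigma ** 2)) for u in grid]
Z = sum(dens) * h
grid_mean = sum(u * d for u, d in zip(grid, dens)) * h / Z
```

The grid mean should reproduce the conjugate-formula mean m = 12/17 ≈ 0.706 for these numbers.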
  • 32.–40. Bayesian Inversion: Examples and Optimal Control
  Infinite-dimensional example - Gaussian prior:
  Example 2 (Dashti, Stuart):
  −∆ : H^1_0(D) ∩ H^2(D) ⊆ L^2(D) → L^2(D), with D ⊂ R^n, n = 1, 2, 3, open, bounded, with C^2-boundary.
  Let (µ_i, ψ_i)_{i=1}^∞ be the eigenvalues and eigenfunctions of
  (−∆)^{−1} : L^2(D) → H^1_0(D) ∩ H^2(D) ↪↪ L^2(D) (compact embedding).
  Prior: u = Σ_{i=1}^∞ N(0, µ_i^α) ψ_i with independent coefficients and α ≥ 1 ⇒ u ∼ N(0, (−∆)^{−α}).
  Realizations of u are almost surely in C(D)!
  Noise: η ∼ N(0, Σ) with covariance matrix Σ ∈ R^{m×m}.
  y_d = G(u) + η with G : H^1([0, 1]^2) → R^m.
  Posterior: dP_{u|y_d}(u) ∝ exp(−Φ(u; y_d)) dP_u(u) with
  Φ(u; y_d) = (1/2) ‖Σ^{−1/2}(G(u) − y_d)‖²_{R^m} =: (1/2) ‖G(u) − y_d‖²_Σ.
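A draw from N(0, (−∆)^{−α}) can be generated through its Karhunen-Loève expansion. As a sketch, take the hypothetical one-dimensional case D = (0, 1) with Dirichlet boundary conditions, where the eigenpairs of (−∆)^{−1} are known explicitly, µ_i = (iπ)^{−2} and ψ_i(x) = √2 sin(iπx), and truncate the series after N terms:

```python
import math
import random

random.seed(1)

# Karhunen-Loeve sketch on D = (0,1) with the Dirichlet Laplacian:
# eigenpairs of (-Laplace)^{-1} are mu_i = 1/(i*pi)^2, psi_i(x) = sqrt(2)*sin(i*pi*x).
alpha, N = 2, 50
mu = [1.0 / (i * math.pi) ** 2 for i in range(1, N + 1)]
xi = [random.gauss(0.0, 1.0) for _ in range(N)]      # iid standard normal coefficients

def u(x):
    # truncated draw from N(0, (-Laplace)^{-alpha}): coefficients have std mu_i^{alpha/2}
    return sum(xi[i] * mu[i] ** (alpha / 2.0)
               * math.sqrt(2.0) * math.sin((i + 1) * math.pi * x)
               for i in range(N))

vals = [u(j / 100.0) for j in range(101)]            # one realization on a grid
```

Larger α damps the high-frequency modes faster, giving smoother realizations; the boundary values vanish because every ψ_i does.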
  • 41.–46. Bayesian Inversion: Examples and Optimal Control
  Infinite-dimensional example - TV-Gaussian prior:
  Example 3 (Yao, Hu, Li):
  Let µ_pr be a Gaussian measure with mean 0, support in H^1(0, T)^m, and covariance operator λ · C_Y with λ > 0.
  Consider the prior measure
  dP_u(u) = (1/Λ_u) exp(− Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)}) dµ_pr(u).
  y_d = G(u) + η with G : H^1(0, T)^m → R^m.
  Posterior measure:
  dP_{u|y_d}(u) = (1/Λ_{y_d}) exp(−(1/2) ‖G(u) − y_d‖²_Σ − Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)}) dµ_pr(u).
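The total-variation term ‖∂_t u_i‖_{M(0,T)} is what promotes piecewise-constant, sparse-in-time reconstructions: for a BV function it is the total variation of u_i, which for a piecewise-constant path is simply the sum of its jump sizes. A minimal discrete sketch (my own illustration, not taken from the slides):

```python
# Discrete total-variation sketch: for u in BV(0,T), ||du/dt||_{M(0,T)} is the total
# variation of u; on a time grid it is approximated by the sum of absolute increments.
def tv(values):
    return sum(abs(b - a) for a, b in zip(values, values[1:]))

# piecewise-constant path with jumps of size 2 and 1: total variation 3
path = [0.0] * 10 + [2.0] * 10 + [1.0] * 10
```

Penalizing tv(path) leaves the value of each plateau free but charges for every jump, which is why the MAP estimates below favor controls with few switching times.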
  • 47.–51. Bayesian Inversion: Examples and Optimal Control
  Denote by E the Cameron-Martin space of a covariance operator C : H → H:
  E := {u = C^{1/2} x | x ∈ H}, with ‖u‖²_C := ⟨C^{−1/2}·, C^{−1/2}·⟩_H.
  Maximum a posteriori estimator (MAP) and optimal control:
  MAP example 1: solve min_{x ∈ R} (1/(2ε²)) |G(x) − y_d|² + (1/(2σ²)) |x|².
  MAP example 2: solve min_{u ∈ E} (1/2) ‖G(u) − y_d‖²_Σ + (1/2) ‖u‖²_E.
  MAP example 3: solve min_{u ∈ E} (1/2) ‖G(u) − y_d‖²_Σ + Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)} + (1/(2λ)) ‖u‖²_E.
  Example: E = H^1_0(0, T)^m,
  C = diag(−∆^{−1}, ..., −∆^{−1}) : L^2(0, T)^m → (H^2(0, T) ∩ H^1_0(0, T))^m ⊂ L^2(0, T)^m.
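MAP estimation turns the Bayesian problem back into a deterministic, Tikhonov-type optimal control problem. For MAP example 1 with a hypothetical linear forward map G(x) = a·x (all numbers below made up), the objective is a strictly convex quadratic with the explicit minimizer x* = a y_d σ² / (a² σ² + ε²), which plain gradient descent recovers:

```python
# MAP-as-optimal-control sketch for MAP example 1 with a hypothetical linear G(x) = a*x:
# minimize |G(x) - y_d|^2 / (2 eps^2) + |x|^2 / (2 sigma^2) by gradient descent.
a, eps, sigma, y_d = 2.0, 0.5, 1.0, 1.5

def grad(x):
    # gradient of the Tikhonov functional: data-misfit term plus prior term
    return a * (a * x - y_d) / eps ** 2 + x / sigma ** 2

x = 0.0
for _ in range(2000):
    x -= 0.05 * grad(x)          # step size below 2/L with L = a^2/eps^2 + 1/sigma^2

x_star = a * y_d * sigma ** 2 / (a ** 2 * sigma ** 2 + eps ** 2)   # closed-form minimizer
```

Note that x* coincides with the posterior mean of the conjugate Gaussian posterior: for Gaussian prior and noise, the MAP estimator and the posterior mean agree.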
  • 52. Sparse-Bayesian Approach - Helmholtz Equation
  Bayesian Identification of Sound Sources with the Helmholtz Equation
  Engel, S., Hafemeyer, H., Münch, C., Schaden, D.
  • 53. Sparse-Bayesian Approach - Helmholtz Equation
  Propagation of acoustic waves:
  (1/c²) ∂_tt y(t, x) − ∆y(t, x) = F(t, x) in D ⊂ R^n,
  y := pressure fluctuation, c := sound speed, F := inner source (loudspeakers).
  Primary noise source acting on a part Γ_N of the boundary ∂D:
  ∂y(t, x)/∂ν = G(t, x) on Γ_N.
  Assume that G and F are harmonic sound sources with the same angular frequency ω, i.e.
  G(t, x) = Re[g(x) e^{−iωt}], F(t, x) = Re[f(x) e^{−iωt}], f(x), g(x) ∈ C,
  f(x), g(x) := amplitudes.
  • 54. Sparse-Bayesian Approach - Helmholtz Equation
  Helmholtz equation ⇒ C-pressure amplitude:
  Solution of the acoustic wave equation: y(t, x) = Re[H(x) e^{−iωt}].
  Helmholtz equation:
  −∆H − (ω/c)² H = f in D,
  ∂_ν H − i (ωρ/γ(ω)) H = 0 on Γ_Z := ∂D \ Γ_N,
  ∂_ν H = g on Γ_N.
  γ(ω) = wall impedance of the wall Γ_Z, ρ := density of the fluid.
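To make the source-to-pressure map concrete, here is a simplified, hypothetical one-dimensional analogue of the Helmholtz source problem: −H'' − k²H = δ_{x_s} on (0, 1) with homogeneous Dirichlet conditions (in place of the impedance/Neumann conditions above). A central finite-difference scheme with a discrete Dirac right-hand side is solved with the Thomas algorithm and compared with the known Green's function sin(k x_<) sin(k(1 − x_>)) / (k sin k):

```python
import math

# Simplified 1D analogue (hypothetical setup):
#   -H'' - k^2 H = delta_{x_s} on (0,1),  H(0) = H(1) = 0,
# central finite differences, tridiagonal solve via the Thomas algorithm.
k, n = 3.0, 1000
h = 1.0 / n
s_idx = n // 2                      # source at x_s = 0.5, placed on a grid point

diag = [2.0 / h ** 2 - k ** 2] * (n - 1)   # main diagonal
off = -1.0 / h ** 2                        # constant off-diagonal
rhs = [0.0] * (n - 1)
rhs[s_idx - 1] = 1.0 / h                   # discrete Dirac delta

# forward elimination
for i in range(1, n - 1):
    w = off / diag[i - 1]
    diag[i] -= w * off
    rhs[i] -= w * rhs[i - 1]
# back substitution
H = [0.0] * (n - 1)
H[-1] = rhs[-1] / diag[-1]
for i in range(n - 3, -1, -1):
    H[i] = (rhs[i] - off * H[i + 1]) / diag[i]

# exact Green's function of this 1D model problem, for comparison
def exact(x, s=0.5):
    lo, hi = min(x, s), max(x, s)
    return math.sin(k * lo) * math.sin(k * (1.0 - hi)) / (k * math.sin(k))

err = max(abs(H[i] - exact((i + 1) * h)) for i in range(n - 1))
```

The point-evaluation structure of the solution operator is exactly what the observation operator below exploits: microphone readings are values of the pressure amplitude generated by Dirac sources.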
  • 55.–65. Sparse-Bayesian Approach - Helmholtz Equation
  Gaussian prior for the Helmholtz problem:
  We want to identify a probability distribution for the positions and amplitudes of an unknown number of sound sources.
  We need a prior for f which models sound sources.
  Problem: sparsity.
  A Gaussian prior of the form N(0, (−∆)^{−α}) is often used in practice.
  Advantage: advanced theory is available for this kind of distribution!
  Disadvantages:
  Sparse sound sources: samples of N(0, (−∆)^{−α}) are not sparse → Markov kernels with rejection sampling, as used in SMC or MH, exhibit high rejection rates (low performance, Cotter et al.).
  Karhunen-Loève expansion: it can be expensive to sample from the prior!
  We need a clear relationship between the number, positions, and amplitudes of the sound sources ⇒ non-Gaussian prior!
  • 66. Sparse-Bayesian Approach - Helmholtz Equation
  Desired prior: f should be a finite linear combination of Dirac delta measures:
  f = Σ_{i=1}^k α_i δ_{x_i} with α_i ∈ C and x_i inside D.
  f = loudspeakers as acoustic monopoles.
  • 67.–74. Sparse-Bayesian Approach - Helmholtz Equation
  Bayesian Inversion - Helmholtz Equation
  y_d := (y_{d,j})_{j=1}^m = G(u) + η ∈ C^m,
  η = η_1 + i · η_2 with η_j ∼ N(0, Σ_j).
  η ∼ CN(0, Γ_η, C_η) with covariance matrix Γ_η = Σ_1 + Σ_2 and relation matrix C_η = Σ_1 − Σ_2.
  Observation operator: G : D → C^m, u ↦ (H_u(z_j))_{j=1}^m,
  where the domain of G is the space of finite linear combinations of Diracs supported inside D, and z_j ∈ D are the measurement points.
  Green's function solves the Helmholtz equation ⇒ assumption on the prior: we expect no sound sources in a small neighborhood of the microphones (H²-regularity; pointwise measurements!)
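The complex-normal noise model can be checked empirically in the scalar case: for η = η₁ + i·η₂ with independent real and imaginary parts of variances s₁ and s₂, the covariance E[η η̄] equals s₁ + s₂ and the relation term E[η η] equals s₁ − s₂, mirroring Γ_η = Σ₁ + Σ₂ and C_η = Σ₁ − Σ₂. A Monte Carlo sketch with hypothetical numbers:

```python
import random

random.seed(2)

# Scalar sketch of the complex-normal noise model: eta = eta1 + i*eta2 with
# independent real and imaginary parts of variances s1 and s2.
s1, s2 = 0.5, 0.2
N = 100000
eta = [complex(random.gauss(0.0, s1 ** 0.5), random.gauss(0.0, s2 ** 0.5))
       for _ in range(N)]

Gamma_emp = sum((z * z.conjugate()).real for z in eta) / N   # ~ s1 + s2
C_emp = sum(z * z for z in eta) / N                           # ~ s1 - s2 (real)
```

When s₁ = s₂ the relation term vanishes and the noise is circularly symmetric; unequal variances make C_η ≠ 0, which is why the likelihood below carries both Γ_η and C_η.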
  • 75. Sparse-Bayesian Approach - Helmholtz Equation
  Sparse prior
  Formally, we consider a prior of the form:
  u = Σ_{j=1}^k α_j^k δ_{x_j^k} (a finite linear combination of Diracs),
  k := random variable with values in N,
  α_j^k = random variable with values in C,
  x_j^k = random variable with values in the Helmholtz domain D.
  • 76. Sparse-Bayesian Approach - Helmholtz Equation
Sparse prior - Example
$k \sim \mathrm{Poi}(E_k)$ with $E_k$ = expected number of sources.
$\alpha_j^k = \alpha_{j1}^k + i\,\alpha_{j2}^k \in \mathbb{C}$ with $\alpha_{jr}^k \overset{iid}{\sim} N(\mu_r, \sigma_r)$, $r = 1, 2$.
$x_j^k \overset{iid}{\sim} \mathrm{Unif}(D_\kappa)$ with $\vec{Z} = (z_i)_{i=1}^m \subset D$ the measurement positions, $D_\kappa \subset D$ open, and $\mathrm{dist}(D_\kappa, \partial D \cup \vec{Z}) > \kappa > 0$.
  • 81. Sparse-Bayesian Approach - Helmholtz Equation
Negative log-likelihood:
$\Phi : \mathcal{D} \times \mathbb{C}^m \to \mathbb{R}$, $(u, y_d) \mapsto \frac{1}{2}\, \| y_d - G(u) \|^2_{\Gamma_\eta, C_\eta}$,
with the augmented quadratic form
$\| y_d - G(u) \|^2_{\Gamma_\eta, C_\eta} := z_1 \begin{pmatrix} \Gamma_\eta & C_\eta \\ \overline{C_\eta} & \overline{\Gamma_\eta} \end{pmatrix}^{-1} z_2$,
where $z_1 := \left( \overline{(y_d - G(u))}^T, (y_d - G(u))^T \right)$ and $z_2 := \begin{pmatrix} y_d - G(u) \\ \overline{y_d - G(u)} \end{pmatrix}$.
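Since $\eta_1$ and $\eta_2$ are independent real Gaussians, the augmented quadratic form on this slide reduces to separate real- and imaginary-part misfits with covariances $\Sigma_1$ and $\Sigma_2$. A minimal numerical check of that identity (the covariances and the residual below are synthetic, not the experiment's values):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3

# Synthetic SPD covariances for the real and imaginary noise parts.
A1, A2 = rng.standard_normal((m, m)), rng.standard_normal((m, m))
S1, S2 = A1 @ A1.T + m * np.eye(m), A2 @ A2.T + m * np.eye(m)

# Covariance and relation matrix of eta = eta1 + i*eta2.
Gamma, Crel = S1 + S2, S1 - S2

r = rng.standard_normal(m) + 1j * rng.standard_normal(m)  # residual y_d - G(u)

# Augmented form: z1 Gamma_aug^{-1} z2 with z2 = (r; conj(r)) and z1 = z2^H.
Gamma_aug = np.block([[Gamma, Crel], [Crel.conj(), Gamma.conj()]])
z2 = np.concatenate([r, r.conj()])
q_aug = (z2.conj() @ np.linalg.solve(Gamma_aug, z2)).real

# Equivalent real form: stack (Re r, Im r) with block-diagonal covariance.
q_real = r.real @ np.linalg.solve(S1, r.real) + r.imag @ np.linalg.solve(S2, r.imag)

Phi = 0.5 * q_real  # negative log-likelihood up to the normalization constant
```

The agreement of `q_aug` and `q_real` follows from the change of variables $(r, \bar r) = T(\operatorname{Re} r, \operatorname{Im} r)$ with $T = \begin{pmatrix} I & iI \\ I & -iI \end{pmatrix}$.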
  • 82. Sparse-Bayesian Approach - Helmholtz Equation
Posterior
Solution of the inverse problem: $u|y_d$ (posterior distribution)
$\mathbb{P}_{u|y_d}(A) = \frac{1}{\Lambda(y_d)} \int_A \exp(-\Phi(u, y_d))\, d\mathbb{P}_u(u)$,
where $A \subset \mathcal{D}$ is measurable and $\Lambda(y_d)$ is the normalization constant $\Lambda(y_d) = \int_{\mathcal{D}} \exp(-\Phi(u, y_d))\, d\mathbb{P}_u(u)$.
  • 83. Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Stability Results
Assume that the prior measure has a 2nd moment. Then for all $r > 0$ there exists $C > 0$ s.t. $d_{Hell}(\mathbb{P}_{u|y_1}, \mathbb{P}_{u|y_2}) \le C \| y_1 - y_2 \|_\Sigma$ for all $y_1, y_2 \in B_r(0)$.
Assume that the prior measure has a 2nd moment. Let $r > 0$ and let $f$ be an $X$-valued function that is square integrable with respect to $\mathbb{P}_{u|y}$ for all $y \in B_r(0) \subset \mathbb{C}^m$. Then there is a $C > 0$ such that $\| \mathbb{E}^{\mathbb{P}_{u|y_1}}(f) - \mathbb{E}^{\mathbb{P}_{u|y_2}}(f) \|_X \le C \| y_1 - y_2 \|_\Sigma$ for all $y_1, y_2 \in B_r(0)$.
  • 84. Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Finite Element Approximation
We approximate the very weak solution of the Helmholtz equation by a finite element method ⇒ $\| H(u) - H_h(u) \|_{H^2(D_\kappa)} \in O(|\ln(h)|\, h^2)$.
Discrete posterior: $\mathbb{P}_{u|y_d,h}(A) = \frac{1}{\Lambda_h(y_d)} \int_A \exp(-\Phi_h(u, y_d))\, d\mathbb{P}_u(u)$, with $\Phi_h(u, y_d) = \frac{1}{2} \| y_d - G_h(u) \|^2_{\Gamma_\eta, C_\eta}$ and $G_h(u) = (H_{u,h}(z_j))_{j=1}^m$.
  • 85. Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Stability Results
Assume that the prior measure has a 4th moment. For $y_d \in \mathbb{C}^m$ it holds that $d_{Hell}(\mathbb{P}_{u|y_d,h}, \mathbb{P}_{u|y_d}) \in O(|\ln(h)|\, h^2)$.
Assume that the prior measure $\mathbb{P}_u$ has a 4th moment. Let $r > 0$ and let $f$ be an $X$-valued function with a second moment with respect to $\mathbb{P}_{u|y_d}$ and $\mathbb{P}_{u|y_d,h}$. Then $\| \mathbb{E}^{\mathbb{P}_{u|y_d,h}}(f) - \mathbb{E}^{\mathbb{P}_{u|y_d}}(f) \|_X \in O(|\ln(h)|\, h^2)$.
  • 86. Sparse-Bayesian Approach - Helmholtz Equation
[Figure: Hellinger distance vs. mesh size $h$ (left) and its variance (right), log-log scale]
Left figure: "o-o" = $d_{Hell}(\mathbb{P}_{u|y_d,h}, \mathbb{P}_{u|y_d})$, averaged over 50 runs; "- -" = $O(|\ln h|\, h^2)$.
Right figure: variance of the Hellinger distance over the 50 runs for the different $h$.
Mesh sizes: $h = \sqrt{2} \cdot 2^{-k}$ with $k = 2, \ldots, 6$; SMC method with fixed $N = 5 \cdot 10^5$ prior particles; reference measure with $k_{ref} = 7$.
  • 87. Sparse-Bayesian Approach - Helmholtz Equation
[Figure: expectation error vs. mesh size $h$ (left) and its variance (right) for $f_1, \ldots, f_5$, log-log scale]
Left figure: $\| \mathbb{E}^{\mathbb{P}_{u|y_d}}(f) - \mathbb{E}^{\mathbb{P}_{u|y_d,h}}(f) \|_X$, averaged over 50 runs.
Functions:
$f_1(u) := \|u\|_1$: first moment.
$f_2(u) := 1_{\{\text{two sources}\}}(u)$: probability of two sources (bounded!).
$f_3(u) := |y_u(z_0)|$: expected pressure amplitude at $z_0$.
$f_4(u) := \mathrm{Var}(|y_u(z_0)|)$: variance of the pressure amplitude at $z_0$.
$f_5(u) := 10 \log_{10}(\max(1, |\mathrm{Re}(y_u(z_0) \exp(-i\zeta t))|))$: decibel function at $z_0$.
Computation: see Hellinger distance.
  • 88. Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Idea
Approximate a probability measure $\mu$ with Dirac measures: $\mu \approx \frac{1}{N} \sum_{i=1}^N \delta_{u_i}$, for $(u_i)_{i=1}^N \subset \mathrm{supp}(\mu)$.
Approximate the posterior measure through intermediate measures: $d\mu_j(u) := \frac{1}{\Lambda_j} \exp(-\beta_j \Phi(u, y_d))\, d\mathbb{P}_u(u)$, with $0 = \beta_0 < \beta_1 < \cdots < \beta_J = 1$.
Sequential updates: $\mu_0 \to \mu_1 \to \cdots \to \mu_{J-1} \to \mu_J$, with additional redrawing steps for each $\mu_j$ ($\mu_j$-invariant Markov kernel).
  • 89. Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Sequential Update
1. Let $\mu_0^N = \mu_0$ and set $j = 0$.
2. Resample $u_j^{(n)} \sim \mu_j^N$, $n = 1, \ldots, N$.
3. Set $w_j^{(n)} = \frac{1}{N}$, $n = 1, \ldots, N$, and define $\mu_j^N = \sum_{n=1}^N w_j^{(n)} \delta_{u_j^{(n)}}$.
4. Apply the Markov kernel $u_{j+1}^{(n)} \sim P_j(u_j^{(n)}, \cdot)$ (redraw step: speed-up possible!).
5. Define $w_{j+1}^{(n)} = \hat{w}_{j+1}^{(n)} / \sum_{\tilde{n}=1}^N \hat{w}_{j+1}^{(\tilde{n})}$, with $\hat{w}_{j+1}^{(n)} = \exp\left( (\beta_j - \beta_{j+1}) \Phi(u_{j+1}^{(n)}, y_d) \right) w_j^{(n)}$, and $\mu_{j+1}^N := \sum_{n=1}^N w_{j+1}^{(n)} \delta_{u_{j+1}^{(n)}}$.
6. $j \leftarrow j + 1$ and go to 2.
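Steps 1-6 above can be sketched on a toy one-dimensional problem where the posterior is known in closed form; the prior, misfit, tempering schedule, kernel, and particle counts below are illustrative only, not the ones used in the Helmholtz experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: prior u ~ N(0,1), data y_d = 1 with unit noise variance, so
# Phi(u) = (y_d - u)^2 / 2 and the exact posterior is the Gaussian N(1/2, 1/2).
y_d = 1.0
phi = lambda u: 0.5 * (y_d - u) ** 2
log_prior = lambda u: -0.5 * u ** 2

def smc(betas, N=20000, mh_steps=5, step=0.5):
    u = rng.standard_normal(N)            # 1. particles from the prior (beta_0 = 0)
    w = np.full(N, 1.0 / N)
    for b_old, b_new in zip(betas[:-1], betas[1:]):
        idx = rng.choice(N, size=N, p=w)  # 2./3. resample to equal weights
        u, w = u[idx], np.full(N, 1.0 / N)
        # 4. redraw step: random-walk Metropolis kernel invariant for mu_j,
        #    i.e. for the tempered target prior(u) * exp(-b_old * Phi(u)).
        for _ in range(mh_steps):
            v = u + step * rng.standard_normal(N)
            log_acc = (log_prior(v) - b_old * phi(v)) - (log_prior(u) - b_old * phi(u))
            u = np.where(np.log(rng.random(N)) < log_acc, v, u)
        # 5. reweight towards the next temperature beta_{j+1}
        logw = (b_old - b_new) * phi(u)
        logw -= logw.max()                # numerical stabilization
        w = np.exp(logw)
        w /= w.sum()
    return u, w

u, w = smc(betas=[0.0, 0.25, 0.5, 1.0])
post_mean = float(np.sum(w * u))
post_var = float(np.sum(w * (u - post_mean) ** 2))
```

For $y_d = 1$ the exact posterior mean and variance are both $1/2$, so the weighted particle estimates should land close to these values.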
  • 96. Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Mean Square Error
Theorem. For every measurable and bounded function $f$, the measure $\mu_J^N$ satisfies
$\mathbb{E}^{SMC}\left[ \left| \mathbb{E}^{\mu_J^N}[f] - \mathbb{E}^{\mathbb{P}_{u|y}}[f] \right|^2 \right] \le \left( \sum_{j=1}^J 2 \Lambda_j^{-1} \right)^2 \frac{\|f\|_\infty^2}{N}$,
where $\mathbb{E}^{SMC}$ is the expectation with respect to the randomness in the SMC algorithm.
  • 97. Sparse-Bayesian Approach - Helmholtz Equation
[Figure: mean-square error vs. number of particles $N$ (left) and its variance (right) for $f_1, \ldots, f_5$, log-log scale]
Left figure: $\mathbb{E}^{SMC}\left[ \left| \mathbb{E}^{\mu_J^N}[f] - \mathbb{E}^{\mathbb{P}_{u|y}}[f] \right|^2 \right]$, 100 runs for each $N$.
Fixed mesh size $h = \sqrt{2} \cdot 2^{-7}$; reference measure $\mathbb{P}_{u|y_d}$ with $N_{ref} = 10^7$.
$f_2(u) := 1_{\{\text{two sources}\}}(u)$: the probability of two sources is bounded!
  • 98. Sparse-Bayesian Approach - Helmholtz Equation
Experimental Parameters - Helmholtz Equation:
Helmholtz domain: $D = [0, 1]^2$.
Exact sources: $(10 + 10i) \cdot \delta_{(0.25, 0.75)} + (10 + 10i) \cdot \delta_{(0.75, 0.75)}$.
Helmholtz boundary conditions: $\Gamma_Z = \partial D$.
Helmholtz parameters: $\rho = 1$ (fluid density), $\zeta = 30$ (frequency), $c = 5$ (sound speed), and $(\alpha(\zeta), \beta(\zeta)) = (1, \frac{1}{30})$ (isolating material on $\partial D$).
Microphone positions: $z_1 = (0.1, 0.5)$, $z_2 = (0.5, 0.5)$, $z_3 = (0.9, 0.5)$.
Measurement: $y_d = G_h(u_{exact}) + \tilde{\eta} \in \mathbb{C}^3$, where $\tilde{\eta}_i = \tilde{\eta}_{i,re} + \tilde{\eta}_{i,im} \cdot i$ with i.i.d. $\tilde{\eta}_{i,re}, \tilde{\eta}_{i,im} \sim N(0, 0.05)$, for $i = 1, 2, 3$.
  • 99. Sparse-Bayesian Approach - Helmholtz Equation
Experimental Parameters - Prior Measure:
$u = \sum_{i=1}^{k(\omega)} (\alpha_{i,re}^k + \alpha_{i,im}^k \cdot i) \cdot \delta_{x_i^k}$
$k \sim \mathrm{Pois}(2)$.
$\alpha_{i,re}^k, \alpha_{i,im}^k$ i.i.d. $N(10, 1)$ distributed, for $i = 1, \ldots, k$.
$x_i^k$ i.i.d. $\mathrm{Unif}(D_\kappa)$ distributed with $D_\kappa = [0.1, 0.9] \times [0.6, 0.9]$.
We assume the independence of all random variables.
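A draw from this prior measure can be sketched as follows (reading $N(10,1)$ as mean 10 and variance 1, and using the stated independence; `sample_prior` is a hypothetical helper, not the experiment code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prior(rng):
    """One draw u = sum_{i=1}^k (a_re_i + i*a_im_i) * delta_{x_i} from the sparse prior."""
    k = rng.poisson(2.0)                                   # k ~ Pois(2)
    a = rng.normal(10.0, 1.0, size=(k, 2))                 # Re/Im amplitudes, i.i.d. N(10, 1)
    amp = a[:, 0] + 1j * a[:, 1]
    # positions x_i i.i.d. uniform on D_kappa = [0.1, 0.9] x [0.6, 0.9]
    x = np.column_stack([rng.uniform(0.1, 0.9, size=k),
                         rng.uniform(0.6, 0.9, size=k)])
    return amp, x

amp, x = sample_prior(rng)   # amplitudes in C^k, positions in D_kappa^k
```

Repeated draws of this kind form the initial particle population of the SMC method below.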
  • 100. Sparse-Bayesian Approach - Helmholtz Equation
Experimental Parameters - SMC Algorithm:
Non-uniform tempering steps: $\beta_0 = 0$, $\beta_1 = 0.03$, $\beta_2 = 1$.
Number of approximating Diracs: $N = 10^7$.
Redraw step parameters: $m_\alpha = 10 + 10i$, $\gamma_\alpha = 0.4$, $\gamma_x = 0.1$, $\xi_i = \xi_{i,re} + \xi_{i,im} \cdot i$ with i.i.d. $\xi_{i,re}, \xi_{i,im} \sim N(0, 1)$.
  • 101. Sparse-Bayesian Approach - Helmholtz Equation
a) Smoothed posterior distribution $\mathbb{P}_{u|y_d}(\cdot)$ restricted to the source positions.
b) Smoothed distribution restricted to the source positions of $\mathbb{P}_{u|y_d}(\cdot\,|$ at least one sound source in $[0.2, 0.3] \times [0.7, 0.8])$.
c) Smoothed distribution restricted to the source positions of $\mathbb{P}_{u|y_d}(\cdot\,|$ at least one sound source in $[0.7, 0.8] \times [0.7, 0.8])$.
  • 102. Sparse-Bayesian Approach - Helmholtz Equation
a) Smoothed distribution of $\mathbb{P}_{u|y_d}(\cdot\,|\,k = 2)$ in $[0, 1]^2$.
b) Smoothed distribution of $\mathbb{P}_{u|y_d}(\cdot\,|\,k = 3)$ in $[0, 1]^2$.
We observe significant differences!
  • 103. Sparse-Bayesian Approach - Helmholtz Equation

| No. $k$ | $\mathbb{P}_{\mu_J^N}(k)$ | $x^{(n_{MAP}^k)}$ | $\alpha^{(n_{MAP}^k)}$ | $w^{(n_{MAP}^k)}$ |
|---|---|---|---|---|
| 2 | 77.8 % | (0.37, 0.62), (0.82, 0.69) | 10.04 + 11.03i, 8.317 + 9.77i | 1.09 · 10⁻⁶ |
| 3 | 21.9 % | (0.17, 0.85), (0.90, 0.72), (0.46, 0.75) | 7.98 + 9.44i, 8.13 + 10.04i, 8.81 + 9.49i | 1.11 · 10⁻⁶ |

Empirical MAP estimator: this example shows that the empirical MAP estimator is not always the best choice! Two sources are the most likely, yet the global empirical MAP estimator leads to three sources!
  • 104. Sparse-Bayesian Approach - Helmholtz Equation
Top left figure: true solution $|H_{u_{exact},h}|$. Top right figure: posterior mean $\mathbb{E}^{\mathbb{P}_{u|y_d}}[|H_u|]$.
Lower left/right figures: conditioned MAP estimators for $k = 3$ and $k = 2$.
  • 105. Sparse-Bayesian Approach - Wave Equation
Optimal control and Bayesian inversion for the wave equation with BV-functions in time
  • 106. Sparse-Bayesian Approach - Wave Equation
Optimal Control - Wave Equation - BV-functions in time
Let us introduce the optimal control problem $(P^\Pi)$ for the linear wave equation with homogeneous Dirichlet boundary conditions:
$(P^\Pi) \quad \min_{u \in BV(0,T)^m} \left\{ \frac{\beta}{2} \| \Pi(y_u(\vec{t}\,)) - y_d \|^2_{\mathbb{R}^\ell} + \sum_{j=1}^m \alpha_j \| D_t u_j \|_{M(0,T)} \right\} =: J^\Pi(y, u)$
subject to the weak solution of
$\partial_{tt} y_u - \Delta y_u = \sum_{j=1}^m u_j g_j$ in $(0, T) \times \Omega$,
$y_u = 0$ on $(0, T) \times \partial\Omega$,
$(y_u, \partial_t y_u) = (y_0, y_1)$ in $\{0\} \times \Omega$.
  • 107. Sparse-Bayesian Approach - Wave Equation
Optimal Control - Wave Equation - BV-functions in time
$y_u(\vec{t}\,) := (y_u(t_i))_{i=1}^r \subset (H_0^1)^r$ with $0 < t_1 < \cdots < t_r \le T$.
$\Pi : L^2(\Omega)^r \to \mathbb{R}^\ell$ linear and bounded.
$(g_j)_{j=1}^m \subset L^\infty(\Omega) \setminus \{0\}$ with pairwise disjoint supports.
Assume that $(\Pi(y_{g_j}(\vec{t}\,)))_{j=1}^m$ are linearly independent in $\mathbb{R}^\ell$.
Example for $\Pi$ - patch measurements: $\Pi_2 : L^2(\Omega)^r \to \mathbb{R}^{k \cdot r}$, $(\varphi_i)_{i=1}^r \mapsto \left( \left( \frac{1}{|\Omega_l|} \int_{\Omega_l} \varphi_i \, dx \right)_{l=1}^k \right)_{i=1}^r$.
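On a uniform grid, the patch measurements in $\Pi_2$ are just cell averages over the rectangles $\Omega_l$. A small sketch (grid resolution, patches, and the test function are illustrative):

```python
import numpy as np

def patch_average(phi, patch, n=256):
    """Mean of phi over the rectangle patch = (x0, x1, y0, y1), via the midpoint rule."""
    x0, x1, y0, y1 = patch
    xs = np.linspace(x0, x1, n, endpoint=False) + (x1 - x0) / (2 * n)
    ys = np.linspace(y0, y1, n, endpoint=False) + (y1 - y0) / (2 * n)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    return phi(X, Y).mean()

def Pi2(phis, patches):
    """Patch-measurement operator: (phi_i)_i -> ((mean of phi_i over Omega_l)_l)_i."""
    return np.array([[patch_average(phi, p) for p in patches] for phi in phis]).ravel()

# Two of the patches Omega_l that appear in the experiments later.
patches = [(1/9, 2/9, 1/9, 2/9), (4/9, 5/9, 4/9, 5/9)]
out = Pi2([lambda x, y: x + y], patches)
```

The midpoint rule is exact for affine functions, so for $\varphi(x, y) = x + y$ the two averages are the values at the patch centers, $1/3$ and $1$.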
  • 108. Sparse-Bayesian Approach - Wave Equation
Optimal Control - Solution
Problem $(P^\Pi)$ has a solution in $BV(I)^m$.
Numerics - regularization of $(P^\Pi)$:
$(P_\gamma^\Pi) \quad \min_{u \in H^1(0,T)^m} J^\Pi(y, u) + \frac{\gamma}{2} \sum_{j=1}^m \| \partial_t u_j \|^2_{L^2(0,T)} =: J_\gamma^\Pi(y, u)$
Optimality conditions, a discussion of sparsity, and the asymptotic behavior of $(P^\Pi)$ and $(P_\gamma^\Pi)$ can be found in my PhD thesis.
  • 112. Sparse-Bayesian Approach - Wave Equation
Experimental Parameters - Optimal Control:
Wave equation parameters: $D = [0, 1]^2$, $T = 1$, $(y_0, y_1) = (0, 0)$, $\tau = 2^{-8}$, $h \approx 2^{-6}$.
Optimal control parameters: $m = 1$, $g(x, y) = 500 \cdot 1_{B_{0.05}(x_0)}$, $x_0 = (0.5, 0.5)$, $\vec{t} = (0.0977, 0.1992, 0.2969, 0.3477, 0.3984, 1)$.
Operator $\Pi = \Pi_2$: $\Pi_2 : L^2(\Omega)^6 \to \mathbb{R}^{30}$, $(\varphi_i)_{i=1}^6 \mapsto \left( \left( \frac{1}{|\Omega_l|} \int_{\Omega_l} \varphi_i \, dx \right)_{l=1}^5 \right)_{i=1}^6$, with
$\Omega_1 = (1/9, 2/9) \times (1/9, 2/9)$, $\Omega_2 = (1/9, 2/9) \times (7/9, 8/9)$, $\Omega_3 = (7/9, 8/9) \times (1/9, 2/9)$, $\Omega_4 = (7/9, 8/9) \times (7/9, 8/9)$, $\Omega_5 = (4/9, 5/9) \times (4/9, 5/9)$.
Desired value: $y_d = \Pi(y_{\check{u}}) + \eta$ with $\check{u} = 5 \cdot 1_{(0.1328, 0.2656)}$ and $\eta \sim N(0, \mathrm{id}_{\mathbb{R}^{30}})$.
  • 113. Sparse-Bayesian Approach - Wave Equation
By a BV-path-following method, we solved the corresponding problems $(\tilde{P}_\gamma^\Pi)$ for $\gamma \to 0$ with a semismooth Newton method. From left to right, we see the optimal controls for the TV-regularization parameters $\alpha = 0.6$, $\alpha = 0.18$, and $\alpha = 0.006$.
  • 114. Sparse-Bayesian Approach - Wave Equation
Sparse-Bayesian Approach to $(P^\Pi)$:
Consider the Bayesian inverse problem $y_d = \Pi(y_u(\vec{t}\,)) + \eta$, with noise $\eta \sim N(0, \frac{1}{\beta} \mathrm{id}_{\mathbb{R}^\ell})$ and $\Sigma := \frac{1}{\beta} \mathrm{id}_{\mathbb{R}^\ell}$.
We consider a prior of the form
$u = \left( \sum_{j_i=1}^{k_i} \alpha_{j_i}^{k_i} 1_{(t_{j_i}^{k_i}, T]} + c_i \right)_{i=1}^m$, where
1) $k_i$ is an $\mathbb{N}$-valued random variable for $i = 1, \ldots, m$,
2) $\alpha_{j_i}^{k_i}$ is an $\mathbb{R}$-valued random variable for $i = 1, \ldots, m$ and $j_i = 1, \ldots, k_i$,
3) $t_{j_i}^{k_i}$ is a $(0, T)$-valued random variable for $i = 1, \ldots, m$ and $j_i = 1, \ldots, k_i$,
4) $c_i$ is an $\mathbb{R}$-valued random variable for $i = 1, \ldots, m$.
This kind of prior can have a dense support in $BV(0,T)^m$ with respect to the strict BV-topology.
  • 122. Sparse-Bayesian Approach - Wave Equation
Posterior
$d\mathbb{P}_{u|y_d}(u) = \frac{1}{\Lambda_{y_d}} \exp\left( -\frac{\beta}{2} \| \Pi(y_u(\vec{t}\,)) - y_d \|^2_{\mathbb{R}^\ell} \right) d\mathbb{P}_u(u)$.
Posterior - Stability Results
Under the same assumptions as for the Helmholtz equation, it holds:
$d_{Hell}(\mathbb{P}_{u|y_1}, \mathbb{P}_{u|y_2}) \le C \| y_1 - y_2 \|_\Sigma$ for all $y_1, y_2 \in B_r(0)$.
$\| \mathbb{E}^{\mathbb{P}_{u|y_1}}(f) - \mathbb{E}^{\mathbb{P}_{u|y_2}}(f) \|_X \le C \| y_1 - y_2 \|_\Sigma$ for all $y_1, y_2 \in B_r(0)$.
  • 127. Sparse-Bayesian Approach - Wave Equation
Posterior - Finite Element Approximation
We approximate the weak solution of the wave equation by a finite element method ⇒ $\| y_{u,\tau,h} - y_u \|_{C(0,T; L^2(\Omega))} \in O(\tau^2 + h^2)$, with $(g, y_0, y_1) \in C^2(\Omega) \times C_0^3(\Omega) \times C_0^2(\Omega)$.
Under the same assumptions as for the Helmholtz equation, it holds:
$d_{Hell}(\mathbb{P}_{u|y_d,\tau,h}, \mathbb{P}_{u|y_d}) \in O(\tau^2 + h^2)$.
$\| \mathbb{E}^{\mathbb{P}_{u|y_d,\tau,h}}(f) - \mathbb{E}^{\mathbb{P}_{u|y_d}}(f) \|_X \in O(\tau^2 + h^2)$.
  • 132. Sparse-Bayesian Approach - Wave Equation
Experimental Parameters - Sparse-Bayesian Approach:
Parameters of the optimal control problem: all parameters with respect to the wave equation, the operator $\Pi = \Pi_2$, $g$, $\vec{t}$, and $y_d$ are the same.
Prior parameters: $u = \sum_{j_1=1}^{k_1} \alpha_{j_1}^{k_1} 1_{(t_{j_1}^{k_1}, T]} + c_1$ with
$k_1 \sim \mathrm{Pois}(2)$,
$t_{j_1}^{k_1} \sim \mathrm{Unif}(0, T)$ i.i.d.,
$\alpha_{j_1}^{k_1} \sim N(0, 5)$ i.i.d.,
$c_1 \sim N(0, 0.1)$ (all random variables are independent of each other).
Algorithm: similar to the Helmholtz problem, we considered an SMC method with an additional invariant Metropolis-Hastings step, which obtained similar convergence results as in the Helmholtz case.
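A draw of this piecewise-constant BV prior can be sketched as follows ($N(0,5)$ and $N(0,0.1)$ are read here as mean/variance pairs, which is an assumption; `sample_bv_prior` is a hypothetical helper, not the experiment code):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1.0

def sample_bv_prior(rng, T=1.0, rate=2.0, amp_var=5.0, const_var=0.1):
    """One draw u = sum_{j=1}^k alpha_j 1_{(t_j, T]} + c of the piecewise-constant prior."""
    k = rng.poisson(rate)                              # k_1 ~ Pois(2)
    t = np.sort(rng.uniform(0.0, T, size=k))           # jump times, i.i.d. Unif(0, T)
    alpha = rng.normal(0.0, np.sqrt(amp_var), size=k)  # jump heights (N(0,5) as variance)
    c = rng.normal(0.0, np.sqrt(const_var))            # constant offset c_1

    def u(s):
        """Evaluate the step function: u(s) = c + sum of jumps with t_j < s."""
        s = np.atleast_1d(np.asarray(s, dtype=float))
        return c + (alpha[:, None] * (t[:, None] < s[None, :])).sum(axis=0)

    return k, t, alpha, c, u

k, t, alpha, c, u = sample_bv_prior(rng)
tv_norm = np.abs(alpha).sum()   # ||D_t u||_{M(0,T)} of this draw
```

Note that the total variation of a draw is just the sum of the absolute jump heights, which is how the TV term in $(P^\Pi)$ acts on prior samples.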
  • 140. Sparse-Bayesian Approach - Wave Equation
$N = 5000$. The left figure shows the support of the approximated posterior. In the right figure, we can compare the empirical MAP estimator with the optimal control for the TV-regularization parameter $\alpha = 0.18$.
  • 141. Sparse-Bayesian Approach - Wave Equation
Mean:
  • 142. Sparse-Bayesian Approach - Wave Equation
Experimental observations on the TV-Gaussian prior: Example 3 with MAP (noise $\Sigma = \beta \cdot \mathrm{Id}_{\mathbb{R}^{\tilde{k} r}}$, $C = (-\Delta)^{-1}$):
$\min_{u \in (H_0^1(0,T))^m} \frac{\beta}{2} \| G(u) - y_d \|^2_\Sigma + \sum_{i=1}^m \alpha_i \| \partial_t u_i \|_{M(0,T)} + \frac{1}{2\lambda} \| u \|^2_{(H_0^1(0,T))^m}$.
Experiment parameters: $m = 1$, $\alpha = 500$, $\lambda = 1$.
Splitting pCN algorithm:
1) MCMC for the Gaussian prior with a TV-dependent rejection function (10 steps).
2) A fitting-term-dependent rejection function is used in a second step.
We used a truncated Karhunen-Loève expansion ⇒ it is computationally expensive for a high truncation index.
Strong dependence on the initial value (not needed in the SMC).
Weak predictions with randomized initial values: (9) actually used $10^6$-$10^8$ samples for their experiments!
Our initial value: we considered the 'true function', projected it on the mesh, and perturbed it with $N(0, \beta)$ on each node (no information on this in (9)).
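The two-stage acceptance can be sketched as follows: the Gaussian prior $N(0, (-\Delta)^{-1})$ on $(0, T)$ with Dirichlet conditions is sampled through a truncated Karhunen-Loève expansion, and a pCN proposal is screened first by the TV term and then by the data misfit (a product-form simplification of the splitting scheme in (9)). Everything below — the misfit, the parameters, the truncation index — is illustrative, not the experiment from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
T, J, n = 1.0, 10, 200                    # horizon, KL truncation index, time grid
s = np.linspace(0.0, T, n)

# KL expansion of N(0, (-Laplace)^{-1}) with Dirichlet BCs on (0, T):
# eigenpairs lambda_j = (j*pi/T)^{-2}, e_j(t) = sqrt(2/T) * sin(j*pi*t/T).
j = np.arange(1, J + 1)
E = np.sqrt(2.0 / T) * np.sin(np.outer(s, j) * np.pi / T)   # shape (n, J)
sqrt_lam = T / (j * np.pi)

def field(xi):
    """u(t) from KL coefficients xi ~ N(0, I_J)."""
    return E @ (sqrt_lam * xi)

# Illustrative potentials: a discrete TV term and a toy scalar data misfit.
alpha, beta_noise, y_obs, i0 = 0.5, 5.0, 2.0, n // 2
tv = lambda u: alpha * np.abs(np.diff(u)).sum()
misfit = lambda u: 0.5 * beta_noise * (u[i0] - y_obs) ** 2

def splitting_pcn(n_iter=20000, rho=0.3):
    xi, u = np.zeros(J), field(np.zeros(J))
    acc, samples = 0, []
    for _ in range(n_iter):
        # pCN proposal: preserves the Gaussian prior in coefficient space.
        xi_p = np.sqrt(1.0 - rho ** 2) * xi + rho * rng.standard_normal(J)
        u_p = field(xi_p)
        # Stage 1: TV-dependent rejection; stage 2: misfit-dependent rejection.
        if np.log(rng.random()) < tv(u) - tv(u_p):
            if np.log(rng.random()) < misfit(u) - misfit(u_p):
                xi, u = xi_p, u_p
                acc += 1
        samples.append(u[i0])
    return np.array(samples), acc / n_iter

samples, acc_rate = splitting_pcn()
post_mean = samples[5000:].mean()   # posterior mean of u(t_0) after burn-in
```

The product of the two acceptance probabilities satisfies detailed balance for the TV-Gaussian posterior because the pCN proposal is reversible with respect to the Gaussian prior; the data pull the marginal of $u(t_0)$ from the prior mean $0$ towards $y_{obs}$.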
  • 152. Outlook
Identification of moving acoustic sound sources with prior measures that have a support in $L^2(0, T; M(D))$.
Analytical expression of the MAP estimator for our sparse-Bayesian approach.
  • 153. Literature (1) S.L. Cotter, G.O. Roberts, A.M. Stuart, D. White, MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster. Statist Sci. 28(3):424– 446, 2013. (2) O. Coutant, Bayesian inversion, Lecture notes, Joint Inversion in Geophysics summer school, Barcelonnette (France), 2015. (3) R. Rodriguez A. Bermudez, P. Gamallo. Finite element methods in local active control of sound. SIAM J. CONTROL OPTIM. Vol. 43, No. 2, pp. 437–465, Society for Industrial and Applied Mathematics, 2004. (4) M. Dashti and A. M. Stuart. The Bayesian Approach To Inverse Problems. ArXiv e-prints, February 2013. (5) R. Dautray and J. L. Lions. Mathematical analysis and numerical methods for science and technology. Springer-Verlag, Berlin, 1984-1985. (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 54 / 64
  • 154. Literature
  (6) K. Pieper, B. Tang, P. Trautmann, and D. Walter. Inverse point source location with the Helmholtz equation. Not published yet, TBA.
  (7) A. Schatz. An observation concerning Ritz-Galerkin methods with indefinite bilinear forms. Mathematics of Computation, 28(128):959–962, 1974.
  (8) A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numerica, 19:451–559, 2010.
  (9) Z. Yao, Z. Hu, J. Li. A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations. Inverse Problems, 34, 2018.
  (10) A. A. Zlotnik. Convergence rate estimates of finite-element methods for second-order hyperbolic equations. In G. I. Marchuk (ed.), Numerical Methods and Applications, p. 153 et seq., CRC Press, 1994.
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 55 / 64
  • 155. Thank you. (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 56 / 64
  • 163. Appendix - Helmholtz Equation - Prior
  Sparse prior - Definition:
  $$P_u(\cdot) = \sum_{n \in \mathbb{N}_0} q(n)\, \mu_n^0(\cdot).$$
  Let $(q(n))_{n\in\mathbb{N}_0}$ be a sequence in $[0, 1]$ with $\sum_{n\in\mathbb{N}_0} q(n) = 1$.
  We can identify $u = \sum_{j=1}^k \alpha_j^k \delta_{x_j^k} \in D$ with $(\alpha_j^k, x_j^k)_{j=1}^k \in (\mathbb{C} \times D_\kappa)^k$.
  W.l.o.g. assume $0 \in D_\kappa$ with $\operatorname{dist}(D_\kappa, \partial D \cup \vec{Z}) > \kappa > 0$.
  Replace $D$ with $\ell^1(\mathbb{C}, D) \subset \ell^1(\mathbb{C}, \mathbb{R}^d) =: \ell^1$ with norm $\|(\alpha_n, x_n)_{n\in\mathbb{N}_0}\|_{\ell^1} := \sum_{n\in\mathbb{N}_0} |\alpha_n|_{\mathbb{C}} + |x_n|_{\mathbb{R}^d}$.
  Define the probability measure $\mu_n^0$ on $\ell^1_\kappa := \ell^1(\mathbb{C}, D_\kappa)$ (open) with the Borel σ-algebra and support in $\ell^1_{n,\kappa} := \{$sequences in $\ell^1_\kappa$ that are zero after index $n\}$.
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 57 / 64
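Sampling from this mixture prior can be sketched as follows: first draw the number of sources $k$ from $(q(n))$, then draw positions and amplitudes given $k$. This is an illustration under simplifying assumptions: $D_\kappa$ is taken as an axis-aligned box and the amplitude law as an i.i.d. complex Gaussian; all names are placeholders, not from the talk's code.

```python
import numpy as np

def sample_sparse_prior(q, box_lo, box_hi, m_alpha, sigma_alpha, rng):
    """Draw one configuration (amplitudes, positions) from the mixture prior."""
    k = rng.choice(len(q), p=q)                   # number of sources, k ~ q(n)
    d = len(box_lo)
    x = rng.uniform(box_lo, box_hi, size=(k, d))  # positions, uniform in the box D_kappa
    # Complex Gaussian amplitudes around m_alpha (stand-in for N(m_alpha, Gamma, C)).
    alpha = m_alpha + sigma_alpha * (rng.standard_normal(k)
                                     + 1j * rng.standard_normal(k))
    return alpha, x
```

Each draw represents the measure $u = \sum_{j=1}^k \alpha_j \delta_{x_j}$ by its finitely many coefficients, matching the identification on the slide.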
  • 173. Appendix - Helmholtz Equation - SMC
  Sequential Monte Carlo method - Redraw step - Metropolis-Hastings:
  A sample $u$ is characterized by $k$ = number of sources, $\alpha_i^k$ = amplitudes of the sources, $x_i^k$ = positions of the sources.
  Redraw the sample $u$ as $u'$ with the following rule:
  1. $k' = k$,
  2. $x_i^{k'} = x_i^k + \gamma_x \eta_i$ if this lies in $D_\kappa$, otherwise $x_i^{k'} = x_i^k$,
  3. $(\alpha_i^{k'})_{i=1}^k = (1 - \gamma_\alpha^2)^{1/2}\big((\alpha_i^k)_{i=1}^k - m_\alpha\big) + m_\alpha + \gamma_\alpha \xi$.
  All random variables are independent of each other, with $\gamma_x \ge 0$, $\gamma_\alpha \in [0, 1]$, $m_\alpha \in \mathbb{C}^k$, $\xi \sim N(0, \Gamma, \mathbb{C})$, and $\eta \sim N(0, \mathrm{id}_{\mathbb{R}^k})$.
  Accept $u'$ with probability $\min\{1, \exp(\beta_j(\Phi(u; y_d) - \Phi(u'; y_d)))\}$.
  This redraw step is $\mu_j$-invariant!
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 58 / 64
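A minimal sketch of this redraw step for fixed $k$: a random-walk move on positions that falls back to the old position when the proposal leaves the admissible set, a pCN move on the complex amplitudes, and Metropolis-Hastings acceptance. Assumptions: $D_\kappa$ is replaced by an axis-aligned box and an i.i.d. complex Gaussian stands in for $N(0, \Gamma, \mathbb{C})$; names are illustrative.

```python
import numpy as np

def redraw_step(x, a, phi, beta_j, gamma_x, gamma_a, m_a, box_lo, box_hi, rng):
    """One Metropolis-Hastings redraw for a fixed number of sources k."""
    k = x.shape[0]
    # 2. Random-walk move on the positions; keep the old position whenever
    #    the proposal leaves the admissible box (stand-in for D_kappa).
    x_prop = x + gamma_x * rng.standard_normal(x.shape)
    inside = np.all((x_prop >= box_lo) & (x_prop <= box_hi), axis=1)
    x_prop = np.where(inside[:, None], x_prop, x)
    # 3. pCN move on the complex amplitudes, reversible w.r.t. the Gaussian prior.
    xi = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    a_prop = np.sqrt(1.0 - gamma_a ** 2) * (a - m_a) + m_a + gamma_a * xi
    # Accept with probability min{1, exp(beta_j (Phi(u) - Phi(u')))}.
    log_acc = beta_j * (phi(x, a) - phi(x_prop, a_prop))
    if np.log(rng.uniform()) < min(0.0, log_acc):
        return x_prop, a_prop
    return x, a
```

Because rejected position components fall back to their old values, the chain never leaves the admissible set, mirroring rule 2 on the slide.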
  • 174. Appendix
  The function $G : (X, \|\cdot\|_X) \to \mathbb{R}^m$ fulfills the following assumptions in the examples above.
  Assumptions on G:
  i) $(X, \|\cdot\|_X)$ is a separable Banach space.
  ii) For every $\varepsilon > 0$ there is $M = M(\varepsilon) \in \mathbb{R}$ such that, for all $u \in X$, $\|G(u)\|_\Sigma \le \exp(\varepsilon \|u\|_X^2 + M)$.
  iii) For every $r > 0$ there is $K = K(r) > 0$ such that, for all $u_1, u_2 \in X$ with $\max\{\|u_1\|_X, \|u_2\|_X\} < r$, $\|G(u_1) - G(u_2)\|_\Sigma \le K \|u_1 - u_2\|_X$.
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 59 / 64
  • 177. Appendix
  Why not choose X as a non-separable Banach space?
  For a non-separable Banach space X, it is not clear what a "natural" σ-algebra on X is.
  One candidate: the cylindrical σ-algebra, i.e. the smallest σ-algebra such that all $\ell \in X^*$ are measurable.
  In separable Banach spaces, the Borel σ-algebra coincides with the cylindrical σ-algebra; in non-separable Banach spaces this in general fails.
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 60 / 64
  • 184. Appendix
  Let B be a separable Banach space. In general, we lose the following properties for a non-separable Banach space X:
  1) Fernique's theorem: let $\mu$ be a Gaussian measure on $B$ ⇒ $\exists\, \alpha > 0$ such that $\int_B \exp(\alpha \|u\|^2)\, d\mu(u) < \infty$.
  2) Fernique's theorem implies: every Gaussian measure $\mu$ admits a compact covariance operator (finite second moment $\int_B \|u\|^2\, d\mu(u) < \infty$).
  3) In a separable Hilbert space $B$, one can characterize $\mu$ completely by its mean $m$ and covariance operator $C$, i.e. $\mu = N(m, C)$ with $m \in H_C$ and $C : B \to B$.
  $C$ is a symmetric trace-class operator.
  Cameron-Martin space $H_C = \{u \in B \mid C^{1/2}x = u,\ x \in B\}$ of $C$.
  $\operatorname{Tr}(C) = \int_B \|u - m\|^2\, d\mu(u)$.
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 61 / 64
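The trace identity in 3) can be checked numerically in finite dimensions, where every covariance operator is trivially trace class. This is an illustration added here, not part of the talk: it verifies by Monte Carlo that the second central moment of $\mu = N(m, C)$ on $\mathbb{R}^d$ equals $\operatorname{Tr}(C)$.

```python
import numpy as np

# Monte Carlo check of Tr(C) = ∫ ||u - m||^2 dμ(u) for μ = N(m, C) on R^d.
rng = np.random.default_rng(2)
d = 5
A = rng.standard_normal((d, d))
C = A @ A.T                           # symmetric positive definite covariance
m = rng.standard_normal(d)
L = np.linalg.cholesky(C)             # C = L L^T, so L z ~ N(0, C) for z ~ N(0, I)
samples = m + (L @ rng.standard_normal((d, 200_000))).T
second_moment = np.mean(np.sum((samples - m) ** 2, axis=1))
rel_err = abs(second_moment - np.trace(C)) / np.trace(C)
print(rel_err)  # shrinks like N^{-1/2} in the number of samples
```

The agreement degrades exactly in the infinite-dimensional non-separable setting the slide warns about, where a covariance operator need not exist.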
  • 185. Appendix
  Definition (Maximum a Posteriori Estimator):
  Let $z_\delta = \arg\max_{z \in H} J_\delta(z)$ with $J_\delta(z) := P_{u|y_d}(B_\delta(z))$, where $B_\delta(z) \subset H$ is the ball centered at $z \in H$ with radius $\delta > 0$.
  Any point $\tilde{z} \in H$ satisfying $\lim_{\delta \to 0} J_\delta(\tilde{z}) / J_\delta(z_\delta) = 1$ is a MAP estimator for the posterior measure $P_{u|y_d}$ on $H$.
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 62 / 64
  • 186. Appendix
  Definition (Hellinger Distance):
  Let $\mu_1, \mu_2, \nu$ be measures on $X$ such that $\mu_1, \mu_2$ have Radon-Nikodym derivatives with respect to $\nu$. Then the Hellinger distance between $\mu_1$ and $\mu_2$ is defined as
  $$d_{\mathrm{Hell}}(\mu_1, \mu_2) = \left( \frac{1}{2} \int_X \left( \left(\frac{d\mu_1}{d\nu}\right)^{1/2} - \left(\frac{d\mu_2}{d\nu}\right)^{1/2} \right)^{\!2} d\nu \right)^{\!1/2}.$$
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 63 / 64
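Specialized to discrete measures on a finite set with $\nu$ the counting measure, the densities $d\mu_i/d\nu$ are just probability vectors and the definition reduces to a few lines. A small sketch added for illustration:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability vectors,
    with nu = counting measure so the densities are the vectors themselves."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
```

With this normalization the distance is 0 for identical measures and 1 for mutually singular ones, which is why it is a convenient metric for posterior stability results.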
  • 187. Appendix
  Theorem: Let the prior, given $k \in \mathbb{N}_0$ sources, satisfy
  $$\mu^0(du\,|\,k) = \mu^0(d\alpha, dx\,|\,k) = \mu^0_\alpha(d\alpha\,|\,k)\, \mu^0_x(dx\,|\,k), \quad \mu^0_x(\cdot\,|\,k) = U(D_\kappa^k), \quad \mu^0_\alpha(\cdot\,|\,k) = N(m_\alpha, \Gamma, \mathbb{C}).$$
  Let $q(u, \cdot)$ be the proposal distribution associated with the redraw step and define the acceptance probability as $a_j(u, u') = \min\{1, \exp(\beta_j(\Psi(u) - \Psi(u')))\}$.
  Then the redraw algorithm is $\mu_j^{y}$-invariant.
  (IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 64 / 64