This document summarizes sparse-Bayesian approaches to inverse problems involving partial differential equations. Specifically, it discusses Bayesian inversion for identifying sound sources with the Helmholtz equation, and optimal control and Bayesian inversion for the wave equation with functions of bounded variation in time. It motivates Bayesian inversion as a way to deal with inherent errors in models and data, introduces the Bayesian framework for inverse problems (prior distributions, likelihoods, and the posterior distribution obtained via Bayes' theorem), and presents finite- and infinite-dimensional examples with Gaussian priors.
Sparse-Bayesian Approach to Inverse Problems with Partial Differential Equations (KFU, 2018)
Topics:
Bayesian identification of sound sources with the Helmholtz equation.
Optimal control and Bayesian inversion for the wave equation with BV-functions in time.
29.08.18
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 1 / 64
Motivation
Bayesian inversion (Coutant, 2)
Models from physics, economics, biology, medicine, engineering, and other fields can include inherent errors!
In general, parameters cannot be measured directly, and other, perhaps less meaningful, data have to be used to validate the required model parameters!
Data can include inherent measurement errors!
How do we deal with that?
How do we infer the transmission of error from the data to the parameters?
Thinking probabilistically enables us to overcome these difficulties!
Motivation
Bayesian inversion (Coutant, 2)
What if we have a priori knowledge about the parameters and the noise in the data?
The Bayesian approach gives a formalism:
To introduce a priori knowledge on the parameters (admissible set).
To define a description of data and model errors.
To deal with a non-deterministic or non-exact model.
Bayesian Inversion in Finite Dimension
Consider the following problem:

(P)   y_d = G(u) + η,   u ∈ R^n,  y_d, η ∈ R^J,

with y_d the observed data, G(u) the unknown model output, and η the noise.
Example:
1) u := some unknown parameters.
2) G(u) := some quantifiable properties depending on u (the model!).
3) y_d := collected data.
4) η := measurement error.
If the parameters u or the noise η are random variables with values in an infinite-dimensional space, we are in the infinite-dimensional Bayesian inversion setting.
Bayesian Inversion in Finite Dimension
Assumptions
1) Prior distribution density: P(u ∈ A) = ∫_A p_0(u) du
   ≈ admissible set with a specific distribution.
2) Noise independence: u ⊥ η and P(η ∈ A) = ∫_A p(η) dη.
3) Likelihood: (1), (2) and (P) imply P(y_d ∈ A | u) = ∫_A p(y_d − G(u)) dy_d.
4) (u, y_d) as a random variable: (1) and (3) imply
   P((u, y_d) ∈ A × B) = ∫_A ∫_B p(y_d − G(u)) p_0(u) dy_d du.
Bayesian Inversion in Finite Dimension
Bayes' Theorem:
Bayes: P(u | y_d) = P(y_d | u) P(u) / P(y_d).
5) u | y_d = solution of the inverse problem (P) given data y_d (posterior).

P(u ∈ A | y_d) = (1/Z) ∫_A p(y_d − G(u)) p_0(u) du,   with   Z := ∫_{R^n} p(y_d − G(u)) p_0(u) du.

Negative log-likelihood:

P(u ∈ A | y_d) = (1/Z) ∫_A exp(−Φ(u; y_d)) p_0(u) du,   with   Φ(u; y_d) := −log(p(y_d − G(u)))   (the potential).

The factor exp(−Φ(u; y_d)) changes and concentrates the prior distribution on the most important areas in the support of the prior.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 8 / 64
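The posterior formula above can be made concrete with a small numerical sketch: given a prior density p_0 and a Gaussian noise potential Φ, the normalized posterior density on a grid is obtained exactly as in the formula, with Z approximated by quadrature. The forward model G(u) = u³ and all constants below are illustrative assumptions, not taken from the talk.

```python
import math

def G(u):                      # hypothetical forward model: G(u) = u^3
    return u ** 3

def prior_density(u, mu=0.0, sigma=1.0):
    # p_0(u): density of N(mu, sigma^2)
    return math.exp(-0.5 * ((u - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def potential(u, y_d, gamma=0.5):
    # Phi(u; y_d) = -log p(y_d - G(u)) for Gaussian noise N(0, gamma^2), up to a constant
    return 0.5 * ((y_d - G(u)) / gamma) ** 2

def grid_posterior(y_d, lo=-3.0, hi=3.0, n=2001):
    """Grid points and normalized posterior weights; Z via simple quadrature."""
    h = (hi - lo) / (n - 1)
    us = [lo + i * h for i in range(n)]
    w = [math.exp(-potential(u, y_d)) * prior_density(u) for u in us]
    Z = sum(w) * h             # quadrature approximation of the normalizing constant
    return us, [wi / Z for wi in w]

us, post = grid_posterior(y_d=1.0)
mean = sum(u * p for u, p in zip(us, post)) * (us[1] - us[0])
```

With y_d = 1 the posterior concentrates near u = 1 (where G(u) = y_d), pulled slightly toward the prior mean 0.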
Bayesian Inversion Examples and Optimal Control
Finite dimensional example - Gaussian Prior:
Example 1:
Prior: u ∼ N(µ, σ²)
Noise: η ∼ N(0, γ²)
y_d = G(u) + η with G : R → R.
dP_{u|y_d}(u) ∝ exp(−(1/(2γ²)) |G(u) − y_d|²) dP_u(u).
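When G is linear, G(u) = a·u, the update in Example 1 is conjugate: a N(µ, σ²) prior and N(0, γ²) noise give a Gaussian posterior in closed form (γ stands for the noise standard deviation; its symbol was lost in the slide extraction). A minimal sketch, with a Monte Carlo sanity check via self-normalized importance sampling from the prior; the parameter values are illustrative.

```python
import math
import random

def gaussian_posterior(a, mu, sigma2, gamma2, y_d):
    """Posterior mean and variance of u | y_d for y_d = a*u + eta."""
    s2 = 1.0 / (a * a / gamma2 + 1.0 / sigma2)   # posterior variance
    m = s2 * (a * y_d / gamma2 + mu / sigma2)    # posterior mean
    return m, s2

def mc_posterior_mean(a, mu, sigma2, gamma2, y_d, n=200_000, seed=0):
    # Draw from the prior and weight each sample by its likelihood.
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        u = rng.gauss(mu, math.sqrt(sigma2))
        w = math.exp(-0.5 * (y_d - a * u) ** 2 / gamma2)
        num += w * u
        den += w
    return num / den

m, s2 = gaussian_posterior(a=2.0, mu=0.0, sigma2=1.0, gamma2=0.25, y_d=1.0)
```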
Bayesian Inversion Examples and Optimal Control
Infinite dimensional example - Gaussian Prior:
Example 2, Dashti, Stuart:
−Δ : H¹₀(D) ∩ H²(D) ⊆ L²(D) → L²(D),
D ⊂ R^n, n = 1, 2, 3, open, bounded with C²-boundary.
Let (µ_i, ψ_i)_{i=1}^∞ be the eigenvalues and eigenfunctions of
(−Δ)^{−1} : L²(D) → H¹₀(D) ∩ H²(D) ⊂⊂ L²(D).
Prior: u = Σ_{i=1}^∞ ξ_i ψ_i with independent ξ_i ∼ N(0, µ_i^α) and α ≥ 1  ⇒  u ∼ N(0, (−Δ)^{−α}).
Realizations of u are almost surely in C(D)!
Noise: η ∼ N(0, Σ) with covariance matrix Σ ∈ R^{m×m}.
y_d = G(u) + η with G : H¹([0, 1]²) → R^m.
Posterior:
dP_{u|y_d}(u) ∝ exp(−Φ(u; y_d)) dP_u(u) with
Φ(u; y_d) = ½ ‖Σ^{−1/2}(G(u) − y_d)‖²_{R^m} =: ½ ‖G(u) − y_d‖²_Σ.
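A realization of the prior u ∼ N(0, (−Δ)^{−α}) can be sketched by a truncated Karhunen-Loève expansion. Here we take the simplest setting D = (0, 1) with Dirichlet boundary conditions, where the eigenpairs are explicit: ψ_i(x) = √2 sin(iπx) and (−Δ)^{−1} has eigenvalues µ_i = (iπ)^{−2}; the truncation level and α = 2 are assumptions for illustration.

```python
import math
import random

def sample_kl_path(n_modes=100, alpha=2.0, n_grid=201, seed=1):
    """One sample of the truncated expansion u = sum_i xi_i * psi_i on (0, 1)."""
    rng = random.Random(seed)
    xs = [j / (n_grid - 1) for j in range(n_grid)]
    # xi_i ~ N(0, mu_i^alpha), i.e. standard deviation mu_i^(alpha/2) = (i*pi)^(-alpha)
    xi = [rng.gauss(0.0, (i * math.pi) ** (-alpha)) for i in range(1, n_modes + 1)]
    u = [sum(x_i * math.sqrt(2.0) * math.sin(i * math.pi * x)
             for i, x_i in enumerate(xi, start=1)) for x in xs]
    return xs, u

xs, u = sample_kl_path()
```

The sampled path vanishes at both boundary points, consistent with the Dirichlet eigenfunctions.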
Bayesian Inversion Examples and Optimal Control
Infinite dimensional example - TV-Gaussian Prior:
Example 3, Yao, Hu, Li:
Let µ_pr be a Gaussian measure with mean 0, support in H¹(0, T)^m, and covariance operator λ·C_Y with λ > 0.
Consider the prior measure
dP_u(u) = (1/Λ_u) exp(−Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)}) dµ_pr(u).
y_d = G(u) + η with G : H¹(0, T)^m → R^m.
Posterior measure:
dP_{u|y_d}(u) = (1/Λ_{y_d}) exp(−½ ‖G(u) − y_d‖²_Σ − Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)}) dµ_pr(u).
Bayesian Inversion Examples and Optimal Control
Denote by E the Cameron-Martin space of a covariance operator C : H → H:
E := {u = C^{1/2} x | x ∈ H}, with ‖u‖²_C := ⟨C^{−1/2}u, C^{−1/2}u⟩_H.
Maximum a Posteriori estimator (MAP) and Optimal Control:
MAP example 1:
Solve min_{x ∈ R} (1/(2γ²)) |G(x) − y_d|² + (1/(2σ²)) |x|².
MAP example 2:
Solve min_{u ∈ E} ½ ‖G(u) − y_d‖²_Σ + ½ ‖u‖²_E.
MAP example 3:
Solve min_{u ∈ E} ½ ‖G(u) − y_d‖²_Σ + Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)} + (1/(2λ)) ‖u‖²_E.
Example: E = H¹₀(0, T)^m,
C = diag((−Δ)^{−1}, ..., (−Δ)^{−1}) : L²(0, T)^m → (H²(0, T) ∩ H¹₀(0, T))^m ⊂ L²(0, T)^m.
(IGDK Munich — Graz) Sparse-Bayes Helmholtz and Wave Equation 29.08.18 12 / 64
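MAP example 1 is a scalar Tikhonov-type minimization, J(x) = |G(x) − y_d|²/(2γ²) + |x|²/(2σ²), which can be sketched with plain gradient descent. The forward model G(x) = x + 0.1·x³, its derivative, and all constants below are hypothetical choices; γ again stands for the noise standard deviation whose symbol was lost in the extraction.

```python
def map_estimate(y_d, gamma2=0.25, sigma2=1.0, step=0.05, iters=5000):
    """Gradient descent on J(x) = (G(x)-y_d)^2/(2*gamma2) + x^2/(2*sigma2)."""
    def G(x):  return x + 0.1 * x ** 3      # hypothetical forward model
    def dG(x): return 1.0 + 0.3 * x ** 2    # its derivative
    x = 0.0
    for _ in range(iters):
        grad = (G(x) - y_d) * dG(x) / gamma2 + x / sigma2
        x -= step * grad
    return x

x_map = map_estimate(y_d=1.0)
```

For a linear G the minimizer coincides with the posterior mean of Example 1, which is the usual link between MAP estimation and Tikhonov-regularized optimal control.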
Sparse-Bayesian Approach - Helmholtz Equation
Bayesian Identification of Sound Sources with the Helmholtz Equation
Engel, S., Hafemeyer, H., Münch, C., Schaden, D.
Sparse-Bayesian Approach - Helmholtz Equation
Propagation of Acoustic Waves:
(1/c²) ∂_tt y(t, x) − Δy(t, x) = F(t, x) in D ⊂ R^n
y := pressure fluctuation, c := speed of sound, F := inner source (loudspeakers).
Primary noise source acting on a part Γ_N of the boundary ∂D:
∂y(t, x)/∂ν = G(t, x) on Γ_N
Assume that G and F are harmonic sound sources with the same angular frequency ω, i.e.
G(t, x) = Re[g(x) e^{−iωt}],  F(t, x) = Re[f(x) e^{−iωt}],
with amplitudes f(x), g(x) ∈ C.
Sparse-Bayesian Approach - Helmholtz Equation
Helmholtz Equation ⇒ C-valued Pressure Amplitude:
Solution of the acoustic wave equation:
y(t, x) = Re[H(x) e^{−iωt}]
Helmholtz equation:
−ΔH − (ω/c)² H = f in D,
∂_ν H − i (ωρ/γ(ω)) H = 0 on Γ_Z := ∂D \ Γ_N,
∂_ν H = g on Γ_N.
γ(ω) := wall impedance of the wall Γ_Z,
ρ := density of the fluid.
Sparse-Bayesian Approach - Helmholtz Equation
Gaussian Prior for the Helmholtz problem:
We want to identify a probability distribution for the positions and amplitudes of an unknown number of sound sources.
We need a prior for f which models sound sources.
Problem sparsity:
A Gaussian prior of the form N(0, (−Δ)^{−α}) is often used in practice.
Advantage:
Advanced theory is available for this kind of distribution!
Disadvantage:
Sparse sound sources: samples of N(0, (−Δ)^{−α}) are not sparse → Markov kernels with rejection sampling, as used in SMC or MH, exhibit a high rejection rate (low performance, Cotter et al.).
Karhunen-Loève expansion: it can be expensive to sample from the prior!
We need a clear relationship between the number, positions and amplitudes of the sound sources ⇒ non-Gaussian prior!
Sparse-Bayesian Approach - Helmholtz Equation
Desired prior: f should be a finite linear combination of Dirac delta measures:
f = Σ_{i=1}^k α_i δ_{x_i} with α_i ∈ C and x_i inside D.
f = loudspeakers as acoustic monopoles.
Sparse-Bayesian Approach - Helmholtz Equation
Bayesian Inversion - Helmholtz Equation
y_d := (y_{d,j})_{j=1}^m = G(u) + η ∈ C^m
η = η₁ + i·η₂ with η_j ∼ N(0, Σ_j).
η ∼ CN(0, Γ_η, C_η) with covariance matrix Γ_η = Σ₁ + Σ₂ and relation matrix C_η = Σ₁ − Σ₂.
Observation operator:
G : D → C^m,  u ↦ (H_u(z_j))_{j=1}^m,
where D is the space of finite linear combinations of Diracs with support inside D, and z_j ∈ D are the measurement points.
Green's function solves the Helmholtz equation ⇒ assumption on the prior: we expect no sound sources in a small neighborhood of the microphones (H²-regularity; pointwise measurements!).
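The complex noise model above can be sketched numerically: with η = η₁ + i·η₂ and independent real parts η_r ∼ N(0, Σ_r), the identities Γ_η = Σ₁ + Σ₂ (covariance, E[η η̄]) and C_η = Σ₁ − Σ₂ (relation/pseudo-covariance, E[η η]) can be verified empirically. The scalar case m = 1 with variances s1, s2 below is an illustrative assumption.

```python
import math
import random

def complex_noise_samples(s1=0.5, s2=0.25, n=400_000, seed=2):
    """Draw eta = eta1 + i*eta2 with independent real parts N(0, s1), N(0, s2)."""
    rng = random.Random(seed)
    return [complex(rng.gauss(0.0, math.sqrt(s1)), rng.gauss(0.0, math.sqrt(s2)))
            for _ in range(n)]

etas = complex_noise_samples()
n = len(etas)
gamma_hat = sum((e * e.conjugate()).real for e in etas) / n   # estimates s1 + s2
c_hat     = sum(e * e for e in etas) / n                      # estimates s1 - s2
```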
Sparse-Bayesian Approach - Helmholtz Equation
Sparse prior
Formally, we consider a prior of the form:
u = Σ_{j=1}^k α^k_j δ_{x^k_j} ∈ D
k := random variable with values in N,
α^k_j := random variables with values in C,
x^k_j := random variables with values in the Helmholtz domain D.
Sparse-Bayesian Approach - Helmholtz Equation
Sparse prior - Example
k ∼ Poi(E_k) with E_k = expected number of sources.
α^k_j = α^k_{j1} + i·α^k_{j2} ∈ C with α^k_{jr} iid ∼ N(µ_r, σ_r), r = 1, 2.
x^k_j iid ∼ Unif(D_κ) with Z⃗ = (z_i)_{i=1}^m ⊂ D the measurement positions, D_κ ⊂ D open, and dist(D_κ, ∂D ∪ Z⃗) > κ > 0.
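One realization u = Σ_{j=1}^k α_j δ_{x_j} of this sparse prior can be drawn directly: a Poisson number of sources, complex Gaussian amplitudes, and positions uniform on the safety region D_κ. Here D = (0, 1)² and D_κ = (κ, 1−κ)² are illustrative choices, as are all constants.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's method; adequate for the small expected counts used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_sparse_source(E_k=3.0, mu=(0.0, 0.0), sigma=(1.0, 1.0),
                         kappa=0.1, seed=3):
    """One draw (amplitudes, positions) from the sparse prior on D = (0,1)^2."""
    rng = random.Random(seed)
    k = sample_poisson(E_k, rng)
    amplitudes = [complex(rng.gauss(mu[0], sigma[0]), rng.gauss(mu[1], sigma[1]))
                  for _ in range(k)]
    positions = [(rng.uniform(kappa, 1 - kappa), rng.uniform(kappa, 1 - kappa))
                 for _ in range(k)]
    return amplitudes, positions

amps, pos = sample_sparse_source()
```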
Sparse-Bayesian Approach - Helmholtz Equation
Posterior
Solution of the inverse problem: u | y_d (posterior distribution)
P_{u|y_d}(A) = (1/Λ(y_d)) ∫_A exp(−Φ(u, y_d)) dP_u(u),
where A ⊂ D and Λ(y_d) is the normalization constant
Λ(y_d) = ∫_D exp(−Φ(u, y_d)) dP_u(u).
Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Stability Results
Assume that the prior measure has a finite 2nd moment. Then for all r > 0 there exists C > 0 s.t.
d_Hell(P_{u|y₁}, P_{u|y₂}) ≤ C ‖y₁ − y₂‖_Σ,  ∀ y₁, y₂ ∈ B_r(0).
Assume that the prior measure has a finite 2nd moment. Let r > 0 and f be an X-valued function which is square-integrable with respect to P_{u|y} for all y ∈ B_r(0) ⊂ C^m. Then there is a C > 0 such that
‖E_{P_{u|y₁}}(f) − E_{P_{u|y₂}}(f)‖_X ≤ C ‖y₁ − y₂‖_Σ,  ∀ y₁, y₂ ∈ B_r(0).
Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Finite Element Approximation
We approximate the very weak solution of the Helmholtz equation by a finite element method ⇒
‖H(u) − H_h(u)‖_{H²(D_κ)} ∈ O(|ln(h)| h²)
P_{u|y_d,h}(A) = (1/Λ_h(y_d)) ∫_A exp(−Φ_h(u, y_d)) dP_u(u),
with Φ_h(u, y_d) = ½ ‖y_d − G_h(u)‖²_{Γ_η,C_η}, and G_h(u) = (H_{u,h}(z_j))_{j=1}^m.
Sparse-Bayesian Approach - Helmholtz Equation
Posterior - Stability Results
Assume that the prior measure has a finite 4th moment. For y_d ∈ C^m it holds that
d_Hell(P_{u|y_d,h}, P_{u|y_d}) ∈ O(|ln(h)| h²).
Assume that the prior measure P_u has a finite 4th moment. Let f be an X-valued function which has a finite second moment with respect to P_{u|y_d} and P_{u|y_d,h}. Then
‖E_{P_{u|y_d,h}}(f) − E_{P_{u|y_d}}(f)‖_X ∈ O(|ln(h)| h²).
86. Sparse-Bayesian Approach - Helmholtz Equation
[Figure: log-log plots of the Hellinger distance (left) and its variance (right) versus the mesh size h.]
Left figure: "o-o" = d_Hell(P_{u|y_d,h}, P_{u|y_d}), average of the Hellinger distance over 50 runs; "- -" = O(|ln h| h^2).
Right figure: variance of d_Hell(P_{u|y_d,h}, P_{u|y_d}) for different h from 50 runs.
Mesh size: h = √2 · 2^{−k} with k = 2, ..., 6.
SMC method with fixed N = 5 · 10^5 prior particles.
Reference measure: k_ref = 7.
87. Sparse-Bayesian Approach - Helmholtz Equation
[Figure: log-log plots of the error (left) and its variance (right) versus the mesh size h for the functions f_1, ..., f_5.]
Left figure: ‖E_{P_{u|y_d}}(f) − E_{P_{u|y_d,h}}(f)‖_X, average of 50 runs.
Functions:
f_1(u) := ‖u‖_1: first moment.
f_2(u) := 1_{two sources}(u): probability of two sources (bounded!).
f_3(u) := |y_u(z_0)|: expected pressure amplitude at z_0.
f_4(u) := Var(|y_u(z_0)|): variance of the pressure amplitude at z_0.
f_5(u) := 10 log_10(max(1, |Re(y_u(z_0) exp(−iζt))|)): decibel function at z_0.
Computation: see Hellinger distance.
88. Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Idea
Approximating a probability measure µ with Dirac measures:
µ ≈ (1/N) Σ_{i=1}^N δ_{u_i}, for (u_i)_{i=1}^N ⊂ supp(µ).
Approximate the posterior measure with intermediate measures:
dµ_j(u) := (1/Λ_j) exp(−β_j Ψ(u, y_d)) dP_u(u),
with 0 = β_0 < β_1 < · · · < β_J = 1.
Sequential updates:
µ_0 → µ_1 → · · · → µ_{J−1} → µ_J,
with additional redrawing steps for each µ_j (µ_j-invariant Markov kernel).
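The tempering idea above can be sketched in a few lines: moving from µ_j to µ_{j+1} multiplies each particle weight by exp(−(β_{j+1} − β_j) Ψ) and renormalizes, so composing the steps from β = 0 to β = 1 bends the uniform prior weights into the posterior weights. A minimal sketch with an illustrative misfit vector:

```python
import numpy as np

def tempering_update(weights, psi_values, beta_old, beta_new):
    """One tempering step mu_j -> mu_{j+1}: multiply each particle weight by
    exp(-(beta_new - beta_old) * Psi(u, y_d)) and renormalize."""
    logw = np.log(weights) - (beta_new - beta_old) * psi_values
    logw -= logw.max()                      # numerical stability
    w = np.exp(logw)
    return w / w.sum()

# Three particles with misfit values Psi = 0, 1, 2; after the schedule
# beta: 0 -> 0.5 -> 1 the weights are proportional to exp(-Psi).
psi = np.array([0.0, 1.0, 2.0])
w = np.full(3, 1.0 / 3.0)
for b0, b1 in [(0.0, 0.5), (0.5, 1.0)]:
    w = tempering_update(w, psi, b0, b1)
```

The composed result is independent of the schedule; the intermediate β values only control how abruptly the weights change (and hence how badly they degenerate before resampling).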
89. Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Sequential Update
1. Let µ_0^N = µ_0 and set j = 0.
2. Resample u_j^(n) ∼ µ_j^N, n = 1, · · · , N.
3. Set w_j^(n) = 1/N, n = 1, · · · , N, and define µ_j^N = Σ_{n=1}^N w_j^(n) δ_{u_j^(n)}.
4. Apply the Markov kernel u_{j+1}^(n) ∼ P_j(u_j^(n), ·) (redraw step: speed-up possible!).
5. Define w_{j+1}^(n) = ŵ_{j+1}^(n) / Σ_{ñ=1}^N ŵ_{j+1}^(ñ), with
ŵ_{j+1}^(n) = exp((β_j − β_{j+1}) Ψ(u_{j+1}^(n), y)) w_j^(n), and µ_{j+1}^N := Σ_{n=1}^N w_{j+1}^(n) δ_{u_{j+1}^(n)}.
6. Set j ← j + 1 and go to 2.
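The six steps above can be sketched for scalar particles as follows. The misfit `psi` and the pCN kernel are illustrative assumptions (a Gaussian toy prior instead of the sparse Dirac prior); only the loop structure follows the algorithm on the slide:

```python
import numpy as np

def smc(prior_samples, psi, betas, markov_kernel, rng):
    """Sketch of the sequential update, assuming a vectorized misfit psi(u)
    and a mu_j-invariant markov_kernel(u, beta) supplied by the caller."""
    u = np.array(prior_samples, dtype=float)
    N = len(u)
    w = np.full(N, 1.0 / N)                                  # step 1
    for beta_prev, beta_next in zip(betas[:-1], betas[1:]):
        u = u[rng.choice(N, size=N, p=w)]                    # step 2: resample
        w = np.full(N, 1.0 / N)                              # step 3: reset weights
        u = np.array([markov_kernel(x, beta_prev) for x in u])  # step 4: redraw
        logw = np.log(w) - (beta_next - beta_prev) * psi(u)  # step 5: reweight
        w = np.exp(logw - logw.max())
        w /= w.sum()                                         # step 6: next j
    return u, w

def make_pcn_kernel(psi, rng, s=0.5, steps=5):
    """pCN proposals, invariant for exp(-beta * psi(u)) dN(0, 1)(u)."""
    def kernel(u, beta):
        for _ in range(steps):
            v = np.sqrt(1.0 - s * s) * u + s * rng.standard_normal()
            if rng.random() < min(1.0, np.exp(beta * (psi(u) - psi(v)))):
                u = v
        return u
    return kernel

# Toy run: prior N(0, 1) and misfit psi(u) = 0.5 * (u - 1)^2, so the
# posterior is N(0.5, 0.5) and the weighted mean should land near 0.5.
rng = np.random.default_rng(1)
psi = lambda u: 0.5 * (u - 1.0) ** 2
kernel = make_pcn_kernel(psi, rng)
u, w = smc(rng.standard_normal(2000), psi, np.linspace(0.0, 1.0, 6), kernel, rng)
mean = float(np.sum(w * u))
```

Resampling before the redraw step keeps the particle set from collapsing onto a few heavy weights; the pCN kernel then restores diversity among the duplicated particles.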
96. Sparse-Bayesian Approach - Helmholtz Equation
Sequential Monte Carlo Method - Mean Square Error
Theorem
For every measurable and bounded function f, the measure µ_J^N satisfies
E_SMC[|E_{µ_J^N}[f] − E_{P_{u|y}}[f]|^2] ≤ (Σ_{j=1}^J 2Λ_j^{−1})^2 ‖f‖_∞^2 / N,
where E_SMC is the expectation with respect to the randomness in the SMC algorithm.
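The bound says the mean square error decays like ‖f‖²_∞ / N. The 1/N scaling is easy to check empirically; the sketch below does so for plain Monte Carlo (not the full SMC sampler) with the bounded test function f = cos, where E[cos(U)] = e^{−1/2} for U ∼ N(0, 1):

```python
import numpy as np

rng = np.random.default_rng(2)
true_val = np.exp(-0.5)        # E[cos(U)] for U ~ N(0, 1); ||cos||_inf = 1

def mse(N, runs=200):
    """Mean square error of the plain Monte Carlo estimate of E[cos(U)]."""
    est = np.array([np.cos(rng.standard_normal(N)).mean() for _ in range(runs)])
    return float(np.mean((est - true_val) ** 2))

m1, m2 = mse(500), mse(2000)   # quadrupling N shrinks the MSE roughly fourfold
```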
97. Sparse-Bayesian Approach - Helmholtz Equation
[Figure: log-log plots of the mean square error (left) and its variance (right) versus the particle number N ∈ {200, 800, 3200, 12800, 51200} for the functions f_1, ..., f_5.]
Left figure: E_SMC[|E_{µ_J^N}[f] − E_{P_{u|y}}[f]|^2], 100 runs for each N.
Fixed mesh size h = √2 · 2^{−7}.
Reference measure P_{u|y_d} with N_ref = 10^7.
f_2(u) := 1_{two sources}(u): the probability of two sources is bounded!
101. Sparse-Bayesian Approach - Helmholtz Equation
a) Smoothed posterior distribution P_{u|y_d}(·) restricted to the source positions.
b) Smoothed distribution, restricted to the source positions, of P_{u|y_d}(· | at least one sound source in [0.2, 0.3] × [0.7, 0.8]).
c) Smoothed distribution, restricted to the source positions, of P_{u|y_d}(· | at least one sound source in [0.7, 0.8] × [0.7, 0.8]).
We assume that all random variables are independent of each other.
102. Sparse-Bayesian Approach - Helmholtz Equation
a) Smoothed distribution of P_{u|y_d}(· | k = 2) in [0, 1]^2.
b) Smoothed distribution of P_{u|y_d}(· | k = 3) in [0, 1]^2.
We observe significant differences!
103. Sparse-Bayesian Approach - Helmholtz Equation

No. k | P_{µ_J^N}(k) | x(n_MAP^k) | α(n_MAP^k) | w(n_MAP^k)
2 | 77.8% | (0.37, 0.62), (0.82, 0.69) | 10.04 + 11.03i, 8.317 + 9.77i | 1.09 · 10^{−6}
3 | 21.9% | (0.17, 0.85), (0.90, 0.72), (0.46, 0.75) | 7.98 + 9.44i, 8.13 + 10.04i, 8.81 + 9.49i | 1.11 · 10^{−6}

Empirical MAP Estimator
This example shows that the empirical MAP estimator is not always the best choice!
Two sources are most likely.
The global empirical MAP estimator leads to three sources!
104. Sparse-Bayesian Approach - Helmholtz Equation
Top left figure: true solution |H u_exact,h|.
Top right figure: posterior mean E_{P_{u|y_d}}[|H u|].
Lower left/right figures: conditioned MAP estimators for k = 3 and k = 2.
105. Sparse-Bayesian Approach - Wave Equation
Optimal control and Bayesian inversion for the wave equation with BV-functions in time
106. Sparse-Bayesian Approach - Wave Equation
Optimal Control - Wave Equation - BV-functions in time
Let us introduce the optimal control problem (P^Π) for the linear wave equation with homogeneous Dirichlet boundary conditions:

(P^Π) min_{u ∈ BV(0,T)^m} (β/2) ‖Π(y_u(t⃗)) − y_d‖^2 + Σ_{j=1}^m α_j ‖D_t u_j‖_{M(0,T)} =: J^Π(y, u)

subject to the weak solution of
∂_tt y_u − Δy_u = Σ_{j=1}^m u_j g_j in (0, T) × Ω,
y_u = 0 on (0, T) × ∂Ω,
(y_u, ∂_t y_u) = (y_0, y_1) in {0} × Ω.
107. Sparse-Bayesian Approach - Wave Equation
Optimal Control - Wave Equation - BV-functions in time
y_u(t⃗) := (y_u(t_i))_{i=1}^r ⊂ (H_0^1)^r with 0 < t_1 < · · · < t_r ≤ T.
Π : L^2(Ω)^r → R^ℓ linear and bounded.
(g_j)_{j=1}^m ⊂ L^∞(Ω) \ {0} with pairwise disjoint supports.
Assume that (Π(y_{g_j}(t⃗)))_{j=1}^m are linearly independent in R^ℓ.
Example for Π - patch measurements:
Π_2 : L^2(Ω)^r → R^{k·r}, (φ_i)_{i=1}^r ↦ ( (1/|Ω_l|) ∫_{Ω_l} φ_i dx )_{l=1,...,k; i=1,...,r}.
109. Sparse-Bayesian Approach - Wave Equation
Optimal Control - Solution
Problem (P^Π) has a solution in BV(I)^m.
Numerics: regularization of (P^Π):

(P_γ^Π) min_{u ∈ H^1(0,T)^m} J^Π(y, u) + (γ/2) Σ_{j=1}^m ‖∂_t u_j‖_{L^2(0,T)}^2 =: J_γ^Π(y, u)

Optimality conditions, a discussion of sparsity, and the asymptotic behavior of (P^Π) and (P_γ^Π) can be found in my PhD thesis.
113. Sparse-Bayesian Approach - Wave Equation
By a BV-path-following method, we solved the corresponding problems (P̃_γ^Π) for γ → 0 with a semi-smooth Newton method.
From left to right, we see the optimal controls for the TV-regularization parameters α = 0.6, α = 0.18, and α = 0.006.
115. Sparse-Bayesian Approach - Wave Equation
Sparse-Bayesian Approach to (P^Π):
Consider the Bayesian inverse problem
y_d = Π(y_u(t⃗)) + η,
with noise η ∼ N(0, (1/β) Id) and Σ := (1/β) Id.
We consider a prior of the form
u = ( Σ_{j_i=1}^{k_i} α_{j_i}^{k_i} 1_{(t_{j_i}^{k_i}, T]} + c_i )_{i=1}^m, where
1) k_i is an N-valued random variable for i = 1, · · · , m,
2) α_{j_i}^{k_i} is an R-valued random variable for i = 1, · · · , m and j_i = 1, · · · , k_i,
3) t_{j_i}^{k_i} is a (0, T)-valued random variable for i = 1, · · · , m and j_i = 1, · · · , k_i,
4) c_i is an R-valued random variable for i = 1, · · · , m.
This kind of prior can have dense support in BV(0, T)^m with respect to the strict BV-topology.
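A draw from this prior is a random step function: a random number of jumps, with random locations and heights, plus a constant offset. A minimal sketch for one component, using the hyperparameters from the experiment slide and reading N(0, 5) and N(0, 0.1) as mean/variance pairs (an assumption):

```python
import numpy as np

def sample_bv_prior(rng, T=1.0, rate=2.0):
    """One draw of the piecewise-constant prior
    u(t) = sum_{j=1}^k alpha_j * 1_{(t_j, T]}(t) + c,
    with k ~ Pois(2), t_j ~ Unif(0, T), alpha_j ~ N(0, 5), c ~ N(0, 0.1)."""
    k = rng.poisson(rate)                            # number of jumps
    t_jumps = rng.uniform(0.0, T, size=k)            # jump locations
    alphas = rng.normal(0.0, np.sqrt(5.0), size=k)   # jump heights
    c = rng.normal(0.0, np.sqrt(0.1))                # constant offset

    def u(t):
        # the indicator 1_{(t_j, T]}(t) is 1 exactly when t > t_j
        return c + np.sum(alphas[:, None] * (t_jumps[:, None] < np.atleast_1d(t)), axis=0)

    return u, t_jumps, alphas, c

rng = np.random.default_rng(3)
u, t_jumps, alphas, c = sample_bv_prior(rng)
values = u(np.linspace(0.0, 1.0, 5))   # total variation of this path is sum(|alphas|)
```

Each realization lies in BV(0, T) with total variation Σ_j |α_j|, which is why such mixtures can approximate BV-functions in the strict topology.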
128. Sparse-Bayesian Approach - Wave Equation
Posterior - Finite Element Approximation
We approximate the weak solution of the wave equation by a finite element method ⇒
‖y_{u,τ,h} − y_u‖_{C(0,T;L^2(Ω))} ∈ O(τ^2 + h^2),
with (g, y_0, y_1) ∈ C^2(Ω) × C_0^3(Ω) × C_0^2(Ω).
Under the same assumptions as for the Helmholtz equation, it holds:
d_Hell(P_{u|y_d,τ,h}, P_{u|y_d}) ∈ O(τ^2 + h^2),
‖E_{P_{u|y_d,τ,h}}(f) − E_{P_{u|y_d}}(f)‖_X ∈ O(τ^2 + h^2).
133. Sparse-Bayesian Approach - Wave Equation
Experimental Parameters - Sparse-Bayesian Approach:
Parameters of the optimal control problem:
All parameters with respect to the wave equation, the operator Π = Π_2, g, t⃗, and y_d are the same.
Prior parameters for u_1 = Σ_{j_1=1}^{k_1} α_{j_1}^{k_1} 1_{(t_{j_1}^{k_1}, T]} + c_1:
k_1 ∼ Pois(2),
t_{j_1}^{k_1} ∼ Unif(0, T) i.i.d.,
α_{j_1}^{k_1} ∼ N(0, 5) i.i.d.,
c_1 ∼ N(0, 0.1) (all random variables are independent of each other).
Algorithm:
Similar to the Helmholtz problem, we considered an SMC method with an additional invariant Metropolis-Hastings step, which obtained similar convergence results as in the Helmholtz case.
140. Sparse-Bayesian Approach - Wave Equation
N = 5000.
The left figure shows the support of the approximated posterior.
In the right figure, we compare the empirical MAP estimator with the optimal control for the TV-regularization parameter α = 0.18.
142. Sparse-Bayesian Approach - Wave Equation
Experimental observations on the TV-Gaussian prior:
Example 3 with MAP (noise Σ = β · Id_{R^{k̃r}}, C = (−Δ)^{−1}):

min_{u ∈ (H_0^1(0,T))^m} (β/2) ‖G(u) − y_d‖_Σ^2 + Σ_{i=1}^m α_i ‖∂_t u_i‖_{M(0,T)} + (1/(2λ)) ‖u‖_{(H_0^1(0,T))^m}^2.

Experiment parameters: m = 1, α = 500, λ = 1.
Splitting pCN algorithm:
1) MCMC for the Gaussian prior with a TV-dependent rejection function (10 steps).
2) A fitting-term-dependent rejection function is used in a second step.
We used a truncated Karhunen-Loève expansion ⇒ it is computationally expensive for a high truncation index.
Strong dependence on the initial value (not needed in the SMC).
Weak predictions with randomized initial values: (9) actually used 10^6 − 10^8 samples for their experiments!
Our initial value: we considered the 'true function', projected it onto the mesh, and perturbed it with N(0, β) at each node (no information on this in (9)).
152. Outlook
Identification of moving acoustic sound sources with prior measures that have support in L^2(0, T; M(D)).
An analytical expression of the MAP estimator for our sparse-Bayesian approach.
153. Literature
(1) S. L. Cotter, G. O. Roberts, A. M. Stuart, D. White. MCMC methods for functions: modifying old algorithms to make them faster. Statist. Sci., 28(3):424–446, 2013.
(2) O. Coutant. Bayesian inversion. Lecture notes, Joint Inversion in Geophysics summer school, Barcelonnette (France), 2015.
(3) A. Bermúdez, P. Gamallo, R. Rodríguez. Finite element methods in local active control of sound. SIAM J. Control Optim., 43(2):437–465, 2004.
(4) M. Dashti and A. M. Stuart. The Bayesian approach to inverse problems. ArXiv e-prints, February 2013.
(5) R. Dautray and J. L. Lions. Mathematical analysis and numerical methods for science and technology. Springer-Verlag, Berlin, 1984-1985.
154. Literature
(6) K. Pieper, B. Tang, P. Trautmann, and D. Walter. Inverse point source location with the Helmholtz equation. Not published yet.
(7) A. H. Schatz. An observation concerning Ritz-Galerkin methods with indefinite bilinear forms. Math. Comp., 28(128):959–962, 1974.
(8) A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numerica, 19:451–559, 2010.
(9) Z. Yao, Z. Hu, J. Li. A TV-Gaussian prior for infinite-dimensional Bayesian inverse problems and its numerical implementations. Inverse Problems, 34, 2018.
(10) A. A. Zlotnik. Convergence rate estimates of finite-element methods for second-order hyperbolic equations. In G. I. Marchuk (ed.), Numerical Methods and Applications, p. 153 et seq., CRC Press, 1994.
159. Appendix - Helmholtz Equation - Prior
Sparse prior - Definition
P_u(·) = Σ_{n∈N_0} q(n) µ_n^0(·).
Let (q(n))_{n∈N_0} be a sequence in [0, 1] with Σ_{n∈N_0} q(n) = 1.
We can identify u = Σ_{j=1}^k α_j^k δ_{x_j^k} ∈ D with (α_j^k, x_j^k)_{j=1}^k ∈ (C × D_κ)^k.
W.l.o.g. assume 0 ∈ D_κ with dist(D_κ, ∂D ∪ Z⃗) > κ > 0.
Replace D with ℓ^1(C, D) ⊂ ℓ^1(C, R^d) =: ℓ^1 with norm
‖(α_n, x_n)_{n∈N_0}‖_{ℓ^1} := Σ_{n∈N_0} |α_n|_C + |x_n|_{R^d}.
Define the probability measure µ_n^0 on ℓ_κ^1 := ℓ^1(C, D_κ) (open) with the Borel σ-algebra and support in
ℓ_{n,κ}^1 := {sequences in ℓ_κ^1 that are zero after index n}.
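Sampling this mixture prior means first drawing the number of sources n with probabilities q(n), then drawing the finitely many amplitude/position pairs. A minimal sketch; the uniform positions and complex standard-normal amplitudes are illustrative assumptions — only the mixture structure over n is from the slide:

```python
import numpy as np

def sample_sparse_prior(rng, q, kappa=0.1):
    """One draw u = sum_{j=1}^k alpha_j^k * delta_{x_j^k}: first the number of
    sources k with probabilities q(n), then positions uniform in a shrunken
    square D_kappa = (kappa, 1 - kappa)^2 and complex amplitudes."""
    k = rng.choice(len(q), p=q)                           # number of sources
    x = rng.uniform(kappa, 1.0 - kappa, size=(k, 2))      # positions in D_kappa
    alpha = rng.normal(size=k) + 1j * rng.normal(size=k)  # complex amplitudes
    return alpha, x

rng = np.random.default_rng(4)
q = np.array([0.1, 0.3, 0.4, 0.2])   # q(n) for n = 0, ..., 3; sums to 1
alpha, x = sample_sparse_prior(rng, q)
```

In the ℓ¹ identification above, such a draw is the sequence ((α_1, x_1), ..., (α_k, x_k), 0, 0, ...), i.e. an element of ℓ¹_{k,κ}.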
164. Appendix - Helmholtz Equation - SMC
Sequential Monte Carlo Method - Redraw Step - Metropolis-Hastings
A sample u is characterized by
k = number of sources, α_i^k = amplitudes of the sources, x_i^k = positions of the sources.
Redraw the sample u as û with the following rule:
1. k̂ = k,
2. x̂_i^k = x_i^k + γ_x η_i if this stays in D_κ, otherwise x̂_i^k = x_i^k,
3. (α̂_i^k)_{i=1}^k = (1 − γ_α^2)^{1/2} ((α_i^k)_{i=1}^k − m_α) + m_α + γ_α ξ,
with γ_x ≥ 0, γ_α ∈ [0, 1], m_α ∈ C^k, ξ ∼ N(0, Γ, C), and η ∼ N(0, id_{R^k}); all random variables are independent of each other.
Accept û with probability min{1, exp(β_j(Φ(u; y_d) − Φ(û; y_d)))}.
This redraw step is µ_j-invariant!
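The redraw rule can be sketched as one Metropolis-Hastings step. The step sizes, the choice m_α = 0, and the unit square domain are illustrative assumptions; the structure (fixed k, restricted random walk on positions, pCN-type move on amplitudes, acceptance via the misfit difference) follows the slide:

```python
import numpy as np

def redraw(u, beta_j, misfit, rng, gamma_x=0.05, gamma_a=0.3, kappa=0.1):
    """One Metropolis-Hastings redraw step with D_kappa = (kappa, 1 - kappa)^2
    and m_alpha = 0; accepts with min{1, exp(beta_j * (Phi(u) - Phi(u_new)))}."""
    alpha, x = u
    k = len(alpha)
    # Step 2: random walk on the positions, reverted where it leaves D_kappa.
    x_new = x + gamma_x * rng.standard_normal(x.shape)
    inside = np.all((x_new > kappa) & (x_new < 1.0 - kappa), axis=1)
    x_new = np.where(inside[:, None], x_new, x)
    # Step 3: pCN-type move on the complex amplitudes (k is kept fixed).
    xi = rng.normal(size=k) + 1j * rng.normal(size=k)
    a_new = np.sqrt(1.0 - gamma_a ** 2) * alpha + gamma_a * xi
    u_new = (a_new, x_new)
    if rng.random() < min(1.0, np.exp(beta_j * (misfit(u) - misfit(u_new)))):
        return u_new
    return u

rng = np.random.default_rng(5)
u = (np.array([1.0 + 1.0j]), np.array([[0.5, 0.5]]))
misfit = lambda v: float(np.sum(np.abs(v[0]) ** 2))   # placeholder misfit Phi
u2 = redraw(u, 0.5, misfit, rng)
```

Because the tempered target at stage j has density exp(−β_j Φ) against the prior, this acceptance rule leaves µ_j invariant, which is exactly what step 4 of the SMC update requires.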
174. Appendix
The function G : (X, ‖·‖_X) → R^m fulfills the following assumptions in the examples above:
Assumptions on G:
i) (X, ‖·‖_X) is a separable Banach space.
ii) For every ε > 0 there is an M = M(ε) ∈ R such that, for all u ∈ X,
‖G(u)‖_Σ ≤ exp(ε ‖u‖_X^2 + M),
iii) for every r > 0 there is a K = K(r) > 0 such that, for all u_1, u_2 ∈ X with max{‖u_1‖_X, ‖u_2‖_X} < r,
‖G(u_1) − G(u_2)‖_Σ ≤ K ‖u_1 − u_2‖_X.
177. Appendix
Why not choose X as a non-separable Banach space?
For a non-separable Banach space X, it is not clear what a "natural" σ-algebra on X is.
One candidate: the cylindrical σ-algebra, i.e. the smallest σ-algebra such that all ℓ ∈ X* are measurable.
In separable Banach spaces, the Borel σ-algebra and the cylindrical σ-algebra coincide; in general, this fails for non-separable Banach spaces.
184. Appendix
Let B be a separable Banach space.
In general, we lose the following properties for a non-separable Banach space X:
1) Fernique's Theorem: Let µ be a Gaussian measure on B. Then there exists α > 0 such that
∫_B exp(α ‖u‖²) dµ(u) < ∞.
2) Fernique's Theorem implies: every Gaussian measure µ admits a compact covariance operator (finite second moment, ∫_B ‖u‖² dµ(u) < ∞).
3) In a separable Hilbert space B, one can characterize µ completely by its mean m and covariance operator C, i.e. µ = N(m, C) with m ∈ H_C and C : B → B.
C is a symmetric trace-class operator.
Cameron-Martin space of C: H_C = {u ∈ B | u = C^{1/2} x, x ∈ B}.
Tr(C) = ∫_B ‖u‖² dµ(u).
185. Appendix
Definition (Maximum a Posteriori Estimator)
Let z_δ = argmax_{z ∈ H} J_δ(z), with J_δ(z) := P^{u|y_d}(B_δ(z)), where B_δ(z) ⊂ H is the ball centered at z ∈ H with radius δ > 0. Any point z̃ ∈ H satisfying lim_{δ→0} J_δ(z̃)/J_δ(z_δ) = 1 is a MAP estimator for the posterior measure P^{u|y_d} on H.
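The ball-probability definition can be made concrete in one dimension. As an illustrative assumption (not from the slides), take the posterior to be the standard normal on H = R, so its MAP point is the mode z = 0; J_δ(z) is then just the posterior mass of the interval of radius δ around z, computable from the normal CDF.

```python
import math

def normal_cdf(t):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def J(z, delta):
    """Posterior probability of the ball B_delta(z) under N(0, 1)."""
    return normal_cdf(z + delta) - normal_cdf(z - delta)

delta = 1e-4
# maximize J(., delta) over a grid: the maximizer z_delta sits at the mode
zs = [i * 0.01 - 2.0 for i in range(401)]
z_delta = max(zs, key=lambda z: J(z, delta))
# the ratio in the definition stays strictly below 1 away from the mode
ratio_off = J(1.0, delta) / J(z_delta, delta)   # ~ exp(-1/2) for small delta
```

For small δ the ratio J_δ(z)/J_δ(z_δ) approaches the density ratio φ(z)/φ(0), so it tends to 1 only at the mode, matching the definition above.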
186. Appendix
Definition (Hellinger Distance)
Let µ1, µ2, ν be measures on X such that µ1 and µ2 have a Radon-Nikodym derivative with respect to ν. Then the Hellinger distance between µ1 and µ2 is defined as
d_Hell(µ1, µ2) = ( (1/2) ∫_X ( (dµ1/dν)^{1/2} − (dµ2/dν)^{1/2} )² dν )^{1/2}.
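For measures with finite support the definition reduces to a finite sum: with probability vectors p, q and ν the counting measure, the Radon-Nikodym derivatives are just the weights p_i, q_i. A minimal sketch:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete probability vectors p, q
    (densities w.r.t. counting measure):
    d_Hell(p, q) = ( (1/2) * sum_i (sqrt(p_i) - sqrt(q_i))^2 )^(1/2)."""
    return math.sqrt(0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                               for a, b in zip(p, q)))
```

With the 1/2 normalization used above, the distance is 0 for identical measures and attains its maximum value 1 for mutually singular ones.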
187. Appendix
Theorem
Let the prior given k ∈ N_0 sources satisfy
µ^0(du|k) = µ^0(dα, dx|k) = µ^0_α(dα|k) µ^0_x(dx|k),
µ^0_x(·|k) = U(D^k_κ), µ^0_α(·|k) = N(m_α, Γ, C).
Let q(u, ·) be the proposal distribution associated with the redraw step and define the acceptance probability as
a_j(u, u′) = min{1, exp((j/J) (Ψ(u) − Ψ(u′)))}.
Then the redraw algorithm is µ^y_j-invariant.