29. The lifted energy function
$$\int_\Omega \rho^{**}(x, u(x)) + \Psi^{**}(\nabla u)\, dx$$
…the relaxed energy minimization problem becomes

$$\min_{u:\Omega\to\mathbb{R}^{|V|}} \; \max_{q:\Omega\to K} \; \sum_{x\in\Omega} \rho^{**}(x, u(x)) + \langle \mathrm{Div}\, q, u \rangle. \quad (18)$$

In order to get rid of the pointwise maximum over $\rho_i^*(v)$ in Eq. (8), we introduce additional variables $w(x) \in \mathbb{R}$ and additional constraints $(v(x), w(x)) \in \mathcal{C}$, $x \in \Omega$, so that $w(x)$ attains the value of the pointwise maximum:

$$\min_{u:\Omega\to\mathbb{R}^{|V|}} \; \max_{\substack{(v,w):\Omega\to\mathcal{C} \\ q:\Omega\to K}} \; \sum_{x\in\Omega} \langle u(x), v(x) \rangle - w(x) + \langle \mathrm{Div}\, q, u \rangle, \quad (19)$$

where the set $\mathcal{C}$ is given as

$$\mathcal{C} = \bigcap_{1 \le i \le |T|} \mathcal{C}_i, \qquad \mathcal{C}_i := \bigl\{ (x, y) \in \mathbb{R}^{|V|+1} \mid \rho_i^*(x) \le y \bigr\}. \quad (20)$$

For numerical optimization we use a GPU-based implementation of a first-order primal-dual method [14]. The algorithm requires the orthogonal projections of the dual variables onto the sets $\mathcal{C}$ respectively $K$ in every iteration. However, the projection onto an epigraph of dimension $|V|+1$ is difficult for large values of $|V|$. We rewrite the constraints $(v(x), w(x)) \in \mathcal{C}_i$, $1 \le i \le |T|$, $x \in \Omega$ as $(n+1)$-dimensional epigraph constraints introducing variables $r_i(x) \in \mathbb{R}^n$, $s_i(x) \in \mathbb{R}$:

$$\tilde\rho_i^*(r_i(x)) \le s_i(x), \quad r_i(x) = A_i^\top E_i^\top v(x), \quad s_i(x) = w(x) - \langle E_i b_i, v(x) \rangle. \quad (21)$$
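The first-order primal-dual method [14] mentioned above alternates gradient steps with projections of the dual variables. As a hedged, minimal illustration (not the paper's GPU implementation), the same scheme can be run on a tiny 1D TV-denoising saddle point $\min_u \max_{|q|\le\lambda} \langle Du, q\rangle + \tfrac12\|u-f\|^2$, where the dual projection is just a clamp; the function name, step sizes, and test signal are my own choices:

```python
import numpy as np

def pdhg_tv1d(f, lam, tau=0.3, sigma=0.3, iters=3000):
    """Primal-dual (Chambolle-Pock style) iteration for the 1D
    TV-denoising saddle point
        min_u max_{|q| <= lam}  <D u, q> + 0.5 * ||u - f||^2,
    where D is the forward-difference operator. The dual update is a
    projection (a clamp onto [-lam, lam]), playing the role that the
    projections onto C and K play in the paper's algorithm."""
    n = len(f)
    u = f.astype(float).copy()
    u_bar = u.copy()
    q = np.zeros(n - 1)
    for _ in range(iters):
        # dual ascent step followed by projection onto the constraint set
        q = np.clip(q + sigma * np.diff(u_bar), -lam, lam)
        # negative divergence -D^T q with zero boundary conditions
        div_q = np.concatenate(([q[0]], np.diff(q), [-q[-1]]))
        # primal descent step: proximal map of 0.5 * ||u - f||^2
        u_new = (u + tau * (div_q + f)) / (1.0 + tau)
        u_bar = 2.0 * u_new - u
        u = u_new
    return u
```

With `lam = 0` the dual constraint pins `q` at zero and `u` stays at `f`; for the 6-sample two-level step used below, the closed-form 1D TV solution moves each level toward the other by `lam / 3`.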
E. Laude, T. Möllenhoff, M. Moeller, J. Lellmann, D. Cremers

Proof. Follows from a calculation starting at the definition of the convex conjugate $\Psi^*$. See Appendix A.

Interestingly, although in its original formulation (14) the set $K$ has infinitely many constraints, one can equivalently represent $K$ by finitely many.

Proposition 3. The set $K$ in equation (14) is the same as

$$K = \bigl\{ q \in \mathbb{R}^{d\times|V|} \mid \|D_i q\|_{S^\infty} \le 1,\; 1 \le i \le |T| \bigr\}, \qquad D_i q = Q_i^D (T_i^D)^{-1}, \quad (15)$$

where the matrices $Q_i^D \in \mathbb{R}^{d\times n}$ and $T_i^D \in \mathbb{R}^{n\times n}$ are given as

$$Q_i^D := \bigl( q_{i_1} - q_{i_{n+1}}, \dots, q_{i_n} - q_{i_{n+1}} \bigr), \qquad T_i^D := \bigl( t_{i_1} - t_{i_{n+1}}, \dots, t_{i_n} - t_{i_{n+1}} \bigr).$$

Proof. Similar to the analysis in [11], equation (14) basically states the Lipschitz continuity of a piecewise linear function defined by the matrices $q \in \mathbb{R}^{d\times|V|}$. Therefore, one can expect that the Lipschitz constraint is equivalent to a bound on the derivative. For the complete proof, see Appendix A.
Let for now the weight of the regularizer in (1) be zero. Then, at each point $x \in \Omega$ we minimize a generally nonconvex energy over a compact set $\Gamma \subset \mathbb{R}^n$:

$$\min_{u \in \Gamma} \rho(u). \quad (6)$$

We set up the lifted energy so that it attains finite values if and only if the argument $u$ is a sparse representation $u = E_i\alpha$ of a sublabel $u \in \Gamma$:

$$\rho(u) = \min_{1 \le i \le |T|} \rho_i(u), \qquad \rho_i(u) = \begin{cases} \rho(T_i\alpha), & \text{if } u = E_i\alpha,\ \alpha \in \Delta_U^n, \\ \infty, & \text{otherwise.} \end{cases} \quad (7)$$

Problems (6) and (7) are equivalent due to the one-to-one correspondence of $u = T_i\alpha$ and $u = E_i\alpha$. However, energy (7) is finite on a nonconvex set only. In order to make optimization tractable, we minimize its convex envelope.

Proposition 1. The convex envelope of (7) is given as:

$$\rho^{**}(u) = \sup_{v \in \mathbb{R}^{|V|}} \langle u, v \rangle - \max_{1 \le i \le |T|} \rho_i^*(v),$$
$$\rho_i^*(v) = \langle E_i b_i, v \rangle + \tilde\rho_i^*(A_i^\top E_i^\top v), \qquad \tilde\rho_i := \rho + \delta_{\Delta_i}. \quad (8)$$

$b_i$ and $A_i$ are given as $b_i := M_i^{n+1}$, $A_i := (M_i^1, M_i^2, \dots, M_i^n)$, where $M_i^j$ are the columns of the matrix $M_i := (T_i^\top, \mathbf{1})^{-\top} \in \mathbb{R}^{(n+1)\times(n+1)}$.

Proof. Follows from a calculation starting at the definition of $\rho^{**}$. See Appendix A.
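The biconjugate $\rho^{**}$ in Proposition 1 (the convex envelope) can be explored numerically with two discrete Legendre-Fenchel transforms. A toy sketch; the 1D example $\rho(u) = \min((u+1)^2, (u-1)^2)$ and the grid ranges are my own illustrative choices, not taken from the paper:

```python
import numpy as np

# Sample a nonconvex energy rho on a primal grid; its convex envelope
# equals 0 on [-1, 1] and (|u| - 1)^2 outside.
u_grid = np.linspace(-3.0, 3.0, 601)   # primal samples (step 0.01)
v_grid = np.linspace(-6.0, 6.0, 601)   # dual samples (step 0.02)

rho = np.minimum((u_grid + 1.0) ** 2, (u_grid - 1.0) ** 2)

# First transform: rho*(v) = max_u <u, v> - rho(u), evaluated on the grid.
rho_star = np.max(np.outer(v_grid, u_grid) - rho[None, :], axis=1)

# Second transform: rho**(u) = max_v <u, v> - rho*(v),
# i.e. the (grid-accurate) convex envelope of rho at u.
def rho_cc(u):
    return np.max(u * v_grid - rho_star)
```

On this grid, `rho_cc(0.0)` returns (approximately) 0 even though `rho(0) = 1`, showing how the envelope fills in the nonconvex gap between the two wells.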
33. The resulting optimization problem
$$\min_{u:\Omega\to\mathbb{R}^{|V|}} \; \max_{\substack{(v,w):\Omega\to\mathcal{C} \\ q:\Omega\to K}} \; \sum_{x\in\Omega} \langle u(x), v(x) \rangle - w(x) + \langle \mathrm{Div}\, q, u \rangle, \quad (19)$$

where $\mathcal{C} = \bigcap_{1 \le i \le |T|} \mathcal{C}_i$ with $\mathcal{C}_i := \{(x, y) \in \mathbb{R}^{|V|+1} \mid \rho_i^*(x) \le y\}$ (20). The first-order primal-dual method [14] requires the orthogonal projections of the dual variables onto the sets $\mathcal{C}$ respectively $K$ in every iteration.
Primal variable: $u$.  Dual variables: $v$, $w$, $q$.
• The optimization over $u$ is easy.
• The optimization over $v$, $w$, $q$ (the projections onto $\mathcal{C}$ and $K$) is the challenge.
Projection: when a variable is restricted to a feasible region, the projection step moves the variable back inside that region.
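As a minimal illustration of the projection step described above, here are two standard Euclidean projections; the sets (a box and an $\ell_2$ ball) are generic examples, not the paper's $\mathcal{C}$ and $K$:

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n: clamp each entry."""
    return np.clip(x, lo, hi)

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto {z : ||z||_2 <= radius}:
    rescale x only if it lies outside the ball."""
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x
```

Both are cheap closed-form maps; the difficulty in the paper is that the relevant sets ($\mathcal{C}$, an epigraph, and $K$, a spectral-norm constraint) need more elaborate closed-form or iterative projections.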
34. What solving this problem requires
$$\min_{u:\Omega\to\mathbb{R}^{|V|}} \; \max_{\substack{(v,w):\Omega\to\mathcal{C} \\ q:\Omega\to K}} \; \sum_{x\in\Omega} \langle u(x), v(x) \rangle - w(x) + \langle \mathrm{Div}\, q, u \rangle. \quad (19)$$

However, the projection onto an epigraph of dimension $|V|+1$ is difficult for large values of $|V|$. We rewrite the constraints $(v(x), w(x)) \in \mathcal{C}_i$, $1 \le i \le |T|$, $x \in \Omega$ as $(n+1)$-dimensional epigraph constraints introducing variables $r_i(x) \in \mathbb{R}^n$, $s_i(x) \in \mathbb{R}$:

$$\tilde\rho_i^*(r_i(x)) \le s_i(x), \quad r_i(x) = A_i^\top E_i^\top v(x), \quad s_i(x) = w(x) - \langle E_i b_i, v(x) \rangle. \quad (21)$$
• Maximize over $v$ and $w$ while projecting onto the set $\mathcal{C}$.
• Maximize over $q$ while projecting onto the set $K$.
35. The projections
The constraints $(v(x), w(x)) \in \mathcal{C}_i$, $1 \le i \le |T|$, $x \in \Omega$ are rewritten as $(n+1)$-dimensional epigraph constraints with variables $r_i(x) \in \mathbb{R}^n$, $s_i(x) \in \mathbb{R}$:

$$\tilde\rho_i^*(r_i(x)) \le s_i(x), \quad r_i(x) = A_i^\top E_i^\top v(x), \quad s_i(x) = w(x) - \langle E_i b_i, v(x) \rangle. \quad (21)$$

These equality constraints can be implemented using Lagrange multipliers. For the projection onto the set $K$ we use an approach similar to [7, Figure 7].
Projection onto the Schatten-∞ norm ball: find $q$ such that the largest singular value of $D_i q$ is at most 1.
Projection onto $\mathrm{epi}\bigl((\rho + \delta_\Delta)^*\bigr)$: problem dependent, e.g. projection onto a parabola.
36. (Reference) Projection onto a parabola
• Convex Relaxation of Vectorial Problems with Coupled
Regularization (E. Strekalovskiy, A. Chambolle, D. Cremers), In
SIAM Journal on Imaging Sciences, volume 7, 2014.
B.2. Projection onto parabolas $y \ge \alpha\|x\|_2^2$. Let $\alpha > 0$. For $x_0 \in \mathbb{R}^d$ and $y_0 \in \mathbb{R}$ consider the projection onto a parabola:

$$\arg\min_{\substack{x\in\mathbb{R}^d,\ y\in\mathbb{R},\\ y \ge \alpha\|x\|_2^2}} \frac{(x - x_0)^2}{2} + \frac{(y - y_0)^2}{2}. \quad \text{(B.4)}$$

If already $y_0 \ge \alpha\|x_0\|_2^2$, the solution is $(x, y) = (x_0, y_0)$. Otherwise, with $a := 2\alpha\|x_0\|_2$, $b := \tfrac{2}{3}(1 - 2\alpha y_0)$, and $d := a^2 + b^3$ set

$$v := \begin{cases} c - \dfrac{b}{c} \ \text{ with } c = \sqrt[3]{a + \sqrt{d}} & \text{if } d \ge 0, \\[6pt] 2\sqrt{-b}\,\cos\!\Bigl(\tfrac{1}{3}\arccos\tfrac{a}{\sqrt{-b}^{\,3}}\Bigr) & \text{if } d < 0. \end{cases} \quad \text{(B.5)}$$

If $c = 0$ in the first case, set $v := 0$. The solution is then given by

$$x = \begin{cases} \dfrac{v}{2\alpha}\dfrac{x_0}{\|x_0\|_2} & \text{if } x_0 \ne 0, \\ 0 & \text{else,} \end{cases} \qquad y = \alpha\|x\|_2^2. \quad \text{(B.6)}$$

Remark. In the case $d < 0$ it always holds that $\tfrac{a}{\sqrt{-b}^{\,3}} \in [0, 1]$. To ensure this also numerically, one should compute $d$ by $d = (a - \sqrt{-b}^{\,3})(a + \sqrt{-b}^{\,3})$ for $b < 0$.

Proof. First, for $y_0 \ge \alpha\|x_0\|_2^2$ the projection is obviously $(x, y) = (x_0, y_0)$. Otherwise, we …
37. (Reference) Projection onto the Schatten-∞ norm
• The Natural Total Variation Which Arises from Geometric Measure
Theory (B. Goldluecke, E. Strekalovskiy, D. Cremers), In SIAM Journal on
Imaging Sciences, volume 5, 2012.
… in that color edges are preserved better. We also showed that $TV_J$ can serve as a regularizer in more general energy functionals, which makes it applicable to general inverse problems like deblurring, zooming, inpainting, and superresolution.
7.1. Projection $\Pi_S$ for $TV_S$. Since each channel is treated separately, we can compute the well-known projection for the scalar TV for each color channel. Let $A \in \mathbb{R}^{n\times m}$ with rows $a_1, \dots, a_n \in \mathbb{R}^m$. Then $\Pi_S$ is defined rowwise as

$$\Pi_S(a_i) = \frac{a_i}{\max(1, |a_i|_2)}. \quad (7.1)$$
7.2. Projection $\Pi_F$ for $TV_F$. Let $A \in \mathbb{R}^{n\times m}$ with elements $a_{ij} \in \mathbb{R}$. From (2.8) we see that we need to compute the projection onto the unit ball in $\mathbb{R}^{n\cdot m}$ when $(a_{ij})$ is viewed as a vector in $\mathbb{R}^{n\cdot m}$. Thus,

$$\Pi_F(A) = \frac{A}{\max\Bigl(1, \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{m} a_{ij}^2}\Bigr)}. \quad (7.2)$$
7.3. Projection $\Pi_J$ for $TV_J$. Let $A \in \mathbb{R}^{n\times m}$ with singular value decomposition $A = U\Sigma V^T$ and $\Sigma = \mathrm{diag}(\sigma_1, \dots, \sigma_m)$. We assume that the singular values are ordered with $\sigma_1$ being the largest. If the sum of the singular values is less than or equal to one, $A$ already lies in $\mathrm{co}(E_n \otimes E_m)$. Otherwise, according to Theorem 3.18,

$$\Pi(A) = U\Sigma_p V^T \quad \text{with } \Sigma_p = \mathrm{diag}(\sigma_p). \quad (7.3)$$

To compute the matrix $V$ and the singular values, note that the eigenvalue decomposition of the $m \times m$ matrix $A^T A$ is given by $V\Sigma^2 V^T$, which is more efficient to compute than the full singular value decomposition since $m < n$. For images, $m = 2$, so there is even an explicit formula available. We can now simplify the formula (7.3) to make the computation of $U$ unnecessary. Let $\Sigma^+$ denote the pseudoinverse of $\Sigma$, which is given by

$$\Sigma^+ = \mathrm{diag}\Bigl(\frac{1}{\sigma_1}, \dots, \frac{1}{\sigma_k}, 0, \dots, 0\Bigr), \quad (7.4)$$

where $\sigma_k$ is the smallest nonzero singular value. Then $U = AV\Sigma^+$, and from (7.3) we conclude

$$\Pi(A) = AV\Sigma^+\Sigma_p V^T. \quad (7.5)$$

For the special case of color images, where $n = 3$ and $m = 2$, the implementation of (7.5) is detailed in Figure 7.
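For the Schatten-∞ (spectral norm) ball mentioned on the previous slide, the projection simply clips the singular values at the radius. A hedged sketch of this generic SVD-clipping operation (not the paper's optimized GPU routine, which exploits the small $m$):

```python
import numpy as np

def project_schatten_inf(A, radius=1.0):
    """Project a matrix A onto the Schatten-infinity ball
    {X : sigma_max(X) <= radius} by clipping its singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.minimum(s, radius)) @ Vt
```

If all singular values are already at most `radius`, the SVD is reassembled unchanged, so feasible matrices are (numerically) fixed points of the map.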
38. Experiments: Denoising
• Comparison of the baseline (label-space partitioning + linear interpolation, far right) with the proposed method (partitioning + convex approximation).
• The proposed method yields high-quality results even with a small number of regions.
Fig. 5: Convex ROF with vectorial TV. Panels: Input image; Unlifted problem, E = 992.50; Ours, |T| = 1, |V| = 4, E = 992.51; Ours, |T| = 6, |V| = 2×2×2, E = 993.52; Baseline, |V| = 4×4×4, E = 2255.81. Direct optimization and proposed method yield the same result. In contrast to the baseline method [11] the proposed approach has no discretization artefacts and yields a lower energy. The regularization parameter is chosen as λ = 0.3.

Second result row: Noisy input; Ours, |T| = 1, |V| = 4, E = 2849.52; Ours, |T| = 6, |V| = 2×2×2, E = 2806.18; Ours, |T| = 48, |V| = 3×3×3, E = 2633.83; Baseline, |V| = 4×4×4, E = 3151.80.

The purpose of this experiment is a proof of concept, as our method introduces overhead and convex problems can be solved via direct optimization. It can be seen in Fig. 4 and Fig. 5 that the baseline method [11] has a strong bias.

4.2 Denoising with Truncated Quadratic Dataterm. For images degraded with both Gaussian and salt-and-pepper noise, we define the dataterm as $\rho(x, u(x)) = \min\bigl(\tfrac{1}{2}\|u(x) - I(x)\|^2, \nu\bigr)$. We solve the …
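The truncated quadratic dataterm $\rho(x, u(x)) = \min(\tfrac12\|u(x)-I(x)\|^2, \nu)$ is straightforward to evaluate directly. A small vectorized sketch; treating the last array axis as the color channel is my assumption:

```python
import numpy as np

def truncated_quadratic(u, I, nu):
    """Dataterm rho(x, u(x)) = min(0.5 * ||u(x) - I(x)||^2, nu):
    quadratic near the observation I(x), constant (hence robust to
    outliers such as salt-and-pepper noise) beyond the threshold nu."""
    diff = u - I
    return np.minimum(0.5 * np.sum(diff * diff, axis=-1), nu)
```

The cap at `nu` is what makes the energy nonconvex in `u`, which is exactly the case the lifting machinery of the earlier slides is built to handle.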
39. Experiments: Optical Flow

Fig. 7: We compute the optical flow using our method and the product space approach [8].
Image 1 ([8]): |V| = 5×5, 0.67 GB, 4 min, aep = 2.78 · |V| = 11×11, 2.1 GB, 12 min, aep = 1.97 · |V| = 17×17, 4.1 GB, 25 min, aep = 1.63 · |V| = 28×28, 9.3 GB, 60 min, aep = 1.39
Image 2 ([11]): |V| = 3×3, 0.67 GB, 0.35 min, aep = 5.44 · |V| = 5×5, 2.4 GB, 16 min, aep = 4.22 · |V| = 7×7, 5.2 GB, 33 min, aep = 2.65 · |V| = 9×9: out of memory
Ground truth (Ours): |V| = 2×2, 0.63 GB, 17 min, aep = 1.28 · |V| = 3×3, 1.9 GB, 34 min, aep = 1.07 · |V| = 4×4, 4.1 GB, 41 min, aep = 0.97

4.3 Optical Flow. We compute the optical flow v between two input images I1, I2. The label space Γ = [−d, d]² is chosen according to the estimated maximum displacement d ∈ ℝ between the images. The dataterm is ρ(x, v(x)) = ‖I2(x) − I1(x + v(x))‖, and λ(x) is based on the norm of the image gradient ∇I1(x). In Fig. 7 we compare the proposed method to the product space approach [8]. Note that we implemented the …ce dataterm using Lagrange multipliers … needs more memory as it has to store a convex approximation of the energy instead of a linear one.
• Comparison of the baseline (label-space partitioning + linear interpolation, far right) with the proposed method (partitioning + convex approximation).
• The proposed method yields high-quality results even with a small number of regions.