2016/12/03 CV Study Group @ Kanto (CV勉強会@関東)
ECCV2016 Reading Session, presentation slides
2016/12/03
@peisuke
Self-introduction
Name: Keisuke Fujimoto (藤本 敬介)
Research: robotics, computer vision
Point clouds: shape integration, meshing, recognition
Images: image recognition, SfM / MVS
Robots: autonomous navigation, motion planning
Overview of this talk
• Paper presented
  • Sublabel-Accurate Convex Relaxation of Vectorial Multilabel Energies
• What kind of paper?
  • Solves multi-dimensional labeling problems efficiently via convex relaxation
  • Applications: denoising, dense optical flow, stereo matching, etc.
• What is distinctive?
  • Splits the problem domain into several regions and approximates the cost with a convex function on each region
  • The method applies to a general class of energies, so it is highly versatile
Note: I could not finish these slides in time, so I will re-upload a revised version later.
Sublabel-Accurate Convex Relaxation of Vectorial Multilabel Energies
Emanuel Laude, Thomas Möllenhoff, Michael Moeller, Jan Lellmann, Daniel Cremers
Explanation of the target problem
• What kind of problems does this apply to?
  • For each pixel $x$ of an image, assume some cost function $\rho$ is given that scores how well each label $u(x)$ fits
  • Minimize the cost while keeping the labels of neighboring pixels similar

$$E = \min_{u:\Omega \to \Gamma} \int_\Omega \rho(x, u(x))\, dx + \lambda\, \Psi(\nabla u)$$

where $\rho$ is some (arbitrary) data cost and $\Psi$ is a function that favors similar labels at neighboring pixels, for example the total variation

$$\Psi(\nabla u) = TV(u) = \int_\Omega \|\nabla u(x)\|\, dx$$

For example...
Example: stereo matching
Search along this scanline for the same pattern as this point, and compute the disparity.
At "this point", a cost function like this can be defined over the pixel matching score (figure: matching cost as a function of disparity).
Example: stereo matching
Compute such a cost function for every pixel.
Example: stereo matching
Pick low-cost disparities while keeping the disparity change between neighboring pixels small. A sketch of this data term follows below.
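As an illustration (my own sketch, not from the talk or the paper), the stereo data term $\rho(x, d)$ can be built as an absolute-difference cost volume over a rectified image pair; the arrays `left`, `right` and the value `max_disp` are assumed inputs:

```python
import numpy as np

def stereo_cost_volume(left, right, max_disp):
    """rho[y, x, d] = cost of assigning disparity d to pixel (y, x).

    left, right: (H, W) grayscale float images of a rectified stereo pair.
    """
    H, W = left.shape
    rho = np.full((H, W, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        # Compare each left pixel with the pixel d columns to its left in the
        # right image; pixels with no valid correspondence keep cost infinity.
        rho[:, d:, d] = np.abs(left[:, d:] - right[:, :W - d])
    return rho
```

Choosing the disparity that minimizes `rho[y, x, :]` independently per pixel gives a noisy map; the smoothness term Ψ(∇u) is what couples neighboring pixels.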
Example: optical flow
• For each pixel in a video, estimate in which direction and how far it moves at the next frame, by searching the neighborhood for the best-matching position
(figure: matching score over xy displacements)
Example: optical flow
• As in stereo matching, maximize the match quality while keeping the flow change between neighboring pixels small; see the 2D sketch below
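The optical flow data term is the same idea with a two-dimensional label (a displacement vector), which is exactly the vectorial setting of the paper. Again a hedged, brute-force sketch of my own; the frames `I1`, `I2` and the search radius `r` are assumed inputs:

```python
import numpy as np

def flow_cost_volume(I1, I2, r):
    """rho[y, x, i, j] = cost of displacement (dy, dx) = (i - r, j - r)."""
    H, W = I1.shape
    rho = np.full((H, W, 2 * r + 1, 2 * r + 1), np.inf)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            # Bring the candidate target pixel of I2 on top of each I1 pixel.
            shifted = np.roll(np.roll(I2, -i, axis=0), -j, axis=1)
            cost = np.abs(I1 - shifted)
            # Invalidate the borders that wrapped around in np.roll.
            valid = np.ones((H, W), dtype=bool)
            if i > 0:
                valid[H - i:, :] = False
            elif i < 0:
                valid[:-i, :] = False
            if j > 0:
                valid[:, W - j:] = False
            elif j < 0:
                valid[:, :-j] = False
            rho[valid, i + r, j + r] = cost[valid]
    return rho
```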
Explanation of the target problem (recap)
The same energy as before: minimize the data cost $\int_\Omega \rho(x, u(x))\, dx$ plus the smoothness term $\lambda\, \Psi(\nabla u)$ with $\Psi(\nabla u) = TV(u)$. A sketch of the discretized energy follows.
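Tying the pieces together, a minimal self-contained sketch (my own illustration, not from the talk) of the discretized energy for an integer-labeled image, assuming a per-pixel cost volume `rho` like the ones sketched above:

```python
import numpy as np

def labeling_energy(u, rho, lam):
    """Discrete E(u) = sum_x rho(x, u(x)) + lam * TV(u) for an integer label map.

    u:   (H, W) integer labels.
    rho: (H, W, L) cost volume, rho[y, x, l] = data cost of label l at (y, x).
    """
    H, W = u.shape
    data = rho[np.arange(H)[:, None], np.arange(W)[None, :], u].sum()
    # anisotropic TV: absolute label differences between neighboring pixels
    tv = np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum()
    return data + lam * tv
```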
Edge-preserving smoothing (the TV norm)
• Penalizing the squared difference between neighboring pixels blurs edges, whereas penalizing the absolute difference preserves them

$$E(\mathbf{u}) = \ell(\mathbf{u}) + \lambda\, \Omega(\nabla\mathbf{u}), \qquad \Omega(\nabla\mathbf{u}) = \sum \left( |\nabla u_x| + |\nabla u_y| \right)$$

(Figure: an original step signal and two denoised candidates; a sharp step and a gradual ramp both have |∇u| = 1, so the TV penalty does not prefer the blurred ramp, and the candidate that stays close to the original signal also incurs a small regularization penalty.)
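A quick numeric check of this claim (self-contained sketch): a sharp step and a gradual ramp have the same total variation, but the quadratic penalty strongly prefers the ramp, which is why quadratic smoothing blurs edges:

```python
import numpy as np

step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # sharp edge
ramp = np.linspace(0.0, 1.0, 6)                   # blurred edge

for name, u in [("step", step), ("ramp", ramp)]:
    du = np.diff(u)
    print(f"{name}: TV = {np.abs(du).sum():.2f}, quadratic = {(du ** 2).sum():.2f}")
# step: TV = 1.00, quadratic = 1.00
# ramp: TV = 1.00, quadratic = 0.20  -> the quadratic penalty prefers blurring
```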
(Reference) Vectorial Total Variation
• Applying TV per channel to a multi-channel image lets each channel produce an edge in a different direction, which blurs the result
• A cost that keeps edges common across channels while still measuring the amount of change per channel
• Use the largest singular value of ∇u as the pointwise norm; see the formula below
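Written out (my reconstruction from the bullet above; the name $TV_J$ follows the Goldluecke et al. reference cited later in these slides):

$$\mathrm{TV}_J(u) = \int_\Omega \sigma_{\max}\!\big(\nabla u(x)\big)\, dx$$

Using the largest singular value couples the channels: an edge contributes once, through a shared direction (the leading singular vector), instead of once per channel.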
Target problem (recap)
• A multilabel problem: hard to solve directly
  • high-dimensional, nonlinear, non-differentiable

$$E = \min_{u:\Omega \to \Gamma} \int_\Omega \rho(x, u(x))\, dx + \lambda\, \Psi(\nabla u)$$

→ Approximate it by a tractable convex problem.
What is a convex problem?
• A simple class of problems in which every local optimum is a global optimum
• Easy to optimize
(figure: a nonconvex function vs. a convex function)
How the approximation works
• Taking the biconjugate (the dual of the dual) of the objective $f$ yields $f^{**}$, a convex function that bounds $f$ from below
(figure: $f(\mathbf{x})$ and its convex envelope $f^{**}(\mathbf{x})$)
What duality means
• The function $f^*$ is called the convex conjugate of $f$
• Maximizing $f^*$ is equivalent to minimizing $f$

$$f^*(\mathbf{s}) = \sup \{ \mathbf{s}^\top \mathbf{x} - f(\mathbf{x}) \mid \mathbf{x} \in \mathbb{R}^n \}$$

A numeric sketch of the conjugate and biconjugate follows.
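These two definitions can be checked numerically. The following self-contained sketch (my own illustration) applies a discrete Legendre-Fenchel transform twice to a nonconvex double well and recovers its convex envelope:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 401)
f = (x ** 2 - 1.0) ** 2                  # nonconvex double-well function

def conjugate(vals, grid, dual_grid):
    """Discrete Legendre-Fenchel transform  f*(s) = max_x (s * x - f(x))."""
    return np.max(dual_grid[:, None] * grid[None, :] - vals[None, :], axis=1)

s = np.linspace(-30.0, 30.0, 1201)       # slope grid; must cover f's slopes
f_star = conjugate(f, x, s)              # f*
f_biconj = conjugate(f_star, s, x)       # f**, the convex envelope of f
# f_biconj is ~0 on [-1, 1] (the two wells bridged) and ~f(x) outside.
```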
Approximation by the biconjugate
• Replacing ρ and Ψ directly by their biconjugates gives a convex problem, but the approximation accuracy is low

$$E = \min_{u:\Omega \to \Gamma} \int_\Omega \rho^{**}(x, u(x))\, dx + \lambda\, \Psi^{**}(\nabla u)$$

(Figure 1 from the paper: (a) the original data term $\rho(x, u(x))$, (b) convexification without lifting, i.e. $\rho^{**}(x, u(x))$, (c) classical lifting, (d) the proposed lifting.)
High-accuracy approximation by splitting the domain
Approximating the cost with a convex function on each region raises the accuracy, but the function as a whole becomes nonconvex.
→ Increase the number of dimensions of the problem so that it can still be solved as a convex problem.
The label space is split into a set of triangles
(probably...!)
Lifted Representation
• Convert the variable into a higher-dimensional one
  • Express u as a convex combination of the vertices of the triangle containing it
  • u = (0.3, 0.2) → 0.7 * t2 + 0.1 * t3 + 0.2 * t6
  • Set the unused vertices to zero, giving a vector with one entry per vertex
  • (0, 0.7, 0.1, 0, 0, 0.2)

(Figure from the paper: $u(x) = 0.7\,e_2 + 0.1\,e_3 + 0.2\,e_6 = (0, 0.7, 0.1, 0, 0, 0.2)^\top$, with the barycentric weights drawn on the triangle $t_2 t_3 t_6$ of the triangulated label space.)

$$u = \sum_i \mathbf{u}_i\, t_i$$

A sketch of this lifting step follows.
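A minimal sketch of the lifting step (my own illustration; the triangulation is assumed given as vertex coordinates plus index triples):

```python
import numpy as np

def lift(u, vertices, triangles):
    """Represent u by the barycentric weights of its containing triangle,
    scattered into a |V|-dimensional sparse vector (the lifted u).

    vertices:  (|V|, n) array of label-space vertices t_i.
    triangles: list of index triples (n + 1 vertex indices per simplex).
    """
    for tri in triangles:
        T = vertices[list(tri)]                   # (n+1, n) vertex coordinates
        # Solve alpha @ T = u with sum(alpha) = 1, then check alpha >= 0.
        A = np.vstack([T.T, np.ones(len(tri))])   # (n+1, n+1)
        b = np.append(u, 1.0)
        alpha = np.linalg.solve(A, b)
        if np.all(alpha >= -1e-12):               # u lies inside this simplex
            lifted = np.zeros(len(vertices))
            lifted[list(tri)] = alpha
            return lifted
    raise ValueError("u is outside the triangulated label space")
```

For the slide's example this returns the barycentric weights 0.7, 0.1, 0.2 at the indices of t2, t3, t6 and zeros elsewhere.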
Problem formulation in the lifted representation
• The lifted data term and regularizer:

$$\boldsymbol{\rho}(\mathbf{u}) = \begin{cases} \rho(T_i \alpha), & \text{if } \mathbf{u} = E_i \alpha \\ \infty, & \text{otherwise} \end{cases}
\qquad
\boldsymbol{\Psi}(g) = \begin{cases} \| T_i \alpha - T_j \beta \|\, \|\upsilon\|, & \text{if } g = (E_i \alpha - E_j \beta) \otimes \upsilon \\ \infty, & \text{otherwise} \end{cases}$$

$$\int_\Omega \boldsymbol{\rho}^{**}(x, \mathbf{u}(x)) + \boldsymbol{\Psi}^{**}(\nabla \mathbf{u})\, dx$$
A concrete example of the lifted representation

$$\boldsymbol{\rho}(\mathbf{u}) = \begin{cases} \rho(T_i \alpha), & \text{if } \mathbf{u} = E_i \alpha \\ \infty, & \text{otherwise} \end{cases}$$

For $\mathbf{u} = (0, 0.7, 0.1, 0, 0, 0.2)$:
• Vertices: t1 = (0,0), t2 = (1,0), t3 = (2,0), t4 = (0,1), t5 = (1,1), t6 = (2,1)
• Triangles Δ1, ..., Δ4 with vertex sets E1 = (e1, e4, e5), E2 = (e1, e2, e5), E3 = (e2, e5, e6), E4 = (e2, e3, e6), where e2 = (0, 1, 0, 0, 0, 0) etc.
• Lifted representation: $\mathbf{u} = 0.7\,e_2 + 0.1\,e_3 + 0.2\,e_6$, which matches E4 with α = (0.7, 0.1, 0.2)
• Representation in the original space: $\boldsymbol{\rho}(\mathbf{u}) = \rho(u)$ with $u = 0.7\,t_2 + 0.1\,t_3 + 0.2\,t_6$
A concrete example of the lifted representation (cont.)
For $\mathbf{u} = (0.5, 0.7, 0.1, 0.2, 0.1, 0.2)$, with the same vertices t1, ..., t6 and triangles Δ1, ..., Δ4 as above:
The nonzero entries do not all lie on the vertices of a single triangle, so there is no $i$ and $\alpha$ with $\mathbf{u} = E_i \alpha$; hence $\boldsymbol{\rho}(\mathbf{u}) = \infty$. In the original space there is no corresponding point: it does not exist!
Problem formulation in the lifted representation (shown again)
The lifted data term ρ was illustrated above; next, a concrete example of the lifted regularizer Ψ(g).
A concrete example of the lifted representation: the regularizer

$$\boldsymbol{\Psi}(g) = \begin{cases} \| T_i \alpha - T_j \beta \|\, \|\upsilon\|, & \text{if } g = (E_i \alpha - E_j \beta) \otimes \upsilon \\ \infty, & \text{otherwise} \end{cases}$$

(Figure: the two sublabels $T_i \alpha \in \Delta_1$ and $T_j \beta \in \Delta_4$ live in different triangles of the label space, and υ is the normal of the edge in the image across which the label jumps.)
Note: υ is equivalent to an auxiliary variable that appears in the dual of the regularization term; it never appears explicitly in the computation (the regularizer is handled through a dual form, roughly $\sup_\eta \sum_i u_i\, \mathrm{div}\, \eta_i - \delta(\cdot)$, which is where υ arises).
About the lifted energy

$$\int_\Omega \boldsymbol{\rho}^{**}(x, \mathbf{u}(x)) + \boldsymbol{\Psi}^{**}(\nabla \mathbf{u})\, dx$$

(The slide pastes the relevant parts of the paper, reconstructed here.)

Let for now the weight of the regularizer in (1) be zero. Then, at each point $x \in \Omega$ we minimize a generally nonconvex energy over a compact set $\Gamma \subset \mathbb{R}^n$:

$$\min_{u \in \Gamma} \rho(u). \tag{6}$$

We set up the lifted energy so that it attains finite values if and only if the argument $\mathbf{u}$ is a sparse representation $\mathbf{u} = E_i \alpha$ of a sublabel $u \in \Gamma$:

$$\boldsymbol{\rho}(\mathbf{u}) = \min_{1 \le i \le |T|} \boldsymbol{\rho}_i(\mathbf{u}), \qquad \boldsymbol{\rho}_i(\mathbf{u}) = \begin{cases} \rho(T_i \alpha), & \text{if } \mathbf{u} = E_i \alpha,\ \alpha \in \Delta_U^n, \\ \infty, & \text{otherwise.} \end{cases} \tag{7}$$

Problems (6) and (7) are equivalent due to the one-to-one correspondence of $u = T_i \alpha$ and $\mathbf{u} = E_i \alpha$. However, energy (7) is finite on a nonconvex set only. In order to make optimization tractable, we minimize its convex envelope.

Proposition 1. The convex envelope of (7) is given as

$$\boldsymbol{\rho}^{**}(\mathbf{u}) = \sup_{v \in \mathbb{R}^{|V|}} \langle \mathbf{u}, v \rangle - \max_{1 \le i \le |T|} \boldsymbol{\rho}_i^*(v), \qquad \boldsymbol{\rho}_i^*(v) = \langle E_i b_i, v \rangle + \rho_i^*(A_i^\top E_i^\top v), \quad \rho_i := \rho + \delta_{\Delta_i}, \tag{8}$$

where $b_i$ and $A_i$ are given as $b_i := M_i^{n+1}$, $A_i := (M_i^1, M_i^2, \ldots, M_i^n)$, and $M_i^j$ are the columns of the matrix $M_i := (T_i^\top, \mathbf{1})^{-\top} \in \mathbb{R}^{(n+1)\times(n+1)}$. Proof: follows from a calculation starting at the definition of $\boldsymbol{\rho}^{**}$; see Appendix A of the paper.

The relaxed energy minimization problem becomes

$$\min_{\mathbf{u}:\Omega \to \mathbb{R}^{|V|}} \ \max_{q:\Omega \to \mathcal{K}} \ \sum_{x \in \Omega} \boldsymbol{\rho}^{**}(x, \mathbf{u}(x)) + \langle \mathrm{Div}\, q, \mathbf{u} \rangle. \tag{18}$$

In order to get rid of the pointwise maximum over $\boldsymbol{\rho}_i^*(v)$ in Eq. (8), additional variables $w(x) \in \mathbb{R}$ and additional constraints $(v(x), w(x)) \in \mathcal{C}$, $x \in \Omega$, are introduced so that $w(x)$ attains the value of the pointwise maximum:

$$\min_{\mathbf{u}:\Omega \to \mathbb{R}^{|V|}} \ \max_{\substack{(v,w):\Omega \to \mathcal{C} \\ q:\Omega \to \mathcal{K}}} \ \sum_{x \in \Omega} \langle \mathbf{u}(x), v(x) \rangle - w(x) + \langle \mathrm{Div}\, q, \mathbf{u} \rangle, \tag{19}$$

where the set $\mathcal{C}$ is given as

$$\mathcal{C} = \bigcap_{1 \le i \le |T|} \mathcal{C}_i, \qquad \mathcal{C}_i := \left\{ (x, y) \in \mathbb{R}^{|V|+1} \mid \boldsymbol{\rho}_i^*(x) \le y \right\}. \tag{20}$$

For numerical optimization the paper uses a GPU-based implementation of a first-order primal-dual method [14]. The algorithm requires the orthogonal projections of the dual variables onto the sets $\mathcal{C}$ and $\mathcal{K}$ in every iteration. However, the projection onto an epigraph of dimension $|V|+1$ is difficult for large $|V|$; the constraints are rewritten as $(n+1)$-dimensional epigraph constraints (shown later).

Interestingly, although in its original formulation (14) the set $\mathcal{K}$ has infinitely many constraints, one can equivalently represent $\mathcal{K}$ by finitely many.

Proposition 3. The set $\mathcal{K}$ in equation (14) is the same as

$$\mathcal{K} = \left\{ q \in \mathbb{R}^{d \times |V|} \;\middle|\; \| D_i q \|_{S^\infty} \le 1,\ 1 \le i \le |T| \right\}, \qquad D_i q = Q_i^D (T_i^D)^{-1}, \tag{15}$$

where the matrices $Q_i^D \in \mathbb{R}^{d \times n}$ and $T_i^D \in \mathbb{R}^{n \times n}$ are given as $Q_i^D := (q_{i_1} - q_{i_{n+1}}, \ldots, q_{i_n} - q_{i_{n+1}})$ and $T_i^D := (t_{i_1} - t_{i_{n+1}}, \ldots, t_{i_n} - t_{i_{n+1}})$. Proof: similar to the analysis in [11], equation (14) basically states the Lipschitz continuity of a piecewise linear function defined by the matrices $q$; for the complete proof, see Appendix A of the paper.
At first glance this looks difficult, but...
At first glance this looks difficult, but...
it really is difficult!!
Strategy for solving the convex problem
• Alternate between optimizing over the primal variables and optimizing over the dual variables
  • The primal variables are the variables of the original problem
  • The dual variables are the ones added afterwards

$$f^*(\mathbf{s}) = \sup \{ \mathbf{s}^\top \mathbf{x} - f(\mathbf{x}) \mid \mathbf{x} \in \mathbb{R}^n \}$$

(duality: $f(\mathbf{x})$ on the primal side, its conjugate on the dual side; $\mathbf{s}$ is the dual variable)
A generic sketch of such a primal-dual loop follows.
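Schematically, the alternation is a first-order primal-dual (Chambolle-Pock style) iteration like the one the paper uses [14]. A minimal generic sketch of my own; `K`, `Kt`, `prox_G`, and `proj_dual` are placeholders to be filled in per problem, not the paper's code:

```python
import numpy as np

def primal_dual(K, Kt, prox_G, proj_dual, u0, q0, tau, sigma, iters=1000):
    """Generic first-order primal-dual loop for min_u max_q G(u) + <K u, q> - F*(q).

    K, Kt:     a linear operator and its adjoint (e.g. a discrete gradient).
    prox_G:    proximal operator of the primal term G.
    proj_dual: projection of the dual variable onto its feasible set
               (for this paper: the sets C and K described above).
    """
    u, q = u0.copy(), q0.copy()
    u_bar = u.copy()
    for _ in range(iters):
        q = proj_dual(q + sigma * K(u_bar))   # dual ascent, then projection
        u_next = prox_G(u - tau * Kt(q))      # primal descent
        u_bar = 2.0 * u_next - u              # extrapolation step
        u = u_next
    return u, q
```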
The problem at hand
In the saddle-point form (19) above, $\mathbf{u}$ is the primal variable and $v$, $w$, $q$ are the dual variables.
• The optimization over $\mathbf{u}$ is easy
• The optimization over $v$, $w$, $q$ (the projections onto $\mathcal{C}$ and $\mathcal{K}$) is the hard part
Projection: when a variable is restricted to a feasible set, the step that moves the variable back into the set.
To solve this problem...
• Maximize over $v$ and $w$ while projecting onto the set $\mathcal{C}$
• Maximize over $q$ while projecting onto the set $\mathcal{K}$

Since the projection onto an epigraph of dimension $|V|+1$ is difficult for large $|V|$, the constraints $(v(x), w(x)) \in \mathcal{C}_i$, $1 \le i \le |T|$, $x \in \Omega$, are rewritten as $(n+1)$-dimensional epigraph constraints by introducing variables $r_i(x) \in \mathbb{R}^n$, $s_i(x) \in \mathbb{R}$:

$$\rho_i^*\big(r_i(x)\big) \le s_i(x), \qquad r_i(x) = A_i^\top E_i^\top v(x), \qquad s_i(x) = w(x) - \langle E_i b_i, v(x) \rangle. \tag{21}$$
About the projections
• Projection onto $\mathcal{C}$: for each $i$, project $(r_i(x), s_i(x))$ onto the epigraph $\mathrm{epi}\big((\rho + \delta_{\Delta_i})^*\big)$. The equality constraints in (21) can be implemented using Lagrange multipliers. The projection itself is problem-dependent; for a quadratic data term it becomes a projection onto a parabola (reference on the next slide).
• Projection onto $\mathcal{K}$: by Proposition 3, project each $D_i q$ onto the Schatten-∞ norm ball, i.e. find $q$ such that the largest singular value of $D_i q$ is at most 1; an approach similar to [7, Figure 7] is used.
(Reference) Projection onto a parabola
• Convex Relaxation of Vectorial Problems with Coupled Regularization (E. Strekalovskiy, A. Chambolle, D. Cremers), SIAM Journal on Imaging Sciences, volume 7, 2014.

B.2. Projection onto parabolas $y \ge \alpha \|x\|_2^2$. Let $\alpha > 0$. For $x_0 \in \mathbb{R}^d$ and $y_0 \in \mathbb{R}$ consider the projection onto a parabola:

$$\arg\min_{x \in \mathbb{R}^d,\ y \in \mathbb{R},\ y \ge \alpha \|x\|_2^2} \frac{(x - x_0)^2}{2} + \frac{(y - y_0)^2}{2}. \tag{B.4}$$

If already $y_0 \ge \alpha \|x_0\|_2^2$, the solution is $(x, y) = (x_0, y_0)$. Otherwise, with $a := 2\alpha \|x_0\|_2$, $b := \tfrac{2}{3}(1 - 2\alpha y_0)$, and $d := a^2 + b^3$, set

$$v := \begin{cases} c - \dfrac{b}{c}, \quad c = \sqrt[3]{a + \sqrt{d}}, & \text{if } d \ge 0, \\[2mm] 2\sqrt{-b}\, \cos\!\left( \dfrac{1}{3} \arccos \dfrac{a}{\sqrt{-b}^{\,3}} \right), & \text{if } d < 0. \end{cases} \tag{B.5}$$

If $c = 0$ in the first case, set $v := 0$. The solution is then given by

$$x = \begin{cases} \dfrac{v}{2\alpha} \dfrac{x_0}{\|x_0\|_2}, & \text{if } x_0 \ne 0, \\ 0, & \text{else,} \end{cases} \qquad y = \alpha \|x\|_2^2. \tag{B.6}$$

Remark: in the case $d < 0$ it always holds that $a / \sqrt{-b}^{\,3} \in [0, 1]$. To ensure this also numerically, one should compute $d$ by $d = (a - \sqrt{-b}^{\,3})(a + \sqrt{-b}^{\,3})$ for $b < 0$.
(Reference) Projection onto the Schatten-∞ norm ball
• The Natural Total Variation Which Arises from Geometric Measure Theory (B. Goldluecke, E. Strekalovskiy, D. Cremers), SIAM Journal on Imaging Sciences, volume 5, 2012.

7.1. Projection $\Pi_S$ for $TV_S$. Since each channel is treated separately, one can compute the well-known scalar-TV projection per color channel. Let $A \in \mathbb{R}^{n \times m}$ with rows $a_1, \ldots, a_n \in \mathbb{R}^m$. Then $\Pi_S$ is defined row-wise as

$$\Pi_S(a_i) = \frac{a_i}{\max(1, |a_i|_2)}. \tag{7.1}$$

7.2. Projection $\Pi_F$ for $TV_F$. View $A$ with elements $a_{ij}$ as a vector in $\mathbb{R}^{n \cdot m}$ and project onto the unit ball:

$$\Pi_F(A) = \frac{A}{\max\!\left(1, \sqrt{\sum_{i=1}^n \sum_{j=1}^m a_{ij}^2}\right)}. \tag{7.2}$$

7.3. Projection $\Pi_J$ for $TV_J$. Let $A \in \mathbb{R}^{n \times m}$ with singular value decomposition $A = U \Sigma V^T$ and $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_m)$, the singular values ordered with $\sigma_1$ largest. If the sum of the singular values is less than or equal to one, $A$ already lies in $\mathrm{co}(E_n \otimes E_m)$. Otherwise, according to Theorem 3.18 of that paper,

$$\Pi(A) = U \Sigma_p V^T \quad \text{with } \Sigma_p = \mathrm{diag}(\sigma_p). \tag{7.3}$$

To compute $V$ and the singular values, note that the eigenvalue decomposition of the $m \times m$ matrix $A^T A$ is $V \Sigma^2 V^T$, which is more efficient than the full SVD since $m < n$ (for images $m = 2$, so an explicit formula exists). The computation of $U$ can be avoided: with the pseudoinverse $\Sigma^+ = \mathrm{diag}(1/\sigma_1, \ldots, 1/\sigma_k, 0, \ldots, 0)$, where $\sigma_k$ is the smallest nonzero singular value, $U = A V \Sigma^+$ and

$$\Pi(A) = A V \Sigma^{+} \Sigma_p V^T. \tag{7.5}$$
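As a sketch of $\Pi_J$: projecting onto the dual ball of the Schatten-∞ norm amounts to projecting the singular values onto the unit ℓ1-ball. I am assuming this standard ℓ1-ball projection is what $\mathrm{diag}(\sigma_p)$ in (7.3) denotes (the excerpt defers the definition to its Theorem 3.18), so treat this as an illustration, not the paper's exact routine:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of a nonnegative vector onto the l1-ball."""
    if v.sum() <= radius:
        return v
    mu = np.sort(v)[::-1]
    cumsum = np.cumsum(mu)
    rho = np.nonzero(mu - (cumsum - radius) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (cumsum[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_nuclear_ball(A):
    """Projection onto {A : sum of singular values <= 1}, the dual ball of
    the spectral (Schatten-inf) norm used by TV_J."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if s.sum() <= 1.0:
        return A                       # already inside the set
    return (U * project_l1_ball(s)) @ Vt
```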
Experiment: denoising
• Comparison of the proposed method (region splitting + convex approximation) with the baseline that uses region splitting + linear interpolation (rightmost column)
• High-quality results despite the small number of regions

(Fig. 5 of the paper: convex ROF with vectorial TV, regularization parameter λ = 0.3. Input image; unlifted problem, E = 992.50; Ours, |T| = 1, |V| = 4, E = 992.51; Ours, |T| = 6, |V| = 2×2×2, E = 993.52; Baseline [11], |V| = 4×4×4, E = 2255.81. Direct optimization and the proposed method yield the same result; in contrast to the baseline [11], the proposed approach has no discretization artifacts and yields a lower energy.)

(Denoising with a truncated quadratic data term, ρ(x, u(x)) = min(½ ‖u(x) − I(x)‖², ν), for images degraded with both Gaussian and salt-and-pepper noise. Noisy input; Ours, |T| = 1, |V| = 4, E = 2849.52; Ours, |T| = 6, |V| = 2×2×2, E = 2806.18; Ours, |T| = 48, |V| = 3×3×3, E = 2633.83; Baseline, |V| = 4×4×4, E = 3151.80. This experiment is a proof of concept, since the unlifted convex problem can also be solved by direct optimization; the baseline method shows strong artifacts.)
Experiment: optical flow
• Comparison of the proposed method with the product space approach [8] and the baseline lifting [11]
• High-quality results despite the small number of regions

(Fig. 7 of the paper, reconstructed; aep = average endpoint error, with GPU memory and runtime per setting:)

  [8]:  |V| = 5×5: 0.67 GB, 4 min, aep = 2.78 · |V| = 11×11: 2.1 GB, 12 min, aep = 1.97 · |V| = 17×17: 4.1 GB, 25 min, aep = 1.63 · |V| = 28×28: 9.3 GB, 60 min, aep = 1.39
  [11]: |V| = 3×3: 0.67 GB, 0.35 min, aep = 5.44 · |V| = 5×5: 2.4 GB, 16 min, aep = 4.22 · |V| = 7×7: 5.2 GB, 33 min, aep = 2.65 · |V| = 9×9: out of memory
  Ours: |V| = 2×2: 0.63 GB, 17 min, aep = 1.28 · |V| = 3×3: 1.9 GB, 34 min, aep = 1.07 · |V| = 4×4: 4.1 GB, 41 min, aep = 0.97

(Caption: the optical flow v is computed between two input images I1, I2; the label space Γ = [−d, d]² is chosen according to the estimated maximum displacement d ∈ R. The data term is ρ(x, v(x)) = ‖I2(x) − I1(x + v(x))‖, and λ(x) is based on the norm of the image gradient ∇I1(x). The baseline [11] runs out of memory sooner, as it has to store a convex approximation of the data term rather than a linear one.)
Summary
• Proposed a general solution method for multi-dimensional multilabel problems
• Produces high-accuracy solutions by approximating the cost with convex functions per region
• High-quality regularization via the Vectorial Total Variation
More Related Content

What's hot

Optimal L-shaped matrix reordering, aka graph's core-periphery
Optimal L-shaped matrix reordering, aka graph's core-peripheryOptimal L-shaped matrix reordering, aka graph's core-periphery
Optimal L-shaped matrix reordering, aka graph's core-peripheryFrancesco Tudisco
 
The dual geometry of Shannon information
The dual geometry of Shannon informationThe dual geometry of Shannon information
The dual geometry of Shannon informationFrank Nielsen
 
Patch Matching with Polynomial Exponential Families and Projective Divergences
Patch Matching with Polynomial Exponential Families and Projective DivergencesPatch Matching with Polynomial Exponential Families and Projective Divergences
Patch Matching with Polynomial Exponential Families and Projective DivergencesFrank Nielsen
 
Small updates of matrix functions used for network centrality
Small updates of matrix functions used for network centralitySmall updates of matrix functions used for network centrality
Small updates of matrix functions used for network centralityFrancesco Tudisco
 
Classification with mixtures of curved Mahalanobis metrics
Classification with mixtures of curved Mahalanobis metricsClassification with mixtures of curved Mahalanobis metrics
Classification with mixtures of curved Mahalanobis metricsFrank Nielsen
 
A new Perron-Frobenius theorem for nonnegative tensors
A new Perron-Frobenius theorem for nonnegative tensorsA new Perron-Frobenius theorem for nonnegative tensors
A new Perron-Frobenius theorem for nonnegative tensorsFrancesco Tudisco
 
Comparing estimation algorithms for block clustering models
Comparing estimation algorithms for block clustering modelsComparing estimation algorithms for block clustering models
Comparing estimation algorithms for block clustering modelsBigMC
 
22 01 2014_03_23_31_eee_formula_sheet_final
22 01 2014_03_23_31_eee_formula_sheet_final22 01 2014_03_23_31_eee_formula_sheet_final
22 01 2014_03_23_31_eee_formula_sheet_finalvibhuti bansal
 
Hyperfunction method for numerical integration and Fredholm integral equation...
Hyperfunction method for numerical integration and Fredholm integral equation...Hyperfunction method for numerical integration and Fredholm integral equation...
Hyperfunction method for numerical integration and Fredholm integral equation...HidenoriOgata
 
Bellman functions and Lp estimates for paraproducts
Bellman functions and Lp estimates for paraproductsBellman functions and Lp estimates for paraproducts
Bellman functions and Lp estimates for paraproductsVjekoslavKovac1
 
Multilinear Twisted Paraproducts
Multilinear Twisted ParaproductsMultilinear Twisted Paraproducts
Multilinear Twisted ParaproductsVjekoslavKovac1
 
On learning statistical mixtures maximizing the complete likelihood
On learning statistical mixtures maximizing the complete likelihoodOn learning statistical mixtures maximizing the complete likelihood
On learning statistical mixtures maximizing the complete likelihoodFrank Nielsen
 
On the Jensen-Shannon symmetrization of distances relying on abstract means
On the Jensen-Shannon symmetrization of distances relying on abstract meansOn the Jensen-Shannon symmetrization of distances relying on abstract means
On the Jensen-Shannon symmetrization of distances relying on abstract meansFrank Nielsen
 
Divergence clustering
Divergence clusteringDivergence clustering
Divergence clusteringFrank Nielsen
 
S. Duplij, A q-deformed generalization of the Hosszu-Gluskin theorem
S. Duplij, A q-deformed generalization of the Hosszu-Gluskin theoremS. Duplij, A q-deformed generalization of the Hosszu-Gluskin theorem
S. Duplij, A q-deformed generalization of the Hosszu-Gluskin theoremSteven Duplij (Stepan Douplii)
 
Divergence center-based clustering and their applications
Divergence center-based clustering and their applicationsDivergence center-based clustering and their applications
Divergence center-based clustering and their applicationsFrank Nielsen
 
Deep generative model.pdf
Deep generative model.pdfDeep generative model.pdf
Deep generative model.pdfHyungjoo Cho
 

What's hot (20)

Optimal L-shaped matrix reordering, aka graph's core-periphery
Optimal L-shaped matrix reordering, aka graph's core-peripheryOptimal L-shaped matrix reordering, aka graph's core-periphery
Optimal L-shaped matrix reordering, aka graph's core-periphery
 
The dual geometry of Shannon information
The dual geometry of Shannon informationThe dual geometry of Shannon information
The dual geometry of Shannon information
 
Patch Matching with Polynomial Exponential Families and Projective Divergences
Patch Matching with Polynomial Exponential Families and Projective DivergencesPatch Matching with Polynomial Exponential Families and Projective Divergences
Patch Matching with Polynomial Exponential Families and Projective Divergences
 
Small updates of matrix functions used for network centrality
Small updates of matrix functions used for network centralitySmall updates of matrix functions used for network centrality
Small updates of matrix functions used for network centrality
 
Classification with mixtures of curved Mahalanobis metrics
Classification with mixtures of curved Mahalanobis metricsClassification with mixtures of curved Mahalanobis metrics
Classification with mixtures of curved Mahalanobis metrics
 
A new Perron-Frobenius theorem for nonnegative tensors
A new Perron-Frobenius theorem for nonnegative tensorsA new Perron-Frobenius theorem for nonnegative tensors
A new Perron-Frobenius theorem for nonnegative tensors
 
Comparing estimation algorithms for block clustering models
Comparing estimation algorithms for block clustering modelsComparing estimation algorithms for block clustering models
Comparing estimation algorithms for block clustering models
 
Fougeres Besancon Archimax
Fougeres Besancon ArchimaxFougeres Besancon Archimax
Fougeres Besancon Archimax
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
22 01 2014_03_23_31_eee_formula_sheet_final
22 01 2014_03_23_31_eee_formula_sheet_final22 01 2014_03_23_31_eee_formula_sheet_final
22 01 2014_03_23_31_eee_formula_sheet_final
 
Hyperfunction method for numerical integration and Fredholm integral equation...
Hyperfunction method for numerical integration and Fredholm integral equation...Hyperfunction method for numerical integration and Fredholm integral equation...
Hyperfunction method for numerical integration and Fredholm integral equation...
 
Bellman functions and Lp estimates for paraproducts
Bellman functions and Lp estimates for paraproductsBellman functions and Lp estimates for paraproducts
Bellman functions and Lp estimates for paraproducts
 
Multilinear Twisted Paraproducts
Multilinear Twisted ParaproductsMultilinear Twisted Paraproducts
Multilinear Twisted Paraproducts
 
On learning statistical mixtures maximizing the complete likelihood
On learning statistical mixtures maximizing the complete likelihoodOn learning statistical mixtures maximizing the complete likelihood
On learning statistical mixtures maximizing the complete likelihood
 
On the Jensen-Shannon symmetrization of distances relying on abstract means
On the Jensen-Shannon symmetrization of distances relying on abstract meansOn the Jensen-Shannon symmetrization of distances relying on abstract means
On the Jensen-Shannon symmetrization of distances relying on abstract means
 
Divergence clustering
Divergence clusteringDivergence clustering
Divergence clustering
 
S. Duplij, A q-deformed generalization of the Hosszu-Gluskin theorem
S. Duplij, A q-deformed generalization of the Hosszu-Gluskin theoremS. Duplij, A q-deformed generalization of the Hosszu-Gluskin theorem
S. Duplij, A q-deformed generalization of the Hosszu-Gluskin theorem
 
Divergence center-based clustering and their applications
Divergence center-based clustering and their applicationsDivergence center-based clustering and their applications
Divergence center-based clustering and their applications
 
Deep generative model.pdf
Deep generative model.pdfDeep generative model.pdf
Deep generative model.pdf
 

Viewers also liked

CVPR2016読み会 Sparsifying Neural Network Connections for Face Recognition
CVPR2016読み会 Sparsifying Neural Network Connections for Face RecognitionCVPR2016読み会 Sparsifying Neural Network Connections for Face Recognition
CVPR2016読み会 Sparsifying Neural Network Connections for Face RecognitionKoichi Takahashi
 
Stochastic Variational Inference
Stochastic Variational InferenceStochastic Variational Inference
Stochastic Variational InferenceKaede Hayashi
 
20170819 CV勉強会 CVPR 2017
20170819 CV勉強会 CVPR 201720170819 CV勉強会 CVPR 2017
20170819 CV勉強会 CVPR 2017issaymk2
 
On the Dynamics of Machine Learning Algorithms and Behavioral Game Theory
On the Dynamics of Machine Learning Algorithms and Behavioral Game TheoryOn the Dynamics of Machine Learning Algorithms and Behavioral Game Theory
On the Dynamics of Machine Learning Algorithms and Behavioral Game TheoryRikiya Takahashi
 
LCA and RMQ ~簡潔もあるよ!~
LCA and RMQ ~簡潔もあるよ!~LCA and RMQ ~簡潔もあるよ!~
LCA and RMQ ~簡潔もあるよ!~Yuma Inoue
 
プログラミングコンテストでのデータ構造 2 ~動的木編~
プログラミングコンテストでのデータ構造 2 ~動的木編~プログラミングコンテストでのデータ構造 2 ~動的木編~
プログラミングコンテストでのデータ構造 2 ~動的木編~Takuya Akiba
 
Greed is Good: 劣モジュラ関数最大化とその発展
Greed is Good: 劣モジュラ関数最大化とその発展Greed is Good: 劣モジュラ関数最大化とその発展
Greed is Good: 劣モジュラ関数最大化とその発展Yuichi Yoshida
 
PRML輪読#14
PRML輪読#14PRML輪読#14
PRML輪読#14matsuolab
 
ウェーブレット木の世界
ウェーブレット木の世界ウェーブレット木の世界
ウェーブレット木の世界Preferred Networks
 
Fractality of Massive Graphs: Scalable Analysis with Sketch-Based Box-Coverin...
Fractality of Massive Graphs: Scalable Analysis with Sketch-Based Box-Coverin...Fractality of Massive Graphs: Scalable Analysis with Sketch-Based Box-Coverin...
Fractality of Massive Graphs: Scalable Analysis with Sketch-Based Box-Coverin...Kenko Nakamura
 
Practical recommendations for gradient-based training of deep architectures
Practical recommendations for gradient-based training of deep architecturesPractical recommendations for gradient-based training of deep architectures
Practical recommendations for gradient-based training of deep architecturesKoji Matsuda
 
ORB-SLAMを動かしてみた
ORB-SLAMを動かしてみたORB-SLAMを動かしてみた
ORB-SLAMを動かしてみたTakuya Minagawa
 
強化学習その2
強化学習その2強化学習その2
強化学習その2nishio
 
多項式あてはめで眺めるベイズ推定 ~今日からきみもベイジアン~
多項式あてはめで眺めるベイズ推定~今日からきみもベイジアン~多項式あてはめで眺めるベイズ推定~今日からきみもベイジアン~
多項式あてはめで眺めるベイズ推定 ~今日からきみもベイジアン~ tanutarou
 
最小カットを使って「燃やす埋める問題」を解く
最小カットを使って「燃やす埋める問題」を解く最小カットを使って「燃やす埋める問題」を解く
最小カットを使って「燃やす埋める問題」を解くshindannin
 
LiDAR点群とSfM点群との位置合わせ
LiDAR点群とSfM点群との位置合わせLiDAR点群とSfM点群との位置合わせ
LiDAR点群とSfM点群との位置合わせTakuya Minagawa
 
画像認識モデルを作るための鉄板レシピ
画像認識モデルを作るための鉄板レシピ画像認識モデルを作るための鉄板レシピ
画像認識モデルを作るための鉄板レシピTakahiro Kubo
 

Viewers also liked (20)

488Paper
488Paper488Paper
488Paper
 
Deep Fried Convnets
Deep Fried ConvnetsDeep Fried Convnets
Deep Fried Convnets
 
CVPR2016読み会 Sparsifying Neural Network Connections for Face Recognition
CVPR2016読み会 Sparsifying Neural Network Connections for Face RecognitionCVPR2016読み会 Sparsifying Neural Network Connections for Face Recognition
CVPR2016読み会 Sparsifying Neural Network Connections for Face Recognition
 
Stochastic Variational Inference
Stochastic Variational InferenceStochastic Variational Inference
Stochastic Variational Inference
 
20170819 CV勉強会 CVPR 2017
20170819 CV勉強会 CVPR 201720170819 CV勉強会 CVPR 2017
20170819 CV勉強会 CVPR 2017
 
On the Dynamics of Machine Learning Algorithms and Behavioral Game Theory
On the Dynamics of Machine Learning Algorithms and Behavioral Game TheoryOn the Dynamics of Machine Learning Algorithms and Behavioral Game Theory
On the Dynamics of Machine Learning Algorithms and Behavioral Game Theory
 
LCA and RMQ ~簡潔もあるよ!~
LCA and RMQ ~簡潔もあるよ!~LCA and RMQ ~簡潔もあるよ!~
LCA and RMQ ~簡潔もあるよ!~
 
DeepLearningTutorial
DeepLearningTutorialDeepLearningTutorial
DeepLearningTutorial
 
プログラミングコンテストでのデータ構造 2 ~動的木編~
プログラミングコンテストでのデータ構造 2 ~動的木編~プログラミングコンテストでのデータ構造 2 ~動的木編~
プログラミングコンテストでのデータ構造 2 ~動的木編~
 
Greed is Good: 劣モジュラ関数最大化とその発展
Greed is Good: 劣モジュラ関数最大化とその発展Greed is Good: 劣モジュラ関数最大化とその発展
Greed is Good: 劣モジュラ関数最大化とその発展
 
PRML輪読#14
PRML輪読#14PRML輪読#14
PRML輪読#14
 
ウェーブレット木の世界
ウェーブレット木の世界ウェーブレット木の世界
ウェーブレット木の世界
 
Fractality of Massive Graphs: Scalable Analysis with Sketch-Based Box-Coverin...
Fractality of Massive Graphs: Scalable Analysis with Sketch-Based Box-Coverin...Fractality of Massive Graphs: Scalable Analysis with Sketch-Based Box-Coverin...
Fractality of Massive Graphs: Scalable Analysis with Sketch-Based Box-Coverin...
 
Practical recommendations for gradient-based training of deep architectures
Practical recommendations for gradient-based training of deep architecturesPractical recommendations for gradient-based training of deep architectures
Practical recommendations for gradient-based training of deep architectures
 
ORB-SLAMを動かしてみた
ORB-SLAMを動かしてみたORB-SLAMを動かしてみた
ORB-SLAMを動かしてみた
 
強化学習その2
強化学習その2強化学習その2
強化学習その2
 
多項式あてはめで眺めるベイズ推定 ~今日からきみもベイジアン~
多項式あてはめで眺めるベイズ推定~今日からきみもベイジアン~多項式あてはめで眺めるベイズ推定~今日からきみもベイジアン~
多項式あてはめで眺めるベイズ推定 ~今日からきみもベイジアン~
 
最小カットを使って「燃やす埋める問題」を解く
最小カットを使って「燃やす埋める問題」を解く最小カットを使って「燃やす埋める問題」を解く
最小カットを使って「燃やす埋める問題」を解く
 
LiDAR点群とSfM点群との位置合わせ
LiDAR点群とSfM点群との位置合わせLiDAR点群とSfM点群との位置合わせ
LiDAR点群とSfM点群との位置合わせ
 
画像認識モデルを作るための鉄板レシピ
画像認識モデルを作るための鉄板レシピ画像認識モデルを作るための鉄板レシピ
画像認識モデルを作るための鉄板レシピ
 

Similar to sublabel accurate convex relaxation of vectorial multilabel energies

On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...BRNSS Publication Hub
 
Solution set 3
Solution set 3Solution set 3
Solution set 3慧环 赵
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateAlexander Litvinenko
 
On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe...
On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe...On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe...
On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe...BRNSS Publication Hub
 
Mid semexam | Theory of Computation | Akash Anand | MTH 401A | IIT Kanpur
Mid semexam | Theory of Computation | Akash Anand | MTH 401A | IIT KanpurMid semexam | Theory of Computation | Akash Anand | MTH 401A | IIT Kanpur
Mid semexam | Theory of Computation | Akash Anand | MTH 401A | IIT KanpurVivekananda Samiti
 
Physical Chemistry Assignment Help
Physical Chemistry Assignment HelpPhysical Chemistry Assignment Help
Physical Chemistry Assignment HelpEdu Assignment Help
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfAlexander Litvinenko
 
machinelearning project
machinelearning projectmachinelearning project
machinelearning projectLianli Liu
 
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005Alexander Litvinenko
 
Litvinenko low-rank kriging +FFT poster
Litvinenko low-rank kriging +FFT  posterLitvinenko low-rank kriging +FFT  poster
Litvinenko low-rank kriging +FFT posterAlexander Litvinenko
 

Similar to sublabel accurate convex relaxation of vectorial multilabel energies (20)

QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
 
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o...
 
03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf
 
03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf03_AJMS_166_18_RA.pdf
03_AJMS_166_18_RA.pdf
 
Solution set 3
Solution set 3Solution set 3
Solution set 3
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
 
02_AJMS_186_19_RA.pdf
02_AJMS_186_19_RA.pdf02_AJMS_186_19_RA.pdf
02_AJMS_186_19_RA.pdf
 
02_AJMS_186_19_RA.pdf
02_AJMS_186_19_RA.pdf02_AJMS_186_19_RA.pdf
02_AJMS_186_19_RA.pdf
 
On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe...
On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe...On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe...
On Application of the Fixed-Point Theorem to the Solution of Ordinary Differe...
 
Mid semexam | Theory of Computation | Akash Anand | MTH 401A | IIT Kanpur
Mid semexam | Theory of Computation | Akash Anand | MTH 401A | IIT KanpurMid semexam | Theory of Computation | Akash Anand | MTH 401A | IIT Kanpur
Mid semexam | Theory of Computation | Akash Anand | MTH 401A | IIT Kanpur
 
Physical Chemistry Assignment Help
Physical Chemistry Assignment HelpPhysical Chemistry Assignment Help
Physical Chemistry Assignment Help
 
Networking Assignment Help
Networking Assignment HelpNetworking Assignment Help
Networking Assignment Help
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
 
Chemistry Assignment Help
Chemistry Assignment Help Chemistry Assignment Help
Chemistry Assignment Help
 
PCA on graph/network
PCA on graph/networkPCA on graph/network
PCA on graph/network
 
machinelearning project
machinelearning projectmachinelearning project
machinelearning project
 
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
 
Computer Network Homework Help
Computer Network Homework HelpComputer Network Homework Help
Computer Network Homework Help
 
Litvinenko low-rank kriging +FFT poster
Litvinenko low-rank kriging +FFT  posterLitvinenko low-rank kriging +FFT  poster
Litvinenko low-rank kriging +FFT poster
 

More from Fujimoto Keisuke

A quantum computational approach to correspondence problems on point sets
A quantum computational approach to correspondence problems on point setsA quantum computational approach to correspondence problems on point sets
A quantum computational approach to correspondence problems on point setsFujimoto Keisuke
 
F0-Consistent Many-to-many Non-parallel Voice Conversion via Conditional Auto...
F0-Consistent Many-to-many Non-parallel Voice Conversion via Conditional Auto...F0-Consistent Many-to-many Non-parallel Voice Conversion via Conditional Auto...
F0-Consistent Many-to-many Non-parallel Voice Conversion via Conditional Auto...Fujimoto Keisuke
 
YOLACT real-time instance segmentation
YOLACT real-time instance segmentationYOLACT real-time instance segmentation
YOLACT real-time instance segmentationFujimoto Keisuke
 
Product Managerの役割、周辺ロールとの差異
Product Managerの役割、周辺ロールとの差異Product Managerの役割、周辺ロールとの差異
Product Managerの役割、周辺ロールとの差異Fujimoto Keisuke
 
ChainerRLで株売買を結構頑張ってみた(後編)
ChainerRLで株売買を結構頑張ってみた(後編)ChainerRLで株売買を結構頑張ってみた(後編)
ChainerRLで株売買を結構頑張ってみた(後編)Fujimoto Keisuke
 
Temporal Cycle Consistency Learning
Temporal Cycle Consistency LearningTemporal Cycle Consistency Learning
Temporal Cycle Consistency LearningFujimoto Keisuke
 
20190414 Point Cloud Reconstruction Survey
20190414 Point Cloud Reconstruction Survey20190414 Point Cloud Reconstruction Survey
20190414 Point Cloud Reconstruction SurveyFujimoto Keisuke
 
20180925 CV勉強会 SfM解説
20180925 CV勉強会 SfM解説20180925 CV勉強会 SfM解説
20180925 CV勉強会 SfM解説Fujimoto Keisuke
 
Sliced Wasserstein Distance for Learning Gaussian Mixture Models
Sliced Wasserstein Distance for Learning Gaussian Mixture ModelsSliced Wasserstein Distance for Learning Gaussian Mixture Models
Sliced Wasserstein Distance for Learning Gaussian Mixture ModelsFujimoto Keisuke
 
LiDAR-SLAM チュートリアル資料
LiDAR-SLAM チュートリアル資料LiDAR-SLAM チュートリアル資料
LiDAR-SLAM チュートリアル資料Fujimoto Keisuke
 
Stock trading using ChainerRL
Stock trading using ChainerRLStock trading using ChainerRL
Stock trading using ChainerRLFujimoto Keisuke
 
Cold-Start Reinforcement Learning with Softmax Policy Gradient
Cold-Start Reinforcement Learning with Softmax Policy GradientCold-Start Reinforcement Learning with Softmax Policy Gradient
Cold-Start Reinforcement Learning with Softmax Policy GradientFujimoto Keisuke
 
Representation learning by learning to count
Representation learning by learning to countRepresentation learning by learning to count
Representation learning by learning to countFujimoto Keisuke
 
Dynamic Routing Between Capsules
Dynamic Routing Between CapsulesDynamic Routing Between Capsules
Dynamic Routing Between CapsulesFujimoto Keisuke
 
Deep Learning Framework Comparison on CPU
Deep Learning Framework Comparison on CPUDeep Learning Framework Comparison on CPU
Deep Learning Framework Comparison on CPUFujimoto Keisuke
 
Global optimality in neural network training
Global optimality in neural network trainingGlobal optimality in neural network training
Global optimality in neural network trainingFujimoto Keisuke
 

More from Fujimoto Keisuke (20)

A quantum computational approach to correspondence problems on point sets
A quantum computational approach to correspondence problems on point setsA quantum computational approach to correspondence problems on point sets
A quantum computational approach to correspondence problems on point sets
 
F0-Consistent Many-to-many Non-parallel Voice Conversion via Conditional Auto...
F0-Consistent Many-to-many Non-parallel Voice Conversion via Conditional Auto...F0-Consistent Many-to-many Non-parallel Voice Conversion via Conditional Auto...
F0-Consistent Many-to-many Non-parallel Voice Conversion via Conditional Auto...
 
YOLACT real-time instance segmentation
YOLACT real-time instance segmentationYOLACT real-time instance segmentation
YOLACT real-time instance segmentation
 
Product Managerの役割、周辺ロールとの差異
Product Managerの役割、周辺ロールとの差異Product Managerの役割、周辺ロールとの差異
Product Managerの役割、周辺ロールとの差異
 
ChainerRLで株売買を結構頑張ってみた(後編)
ChainerRLで株売買を結構頑張ってみた(後編)ChainerRLで株売買を結構頑張ってみた(後編)
ChainerRLで株売買を結構頑張ってみた(後編)
 
Temporal Cycle Consistency Learning
Temporal Cycle Consistency LearningTemporal Cycle Consistency Learning
Temporal Cycle Consistency Learning
 
ML@Loft
ML@LoftML@Loft
ML@Loft
 
20190414 Point Cloud Reconstruction Survey
20190414 Point Cloud Reconstruction Survey20190414 Point Cloud Reconstruction Survey
20190414 Point Cloud Reconstruction Survey
 
Chainer meetup 9
Chainer meetup 9Chainer meetup 9
Chainer meetup 9
 
20180925 CV勉強会 SfM解説
20180925 CV勉強会 SfM解説20180925 CV勉強会 SfM解説
20180925 CV勉強会 SfM解説
 
Sliced Wasserstein Distance for Learning Gaussian Mixture Models
Sliced Wasserstein Distance for Learning Gaussian Mixture ModelsSliced Wasserstein Distance for Learning Gaussian Mixture Models
Sliced Wasserstein Distance for Learning Gaussian Mixture Models
 
LiDAR-SLAM チュートリアル資料
LiDAR-SLAM チュートリアル資料LiDAR-SLAM チュートリアル資料
LiDAR-SLAM チュートリアル資料
 
Stock trading using ChainerRL
Stock trading using ChainerRLStock trading using ChainerRL
Stock trading using ChainerRL
 
Cold-Start Reinforcement Learning with Softmax Policy Gradient
Cold-Start Reinforcement Learning with Softmax Policy GradientCold-Start Reinforcement Learning with Softmax Policy Gradient
Cold-Start Reinforcement Learning with Softmax Policy Gradient
 
Representation learning by learning to count
Representation learning by learning to countRepresentation learning by learning to count
Representation learning by learning to count
 
Dynamic Routing Between Capsules
Dynamic Routing Between CapsulesDynamic Routing Between Capsules
Dynamic Routing Between Capsules
 
Deep Learning Framework Comparison on CPU
Deep Learning Framework Comparison on CPUDeep Learning Framework Comparison on CPU
Deep Learning Framework Comparison on CPU
 
ICCV2017一人読み会
ICCV2017一人読み会ICCV2017一人読み会
ICCV2017一人読み会
 
Global optimality in neural network training
Global optimality in neural network trainingGlobal optimality in neural network training
Global optimality in neural network training
 
CVPR2017 oral survey
CVPR2017 oral surveyCVPR2017 oral survey
CVPR2017 oral survey
 

Recently uploaded

Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Neo4j
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfAlex Barbosa Coqueiro
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebUiPathCommunity
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentationphoebematthew05
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksSoftradix Technologies
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitecturePixlogix Infotech
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptxLBM Solutions
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024BookNet Canada
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsMark Billinghurst
 

Recently uploaded (20)

Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024Build your next Gen AI Breakthrough - April 2024
Build your next Gen AI Breakthrough - April 2024
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Unraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdfUnraveling Multimodality with Large Language Models.pdf
Unraveling Multimodality with Large Language Models.pdf
 
Dev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio WebDev Dives: Streamline document processing with UiPath Studio Web
Dev Dives: Streamline document processing with UiPath Studio Web
 
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort ServiceHot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
Hot Sexy call girls in Panjabi Bagh 🔝 9953056974 🔝 Delhi escort Service
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
costume and set research powerpoint presentation
costume and set research powerpoint presentationcostume and set research powerpoint presentation
costume and set research powerpoint presentation
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Benefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other FrameworksBenefits Of Flutter Compared To Other Frameworks
Benefits Of Flutter Compared To Other Frameworks
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Understanding the Laravel MVC Architecture
Understanding the Laravel MVC ArchitectureUnderstanding the Laravel MVC Architecture
Understanding the Laravel MVC Architecture
 
Key Features Of Token Development (1).pptx
Key  Features Of Token  Development (1).pptxKey  Features Of Token  Development (1).pptx
Key Features Of Token Development (1).pptx
 
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 
Human Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR SystemsHuman Factors of XR: Using Human Factors to Design XR Systems
Human Factors of XR: Using Human Factors to Design XR Systems
 

sublabel accurate convex relaxation of vectorial multilabel energies

  • 25. Worked example of the lifted representation (continued)

ρ(u) = ρ(T_i α) if u = E_i α, and ∞ otherwise.

Now take u = (0.5, 0.7, 0.1, 0.2, 0.1, 0.2). With the vertices t1 = (0,0), t2 = (1,0), t3 = (2,0), t4 = (0,1), t5 = (1,1), t6 = (2,1) and the triangles Δ1: E_1 = (e_1, e_4, e_5), Δ2: E_2 = (e_1, e_2, e_5), Δ3: E_3 = (e_2, e_5, e_6), Δ4: E_4 = (e_2, e_3, e_6), there is no triangle i and weight vector α with u = E_i α — the vector is not supported on the vertices of a single triangle, so no counterpart exists in the original space. The lifted dataterm therefore assigns ρ(u) = ∞.
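To make the sparse encoding concrete, here is a minimal Python sketch for the 2×3 triangulation above. The helper names (lift, barycentric, lifted_dataterm) are illustrative, not from the paper, and the grid follows this slide's vertex coordinates, so the lifted vector (0, 0.7, 0.1, 0, 0, 0.2) corresponds to the label 0.7·t2 + 0.1·t3 + 0.2·t6 = (1.3, 0.2) on this grid:

import numpy as np

# Vertices t1..t6 and triangles E_1..E_4 (0-based) from the example above.
T = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]], float)
TRIS = [(0, 3, 4), (0, 1, 4), (1, 4, 5), (1, 2, 5)]

def barycentric(u, tri):
    """Barycentric coordinates alpha of u with respect to triangle tri."""
    a, b, c = T[list(tri)]
    s, t = np.linalg.solve(np.column_stack([b - a, c - a]), u - a)
    return np.array([1 - s - t, s, t])

def lift(u):
    """Sparse lifted vector u = E_i alpha: weights on the containing triangle."""
    for tri in TRIS:
        alpha = barycentric(np.asarray(u, float), tri)
        if np.all(alpha >= -1e-12):              # u lies inside this triangle
            v = np.zeros(len(T))
            v[list(tri)] = alpha
            return v
    raise ValueError("u lies outside the triangulated label space")

def lifted_dataterm(v, rho):
    """rho(T_i alpha) if v = E_i alpha for some triangle i, else +inf."""
    for tri in TRIS:
        mask = np.zeros(len(T), bool)
        mask[list(tri)] = True
        if np.allclose(v[~mask], 0) and np.isclose(v.sum(), 1) and np.all(v >= 0):
            return rho(v[mask] @ T[list(tri)])   # evaluate at T_i alpha
    return np.inf                                # no sparse representation exists

print(lift([1.3, 0.2]))                          # -> [0, 0.7, 0.1, 0, 0, 0.2]
rho = lambda u: np.sum(u ** 2)                   # placeholder dataterm
print(lifted_dataterm(np.array([0.5, 0.7, 0.1, 0.2, 0.1, 0.2]), rho))  # -> inf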
  • 27. Worked example of the lifted representation: the regularizer

Ψ(g) = ‖T_i α − T_j β‖ · ‖ν‖ if g = (E_i α − E_j β) ⊗ ν, and ∞ otherwise.

On the same triangulation (t1 = (0,0), ..., t6 = (2,1), triangles Δ1, ..., Δ4), T_i α and T_j β are the sublabels on the two sides of an edge in the image, and ν is the edge normal. Note that ν is equivalent to the auxiliary variable that appears in the dual form of the regularizer, so it never has to be computed explicitly.
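As a small sanity check, the jump cost can be evaluated by projecting two valid sparse lifted vectors back to the label space. A minimal sketch (lifted_jump_cost is an illustrative name; the ∞ case for invalid lifted vectors is not handled here):

import numpy as np

T = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]], float)  # t1..t6

def lifted_jump_cost(vi, vj, nu):
    """Psi for a jump across an image edge with normal nu, for valid sparse
    lifted vectors vi = E_i alpha, vj = E_j beta: ||T_i a - T_j b|| * ||nu||."""
    ui, uj = vi @ T, vj @ T          # back-projection to the original label space
    return np.linalg.norm(ui - uj) * np.linalg.norm(nu)

vi = np.array([0, 0.7, 0.1, 0, 0, 0.2])   # sublabel inside triangle (t2, t3, t6)
vj = np.array([1, 0, 0, 0, 0, 0])         # vertex label t1
print(lifted_jump_cost(vi, vj, np.array([1.0, 0.0])))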
  • 29. The lifted energy

∫_Ω ρ**(x, u(x)) + Ψ**(∇u) dx

Let for now the weight of the regularizer be zero. Then at each point x ∈ Ω a generally nonconvex energy is minimized over a compact set Γ ⊂ R^n: min_{u∈Γ} ρ(u) (6). The lifted energy is set up so that it attains finite values if and only if the argument u is a sparse representation u = E_i α of a sublabel u ∈ Γ:

ρ(u) = min_{1≤i≤|T|} ρ_i(u), ρ_i(u) = ρ(T_i α) if u = E_i α, α ∈ Δ_U^n, and ∞ otherwise. (7)

Problems (6) and (7) are equivalent due to the one-to-one correspondence of u = T_i α and u = E_i α. However, energy (7) is finite on a nonconvex set only; to make optimization tractable, its convex envelope is minimized.

Proposition 1: the convex envelope of (7) is

ρ**(u) = sup_{v∈R^{|V|}} ⟨u, v⟩ − max_{1≤i≤|T|} ρ*_i(v), ρ*_i(v) = ⟨E_i b_i, v⟩ + ρ̃*_i(A_i⊤ E_i⊤ v), ρ̃_i := ρ + δ_{Δ_i}, (8)

where b_i and A_i are built from the columns of a matrix M_i determined by the triangle vertices T_i.

The relaxed energy minimization problem becomes

min_{u:Ω→R^{|V|}} max_{q:Ω→K} Σ_{x∈Ω} ρ**(x, u(x)) + ⟨Div q, u⟩. (18)

To get rid of the pointwise maximum over the ρ*_i(v) in (8), additional variables w(x) ∈ R and additional constraints (v(x), w(x)) ∈ C, x ∈ Ω are introduced so that w(x) attains the value of the pointwise maximum:

min_{u:Ω→R^{|V|}} max_{(v,w):Ω→C, q:Ω→K} Σ_{x∈Ω} ⟨u(x), v(x)⟩ − w(x) + ⟨Div q, u⟩, (19)

where C = ∩_{1≤i≤|T|} C_i, C_i := {(x, y) ∈ R^{|V|+1} | ρ*_i(x) ≤ y}. (20)

Proposition 3: although in its original formulation (14) the set K has infinitely many constraints, one can equivalently represent K by finitely many:

K = {q ∈ R^{d×|V|} | ‖D_i q‖_{S∞} ≤ 1, 1 ≤ i ≤ |T|}, D_i q = Q_i^D (T_i^D)^{-1}, (15)

where Q_i^D := (q_{i_1} − q_{i_{n+1}}, ..., q_{i_n} − q_{i_{n+1}}) ∈ R^{d×n} and T_i^D := (t_{i_1} − t_{i_{n+1}}, ..., t_{i_n} − t_{i_{n+1}}) ∈ R^{n×n}; equation (14) basically states the Lipschitz continuity of a piecewise linear function defined by the matrices q [11]. For the numerical optimization a GPU-based implementation of a first-order primal-dual method [14] is used, which requires the orthogonal projections of the dual variables onto the sets C and K in every iteration.
  • 32. Strategy for solving the convex problem

• Alternate between optimizing over the primal variables and optimizing over the dual variables.
• The primal variables are the variables of the original problem.
• The dual variables are the ones introduced afterwards.

f(x) —(dualize)→ f*(s) = sup { s⊤x − f(x) | x ∈ R^n }, where s is the dual variable.
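The alternation can be seen in miniature on a toy 1D ROF problem. This is a generic first-order primal-dual (Chambolle-Pock style) loop under my own choice of step sizes, not the paper's lifted solver: the dual step is an ascent step followed by a projection onto the feasible set, and the primal step is a closed-form proximal update.

import numpy as np

def tv1d_primal_dual(f, lam=1.0, tau=0.25, sigma=0.5, iters=300):
    """Toy primal-dual loop for 1D ROF: min_u 0.5*||u - f||^2 + lam*|Du|_1,
    with D the forward-difference operator and f a 1D numpy array."""
    u = f.astype(float).copy()
    p = np.zeros(len(f) - 1)                     # dual variable, |p_i| <= lam
    u_bar = u.copy()
    for _ in range(iters):
        p = np.clip(p + sigma * np.diff(u_bar), -lam, lam)   # ascent + projection
        div = np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))  # D^T p
        u_old = u
        u = (u - tau * div + tau * f) / (1 + tau)  # prox of 0.5*||u - f||^2
        u_bar = 2 * u - u_old                      # extrapolation step
    return u

f = np.r_[np.zeros(50), np.ones(50)] + 0.1 * np.random.randn(100)
u = tv1d_primal_dual(f, lam=0.5)

With σ·τ·‖D‖² ≤ 1 (here 0.5) the iterates converge to the ROF solution; the lifted problem of slide 29 replaces the box projection by the projections onto C and K.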
  • 33. Our problem in this form

In the saddle-point problem (19), u is the primal variable, and v, w, q are the dual variables.

• The optimization over u is easy.
• The optimization over v, w, q — the projections onto C and K — is the difficult part.

A projection is the step that, when a variable is only allowed to move within a restricted region, moves the variable back into that region.
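Two textbook projections, just to fix the idea (the sets C and K themselves need the specialized routines on the following slides):

import numpy as np

def project_box(x, lo, hi):
    """Projection onto a box: clamp each coordinate into [lo, hi]."""
    return np.clip(x, lo, hi)

def project_l2_ball(x, r=1.0):
    """Projection onto the l2 ball of radius r: rescale if x lies outside."""
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x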
  • 34. What solving our problem requires

• Maximize over v and w while projecting onto the region C.
• Maximize over q while projecting onto the region K.

The projection onto an epigraph of dimension |V| + 1 is difficult for large |V|, so the constraints (v(x), w(x)) ∈ C_i, 1 ≤ i ≤ |T|, x ∈ Ω are rewritten as (n+1)-dimensional epigraph constraints by introducing variables r_i(x) ∈ R^n, s_i(x) ∈ R:

ρ̃*_i(r_i(x)) ≤ s_i(x), r_i(x) = A_i⊤ E_i⊤ v(x), s_i(x) = w(x) − ⟨E_i b_i, v(x)⟩. (21)
  • 35. The projections

• Projection onto epi((ρ + δ_{Δ_i})*): problem dependent — e.g. a projection onto a parabola.
• Projection onto K: a Schatten-∞ norm projection (find q such that the largest singular value of D_i q is at most 1).

The equality constraints in (21) can be implemented using Lagrange multipliers; for the projection onto the set K, an approach similar to [7, Figure 7] is used.
  • 36. (Reference) Projection onto a parabola

Convex Relaxation of Vectorial Problems with Coupled Regularization (E. Strekalovskiy, A. Chambolle, D. Cremers), SIAM Journal on Imaging Sciences, volume 7, 2014 — Appendix B.2: projection onto parabolas y ≥ α‖x‖²₂ with α > 0. For x₀ ∈ R^d and y₀ ∈ R consider

argmin_{x ∈ R^d, y ∈ R, y ≥ α‖x‖²₂} ‖x − x₀‖²₂ / 2 + (y − y₀)² / 2. (B.4)

If already y₀ ≥ α‖x₀‖²₂, the solution is (x, y) = (x₀, y₀). Otherwise, with a := 2α‖x₀‖₂, b := (2/3)(1 − 2αy₀), and d := a² + b³, set

v := c − b/c with c = ∛(a + √d) if d ≥ 0,
v := 2√(−b) cos((1/3) arccos(a / √(−b)³)) if d < 0. (B.5)

If c = 0 in the first case, set v := 0. The solution is then given by

x = (v / 2α) · x₀ / ‖x₀‖₂ if x₀ ≠ 0, else x = 0; y = α‖x‖²₂. (B.6)

Remark: in the case d < 0 it always holds that a / √(−b)³ ∈ [0, 1]. To ensure this also numerically, one should compute d by d = (a − √(−b)³)(a + √(−b)³) for b < 0.
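The formulas above transcribe directly into code; a minimal sketch (project_parabola is an illustrative name, not from the referenced paper):

import numpy as np

def project_parabola(x0, y0, alpha):
    """Orthogonal projection of (x0, y0) onto {(x, y) : y >= alpha * ||x||^2},
    following Strekalovskiy et al., SIIMS 2014, Appendix B.2 (alpha > 0)."""
    x0 = np.atleast_1d(np.asarray(x0, float))
    n0 = np.linalg.norm(x0)
    if y0 >= alpha * n0 ** 2:                    # already feasible
        return x0, y0
    a = 2.0 * alpha * n0
    b = (2.0 / 3.0) * (1.0 - 2.0 * alpha * y0)
    if b < 0:                                    # numerically stable d for b < 0
        sb3 = np.sqrt(-b) ** 3
        d = (a - sb3) * (a + sb3)
    else:
        d = a * a + b ** 3
    if d >= 0:
        c = np.cbrt(a + np.sqrt(d))
        v = c - b / c if c != 0 else 0.0
    else:
        v = 2.0 * np.sqrt(-b) * np.cos(np.arccos(a / np.sqrt(-b) ** 3) / 3.0)
    x = (v / (2.0 * alpha)) * x0 / n0 if n0 > 0 else np.zeros_like(x0)
    return x, alpha * np.linalg.norm(x) ** 2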
  • 37. (Reference) Projection for the Schatten-∞ norm

The Natural Total Variation Which Arises from Geometric Measure Theory (B. Goldluecke, E. Strekalovskiy, D. Cremers), SIAM Journal on Imaging Sciences, volume 5, 2012. For A ∈ R^{n×m}:

• Projection Π_S for TV_S: each channel is treated separately, so the scalar-TV projection is applied per row: Π_S(a_i) = a_i / max(1, |a_i|₂). (7.1)
• Projection Π_F for TV_F: view A as a vector in R^{n·m} and project onto the unit ball: Π_F(A) = A / max(1, √(Σ_i Σ_j a_ij²)). (7.2)
• Projection Π_J for TV_J: let A = UΣV⊤ with Σ = diag(σ₁, ..., σ_m), σ₁ largest. If the sum of the singular values is at most one, A already lies in co(E_n ⊗ E_m); otherwise Π(A) = U Σ_p V⊤ with Σ_p = diag(σ_p). (7.3)

To compute V and the singular values, note that the eigenvalue decomposition of the m×m matrix A⊤A is V Σ² V⊤, which is more efficient than the full singular value decomposition since m < n; for images m = 2, so an explicit formula is available. Computing U is unnecessary: with the pseudoinverse Σ⁺ = diag(1/σ₁, ..., 1/σ_k, 0, ..., 0), where σ_k is the smallest nonzero singular value (7.4), U = AVΣ⁺, and hence Π(A) = A V Σ⁺ Σ_p V⊤. (7.5)
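The same U-free mechanics give the Schatten-∞ projection needed on slide 35. Note the referenced paper projects the singular values for TV_J (sum at most one); for the Schatten-∞ unit ball one instead clamps each singular value at 1, which is what this sketch (under that assumption) does:

import numpy as np

def project_schatten_inf(A, radius=1.0):
    """Project A onto {X : largest singular value <= radius} by clamping the
    singular values, using the U-free formula P(A) = A V S+ Sp V^T, with V and
    sigma obtained from the eigendecomposition of A^T A as on the slide."""
    A = np.asarray(A, float)
    lam, V = np.linalg.eigh(A.T @ A)             # A^T A = V diag(lam) V^T
    sigma = np.sqrt(np.clip(lam, 0.0, None))     # singular values of A
    if sigma.max() <= radius:
        return A                                 # already inside the ball
    sigma_p = np.minimum(sigma, radius)          # clamped singular values
    inv = np.where(sigma > 1e-12, 1.0 / np.maximum(sigma, 1e-12), 0.0)  # S+
    return A @ V @ np.diag(inv * sigma_p) @ V.T  # A V S+ Sp V^T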
  • 38. Experiment: denoising

• Comparison between the baseline (region decomposition + linear interpolation, [11]) and the proposed method (region decomposition + convex approximation).
• High-quality results despite a small number of regions.

Fig. 5 (convex ROF with vectorial TV, regularization parameter λ = 0.3): direct optimization of the unlifted problem reaches E = 992.50; the proposed method reaches E = 992.51 with |T| = 1, |V| = 4 and E = 993.52 with |T| = 6, |V| = 2×2×2, whereas the baseline [11] reaches E = 2255.81 with |V| = 4×4×4 and shows discretization artifacts. This experiment is a proof of concept, since the problem is convex and can also be solved by direct optimization.

Denoising with a truncated quadratic dataterm: for images degraded with both Gaussian and salt-and-pepper noise, the dataterm is ρ(x, u(x)) = min(½‖u(x) − I(x)‖², ν). On the noisy input the proposed method reaches E = 2849.52 (|T| = 1, |V| = 4), E = 2806.18 (|T| = 6, |V| = 2×2×2), and E = 2633.83 (|T| = 48, |V| = 3×3×3), against E = 3151.80 for the baseline (|V| = 4×4×4).
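The truncated quadratic dataterm is a one-liner; a minimal sketch for (H, W, C) images, evaluated pointwise:

import numpy as np

def truncated_quadratic(u, I, nu):
    """Robust dataterm rho(x, u(x)) = min(0.5 * ||u(x) - I(x)||^2, nu)
    for (H, W, C) arrays u and I; nu caps the penalty at outliers."""
    return np.minimum(0.5 * np.sum((u - I) ** 2, axis=-1), nu)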
  • 39. Experiment: optical flow

• Again comparing the baseline (region decomposition + linear interpolation) with the proposed method (region decomposition + convex approximation); high-quality results despite few regions.
• The flow v between two input images I₁, I₂ is computed on the label space Γ = [−d, d]², scaled according to the estimated maximum displacement d ∈ R; the dataterm is ρ(x, v(x)) = ‖I₂(x) − I₁(x + v(x))‖, and the spatially varying regularization weight is based on the norm of the image gradient ∇I₁(x).

Fig. 7 (memory, runtime, average endpoint error) compares against the product space approach [8] and the baseline [11]:
• [8]: |V| = 5×5, 0.67 GB, 4 min, aep = 2.78; |V| = 11×11, 2.1 GB, 12 min, aep = 1.97; |V| = 17×17, 4.1 GB, 25 min, aep = 1.63; |V| = 28×28, 9.3 GB, 60 min, aep = 1.39.
• [11]: |V| = 3×3, 0.67 GB, 0.35 min, aep = 5.44; |V| = 5×5, 2.4 GB, 16 min, aep = 4.22; |V| = 7×7, 5.2 GB, 33 min, aep = 2.65; |V| = 9×9, out of memory.
• Ours: |V| = 2×2, 0.63 GB, 17 min, aep = 1.28; |V| = 3×3, 1.9 GB, 34 min, aep = 1.07; |V| = 4×4, 4.1 GB, 41 min, aep = 0.97.