Robust Tensor Decomposition under Block Sparse Perturbation
Animashree Anandkumar, Prateek Jain, Yang Shi, U. N. Niranjan




Notation: vectors live in $\mathbb{R}^n$, matrices in $\mathbb{R}^{n_1 \times n_2}$, and third-order tensors in $\mathbb{R}^{n_1 \times n_2 \times n_3}$. A symmetric tensor $T \in \mathbb{R}^{n \times n \times n}$ of CP rank $r$ decomposes as

$$T = \lambda_1 u_1^{\otimes 3} + \lambda_2 u_2^{\otimes 3} + \cdots + \lambda_r u_r^{\otimes 3} = \sum_{i=1}^{r} \lambda_i\, u_i \otimes u_i \otimes u_i = \sum_{i=1}^{r} \lambda_i\, u_i^{\otimes 3}.$$

For matrices, robust PCA separates the observation into a low-rank part $L$ ($= U \Sigma V^\top$) and a sparse part $S$.
(a) Original frames (b) Low-rank $\hat L$ (c) Sparse $\hat S$ (d) Low-rank $\hat L$ (e) Sparse $\hat S$
Convex optimization (this work) | Alternating minimization [47]
Figure 2: Background modeling from video. Three frames from a 200-frame video sequence taken in an airport [31]. (a) Frames of the original video M. (b)-(c) Low-rank $\hat L$ and sparse components $\hat S$ obtained by PCP; (d)-(e) competing approach based on alternating minimization of an m-estimator [47]. PCP yields a much more appealing result despite using less prior knowledge.
Figure 2 (d) and (e) compares the result obtained by Principal Component Pursuit to a state-of-the-art technique from the computer vision literature [47]. That approach also aims at robustly recovering a good low-rank approximation, but uses a more complicated, nonconvex m-estimator, which incorporates a local scale estimate that implicitly exploits the spatial characteristics of natural images. This leads to a highly nonconvex optimization, which is solved locally via alternating minimization. Interestingly, despite using more prior information about the signal to be recovered, this approach does not perform as well as the convex programming heuristic: notice the large artifacts in the top and bottom rows of Figure 2 (d).
In Figure 3, we consider 250 frames of a sequence with several drastic illumination changes. Here, the resolution is $168 \times 120$, and so $M$ is a $20{,}160 \times 250$ matrix. For simplicity, and to illustrate the theoretical results obtained above, we again choose $\lambda = 1/\sqrt{n_1}$. For this example, on the same 2.66 GHz Core 2 Duo machine, the algorithm requires a total of 561 iterations and 36 …
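As a quick sanity check on the dimensions and the regularization parameter quoted above, here is a small illustrative computation (ours, not from the paper):

```python
import math

# 168 x 120 pixels per frame, 250 frames stacked as columns of M
n1 = 168 * 120            # 20160 rows, matching the 20,160 x 250 matrix above
n2 = 250                  # columns
lam = 1 / math.sqrt(n1)   # the PCP trade-off parameter lambda = 1/sqrt(n1)
print(n1, n2, round(lam, 5))   # -> 20160 250 0.00704
```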
The observed tensor is a superposition of a low-rank and a sparse component:

$$T = L + S, \qquad L = \lambda_1 u_1^{\otimes 3} + \lambda_2 u_2^{\otimes 3} + \cdots + \lambda_r u_r^{\otimes 3}.$$
Given the true pair $(L, S)$, the goal is to produce estimates $(\hat L, \hat S)$ by alternating

$$\hat L \leftarrow P_l(T - \hat S), \qquad \hat S \leftarrow H_\zeta(T - \hat L),$$

where $P_l$ is the rank-$l$ projection and $H_\zeta$ is hard thresholding at level $\zeta$.
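A minimal NumPy sketch of this alternating scheme, written for the matrix analogue so that $P_l$ can be a truncated SVD (the tensor version replaces it with Procedure 1 below); the function names are ours, not the paper's:

```python
import numpy as np

def P_l(A, l):
    # Rank-l projection via truncated SVD (matrix stand-in for the paper's P_l)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :l] * s[:l]) @ Vt[:l]

def H(A, zeta):
    # Hard thresholding: keep entries with |A_ij| >= zeta, zero out the rest
    return np.where(np.abs(A) >= zeta, A, 0.0)

def alternate(T, l, zeta, iters=50):
    # Alternate L <- P_l(T - S), S <- H_zeta(T - L)
    S = np.zeros_like(T)
    for _ in range(iters):
        L = P_l(T - S, l)
        S = H(T - L, zeta)
    return L, S
```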
Algorithm 1 $(\hat L, \hat S) = \mathrm{RTD}(T, \epsilon, r, \beta)$: Tensor Robust PCA
1: Input: tensor $T \in \mathbb{R}^{n \times n \times n}$, convergence criterion $\epsilon$, target rank $r$, thresholding scale parameter $\beta$. $P_l(A)$ denotes the estimated rank-$l$ approximation of tensor $A$, and $\lambda_l(A)$ the estimated $l$-th largest eigenvalue, both computed using Procedure 1. $H_\zeta(A)$ denotes hard thresholding, i.e. $(H_\zeta(A))_{ijk} = A_{ijk}$ if $|A_{ijk}| \ge \zeta$ and $0$ otherwise.
2: Set the initial threshold $\zeta_0 \leftarrow \beta\,\lambda_1(T)$ and the estimate $S^{(0)} = H_{\zeta_0}(T - L^{(0)})$.
3: for stage $l = 1$ to $r$ do
4:   for $t = 0$ to $\tau = 10 \log\big(n \beta\, \|T - S^{(0)}\|_2 / \epsilon\big)$ do
5:     $L^{(t+1)} = P_l(T - S^{(t)})$.
6:     $S^{(t+1)} = H_\zeta(T - L^{(t+1)})$.
7:     $\zeta_{t+1} = \beta\big(\lambda_{l+1}(T - S^{(t+1)}) + (\tfrac{1}{2})^t\, \lambda_l(T - S^{(t+1)})\big)$.
8:   If $\beta\,\lambda_{l+1}(L^{(t+1)}) < \epsilon/(2n)$, then return $L^{(\tau)}, S^{(\tau)}$; else reset $S^{(0)} = S^{(\tau)}$.
9: Return: $\hat L = L^{(\tau)}$, $\hat S = S^{(\tau)}$.
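The geometric decay of the threshold in step 7 is easy to see in isolation. A small illustrative helper (ours; the symbols $\beta$ and $\lambda_l$ follow the reconstruction above):

```python
import numpy as np

def next_threshold(lams, l, beta, t):
    # zeta_{t+1} = beta * (lam_{l+1} + (1/2)^t * lam_l), step 7 of Algorithm 1;
    # `lams` holds eigenvalue estimates of T - S^(t+1) in decreasing order,
    # 1-indexed in the pseudocode, 0-indexed here.
    return beta * (lams[l] + 0.5 ** t * lams[l - 1])

# With estimates (5.0, 2.0, 0.3) at stage l = 2, the threshold decays from
# beta*(0.3 + 2.0) toward beta*0.3 as t grows.
lams = np.array([5.0, 2.0, 0.3])
for t in range(4):
    print(t, next_threshold(lams, l=2, beta=1.0, t=t))  # 2.3, 1.3, 0.8, 0.55
```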
$T(I, v, w) \in \mathbb{R}^n$, which is a multilinear combination of the tensor mode-1 fibers. Similarly $T(u, v, w) \in \mathbb{R}$ is a multilinear combination of the tensor entries.

A tensor $T \in \mathbb{R}^{n \times n \times n}$ has CP rank at most $r$ if it can be written as the sum of $r$ rank-1 tensors:

$$T = \sum_{i \in [r]} \lambda_i^*\, u_i \otimes u_i \otimes u_i, \qquad u_i \in \mathbb{R}^n,\ \|u_i\| = 1, \qquad (3)$$

where $\otimes$ represents the outer product. We sometimes abbreviate $a \otimes a \otimes a$ as $a^{\otimes 3}$. Without loss of generality, $\lambda_i^* > 0$, since $\lambda_i^* u_i^{\otimes 3} = -\lambda_i^* (-u_i)^{\otimes 3}$.
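A small NumPy sketch (ours, for illustration) that builds a tensor of the form (3) and evaluates the multilinear maps $T(I, v, v)$ and $T(v, v, v)$ just described:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3
U, _ = np.linalg.qr(rng.standard_normal((n, r)))  # unit-norm components u_1..u_r
lam = rng.uniform(1.0, 2.0, size=r)               # positive weights lambda_i^*

# T = sum_i lam_i * u_i (x) u_i (x) u_i, as in equation (3)
T = np.einsum('i,ai,bi,ci->abc', lam, U, U, U)

v = rng.standard_normal(n)
T_Ivv = np.einsum('abc,b,c->a', T, v, v)      # T(I, v, v) in R^n: combination of mode-1 fibers
T_vvv = np.einsum('abc,a,b,c->', T, v, v, v)  # T(v, v, v) in R: combination of tensor entries
```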
RTD method: We propose the non-convex algorithm RTD for robust tensor decomposition, described in Algorithm 1.
Procedure 1 $\{\hat L_l, (\hat u_j, \hat\lambda_j)_{j \in [l]}\} = P_l(T)$: GradAscent (gradient ascent method)
1: Input: symmetric tensor $T \in \mathbb{R}^{n \times n \times n}$, target rank $l$, exact rank $r$, $N_1$ number of initializations or restarts, $N_2$ number of power iterations for each initialization. Let $T_1 \leftarrow T$.
2: for $j = 1, \ldots, r$ do
3:   for $i = 1, \ldots, N_1$ do
4:     $\theta \sim \mathcal{N}(0, I_n)$. Compute the top singular vector $u$ of $T_j(I, I, \theta)$. Initialize $v_i^{(1)} \leftarrow u$. Let $\lambda = T_j(u, u, u)$.
5:     repeat
6:       $v_i^{(t+1)} \leftarrow T_j(I, v_i^{(t)}, v_i^{(t)}) / \|T_j(I, v_i^{(t)}, v_i^{(t)})\|_2$   {run the power method to land in the spectral ball}
7:       $\lambda_i^{(t+1)} \leftarrow T_j(v_i^{(t+1)}, v_i^{(t+1)}, v_i^{(t+1)})$
8:     until $t = N_2$
9:   Pick the best: reset $i \leftarrow \arg\max_{i \in [N_1]} T_j(v_i^{(t+1)}, v_i^{(t+1)}, v_i^{(t+1)})$ and set $\lambda_i = \lambda_i^{(t+1)}$ and $v_i = v_i^{(t+1)}$.
10:  Deflate: $T_j \leftarrow T_j - \lambda_i\, v_i \otimes v_i \otimes v_i$.
11: for $j = 1, \ldots, r$ do
12:   repeat
13:     Gradient ascent iteration: $v_j^{(t+1)} \leftarrow v_j^{(t)} + \frac{1}{4\lambda(1 + c/\sqrt{n})}\big(T(I, v_j^{(t)}, v_j^{(t)}) - \lambda \|v_j^{(t)}\|^2\, v_j^{(t)}\big)$.
14:   until convergence (linear rate, see Lemma 3).
15: Set $\hat u_j = v_j^{(t+1)}$, $\hat\lambda_j = T(v_j^{(t+1)}, v_j^{(t+1)}, v_j^{(t+1)})$.
16: return the estimated top $l$ of the top $r$ eigenpairs $(\hat u_j, \hat\lambda_j)_{j \in [l]}$, and the low-rank estimate $\hat L_l = \sum_{i \in [l]} \hat\lambda_i\, \hat u_i \otimes \hat u_i \otimes \hat u_i$.
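A compact NumPy sketch of the restart / power-iteration / deflation core of Procedure 1 (steps 2-10). It initializes each restart from a random unit vector rather than the top singular vector of $T_j(I, I, \theta)$, so treat it as an approximation of the procedure, not a faithful port:

```python
import numpy as np

def power_eigpair(T, N1=10, N2=30, rng=None):
    # Best of N1 random restarts of N2 tensor power iterations (cf. steps 3-9)
    if rng is None:
        rng = np.random.default_rng(0)
    n = T.shape[0]
    best_lam, best_v = -np.inf, None
    for _ in range(N1):
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        for _ in range(N2):
            w = np.einsum('abc,b,c->a', T, v, v)    # T(I, v, v)
            v = w / np.linalg.norm(w)
        lam = np.einsum('abc,a,b,c->', T, v, v, v)  # T(v, v, v)
        if lam > best_lam:
            best_lam, best_v = lam, v
    return best_lam, best_v

def deflate_all(T, r):
    # Repeatedly extract an eigenpair and deflate: T_j <- T_j - lam * v^(x)3 (step 10)
    pairs = []
    Tj = T.copy()
    for _ in range(r):
        lam, v = power_eigpair(Tj)
        pairs.append((lam, v))
        Tj = Tj - lam * np.einsum('a,b,c->abc', v, v, v)
    return pairs
```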
The overall running time is $\tilde O(n^{4+c} r^2)$.

Tensor eigenpairs satisfy $T u^2 := T(I, u, u) = \lambda u$ with $u^\top u = 1$; equivalently, they are the stationary points of the objective $f(u) = T u^3 := T(u, u, u)$ over the unit sphere:

$$\max_{u \in \mathbb{R}^n} f(u) \quad \text{s.t. } \|u\| = 1.$$
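For intuition, a one-step sketch (ours) of ascent on $f(u) = T(u, u, u)$: for symmetric $T$ the gradient is $\nabla f(u) = 3\,T(I, u, u)$, and renormalizing after each step keeps $\|u\| = 1$; the step size `eta` here is a generic placeholder, not the schedule from Procedure 1.

```python
import numpy as np

def f(T, u):
    # Objective f(u) = T(u, u, u)
    return np.einsum('abc,a,b,c->', T, u, u, u)

def sphere_ascent_step(T, u, eta=0.1):
    # Move along grad f(u) = 3*T(I, u, u), then project back to the unit sphere
    g = 3.0 * np.einsum('abc,b,c->a', T, u, u)
    u = u + eta * g
    return u / np.linalg.norm(u)
```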












Recovery is measured against the true low-rank component $L^* = \lambda_1 u_1^{\otimes 3} + \lambda_2 u_2^{\otimes 3} + \cdots + \lambda_r u_r^{\otimes 3}$: for an estimate $L$, the reported error is $\|L - L^*\|_F / \|L^*\|_F$.
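The corresponding one-liner for this metric (an assumed implementation matching its definition):

```python
import numpy as np

def relative_error(L_hat, L_star):
    # ||L_hat - L*||_F / ||L*||_F; np.linalg.norm flattens the array,
    # which gives exactly the Frobenius norm for a 3-way tensor
    return np.linalg.norm(L_hat - L_star) / np.linalg.norm(L_star)
```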
[Figure 1, panels (a)-(d): recovery error vs. $d \in \{10, 20, 30, 40\}$ for the methods Nonwhiten, Whiten(random), Whiten(true), Matrix(slice), Matrix(flat).]
Figure 1: (a) Error comparison of different methods with deterministic sparsity, rank 5, varying d. (b) Error comparison of different methods with deterministic sparsity, rank 25, varying d. (c) Error comparison of different methods with block sparsity, rank 5, varying d. (d) Error comparison of different methods with block sparsity, rank 25, varying d. Error $= \|L^* - L\|_F / \|L^*\|_F$. The curve labeled 'T-RPCA-W(slice)' uses the low-rank part recovered from a random slice of the tensor T by the matrix non-convex RPCA method as the whitening matrix, 'T-RPCA-W(true)' uses the true second-order moment in whitening, 'M-RPCA(slice)' treats each slice of the input tensor as a non-convex matrix-RPCA (M-RPCA) problem, and 'M-RPCA(flat)' reshapes the tensor along one mode and treats the result as a matrix RPCA problem. All four sub-figures share the same curve descriptions.
[Figure 2, panels (a)-(d): running time (s) vs. $d \in \{10, 20, 30, 40\}$; same methods and legend as Figure 1.]
Figure 2: (a) Running time comparison of different methods with deterministic sparsity, rank 5, varying d. (b) Running time comparison of different methods with deterministic sparsity, rank 25, varying d. (c) Running time comparison of different methods with block sparsity, rank 5, varying d. (d) Running time comparison of different methods with block sparsity, rank 25, varying d. Curve descriptions are the same as in Figure 1.
… is orthogonal. We can also extend to non-orthogonal tensors $L^*$, whose components $u_i$ are linearly independent.

Synthetic datasets: The low-rank part $L^* = \sum_i \lambda_i^*\, u_i^{\otimes 3}$ is generated from a factor matrix …
Figure 3: Foreground filtering or activity detection in the Curtain video dataset. (a): Original image frame. (b): Foreground filtered (sparse part estimated) using the tensor method; time taken is 5.1s. (c): Foreground filtered (sparse part estimated) using the matrix method; time taken is 5.7s.
