The document proposes an algorithm called Robust Tensor Decomposition (RTD) to recover a low-rank tensor from observations corrupted by block-sparse errors. RTD iteratively estimates the low-rank and sparse components using hard thresholding and tensor projections. The authors compare RTD to existing approaches on video sequences, finding that RTD yields more accurate recovery with fewer assumptions.
Implementation of Energy Efficient Scalar Point Multiplication Techniques for...
Elliptic curve cryptography (ECC) is mainly an alternative to traditional public-key cryptosystems (PKCs) such as RSA, owing to its smaller key size at the same security level, which suits resource-constrained networks. The computational efficiency of ECC depends on scalar point multiplication, which consists of modular point addition and point doubling operations. The paper focuses on point multiplication techniques such as Binary, NAF, and w-NAF, and on different coordinate systems, Affine and Projective (Standard Projective, Jacobian, and Mixed), for the point addition and doubling operations. These operations are compared on the basis of execution time. The results given here are for a general-purpose processor with a 1.73 GHz frequency. The implementation is done over the NIST-recommended prime fields 192/224/256/384/521.
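As a concrete illustration of the Binary (double-and-add) method mentioned above, here is a minimal Python sketch using affine coordinates; the toy curve y^2 = x^3 + 2x + 2 over F_17, the base point (5, 1), and all helper names are illustrative choices, not from the paper (real implementations use the NIST fields and faster coordinate systems):

```python
# Affine-coordinate arithmetic on the toy curve y^2 = x^3 + a*x + b over F_p.
# None represents the point at infinity (the group identity).

def inv_mod(x, p):
    return pow(x, p - 2, p)  # Fermat inverse, p prime

def point_add(P, Q, a, p):
    """Point addition (covers doubling when P == Q)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    """Binary (double-and-add) scalar multiplication, MSB first."""
    R = None
    for bit in bin(k)[2:]:
        R = point_add(R, R, a, p)      # always double
        if bit == '1':
            R = point_add(R, P, a, p)  # add on a 1 bit
    return R

# Toy example: curve y^2 = x^3 + 2x + 2 over F_17, base point (5, 1).
P_base = (5, 1)
```

The binary method performs one doubling per key bit plus one addition per set bit; NAF and w-NAF reduce the number of additions by using signed-digit recodings.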
International Journal of Engineering Research and Development (IJERD)
In general dimension, there is no known total polynomial algorithm for either convex hull or vertex enumeration, i.e., an algorithm whose complexity depends polynomially on the input and output sizes. It is thus important to identify problems (and polytope representations) for which total polynomial-time algorithms can be obtained. We offer the first total polynomial-time algorithm for computing the edge-skeleton (including vertex enumeration) of a polytope given by an optimization or separation oracle, where we are also given a superset of its edge directions. We also offer a space-efficient variant of our algorithm by employing reverse search. All complexity bounds refer to the (oracle) Turing machine model. There are a number of polytope classes naturally defined by oracles; for some of them, neither the vertex nor the facet representation is obvious. We consider two main applications, where we obtain (weakly) total polynomial-time algorithms: signed Minkowski sums of convex polytopes, where polytopes can be subtracted provided the signed sum is a convex polytope, and the computation of secondary, resultant, and discriminant polytopes. Further applications include convex combinatorial optimization and convex integer programming, where we offer a new approach, thus removing the complexity's exponential dependence on the dimension.
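For intuition about (unsigned) Minkowski sums of convex polytopes, here is a small self-contained 2D sketch in Python: every vertex of P + Q is a sum of a vertex of P and a vertex of Q, so a convex hull of the pairwise vertex sums recovers the result. The monotone-chain hull and the example polygons are illustrative choices, not the paper's oracle-based algorithm:

```python
# Vertices of the Minkowski sum P + Q of two convex polygons lie among the
# pairwise sums of their vertices, so a convex hull of those sums suffices.

def convex_hull(points):
    """Andrew's monotone chain; returns extreme vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons given as vertex lists."""
    return convex_hull([(px + qx, py + qy) for px, py in P for qx, qy in Q])
```

This brute-force construction costs |P|·|Q| candidate points; the paper's contribution is precisely to avoid such blow-ups for oracle-given polytopes in general dimension.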
Homomorphic Lower Digit Removal and Improved FHE Bootstrapping by Kyoohyung Han
Kyoohyung Han is a PhD student in the Department of Mathematical Sciences at Seoul National University in Korea. These are the slides from his presentation at Eurocrypt 2018.
We experimentally study the fundamental problem of computing the volume of a convex polytope given as an intersection of linear inequalities. We implement and evaluate practical randomized algorithms for accurately approximating the polytope's volume in high dimensions (e.g., one hundred). To carry this out efficiently, we experimentally correlate the effect of parameters, such as random walk length and number of sample points, on accuracy and runtime. Moreover, we exploit the problem's geometry by implementing an iterative rounding procedure, computing partial generations of random points, and designing fast polytope boundary oracles. Our publicly available code is significantly faster than exact computation and more accurate than existing approximation methods. We provide volume approximations for the Birkhoff polytopes B11, ..., B15, whereas exact methods have only computed that of B10.
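To illustrate the underlying problem (not the paper's algorithm, which uses random walks and rounding), here is a naive Monte Carlo sketch in Python estimating the volume of a polytope given by linear inequalities; the unit simplex {x >= 0, x1 + x2 + x3 <= 1} in R^3, whose exact volume is 1/6, is an illustrative choice. In high dimension this rejection approach collapses, which is exactly why the samplers studied in the paper are needed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Polytope: the unit simplex {x >= 0, x1 + x2 + x3 <= 1} in R^3.
# Sample uniformly from the bounding cube [0, 1]^3 (volume 1) and
# count the fraction of points satisfying all the inequalities.
N = 200_000
pts = rng.random((N, 3))
inside = pts.sum(axis=1) <= 1.0
vol_est = inside.mean()  # estimate of the simplex volume (exact value: 1/6)
```

The acceptance probability here is the ratio of the polytope's volume to the cube's; in dimension 100 that ratio is astronomically small, motivating hit-and-run-style walks instead of rejection.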
A Novel CAZAC Sequence Based Timing Synchronization Scheme for OFDM Systems
Several classical timing synchronization schemes have been proposed for OFDM systems based on the correlation between identical parts of the OFDM symbol. These schemes show poor performance due to the presence of a plateau and significant side lobes in the timing metric. In this paper, we present a timing synchronization scheme with a timing metric based on a Constant Amplitude Zero Auto Correlation (CAZAC) sequence. The performance of the proposed timing synchronization scheme is better than that of the classical techniques.
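The defining CAZAC properties are easy to check numerically. A widely used CAZAC family is the Zadoff-Chu sequence (the abstract does not say which CAZAC sequence the paper uses; Zadoff-Chu is an illustrative choice here). A short Python/NumPy sketch:

```python
import numpy as np

def zadoff_chu(N, u=1):
    """Root-u Zadoff-Chu sequence of odd length N (requires gcd(u, N) = 1)."""
    n = np.arange(N)
    return np.exp(-1j * np.pi * u * n * (n + 1) / N)

x = zadoff_chu(31)

# Constant Amplitude: every sample has unit magnitude.
const_amp = np.allclose(np.abs(x), 1.0)

# Zero Auto Correlation: periodic autocorrelation vanishes at nonzero lags.
acorr = np.array([np.vdot(x, np.roll(x, k)) for k in range(1, 31)])
zero_acorr = np.max(np.abs(acorr)) < 1e-9
```

Constant amplitude yields a sharp correlation peak without a plateau, and the zero autocorrelation at all nonzero lags suppresses side lobes, which is what makes such sequences attractive as timing-synchronization preambles.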
MATLAB DOCUMENTATION ON SOME OF THE MODULES
A. Generate videos in MATLAB in which a skeleton of a person performs the following gestures:
1. Tilting the head to the right and left
2. Tilting a hand to the right and left
3. Walking
B. Write a MATLAB program that converts a decimal number to a Roman numeral and vice versa.
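Module B's conversion logic is language-agnostic; here is a minimal sketch of both directions in Python (a MATLAB version would follow the same greedy table lookup):

```python
# Greedy conversion tables, largest values first (subtractive pairs included).
ROMAN = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
         (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
         (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]

def to_roman(n):
    """Convert a decimal integer (1..3999) to a Roman numeral."""
    out = []
    for value, symbol in ROMAN:
        while n >= value:
            out.append(symbol)
            n -= value
    return ''.join(out)

def from_roman(s):
    """Convert a Roman numeral back to a decimal integer."""
    vals = {'I': 1, 'V': 5, 'X': 10, 'L': 50, 'C': 100, 'D': 500, 'M': 1000}
    total = 0
    for i, c in enumerate(s):
        # A smaller value before a larger one is subtractive (e.g. IV = 4).
        if i + 1 < len(s) and vals[c] < vals[s[i + 1]]:
            total -= vals[c]
        else:
            total += vals[c]
    return total
```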
C. Using ezplot and anonymous functions, plot the following:
· y = sqrt(x)
· y = x^2
· y = e^(-xy)
D. Take your picture and
· show the R, G, B channels along with the RGB image in the same figure using subplots;
· convert it to HSV (hue, saturation, value) and show the H, S, V channels along with the HSV image.
E. Record your name pronounced by yourself, and display the signal in a plot versus time using MATLAB.
F. Write a script to open a new figure and plot five circles, all centered at the origin and with increasing radii. Set the line width of each circle to something thick (at least 2 points), and use the colors from a 5-color jet colormap (jet).
G. Newton-Raphson and secant methods.
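Module G's two root-finding iterations can be sketched compactly; this Python version (a hypothetical stand-in for the MATLAB module) finds the root of f(x) = x^2 - 2:

```python
def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{k+1} = x_k - f(x_k) / f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton with the derivative replaced by a secant slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:  # flat secant; cannot proceed
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1

f = lambda x: x * x - 2.0          # root at sqrt(2)
root_newton = newton_raphson(f, lambda x: 2.0 * x, x0=1.0)
root_secant = secant(f, x0=1.0, x1=2.0)
```

The secant method trades the explicit derivative for a finite-difference slope, which is useful when f' is unavailable; Newton converges quadratically near a simple root, the secant method superlinearly.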
H. Write a program that does the following using file operations:
1. Create or open a file
2. Read data from the file and write the data to another file
3. Append some text to an already existing file
4. Close the file
I. Write a function to perform the following set operations:
1. Union of A and B
2. Intersection of A and B
3. Complement of A and B
(Assume A = {1, 2, 3, 4, 5, 6}, B = {2, 4, 6})
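Module I's operations map directly onto built-in set types; a Python sketch (note: "complement of A and B" is ambiguous, so it is interpreted here as the relative complement A \ B, an assumption):

```python
A = {1, 2, 3, 4, 5, 6}
B = {2, 4, 6}

union = A | B         # elements in A or B
intersection = A & B  # elements in both A and B
complement = A - B    # relative complement A \ B (assumed interpretation)
```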
Digital image processing using MATLAB: basic transformations, filters and ope...
How to use MATLAB for basic image manipulations:
Negative transformation
Log transformation
Power-law transformation
Piecewise-linear transformation
Histogram equalization
Subtraction
Smoothing linear filters
Order-statistics filters
The Laplacian
The Gradient
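The first three point transformations in this list follow standard closed forms (s = (L-1) - r, s = c·log(1 + r), s = c·r^gamma); a NumPy sketch for 8-bit intensities, with the scaling constants being conventional choices rather than from the slides:

```python
import numpy as np

L = 256  # number of gray levels for an 8-bit image

def negative(r):
    """Negative transformation: s = (L - 1) - r."""
    return (L - 1) - r

def log_transform(r):
    """Log transformation s = c * log(1 + r), with c scaling the output to [0, L-1]."""
    c = (L - 1) / np.log(L)
    return c * np.log1p(r)

def power_law(r, gamma):
    """Power-law (gamma) transformation: s = (L - 1) * (r / (L - 1))**gamma."""
    return (L - 1) * (r / (L - 1)) ** gamma

r = np.array([0.0, 64.0, 128.0, 255.0])
```

The log transform expands dark intensities and compresses bright ones; gamma < 1 has a similar brightening effect, while gamma > 1 darkens the image.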
Implementation of Low-Complexity Redundant Multiplier Architecture for Finite...
In the present work, a low-complexity digit-serial/parallel multiplier over a finite field is proposed. It is employed in applications like cryptography, for data encryption and decryption, to deal with discrete mathematical and arithmetic structures. The proposed multiplier utilizes a redundant representation because of its free squaring and modular reduction. The proposed 10-bit multiplier is simulated and synthesized using Xilinx Verilog HDL. It is evident from the simulation results that the multiplier has significantly lower area and power when compared to previous structures using the same representation.
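For context on what a finite-field multiplier computes, here is a bit-level Python sketch of multiplication in GF(2^m) in the ordinary polynomial-basis representation (the paper's hardware uses a redundant representation, which this sketch does not model); the AES field GF(2^8) with irreducible polynomial x^8 + x^4 + x^3 + x + 1 is an illustrative choice:

```python
def gf2m_mul(a, b, poly, m):
    """Multiply field elements a, b in GF(2^m), reducing by the irreducible
    polynomial 'poly' (given with its degree-m bit set)."""
    # Carry-less (XOR-based) polynomial multiplication.
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # Reduce modulo the irreducible polynomial, high bits first.
    for i in range(prod.bit_length() - 1, m - 1, -1):
        if (prod >> i) & 1:
            prod ^= poly << (i - m)
    return prod

AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

product = gf2m_mul(0x53, 0xCA, AES_POLY, 8)
```

Addition in GF(2^m) is a plain XOR, and squaring only spreads bits apart, which is why representations with cheap squaring and reduction (such as the redundant one in the paper) pay off in hardware.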
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. The AI-driven solution uses peer interaction to boost and sustain exercise levels, with the goal of significantly improving health outcomes. This presentation covers the problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments, and closes with a Q&A session on the project's potential.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2...
M Capital Group ("MCG") expects demand to keep growing and supply to evolve, facilitated by institutional investment rotating out of offices and into work-from-home ("WFH") arrangements, while the need for data storage expands as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as maturing cloud services and edge sites, with the industry expected to see strong annual growth of 13% over the next 4 years.
While competitive headwinds remain, exemplified by the recent second bankruptcy filing of Sungard, which blames "COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services", the industry has made key adjustments, and MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment, will drive market momentum forward. The continuous injection of capital by alternative investment firms, as well as growing infrastructure investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x by value in 2026, will likely help propel data center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: "Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand."
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in the sophistication of cyberattacks aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to Advanced Persistent Threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
6. Figure 2: Background modeling from video. Three frames from a 200-frame video sequence taken in an airport [31]. (a) Frames of the original video M. (b)-(c) Low-rank L̂ and sparse components Ŝ obtained by PCP (convex optimization, this work); (d)-(e) the competing approach based on alternating minimization of an m-estimator [47]. PCP yields a much more appealing result despite using less prior knowledge.
Figure 2 (d) and (e) compare the result obtained by Principal Component Pursuit to a state-of-the-art technique from the computer vision literature [47]. That approach also aims at robustly recovering a good low-rank approximation, but uses a more complicated, nonconvex m-estimator, which incorporates a local scale estimate that implicitly exploits the spatial characteristics of natural images. This leads to a highly nonconvex optimization, which is solved locally via alternating minimization. Interestingly, despite using more prior information about the signal to be recovered, this approach does not perform as well as the convex programming heuristic: notice the large artifacts in the top and bottom rows of Figure 2 (d).
In Figure 3, we consider 250 frames of a sequence with several drastic illumination changes. Here, the resolution is 168 × 120, and so M is a 20,160 × 250 matrix. For simplicity, and to illustrate the theoretical results obtained above, we again choose λ = 1/√n₁. For this example, on the same 2.66 GHz Core 2 Duo machine, the algorithm requires a total of 561 iterations and 36
10. Model: T = L + S, with true components L, S and estimates L̂, Ŝ. The estimates are updated alternately:
L̂ ← P_l(T − Ŝ)
Ŝ ← H_ζ(T − L̂)
where P_l denotes the rank-l projection and H_ζ denotes hard thresholding at level ζ.
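A minimal NumPy sketch of this alternating scheme for the matrix analogue (truncated SVD standing in for the projection P_l; the initialization, fixed threshold, and toy instance are illustrative assumptions, not the paper's tuned threshold schedule):

```python
import numpy as np

def hard_threshold(A, zeta):
    """H_zeta: keep entries with |A_ij| >= zeta, zero out the rest."""
    return A * (np.abs(A) >= zeta)

def rank_l_proj(A, l):
    """P_l: best rank-l approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :l] * s[:l]) @ Vt[:l]

def alternating_rpca(T, l, zeta, iters=50):
    S = hard_threshold(T, zeta)          # initial sparse estimate
    for _ in range(iters):
        L = rank_l_proj(T - S, l)        # L-hat <- P_l(T - S-hat)
        S = hard_threshold(T - L, zeta)  # S-hat <- H_zeta(T - L-hat)
    return L, S

# Well-separated toy instance: rank-1 L* plus one large sparse spike.
L_star = np.ones((10, 10))
S_star = np.zeros((10, 10))
S_star[0, 0] = 100.0
L_hat, S_hat = alternating_rpca(L_star + S_star, l=1, zeta=50.0)
```

With the spike much larger than the low-rank entries, the fixed threshold cleanly separates the two components; the actual algorithm instead decays the threshold across iterations to handle less benign instances.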
11. Robust Tensor Decomposition under Block Sparse Perturbation

Algorithm 1 (L̂, Ŝ) = RTD(T, δ, r, β): Tensor Robust PCA
1: Input: tensor T ∈ R^(n×n×n), convergence criterion δ, target rank r, thresholding scale parameter β. P_l(A) denotes the estimated rank-l approximation of tensor A, and λ_l(A) denotes the estimated l-th largest eigenvalue using Procedure 1. H_ζ(A) denotes hard thresholding, i.e. (H_ζ(A))_ijk = A_ijk if |A_ijk| ≥ ζ and 0 otherwise.
2: Set the initial threshold ζ_0 ← β λ_1(T) and the estimate S^(0) = H_ζ0(T − L^(0)).
3: for stage l = 1 to r do
4:   for t = 0 to τ = 10 log(n β ‖T − S^(0)‖_2 / δ) do
5:     L^(t+1) = P_l(T − S^(t)).
6:     S^(t+1) = H_ζ(T − L^(t+1)).
7:     ζ_(t+1) = β (λ_(l+1)(T − S^(t+1)) + (1/2)^t λ_l(T − S^(t+1))).
8:   If β λ_(l+1)(L^(t+1)) < δ/(2n), then return L^(τ), S^(τ); else reset S^(0) = S^(τ).
9: Return: L̂ = L^(τ), Ŝ = S^(τ).
which is a multilinear combination of the tensor mode-1 fibers. Similarly, T(u, v, w) ∈ R is a multilinear combination of the tensor entries.
A tensor T ∈ R^(n×n×n) has CP rank at most r if it can be written as the sum of r rank-1 tensors as
T = Σ_(i∈[r]) λ*_i u_i ⊗ u_i ⊗ u_i,  u_i ∈ R^n, ‖u_i‖ = 1,  (3)
where the notation ⊗ represents the outer product. We sometimes abbreviate a ⊗ a ⊗ a as a^⊗3. Without loss of generality, λ*_i > 0, since −λ*_i u_i^⊗3 = λ*_i (−u_i)^⊗3.
RTD method: We propose the non-convex algorithm RTD for robust tensor decomposition, described in Algorithm 1.
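Equation (3) can be checked numerically; in this NumPy sketch the weights and orthonormal components are illustrative choices, and for orthogonal components the multilinear evaluation T(u_i, u_i, u_i) recovers λ_i:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3
lam = np.array([3.0, 2.0, 1.0])                   # weights lambda*_i > 0
U, _ = np.linalg.qr(rng.standard_normal((n, r)))  # orthonormal u_1..u_r

# T = sum_i lambda_i * u_i (x) u_i (x) u_i   -- equation (3)
T = np.einsum('i,ai,bi,ci->abc', lam, U, U, U)

def T_uuu(T, u):
    """Multilinear evaluation T(u, u, u)."""
    return np.einsum('abc,a,b,c->', T, u, u, u)
```

Since T(u, u, u) = Σ_i λ_i (u_iᵀu)³, plugging in an orthonormal component u_j isolates λ_j, which is the idea behind the power-iteration steps in Procedure 1.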
Procedure 1 {L̂_l, (û_j, λ_j)_{j∈[l]}} = P_l(T): GradAscent (Gradient Ascent method)
1: Input: symmetric tensor T ∈ R^{n×n×n}, target rank l, exact rank r, N_1 the number of initializations or restarts, N_2 the number of power iterations for each initialization. Let T_1 ← T.
2: for j = 1, …, r do
3:   for i = 1, …, N_1 do
4:     θ ∼ N(0, I_n). Compute the top singular vector u of T_j(I, I, θ). Initialize v_i^{(1)} ← u. Let λ ← T_j(u, u, u).
5:     repeat
6:       v_i^{(t+1)} ← T_j(I, v_i^{(t)}, v_i^{(t)}) / ‖T_j(I, v_i^{(t)}, v_i^{(t)})‖_2 {run power method to land in the spectral ball}
7:       λ_i^{(t+1)} ← T_j(v_i^{(t+1)}, v_i^{(t+1)}, v_i^{(t+1)})
8:     until t = N_2
9:   Pick the best: reset i ← arg max_{i∈[N_1]} T_j(v_i^{(t+1)}, v_i^{(t+1)}, v_i^{(t+1)}), and set λ_i = λ_i^{(t+1)} and v_i = v_i^{(t+1)}.
10:  Deflate: T_j ← T_j − λ_i v_i ⊗ v_i ⊗ v_i.
11: for j = 1, …, r do
12:   repeat
13:     Gradient ascent iteration: v_j^{(t+1)} ← v_j^{(t)} + 1/(4(1 + λ/√n)) · (T(I, v_j^{(t)}, v_j^{(t)}) − ‖v_j^{(t)}‖² v_j^{(t)}).
14:   until convergence (linear rate, refer Lemma 3).
15:   Set û_j = v_j^{(t+1)}, λ_j = T(v_j^{(t+1)}, v_j^{(t+1)}, v_j^{(t+1)}).
16: return the estimated top l out of all the top r eigenpairs (û_j, λ_j)_{j∈[l]}, and the low-rank estimate L̂_l = Σ_{j∈[l]} λ_j û_j ⊗ û_j ⊗ û_j.
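The restart-and-deflate core of Procedure 1 (steps 2–10) can be sketched in NumPy as follows. This is a simplified reading: the gradient-ascent refinement of steps 11–15 is omitted, and `grad_ascent_decomp` and `contract2` are illustrative names, not the authors' implementation.

```python
import numpy as np

def contract2(T, v, w):
    # T(I, v, w): contract one vector into each of modes 2 and 3.
    return np.einsum('ijk,j,k->i', T, v, w)

def grad_ascent_decomp(T, r, n_init=20, n_power=30, seed=0):
    rng = np.random.default_rng(seed)
    n = T.shape[0]
    Tj = T.copy()
    eigpairs = []
    for _ in range(r):
        best_val, best_v = -np.inf, None
        for _ in range(n_init):                # N1 restarts
            theta = rng.standard_normal(n)
            # Top left singular vector of the matrix T_j(I, I, theta).
            u = np.linalg.svd(np.einsum('ijk,k->ij', Tj, theta))[0][:, 0]
            v = u
            for _ in range(n_power):           # N2 power iterations
                v = contract2(Tj, v, v)
                v /= np.linalg.norm(v)
            val = contract2(Tj, v, v) @ v      # T_j(v, v, v)
            if val > best_val:
                best_val, best_v = val, v
        eigpairs.append((best_val, best_v))
        # Deflate: T_j <- T_j - lambda * v ⊗ v ⊗ v
        Tj = Tj - best_val * np.einsum('i,j,k->ijk', best_v, best_v, best_v)
    return eigpairs
```

On an orthogonally decomposable tensor, each round of restarts converges to one component, and deflation exposes the next; the restarts guard against initializations that land near the wrong component.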
is Õ(n^{4+c} r²).

[Displayed equations: the tensor eigen-problem max_{u∈R^n} f(u), f(u) = T(u, u, u) subject to ‖u‖ = 1, whose stationary points satisfy T(I, u, u) = λu with uᵀu = 1.]
Animashree Anandkumar, Prateek Jain, Yang Shi, U. N. Niranjan
[Figure 1, panels (a)–(d): recovery error versus d ∈ {10, 20, 30, 40}; legend: Nonwhiten, Whiten(random), Whiten(true), Matrix(slice), Matrix(flat).]
Figure 1: (a) Error comparison of different methods with deterministic sparsity, rank 5, varying d. (b) Error comparison of different methods with deterministic sparsity, rank 25, varying d. (c) Error comparison of different methods with block sparsity, rank 5, varying d. (d) Error comparison of different methods with block sparsity, rank 25, varying d. Error = ‖L* − L̂‖_F / ‖L*‖_F. The curve labeled 'T-RPCA-W(slice)' refers to considering the recovered low-rank part from a random slice of the tensor T by using the matrix non-convex RPCA method as the whitening matrix, 'T-RPCA-W(true)' uses the true second-order moment in whitening, 'M-RPCA(slice)' treats each slice of the input tensor as a non-convex matrix-RPCA (M-RPCA) problem, and 'M-RPCA(flat)' reshapes the tensor along one mode and treats the result as a matrix RPCA problem. All four sub-figures share the same curve descriptions.
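The error metric and the 'M-RPCA(flat)' reshaping in the caption can be made concrete with a short sketch (the function names are illustrative, not from the paper's code):

```python
import numpy as np

def rel_error(L_star, L_hat):
    # Error = ||L* - L_hat||_F / ||L*||_F (Frobenius norms).
    return np.linalg.norm(L_star - L_hat) / np.linalg.norm(L_star)

def flatten_mode1(T):
    # 'M-RPCA(flat)': reshape an n x n x n tensor along mode 1
    # into an n x n^2 matrix, to be treated as a matrix RPCA input.
    return T.reshape(T.shape[0], -1)

T = np.arange(8.0).reshape(2, 2, 2)
print(flatten_mode1(T).shape)        # (2, 4)
print(rel_error(T, np.zeros_like(T)))  # 1.0
```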
[Figure 2, panels (a)–(d): running time in seconds versus d ∈ {10, 20, 30, 40}; legend: Nonwhiten, Whiten(random), Whiten(true), Matrix(slice), Matrix(flat).]
Figure 2: (a) Running time comparison of different methods with deterministic sparsity, rank 5, varying d. (b) Running time comparison of different methods with deterministic sparsity, rank 25, varying d. (c) Running time comparison of different methods with block sparsity, rank 5, varying d. (d) Running time comparison of different methods with block sparsity, rank 25, varying d. Curve descriptions are the same as in Figure 1.
is orthogonal. We can also extend to non-orthogonal tensors L*, whose components u_i are linearly independent.
Synthetic datasets: The low-rank part L* = Σ_i λ*_i u_i^{⊗3} is generated from a factor matrix
(a) (b) (c)
Figure 3: Foreground filtering or activity detection in the Curtain video dataset. (a): Original image frame. (b): Foreground filtered (sparse part estimated) using the tensor method; time taken is 5.1s. (c): Foreground filtered (sparse part estimated) using the matrix method; time taken is 5.7s.