Nonconvex Compressed Sensing
with the Sum-of-Squares Method
Tasuku Soma (Univ. Tokyo)
Joint work with:
Yuichi Yoshida (NII&PFI)
1 / 21
1 Introduction
2 Sum of Squares Method
3 Nonconvex Compressed Sensing
2 / 21
1 Introduction
2 Sum of Squares Method
3 Nonconvex Compressed Sensing
3 / 21
Compressed Sensing
Given: A ∈ R^{m×n} (m ≪ n) and y = Ax.
Task: Estimate the original sparse signal x ∈ R^n.
Applications:
Image Processing, Statistics, Machine Learning...
4 / 21
ℓ1 Minimization (Basis Pursuit)
min ‖z‖_1 sub. to Az = y
• Convex relaxation for ℓ0 minimization
• For a subgaussian A with m = Ω(s log(n/s)),
  ℓ1 minimization reconstructs x.
  [Candès-Romberg-Tao '06, Donoho '06]
s: sparsity of x (maybe unknown)
5 / 21
Nonconvex Compressed Sensing
ℓq minimization (0 < q ≤ 1):
[Laska-Davenport-Baraniuk '09, Cherian-Sra-Papanikolopoulos '11]
min ‖z‖_q^q sub. to Az = y
• Requires fewer samples than ℓ1 minimization
• Recovers arbitrary sparse signals as q → 0
• Nonconvex Optimization!
6 / 21
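A quick numerical illustration (our own, not from the slides) of why smaller q promotes sparsity: ‖z‖_q^q = Σ_i |z(i)|^q tends to the number of nonzero entries of z, i.e. ‖z‖_0, as q → 0.

```python
# As q -> 0, ||z||_q^q = sum_i |z(i)|^q approaches the number of nonzero
# entries of z, i.e. ||z||_0, which is why small q promotes sparsity.
def lq_q(z, q):
    return sum(abs(t) ** q for t in z)

z = [0.0, 3.0, 0.0, 0.5, 2.0]        # 3 nonzero entries
for q in (1.0, 0.5, 0.1, 0.01):
    print(q, lq_q(z, q))             # the value tends to 3 = ||z||_0
```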
Stable Signal Recovery
x need not be sparse, only close to a sparse signal.
p-stable recovery (0 < p ≤ ∞): Algorithms that output x̂ satisfying
‖x̂ − x‖_p ≤ O(σ_s(x)_p)
for any x ∈ R^n.
σ_s(x)_p: ℓp distance to the nearest s-sparse vector
• If A is a subgaussian matrix with m = Ω(s log(n/s)),
  ℓ1 minimization is 1-stable [Candès-Romberg-Tao '06, Candès '08]
• For the same A, ℓq minimization is q-stable (0 < q ≤ 1)
  [Cohen-Dahmen-DeVore '09]
7 / 21
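The error term σ_s(x)_p can be computed directly: keep the s largest-magnitude entries of x and measure the ℓp norm of the rest. A minimal sketch (the function name is ours):

```python
# sigma_s(x)_p: the l_p distance from x to its best s-sparse approximation,
# i.e. the l_p norm of the n - s smallest-magnitude entries of x.
def sigma_s(x, s, p):
    tail = sorted(abs(t) for t in x)[:-s] if s > 0 else [abs(t) for t in x]
    return sum(t ** p for t in tail) ** (1.0 / p)

x = [5.0, -0.1, 4.0, 0.2, 0.0]
sigma_s(x, 2, 1)    # tail entries {0.0, 0.1, 0.2} -> 0.3 (up to rounding)
```

An exactly s-sparse x has σ_s(x)_p = 0, so stable recovery specializes to exact recovery.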
Our Result
Theorem
For ‖x‖_∞ ≤ 1, A: Rademacher matrix (A_ij ∼ {±1/√m}), and fixed
q = 2^{−k}, there is a polytime algorithm for finding x̂ s.t.
‖x̂ − x‖_q ≤ O(σ_s(x)_q) + ε,
provided that m = Ω(s^{2/q} log n).
• (Nearly) q-stable recovery
• #samples ≫ O(s log(n/s)) (Sample Complexity Trade-Off)
• Use of SoS Method and Ellipsoid Method
8 / 21
1 Introduction
2 Sum of Squares Method
3 Nonconvex Compressed Sensing
9 / 21
SoS Method [Lasserre '06, Parrilo '00, Nesterov '00, Shor '87]
Polynomial Optimization: f, g_1, ..., g_m ∈ R[z]: polynomials
min_z f(z)
sub. to g_i(z) = 0 (i = 1, ..., m)
SoS Relaxation (of degree d):
min_E E[f(z)]
sub. to E: R[z]_d → R, linear operator ("pseudoexpectation")
E[1] = 1
E[p(z)^2] ≥ 0 (p ∈ R[z]: deg(p) ≤ d/2)
E[g_i(z) p(z)] = 0 (p ∈ R[z]: deg(g_i p) ≤ d, i = 1, ..., m)
10 / 21
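To make the pseudoexpectation constraints concrete, here is a toy check (our own illustration: one variable z, degree d = 4). E is determined by its moments E[z^k], k = 0..4, and E[p(z)^2] ≥ 0 for all deg(p) ≤ 2 is equivalent to the 3×3 moment matrix M[i][j] = E[z^{i+j}] being positive semidefinite, which is exactly what the SDP enforces.

```python
from itertools import combinations

def det(M):
    # Cofactor expansion along the first row; fine for tiny matrices.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_valid_pseudoexpectation(moments, tol=1e-9):
    """moments[k] = E[z^k] for k = 0..4; checks E[1] = 1 and that the
    moment matrix M[i][j] = E[z^(i+j)] is PSD, via nonnegativity of all
    principal minors (valid for small symmetric matrices)."""
    if abs(moments[0] - 1) > tol:
        return False
    M = [[moments[i + j] for j in range(3)] for i in range(3)]
    return all(det([[M[i][j] for j in idx] for i in idx]) >= -tol
               for r in range(1, 4) for idx in combinations(range(3), r))

# Moments of z uniform on {-1, +1}: a genuine expectation, hence valid.
is_valid_pseudoexpectation([1, 0, 1, 0, 1])   # True
# E[z^2] = 1 but E[z^4] = 0: not a valid pseudoexpectation.
is_valid_pseudoexpectation([1, 0, 1, 0, 0])   # False
```

The second example is rejected because E[z^2]^2 ≤ E[1] · E[z^4], a Cauchy-Schwarz inequality that every degree-4 pseudoexpectation must satisfy, fails.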
Facts on SoS Method
• The SoS relaxation (of degree d) reduces to Semidefinite
  Programming (SDP) with an n^{O(d)}-size matrix.
• Dual View: SoS Proof System
  Any (low-degree) "proof" in the SoS proof system yields an
  algorithm via the SoS method.
• Very Powerful Tool in Computer Science:
  • Subexponential Alg. for UG [Arora-Barak-Steurer '10]
  • Planted Sparse Vector [Barak-Kelner-Steurer '14]
  • Sparse PCA [Ma-Wigderson '14]
11 / 21
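The n^{O(d)} size comes from counting monomials: the moment matrix of the degree-d relaxation is indexed by the monomials of degree ≤ d/2 in n variables, and there are C(n + d/2, d/2) of them. A one-liner (our own) to see the growth:

```python
from math import comb

# Side length of the moment matrix in the degree-d SoS relaxation:
# the number of monomials of degree <= d/2 in n variables.
def moment_matrix_size(n, d):
    return comb(n + d // 2, d // 2)

moment_matrix_size(100, 2), moment_matrix_size(100, 4)  # (101, 5151)
```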
1 Introduction
2 Sum of Squares Method
3 Nonconvex Compressed Sensing
12 / 21
Basic Idea
Formulate ℓq minimization as polynomial optimization:
min ‖z‖_q^q sub. to Az = y
Note: |z(i)|^q is not a polynomial, but representable by lifting;
e.g., a new variable t with t^4 = z(i)^2, t ≥ 0 satisfies t = |z(i)|^{1/2}.
× Solutions of the SoS method do not satisfy triangle inequalities:
E‖z + x‖_q^q ≰ E‖z‖_q^q + ‖x‖_q^q
Add Valid Constraints!
min ‖z‖_q^q s.t. Az = y, Valid Constraints
13 / 21
Triangle Inequalities
‖z + x‖_q^q ≤ ‖z‖_q^q + ‖x‖_q^q
We have to add |z(i) + x(i)|^q, but do not know x(i).
Idea: Use a Grid
L: set of multiples of δ in [−1, 1].
• new variable for |z(i) − b|^q (b ∈ L)
• triangle inequalities for |z(i) − b|^q, |z(i) − b′|^q, and |b − b′|^q (b, b′ ∈ L)
14 / 21
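These grid constraints are valid because, for 0 < q ≤ 1, t ↦ |t|^q is subadditive: |b − b′| ≤ |z(i) − b| + |z(i) − b′| together with monotonicity gives |b − b′|^q ≤ |z(i) − b|^q + |z(i) − b′|^q for every real z(i), so they can be imposed without knowing x. A numerical spot check (our own sketch):

```python
import random

# Check the grid triangle inequality |b - b'|^q <= |z - b|^q + |z - b'|^q
# over random points in [-1, 1]; it holds for every real z when 0 < q <= 1.
def grid_triangle_holds(q, trials=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        z, b, bp = (rng.uniform(-1, 1) for _ in range(3))
        if abs(b - bp) ** q > abs(z - b) ** q + abs(z - bp) ** q + 1e-12:
            return False
    return True

grid_triangle_holds(0.5)   # True
```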
Robust ℓq Minimization
Instead of x, we will find x_L ∈ L^n closest to x.
Robust ℓq Minimization:
min ‖z‖_q^q s.t. ‖y − Az‖_2^2 ≤ η^2, where η = σ_max(A) √s δ
ℓq Robust Null Space Property:
‖v_S‖_q^q ≤ ρ ‖v_{S^c}‖_q^q + τ ‖Av‖_2^q
for any v and S ⊆ [n] with |S| ≤ s.
ℓq Pseudo Robust Null Space Property (ℓq-PRNSP):
E‖v_S‖_q^q ≤ ρ E‖v_{S^c}‖_q^q + τ (E‖Av‖_2^2)^{q/2}
for any v = z − b (b ∈ L^n) and S ⊆ [n] with |S| ≤ s.
15 / 21
PRNSP ⇒ Stable Recovery
Theorem
If E satisfies ℓq-PRNSP, then
E‖z − x_L‖_q^q ≤ (2(1 + ρ)/(1 − ρ)) σ_s(x_L)_q^q + (2^{1+q} τ/(1 − ρ)) η^q,
where x_L is the closest vector in L^n to x.
Proof Idea:
A proof of stability only needs:
• ‖·‖_q^q triangle inequalities for z − x_L, x, and z + x_L
• ℓ2 triangle inequality
16 / 21
Rounding
Extract an actual vector x̂ from a pseudoexpectation E:
x̂(i) := argmin_{b ∈ L} E|z(i) − b|^q (i = 1, ..., n)
Theorem
If E satisfies PRNSP,
‖x̂ − x_L‖_q^q ≤ 2 [ (2(1 + ρ)/(1 − ρ)) σ_s(x_L)_q^q + (2^{1+q} τ/(1 − ρ)) η^q ]
17 / 21
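The rounding rule is easy to implement once one can query E|z(i) − b|^q for each grid point b. A minimal per-coordinate sketch; as a stand-in for a real pseudoexpectation we average over a few sample values (any genuine expectation is a valid pseudoexpectation), so the numbers are illustrative only:

```python
# Round one coordinate: x_hat(i) = argmin over b in the grid L of E|z(i) - b|^q.
def make_grid(delta):
    steps = int(round(2 / delta))
    return [-1 + k * delta for k in range(steps + 1)]

def round_coordinate(E_abs_q, grid):
    """E_abs_q(b) returns E|z(i) - b|^q; pick the minimizing grid point."""
    return min(grid, key=E_abs_q)

q, delta = 0.5, 0.25
grid = make_grid(delta)                      # -1.0, -0.75, ..., 0.75, 1.0
samples = [0.61, 0.58, 0.64]                 # toy stand-in for E
E_abs_q = lambda b: sum(abs(s - b) ** q for s in samples) / len(samples)
round_coordinate(E_abs_q, grid)              # -> 0.5, the best grid point
```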
Imposing PRNSP
How can we obtain E satisfying PRNSP?
Idea: Follow known proofs for robust NSP!
• From Restricted Isometry Property (RIP)
[Candès '08]
• From Coherence
[Gribonval-Nielsen ’03, Donoho-Elad ’03]
• From Lossless Expander [Berinde et al. ’08]
18 / 21
Coherence
The coherence of a matrix A = [a_1 ... a_n] is
μ = max_{i ≠ j} |⟨a_i, a_j⟩| / (‖a_i‖_2 ‖a_j‖_2)
Facts:
• If μ^q < 1/(2s), the ℓq Robust NSP holds.
• If A is a Rademacher matrix with m = O(s^{2/q} log n),
  then μ^q < 1/(2s) w.h.p.
19 / 21
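Coherence is cheap to compute, so the second fact is easy to observe empirically. A pure-Python sketch (the sizes and seed are ours, chosen only for illustration):

```python
import random

# Coherence of a Rademacher matrix A with entries +-1/sqrt(m): the largest
# normalized inner product between distinct columns.
def coherence(A):
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    mu = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            num = abs(dot(cols[i], cols[j]))
            den = dot(cols[i], cols[i]) ** 0.5 * dot(cols[j], cols[j]) ** 0.5
            mu = max(mu, num / den)
    return mu

random.seed(1)
m, n = 400, 50
A = [[random.choice((-1.0, 1.0)) / m ** 0.5 for _ in range(n)] for _ in range(m)]
coherence(A)   # small: random columns are nearly orthogonal
```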
Small Coherence ⇒ PRNSP
Issue: A naive import needs exponentially many variables and constraints!
Lemma
If A is a Rademacher matrix,
• the additional variables are polynomially many
• the additional constraints have a separation oracle
Thus the ellipsoid method finds E with PRNSP.
20 / 21
Our Result
Theorem
For ‖x‖_∞ ≤ 1, A: Rademacher matrix (A_ij ∼ {±1/√m}), and fixed
q = 2^{−k}, there is a polytime algorithm for finding x̂ s.t.
‖x̂ − x‖_q ≤ O(σ_s(x)_q) + ε,
provided that m = Ω(s^{2/q} log n).
• (Nearly) q-stable recovery
• #samples ≫ O(s log(n/s)) (Sample Complexity Trade-Off)
• Use of SoS Method and Ellipsoid Method
21 / 21
Putting Things Together
Using a Rademacher matrix yields PRNSP:
E‖v_S‖_q^q ≤ O(1) · E‖v_{S^c}‖_q^q + O(s) · (E‖Av‖_2^2)^{q/2}
This guarantees:
‖x̂ − x‖_q^q ≤ O(σ_s(x_L)_q^q) + O(s) · η^q
Theorem
If we take δ small, then the rounded vector x̂ satisfies
‖x̂ − x‖_q^q ≤ O(σ_s(x)_q^q) + ε.
(pf) η = σ_max(A) √s δ and σ_max(A) = O(n/m)
22 / 21

Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024
Pierluigi Pugliese
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Ramesh Iyer
 

Recently uploaded (20)

How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
Monitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR EventsMonitoring Java Application Security with JDK Tools and JFR Events
Monitoring Java Application Security with JDK Tools and JFR Events
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™
 
Welocme to ViralQR, your best QR code generator.
Welocme to ViralQR, your best QR code generator.Welocme to ViralQR, your best QR code generator.
Welocme to ViralQR, your best QR code generator.
 
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptx
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptxSecstrike : Reverse Engineering & Pwnable tools for CTF.pptx
Secstrike : Reverse Engineering & Pwnable tools for CTF.pptx
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfSAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdf
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
DevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA ConnectDevOps and Testing slides at DASA Connect
DevOps and Testing slides at DASA Connect
 
PCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase TeamPCI PIN Basics Webinar from the Controlcase Team
PCI PIN Basics Webinar from the Controlcase Team
 
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdfFIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
FIDO Alliance Osaka Seminar: The WebAuthn API and Discoverable Credentials.pdf
 
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024By Design, not by Accident - Agile Venture Bolzano 2024
By Design, not by Accident - Agile Venture Bolzano 2024
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
 

Nonconvex Compressed Sensing with the Sum-of-Squares Method

  • 1. Nonconvex Compressed Sensing with the Sum-of-Squares Method Tasuku Soma (Univ. Tokyo) Joint work with: Yuichi Yoshida (NII&PFI) 1 / 21
  • 2. 1 Introduction 2 Sum of Squares Method 3 Nonconvex Compressed Sensing 2 / 21
  • 3. 1 Introduction 2 Sum of Squares Method 3 Nonconvex Compressed Sensing 3 / 21
  • 4. Compressed Sensing Given: A ∈ ℝ^{m×n} (m ≪ n) and y = Ax, Task: Estimate the original sparse signal x ∈ ℝ^n. [figure: y = Ax with a wide matrix A] 4 / 21
  • 5. Compressed Sensing Given: A ∈ ℝ^{m×n} (m ≪ n) and y = Ax, Task: Estimate the original sparse signal x ∈ ℝ^n. [figure: y = Ax with a wide matrix A] Applications: Image Processing, Statistics, Machine Learning... 4 / 21
  • 6. ℓ1 Minimization (Basis Pursuit) [figure: penalty curves |t|^q for q = 0 and q = 1] min ‖z‖_1 sub. to Az = y 5 / 21
  • 7. ℓ1 Minimization (Basis Pursuit) [figure: penalty curves |t|^q for q = 0 and q = 1] min ‖z‖_1 sub. to Az = y • Convex relaxation for ℓ0 minimization • For a subgaussian A with m = Ω(s log(n/s)), ℓ1 minimization reconstructs x. [Candès-Romberg-Tao '06, Donoho '06] (s: sparsity of x, maybe unknown) 5 / 21
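The ℓ1 problem on this slide is a linear program, so any LP solver suffices. A minimal sketch (not the authors' code) casting basis pursuit as an LP over variables (z, t) with |z| ≤ t, using NumPy and SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||z||_1 s.t. Az = y, as an LP: minimize sum(t) with -t <= z <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
    A_eq = np.hstack([A, np.zeros((m, n))])         # Az = y (t unconstrained here)
    # z - t <= 0 and -z - t <= 0 together encode |z| <= t
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n   # z free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:n]

rng = np.random.default_rng(0)
n, m = 20, 12
A = rng.standard_normal((m, n)) / np.sqrt(m)   # subgaussian measurement matrix
x = np.zeros(n); x[[3, 11]] = [1.0, -0.5]      # 2-sparse signal
y = A @ x
z = basis_pursuit(A, y)
```

Since x itself is feasible, the LP optimum always satisfies ‖z‖_1 ≤ ‖x‖_1; exact recovery of x additionally needs enough measurements, as the slide states.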
  • 8. Nonconvex Compressed Sensing [figure: penalty curves |t|^q for q = 0, 1/2, 1] ℓq minimization (0 < q ≤ 1): [Laska-Davenport-Baraniuk '09, Cherian-Sra-Papanikolopoulos '11] min ‖z‖_q^q sub. to Az = y 6 / 21
  • 9. Nonconvex Compressed Sensing [figure: penalty curves |t|^q for q = 0, 1/2, 1] ℓq minimization (0 < q ≤ 1): [Laska-Davenport-Baraniuk '09, Cherian-Sra-Papanikolopoulos '11] min ‖z‖_q^q sub. to Az = y • Requires fewer samples than ℓ1 minimization • Recovers arbitrary sparse signals as q → 0 • Nonconvex Optimization! 6 / 21
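The talk does not prescribe a solver at this point; a common heuristic for the nonconvex ℓq problem (not the SoS approach developed later) is FOCUSS-style iteratively reweighted least squares, sketched here under the equality constraint Az = y:

```python
import numpy as np

def irls_lq(A, y, q=0.5, iters=50, eps=1e-8):
    """Heuristic for min ||z||_q^q s.t. Az = y via reweighted least squares.
    Each iterate solves min sum z_i^2 / w_i s.t. Az = y, which has the
    closed form z = W A^T (A W A^T)^{-1} y with W = diag(w)."""
    z = np.linalg.lstsq(A, y, rcond=None)[0]   # least-norm starting point
    for _ in range(iters):
        w = (np.abs(z) + eps) ** (2 - q)       # weights from current iterate
        AW = A * w                             # A @ diag(w)
        z = w * (A.T @ np.linalg.solve(AW @ A.T, y))
    return z

rng = np.random.default_rng(1)
n, m = 16, 8
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)  # Rademacher matrix
x = np.zeros(n); x[5] = 1.0                             # 1-sparse signal
y = A @ x
z = irls_lq(A, y, q=0.5)
```

Every iterate satisfies Az = y exactly by construction; the reweighting drives small coordinates toward zero, but unlike the SoS method of this talk it carries no global guarantee.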
  • 10. Stable Signal Recovery x need not be sparse, only close to a sparse signal. 7 / 21
  • 11. Stable Signal Recovery x need not be sparse, only close to a sparse signal. ℓp-stable recovery (0 < p ≤ ∞): Algorithms that output x̂ satisfying ‖x̂ − x‖_p ≤ O(σ_s(x)_p) for any x ∈ ℝ^n. 7 / 21
  • 12. Stable Signal Recovery x need not be sparse, only close to a sparse signal. ℓp-stable recovery (0 < p ≤ ∞): Algorithms that output x̂ satisfying ‖x̂ − x‖_p ≤ O(σ_s(x)_p) for any x ∈ ℝ^n. (σ_s(x)_p: ℓp distance to the nearest s-sparse vector) 7 / 21
  • 13. Stable Signal Recovery x need not be sparse, only close to a sparse signal. ℓp-stable recovery (0 < p ≤ ∞): Algorithms that output x̂ satisfying ‖x̂ − x‖_p ≤ O(σ_s(x)_p) for any x ∈ ℝ^n. (σ_s(x)_p: ℓp distance to the nearest s-sparse vector) • If A is a subgaussian matrix with m = Ω(s log(n/s)), ℓ1 minimization is 1-stable [Candès-Romberg-Tao '06, Candès '08] • For the same A, ℓq minimization is q-stable (0 < q ≤ 1) [Cohen-Dahmen-DeVore '09] 7 / 21
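σ_s(x)_p is easy to compute directly: the best s-sparse approximation keeps the s largest-magnitude entries, so the error is the ℓp norm of the remaining tail. A small illustration (my code, not from the slides):

```python
import numpy as np

def sigma_s(x, s, p):
    """l_p distance from x to the nearest s-sparse vector: the l_p norm of
    x with its s largest-magnitude entries removed."""
    tail = np.sort(np.abs(x))[:-s] if s > 0 else np.abs(x)
    return (tail ** p).sum() ** (1 / p)

x = np.array([3.0, -0.1, 0.2, 5.0, 0.0])
# the best 2-term approximation keeps 5.0 and 3.0; the tail is (0.0, 0.1, 0.2)
err = sigma_s(x, s=2, p=1)   # 0.1 + 0.2 = 0.3
```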
  • 14. Our Result Theorem: For ‖x‖_∞ ≤ 1, A a Rademacher matrix (A_ij ∼ {±1/√m}), and fixed q = 2^{−k}, there is a polytime algorithm for finding x̂ s.t. ‖x̂ − x‖_q ≤ O(σ_s(x)_q) + ε, provided that m = Ω(s^{2/q} log n). • (Nearly) q-stable recovery • #samples >> O(s log(n/s)) (sample-complexity trade-off) • Uses the SoS Method and the Ellipsoid Method 8 / 21
  • 15. 1 Introduction 2 Sum of Squares Method 3 Nonconvex Compressed Sensing 9 / 21
  • 16. SoS Method [Lasserre '06, Parrilo '00, Nesterov '00, Shor '87] Polynomial Optimization: f, g1, ..., gm ∈ ℝ[z] polynomials: min_z f(z) sub. to g_i(z) = 0 (i = 1, ..., m) 10 / 21
  • 17. SoS Method [Lasserre '06, Parrilo '00, Nesterov '00, Shor '87] Polynomial Optimization: f, g1, ..., gm ∈ ℝ[z] polynomials: min_z f(z) sub. to g_i(z) = 0 (i = 1, ..., m) SoS Relaxation (of degree d): min_Ẽ Ẽ[f(z)] sub. to Ẽ : ℝ[z]_d → ℝ a linear operator ("pseudoexpectation") with Ẽ[1] = 1, Ẽ[p(z)^2] ≥ 0 (p ∈ ℝ[z], deg(p) ≤ d/2), Ẽ[g_i(z)p(z)] = 0 (p ∈ ℝ[z], deg(g_i·p) ≤ d, i = 1, ..., m) 10 / 21
  • 18. Facts on SoS Method • The SoS relaxation of degree d reduces to Semidefinite Programming (SDP) with an n^{O(d)}-size matrix. 11 / 21
  • 19. Facts on SoS Method • The SoS relaxation of degree d reduces to Semidefinite Programming (SDP) with an n^{O(d)}-size matrix. • Dual View: SoS Proof System. Any (low-degree) "proof" in the SoS proof system yields an algorithm via the SoS method. 11 / 21
  • 20. Facts on SoS Method • The SoS relaxation of degree d reduces to Semidefinite Programming (SDP) with an n^{O(d)}-size matrix. • Dual View: SoS Proof System. Any (low-degree) "proof" in the SoS proof system yields an algorithm via the SoS method. • Very Powerful Tool in Computer Science: • Subexponential algorithm for Unique Games [Arora-Barak-Steurer '10] • Planted Sparse Vector [Barak-Kelner-Steurer '14] • Sparse PCA [Ma-Wigderson '14] 11 / 21
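To make the SDP connection concrete: for degree d = 2, the pseudoexpectation constraints say exactly that the moment matrix M = Ẽ[m mᵀ] over the monomial vector m = (1, z1, z2, ...) is PSD with M[0][0] = 1. A numeric sanity check (illustrative only — it builds M from a genuine toy distribution, whereas the SDP optimizes over all such M):

```python
import numpy as np

# Any genuine distribution induces a valid degree-2 pseudoexpectation:
# M[i][j] = E[m_i * m_j] over monomials m = (1, z1, z2) is PSD and M[0][0] = 1.
samples = np.array([[0.5, -1.0], [2.0, 0.25], [-1.5, 1.0]])  # toy distribution on R^2
mono = np.hstack([np.ones((3, 1)), samples])                  # rows: (1, z1, z2)
M = (mono[:, :, None] * mono[:, None, :]).mean(axis=0)        # E[m m^T], average of rank-1 PSD terms
eigs = np.linalg.eigvalsh(M)                                  # all >= 0 up to roundoff
```

The relaxation is exactly the converse search: find any PSD moment matrix consistent with the constraints, whether or not a genuine distribution exists behind it.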
  • 21. 1 Introduction 2 Sum of Squares Method 3 Nonconvex Compressed Sensing 12 / 21
  • 22. Basic Idea Formulate ℓq minimization as polynomial optimization: min ‖z‖_q^q sub. to Az = y Note: |z(i)|^q is not a polynomial, but is representable by lifting; e.g. a new variable for |z(i)|^{1/2} with (|z(i)|^{1/2})^4 = z(i)^2 and |z(i)|^{1/2} ≥ 0 13 / 21
  • 23. Basic Idea Formulate ℓq minimization as polynomial optimization: min ‖z‖_q^q sub. to Az = y Note: |z(i)|^q is not a polynomial, but is representable by lifting; e.g. a new variable for |z(i)|^{1/2} with (|z(i)|^{1/2})^4 = z(i)^2 and |z(i)|^{1/2} ≥ 0 × But solutions of the SoS method need not satisfy triangle inequalities: Ẽ‖z + x‖_q^q ≰ Ẽ‖z‖_q^q + ‖x‖_q^q 13 / 21
  • 24. Basic Idea Formulate ℓq minimization as polynomial optimization: min ‖z‖_q^q sub. to Az = y Note: |z(i)|^q is not a polynomial, but is representable by lifting; e.g. a new variable for |z(i)|^{1/2} with (|z(i)|^{1/2})^4 = z(i)^2 and |z(i)|^{1/2} ≥ 0 × But solutions of the SoS method need not satisfy triangle inequalities: Ẽ‖z + x‖_q^q ≰ Ẽ‖z‖_q^q + ‖x‖_q^q Add Valid Constraints! min ‖z‖_q^q s.t. Az = y, Valid Constraints 13 / 21
  • 25. Triangle Inequalities ‖z + x‖_q^q ≤ ‖z‖_q^q + ‖x‖_q^q We would have to add variables for |z(i) + x(i)|^q, but we do not know x(i). 14 / 21
  • 26. Triangle Inequalities ‖z + x‖_q^q ≤ ‖z‖_q^q + ‖x‖_q^q We would have to add variables for |z(i) + x(i)|^q, but we do not know x(i). Idea: Use a Grid. L: the set of multiples of δ in [−1, 1] • a new variable for |z(i) − b|^q (b ∈ L) • triangle inequalities for |z(i) − b|^q, |z(i) − b′|^q, and |b − b′|^q (b, b′ ∈ L) 14 / 21
  • 27. Robust ℓq Minimization Instead of x, we will find the x^L ∈ L^n closest to x. Robust ℓq Minimization: min ‖z‖_q^q s.t. ‖y − Az‖_2^2 ≤ η^2, where η = σ_max(A)·√s·δ 15 / 21
  • 28. Robust ℓq Minimization Instead of x, we will find the x^L ∈ L^n closest to x. Robust ℓq Minimization: min ‖z‖_q^q s.t. ‖y − Az‖_2^2 ≤ η^2, where η = σ_max(A)·√s·δ ℓq Robust Null Space Property: ‖v_S‖_q^q ≤ ρ·‖v_S̄‖_q^q + τ·‖Av‖_2^q for any v and S ⊆ [n] with |S| ≤ s (S̄: the complement of S). 15 / 21
  • 29. Robust ℓq Minimization Instead of x, we will find the x^L ∈ L^n closest to x. Robust ℓq Minimization: min ‖z‖_q^q s.t. ‖y − Az‖_2^2 ≤ η^2, where η = σ_max(A)·√s·δ ℓq Pseudo Robust Null Space Property (ℓq-PRNSP): Ẽ‖v_S‖_q^q ≤ ρ·Ẽ‖v_S̄‖_q^q + τ·(Ẽ‖Av‖_2^2)^{q/2} for any v = z − b (b ∈ L^n) and S ⊆ [n] with |S| ≤ s. 15 / 21
  • 30. PRNSP =⇒ Stable Recovery Theorem: If Ẽ satisfies the ℓq-PRNSP, then Ẽ‖z − x^L‖_q^q ≤ (2(1 + ρ)/(1 − ρ))·σ_s(x^L)_q^q + (2^{1+q}·τ/(1 − ρ))·η^q, where x^L is the closest vector in L^n to x. Proof Idea: A proof of stability only needs: • ℓq^q triangle inequalities for z − x^L, x, and z + x^L • the ℓ2 triangle inequality 16 / 21
  • 31. Rounding Extract an actual vector x̂ from a pseudoexpectation Ẽ: x̂(i) := argmin_{b ∈ L} Ẽ|z(i) − b|^q (i = 1, ..., n) Theorem: If Ẽ satisfies the PRNSP, then ‖x̂ − x^L‖_q^q ≤ 2·[(2(1 + ρ)/(1 − ρ))·σ_s(x^L)_q^q + (2^{1+q}·τ/(1 − ρ))·η^q] 17 / 21
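As a toy illustration of this rounding rule, assume for concreteness a degenerate pseudoexpectation concentrated on a single point z, so that Ẽ|z(i) − b|^q = |z(i) − b|^q; the rule then just snaps each coordinate to the grid:

```python
import numpy as np

def round_to_grid(z, delta=0.25, q=0.5):
    """x_hat(i) = argmin_{b in L} |z(i) - b|^q over the grid L of multiples of
    delta in [-1, 1]; here the pseudoexpectation is a point mass at z, so the
    argmin is simply the nearest grid point."""
    L = np.arange(-1.0, 1.0 + delta / 2, delta)     # grid of multiples of delta
    costs = np.abs(z[:, None] - L[None, :]) ** q    # |z(i) - b|^q for every b
    return L[np.argmin(costs, axis=1)]

z = np.array([0.23, -0.77, 0.0])
x_hat = round_to_grid(z)   # snaps to (0.25, -0.75, 0.0)
```

For a genuine (non-degenerate) pseudoexpectation the costs Ẽ|z(i) − b|^q come out of the SDP solution instead, but the per-coordinate argmin is identical.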
  • 32. Imposing PRNSP How can we obtain Ẽ satisfying the PRNSP? Idea: Follow known proofs for the robust NSP! • From the Restricted Isometry Property (RIP) [Candès '08] • From Coherence [Gribonval-Nielsen '03, Donoho-Elad '03] • From Lossless Expanders [Berinde et al. '08] 18 / 21
  • 33. Coherence The coherence of a matrix A = [a1 ... an] is µ = max_{i≠j} |⟨a_i, a_j⟩| / (‖a_i‖_2 ‖a_j‖_2) Facts: • If µ^q < 1/(2s), the ℓq robust NSP holds. • If A is a Rademacher matrix with m = O(s^{2/q} log n), then µ^q < 1/(2s) w.h.p. 19 / 21
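Coherence is cheap to compute from the Gram matrix of the columns; a minimal check (my code, not from the talk):

```python
import numpy as np

def coherence(A):
    """mu = max_{i != j} |<a_i, a_j>| / (||a_i||_2 ||a_j||_2) over columns of A."""
    G = A.T @ A                                 # Gram matrix of the columns
    norms = np.linalg.norm(A, axis=0)
    C = np.abs(G) / np.outer(norms, norms)      # normalized inner products
    np.fill_diagonal(C, 0.0)                    # exclude the i = j terms
    return C.max()

# columns (1,1,1) and (1,-1,1): |<a1,a2>| = 1, both norms sqrt(3), so mu = 1/3
A = np.array([[1.0, 1.0], [1.0, -1.0], [1.0, 1.0]])
mu = coherence(A)
```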
  • 34. Small Coherence =⇒ PRNSP Issue: A naive import needs exponentially many variables and constraints! Lemma: If A is a Rademacher matrix, • the additional variables are polynomially many • the additional constraints admit a separation oracle Thus the ellipsoid method finds Ẽ satisfying the PRNSP. 20 / 21
  • 35. Our Result Theorem: For ‖x‖_∞ ≤ 1, A a Rademacher matrix (A_ij ∼ {±1/√m}), and fixed q = 2^{−k}, there is a polytime algorithm for finding x̂ s.t. ‖x̂ − x‖_q ≤ O(σ_s(x)_q) + ε, provided that m = Ω(s^{2/q} log n). • (Nearly) q-stable recovery • #samples >> O(s log(n/s)) (sample-complexity trade-off) • Uses the SoS Method and the Ellipsoid Method 21 / 21
  • 36. Putting Things Together Using a Rademacher matrix yields the PRNSP: Ẽ‖v_S‖_q^q ≤ O(1)·Ẽ‖v_S̄‖_q^q + O(s)·(Ẽ‖Av‖_2^2)^{q/2} 22 / 21
  • 37. Putting Things Together Using a Rademacher matrix yields the PRNSP: Ẽ‖v_S‖_q^q ≤ O(1)·Ẽ‖v_S̄‖_q^q + O(s)·(Ẽ‖Av‖_2^2)^{q/2} This guarantees: ‖x̂ − x‖_q^q ≤ O(σ_s(x^L)_q^q) + O(s)·η^q 22 / 21
  • 38. Putting Things Together Using a Rademacher matrix yields the PRNSP: Ẽ‖v_S‖_q^q ≤ O(1)·Ẽ‖v_S̄‖_q^q + O(s)·(Ẽ‖Av‖_2^2)^{q/2} This guarantees: ‖x̂ − x‖_q^q ≤ O(σ_s(x^L)_q^q) + O(s)·η^q Theorem: If we take δ small, then the rounded vector x̂ satisfies ‖x̂ − x‖_q^q ≤ O(σ_s(x)_q^q) + ε. (pf) η = σ_max(A)·√s·δ and σ_max(A) = O(√(n/m)) 22 / 21