This document discusses rank-aware algorithms for joint sparse recovery from multiple measurement vectors (MMV). It begins by introducing the MMV problem and showing that when the rank of the signal matrix is r, the necessary and sufficient conditions for unique recovery are less restrictive than in the single measurement vector case. Classical MMV algorithms like SOMP and l1/lq minimization are not rank-aware. The document then proposes two rank-aware pursuit algorithms:
1) Rank-Aware OMP, which modifies the atom selection step of SOMP but still suffers from rank degeneration over iterations.
2) Rank-Aware Order Recursive Matching Pursuit (RA-ORMP), which forces the sparsity of the residual to decrease with the iteration count, so that the rank advantage is preserved and recovery can be guaranteed in the full-rank case.
1. IDCOM, University of Edinburgh
Rank Aware Algorithms for Joint Sparse Recovery
Mike Davies*
Joint work with Yonina Eldar‡ and Jeff Blanchard†
* Institute of Digital Communications, University of Edinburgh
‡ Technion, Israel † Grinnell College, USA
2. Outline of Talk
• Multiple Measurements vs Single Measurements
• Nec.+suff. conditions for Joint Sparse Recovery
• Reduced complexity combinatorial search
• Classical approaches to sparse MMV problem
– How good are SOMP and convex optimization?
• Rank Aware Pursuits
– Evolution of the rank of residual matrices
– A recovery guarantee
• Empirical simulations
3. Sparse Single Measurement Vector Problem
[Figure: y (m×1 measurements) = Φ (m×n measurement matrix) · x (n×1 sparse signal, k nonzero elements)]
Given y ∈ R^m and Φ ∈ R^{m×n} with m < n, find:
x̂ = argmin_x |supp(x)| s.t. Φx = y.
4. Sparse Multiple Measurement Vector Problem
[Figure: Y (m×l measurements) = Φ (m×n measurement matrix) · X (n×l row-sparse signal, k nonzero rows; row support Λ)]
Given Y ∈ R^{m×l} and Φ ∈ R^{m×n} with m < n, find:
X̂ = argmin_X |supp(X)| s.t. ΦX = Y.
5. MMV uniqueness
Worst Case
• Uniqueness of the solution for the sparse MMV problem is equivalent to that for the SMV problem; simply replicate the SMV problem: X = [x, x, . . . , x].
Hence the nec. + suff. condition to uniquely determine each k-sparse vector x is given by the SMV condition:
|supp(X)| = k < spark(Φ) / 2
Rank r Case
• If rank(Y) = r then the necessary + sufficient conditions are less restrictive [Chen & Huo 2006, D. & Eldar 2010]:
|supp(X)| = k < (spark(Φ) − 1 + rank(Y)) / 2
Equivalently we can replace rank(Y) with rank(X).
More measurements (higher rank) make recovery easier!
6. MMV uniqueness
Generic scenario:
Typical matrices achieve maximal spark: Φ ∈ R^{m×n} → spark(Φ) = m + 1
Typical matrices achieve maximal rank: X ∈ R^{k×l} → rank(X) = r = min{k, l}
Hence generically we have uniqueness if
m ≥ 2k − min{k, l} + 1 ≥ k + 1
When l ≥ k we typically only need k + 1 measurements.
7. Exhaustive search solution
How does the rank change the exhaustive search?
SMV exhaustive search:
find Λ, |Λ| = k s.t. Φ_Λ X_{Λ,:} = Y
However, since span(Y) ⊂ span(Φ_Λ) and rank(Y) = r,
∃ γ ⊂ Λ, |γ| = k − r s.t. span([Φ_γ, Y]) = span(Φ_Λ)
In fact we have a reduced combinatorial search over (n choose k−r+1) candidate sets.
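To get a feel for the reduction, the two search sizes can be compared directly (a quick check; the values of n, k and r are chosen for illustration only):

```python
from math import comb

# Illustrative sizes only: n atoms, k-sparse support, rank-r observations
n, k, r = 256, 30, 16

smv_search = comb(n, k)          # supports examined by a naive SMV-style search
reduced = comb(n, k - r + 1)     # reduced search when rank(Y) = r

print(smv_search > reduced)      # prints True: the rank shrinks the search dramatically
```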
8. Geometric Picture for MMV
[Figure: atoms φ1, φ2, φ3 and the plane span(Y), with Y = Φ_Λ X_{Λ,:}; a 2-sparse vector lies in span(Y)]
If X is k-sparse and rank(Y) = r there exists a (k−r+1)-sparse vector in span(Y)
9. Maximal Rank Exhaustive Search: MUSIC
When we have maximal rank(X) = k the exhaustive search is linear and can be solved with a modified MUSIC algorithm.
Let U = orth(Y). This is an orthonormal basis for span(Φ_Λ).
Then under identifiability conditions we have:
||(I − UU^T) φ_i||_2 = 0 if and only if i ∈ Λ.
(in practice, select the support by thresholding)
Theorem 1 (Feng 1996) Let Y = ΦX with |supp(X)| = k, rank(X) = k and k < spark(Φ) − 1. Then MUSIC is guaranteed to recover X (i.e. X̂ = X).
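As an illustration of this support test, modified MUSIC can be sketched in a few lines of NumPy (a minimal sketch, not the authors' code; `music_support` and the example sizes are my choices):

```python
import numpy as np

def music_support(Y, Phi, k):
    """Recover the row support of X from Y = Phi X when rank(X) = k.

    The columns of U span span(Y) = span(Phi_Lambda); an atom phi_i has
    zero residual after projection onto that span iff i is in the support.
    """
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    U = U[:, :k]                                           # orthonormal basis of span(Y)
    resid = np.linalg.norm(Phi - U @ (U.T @ Phi), axis=0)  # ||(I - U U^T) phi_i||_2
    return sorted(np.argsort(resid)[:k].tolist())          # k atoms closest to span(Y)

# Maximal-rank example: k = 3 nonzero rows, l = 3 measurement vectors
rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 20))                # m = 10 < n = 20
X = np.zeros((20, 3))
X[[2, 7, 15]] = rng.standard_normal((3, 3))        # rank(X) = 3 (generically)
print(music_support(Phi @ X, Phi, 3))              # prints [2, 7, 15]
```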
10. Maximal rank problem is not NP-hard
Furthermore there is no constraint on n!
12. Popular MMV sparse recovery solutions
Two classes of MMV sparse recovery algorithm:
greedy, e.g.
Algorithm 1 Simultaneous Orthogonal Matching Pursuit (SOMP)
1: initialization: R^(0) = Y, X^(0) = 0, Λ^(0) = ∅
2: for n = 1; n := n + 1 until stopping criterion do
3:   i_n = argmax_i ||φ_i^T R^(n−1)||_q
4:   Λ^(n) = Λ^(n−1) ∪ i_n
5:   X^(n)_{Λ^(n),:} = Φ†_{Λ^(n)} Y
6:   R^(n) = P⊥_{Λ^(n)} Y where P⊥_{Λ^(n)} := (I − Φ_{Λ^(n)} Φ†_{Λ^(n)})
7: end for
and relaxed, e.g.
Algorithm 2 ℓ1/ℓq Minimization
X̂ = argmin_X ||X||_{1,q} s.t. ΦX = Y
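The SOMP iteration maps almost line for line onto NumPy (a minimal sketch with q = 2 in the selection step; the deterministic example and all variable names are mine, not from the slides):

```python
import numpy as np

def somp(Y, Phi, k):
    """Simultaneous OMP: greedily grow the row support of X from Y = Phi X."""
    n = Phi.shape[1]
    support = []
    R = Y.copy()                                    # R(0) = Y
    for _ in range(k):
        corr = np.linalg.norm(Phi.T @ R, axis=1)    # ||phi_i^T R||_2 per atom
        support.append(int(np.argmax(corr)))        # enlarge the support
        coef = np.linalg.lstsq(Phi[:, support], Y, rcond=None)[0]  # pseudoinverse fit
        R = Y - Phi[:, support] @ coef              # project Y off span(Phi_support)
    X = np.zeros((n, Y.shape[1]))
    X[support] = coef
    return sorted(support), X

# Small deterministic check: 5 canonical atoms plus one flat atom
Phi = np.hstack([np.eye(5), np.ones((5, 1)) / np.sqrt(5)])
X_true = np.zeros((6, 2))
X_true[1], X_true[3] = [1.0, 0.0], [0.0, 1.0]
support, X_hat = somp(Phi @ X_true, Phi, 2)
print(support)                                      # prints [1, 3]
```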
13. Do such MMV solutions exploit the rank?
Answer: NO. [D. & Eldar 2010]
Theorem 2 (SOMP is not rank aware) Let τ be given such that 1 ≤ τ ≤ k and suppose that
max_{j ∉ Λ} ||Φ†_Λ φ_j||_1 > 1
for some support Λ, |Λ| = k. Then there exists an X with supp(X) = Λ and rank(X) = τ that SOMP cannot recover.
(the displayed condition is a violation of the SMV OMP exact recovery condition)
Proof: a rank-r perturbation of a rank-1 problem approaches the rank-1 recovery property by continuity of the norm.
14. Do such MMV solutions exploit the rank?
Answer: NO. [D. & Eldar 2010]
Theorem 3 (ℓ1/ℓq minimization is not rank aware) Let τ be given such that 1 ≤ τ ≤ k and suppose that there exists a z ∈ N(Φ) such that
||z_Λ||_1 > ||z_{Λ^c}||_1
for some support Λ, |Λ| = k. Then there exists an X with supp(X) = Λ and rank(X) = τ that the mixed norm solution cannot recover.
(the displayed condition is a violation of the SMV ℓ1 null space property)
Proof: a rank-r perturbation of a rank-1 problem approaches the rank-1 recovery property by continuity of the norm.
16. Rank Aware Selection
Aim: to select individual atoms in a similar manner to modified MUSIC.
Rank Aware Selection [D. & Eldar 2010]
At the nth iteration make the following selection:
Λ^(n) = Λ^(n−1) ∪ argmax_i ||φ_i^T U^(n−1)||_2
where U^(n−1) = orth(R^(n−1))
Properties:
1. Worst case behaviour does not approach the SMV case.
2. When rank(R) = k it always selects a correct atom, as with MUSIC.
17. Rank Aware OMP
Let's simply replace the selection step in SOMP with the rank aware selection.
Does this provide guaranteed recovery in the full rank scenario?
Answer: NO.
Why? We get rank degeneration of the residual matrix:
rank(R^(i)) ≤ min{rank(Y), k − i}
As we take more steps the rank reduces to one while R^(i) is typically still k-sparse.
We lose the rank benefits as we iterate.
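The rank degeneration of the residual can be observed numerically: project Y off a growing number of correctly selected atoms and track the residual rank (a quick illustration; the sizes and seed are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, l = 20, 40, 5, 5
Phi = rng.standard_normal((m, n))
X = np.zeros((n, l))
X[:k] = rng.standard_normal((k, l))      # row support = first k rows
Y = Phi @ X                              # rank(Y) = min(k, l) = 5

ranks = []
for i in range(k + 1):
    if i == 0:
        R = Y
    else:
        Phi_S = Phi[:, :i]               # i correctly selected atoms
        R = Y - Phi_S @ np.linalg.lstsq(Phi_S, Y, rcond=None)[0]
    ranks.append(int(np.linalg.matrix_rank(R, tol=1e-8)))

print(ranks)                             # prints [5, 4, 3, 2, 1, 0]: rank(R(i)) = k - i here
```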
18. Rank Aware Order Recursive Matching Pursuit
The fix...
We can fix this problem by forcing the sparsity to also reduce as a function of iteration. This is achieved by:
Algorithm 1 Rank Aware Order Recursive Matching Pursuit (RA-ORMP)
1: initialization: R^(0) = Y, X^(0) = 0, Λ^(0) = ∅, P⊥_(0) = I
2: for n = 1; n := n + 1 until stopping criterion do
3:   Calculate an orthonormal basis for the residual: U^(n−1) = orth(R^(n−1))
4:   i_n = argmax_{i ∉ Λ^(n−1)} ||φ_i^T U^(n−1)||_2 / ||P⊥_{Λ^(n−1)} φ_i||_2
5:   Λ^(n) = Λ^(n−1) ∪ i_n
6:   X^(n)_{Λ^(n),:} = Φ†_{Λ^(n)} Y
7:   R^(n) = P⊥_{Λ^(n)} Y where P⊥_{Λ^(n)} := (I − Φ_{Λ^(n)} Φ†_{Λ^(n)})
8: end for
R^(n) is (k−n)-sparse in the modified dictionary φ̃_i = P⊥_{Λ^(n)} φ_i / ||P⊥_{Λ^(n)} φ_i||_2
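The RA-ORMP steps can likewise be sketched in NumPy (a minimal sketch, not the authors' implementation; the tolerance handling, the toy problem and all names are my choices):

```python
import numpy as np

def ra_ormp(Y, Phi, k, eps=1e-10):
    """Rank Aware ORMP: rank-aware selection with projected-atom normalization."""
    n = Phi.shape[1]
    support = []
    R = Y.copy()
    for _ in range(k):
        # orthonormal basis for the residual, U = orth(R)
        U, s, _ = np.linalg.svd(R, full_matrices=False)
        U = U[:, s > eps]
        # selection score: ||phi_i^T U||_2 / ||P_perp phi_i||_2
        if support:
            Phi_S = Phi[:, support]
            P_Phi = Phi - Phi_S @ np.linalg.lstsq(Phi_S, Phi, rcond=None)[0]
        else:
            P_Phi = Phi
        denom = np.maximum(np.linalg.norm(P_Phi, axis=0), eps)
        score = np.linalg.norm(U.T @ Phi, axis=0) / denom
        score[support] = -1.0                   # never reselect an atom
        support.append(int(np.argmax(score)))
        # coefficients and residual on the enlarged support
        coef = np.linalg.lstsq(Phi[:, support], Y, rcond=None)[0]
        R = Y - Phi[:, support] @ coef
    X = np.zeros((n, Y.shape[1]))
    X[support] = coef
    return sorted(support), X

# Deterministic toy problem: two canonical atoms active, plus one flat atom
Phi = np.hstack([np.eye(5), np.ones((5, 1)) / np.sqrt(5)])
X_true = np.zeros((6, 2))
X_true[1], X_true[3] = [1.0, 0.0], [0.0, 1.0]
support, X_hat = ra_ormp(Phi @ X_true, Phi, 2)
print(support)                                  # prints [1, 3]
```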
19. RA-OMP vs RA-ORMP
Comparison of how (typical) residual rank and sparsity evolve as a function of iteration:
[Figure: residual rank and sparsity versus iteration number for RA-OMP and RA-ORMP, with levels k and r marked; the shaded region indicates where correct selection is not guaranteed]
20. SOMP/RA-OMP/RA-ORMP Comparison
[Figure: probability of exact recovery versus sparsity k for SOMP, RA-OMP and RA-ORMP]
n = 256, m = 32, l = 1, 2, 4, 8, 16, 32. Dictionary ~ i.i.d. Gaussian and X coefficients ~ i.i.d. Gaussian (note that this is beneficial to SOMP!)
21. Rank Aware OMP: Alternative Solutions
Recently two independent solutions have been proposed that are variations on a theme:
1. Compressive MUSIC [Kim et al 2010]
   i. perform SOMP for k−r−1 steps (but SOMP is rank blind)
   ii. apply modified MUSIC
2. Iterative MUSIC [Lee & Bresler 2010]
   i. orthogonalize: U = orth(Y)
   ii. apply SOMP to {Φ, U} for k−r−1 steps (orthogonalization is not guaranteed beyond step 1)
   iii. apply modified MUSIC
This motivates us to consider a minor modification of (2):
3. RA-OMP+MUSIC
   i. perform RA-OMP for k−r−1 steps
   ii. apply modified MUSIC
22. Recovery guarantee
Two nice rank aware solutions:
a) Apply RA-OMP for k−r−1 steps then complete with modified MUSIC
b) Apply RA-ORMP for k steps (if the first k−r steps make correct selections we have guaranteed recovery)
We now have the following recovery guarantee [Blanchard & D.]:
Theorem 4 (MMV CS recovery) Assume X_Λ ∈ R^{n×r} is in general position for some support set Λ, |Λ| = k > r, and let Φ be a random matrix independent of X, Φ_{i,j} ∼ N(0, m^{−1}). Then (a) and (b) can recover X from Y with high probability if:
m ≥ const · k (log N / r + 1)
That is: as r increases, the effect of the log N term diminishes.
23. RA-OMP+MUSIC / RA-ORMP Comparison
[Figure: probability of exact recovery versus sparsity k for RA-OMP+MUSIC and RA-ORMP]
n = 256, m = 32, l = 1, 2, 4, 8, 16, 32. i.i.d. Gaussian dictionary and X coefficients ~ i.i.d. Gaussian.
24. Empirical Phase Transitions
[Figure: empirical phase transitions in the (k, m) plane for SOMP, RA-OMP, RA-OMP+MUSIC and RA-ORMP, all with l = 16]
Gaussian dictionary "phase transitions" with Gaussian significant coefficients.
25. Correlated vs uncorrelated coefficients
[Figure: phase transitions in the (k, m) plane for SOMP (l = 16) and RA-ORMP (l = 16)]
Gaussian dictionary "phase transitions" with uncorrelated sparse coefficients.
26. Correlated vs uncorrelated coefficients
[Figure: phase transitions in the (k, m) plane for SOMP (l = 16, highly correlated) and RA-ORMP (l = 16, highly correlated)]
Gaussian dictionary "phase transitions" with highly correlated sparse coefficients.
27. Summary
• MMV problem is easier than SMV problem in general
• Don't dismiss using exhaustive search (not always NP-hard!)
• Good rank aware greedy algorithms exist
Questions
• Can we extend these ideas to IHT or CoSaMP?
• How can we incorporate rank awareness into convex optimization?
28. Workshop: Signal Processing with Adaptive Sparse Structured Representations (SPARS '11)
June 27-30, 2011, Edinburgh (Scotland, UK)
Plenary speakers:
David L. Donoho, Stanford University, USA
Martin Vetterli, EPFL, Switzerland
Stephen J. Wright, University of Wisconsin, USA
David J. Brady, Duke University, Durham, USA
Yi Ma, University of Illinois at Urbana-Champaign, USA
Joel Tropp, California Institute of Technology, USA
Remi Gribonval, Centre de Recherche INRIA Rennes, France
Francis Bach, Laboratoire d'Informatique de l'E.N.S., France