Dynamic Compressive Sensing:
Sparse recovery algorithms for streaming
signals and video
M. Salman Asif
Advisor: Justin Romberg
Dynamic compressive sensing

Part 1: Dynamic updating. Use previous solutions and the dynamics to quickly solve (accelerate) the recovery process, warm-starting each new problem from the previous solution ("warm start" → final solution).

Part 2: Dynamic modeling. Use the dynamical signal structure to improve the reconstruction.
Compressive sensing
• A theory that deals with signal recovery from underdetermined systems:
  y = \Phi x,   an underdetermined M \times N system with M < N,
  where \Phi is an incoherent measurement matrix (e.g., a sub-Gaussian matrix) and x is a sparse signal.
[Candes, Romberg, Tao, Donoho, Tropp, Baraniuk, Gilbert, Vershynin, Rudelson, …]
1. Compression with sampling
2. Sparse representation in a redundant dictionary
Similar principles: structure and incoherence
Compressive sensing
• Compressive sensing deals with signal recovery from underdetermined systems:
  y = \Phi x \equiv \Phi\Psi\alpha
  (measurement matrix \Phi, representation basis \Psi, sparse coefficient vector \alpha), with
  x = \Psi\alpha = \sum_{i=1}^{N} \psi_i \alpha_i.
  The sparse representation restricts the space of signals.
• Recovery program (\tau is a tradeoff parameter between the sparsity and data-fidelity terms):
  minimize  \tau\|\Psi^T x\|_1 + \tfrac{1}{2}\|\Phi x - y\|_2^2
• More generally, with a diagonal matrix of weights W on the sparsity term:
  minimize  \|W\Psi^T x\|_1 + \tfrac{1}{2}\|\Phi x - y\|_2^2
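For concreteness, the sketch below (my illustration in NumPy, not the homotopy algorithm developed in this thesis) solves the weighted ℓ1 problem above with a plain proximal-gradient (ISTA) iteration, assuming \Psi = I so that x itself is sparse; the function name, step size, and iteration count are illustrative choices.

    import numpy as np

    def ista_weighted_l1(Phi, y, w, iters=500):
        """Minimize ||W x||_1 + 0.5 ||Phi x - y||_2^2 with W = diag(w), assuming Psi = I."""
        step = 1.0 / np.linalg.norm(Phi, 2) ** 2     # 1 / Lipschitz constant of the gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(iters):
            z = x - step * (Phi.T @ (Phi @ x - y))   # gradient step on the data-fidelity term
            x = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)  # soft thresholding
        return x

    # Toy example: k-sparse x, M < N random measurements, weights set to 0.01 * ||Phi^T y||_inf
    rng = np.random.default_rng(0)
    N, M, k = 256, 128, 10
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    x_true = np.zeros(N)
    x_true[rng.choice(N, k, replace=False)] = rng.choice([-1.0, 1.0], k)
    y = Phi @ x_true
    x_hat = ista_weighted_l1(Phi, y, w=0.01 * np.max(np.abs(Phi.T @ y)) * np.ones(N))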
"Dynamic" compressive sensing
• "Static" compressive sensing: a fixed signal, a fixed set of measurements, and a fixed model for the representation,
  y = \Phi x \equiv \Phi\Psi\alpha,
  minimize  \|W\Psi^T x\|_1 + \tfrac{1}{2}\|\Phi x - y\|_2^2
• "Dynamic" compressive sensing: streaming measurements and a data-adaptive model. The signal evolves (x_1 \to x_2 \to \cdots) through innovations governed by a dynamic model, with time-varying measurement matrices \Phi_t, representation bases \Psi_t, and iteratively reweighted weights W_t:
  y_t = \Phi_t x_t \equiv \Phi_t\Psi_t\alpha_t,
  minimize  \|W_t\Psi_t^T x_t\|_1 + \tfrac{1}{2}\|\Phi_t x_t - y_t\|_2^2 + \|F_t x_t - x_{t+1}\|_p + \|B_t x_t - x_{t-1}\|_p
• The frames x_1, x_2, \ldots, x_T are linked by forward operators F_t and backward operators B_t; the spatio-temporal structure (redundancies) in video serves as the dynamic model.
Dynamic compressive sensing

Dynamic updating: quickly update the solution to accommodate changes ("warm start" → final solution)
• ℓ1-homotopy
• Variations: streaming signal, streaming measurements, and more

Dynamic modeling: improve reconstruction by exploiting the dynamical signal structure
1. Low-complexity video compression
2. Accelerated dynamic MRI
Part 1: Dynamic ℓ1 updating ("warm start" → final solution)
Motivation: dynamic updating in LS
• System model:  y = \Phi x
• LS estimate:
  minimize \|\Phi x - y\|_2   \to   x_0 = (\Phi^T\Phi)^{-1}\Phi^T y
  (\Phi is full rank; x is arbitrary)
• Updates for a time-varying signal with the same \Phi mainly incur a one-time cost of factorization.
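A minimal sketch of this point (my own illustration, not code from the thesis): when \Phi stays fixed and only y changes, the Cholesky factorization of \Phi^T\Phi is computed once and each new LS estimate then costs only two triangular solves. The sizes and the overdetermined setup are illustrative.

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve

    rng = np.random.default_rng(0)
    M, N = 512, 128                        # overdetermined here so that Phi^T Phi is invertible
    Phi = rng.standard_normal((M, N))

    factor = cho_factor(Phi.T @ Phi)       # one-time cost: factorize the Gram matrix

    for t in range(10):                    # streaming right-hand sides y_t with the same Phi
        y_t = Phi @ rng.standard_normal(N) + 0.01 * rng.standard_normal(M)
        x_t = cho_solve(factor, Phi.T @ y_t)   # cheap per-update cost: triangular solves only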
Recursive updates
• Sequential measurements: a new row \phi with measurement w arrives, augmenting the system to
  \begin{bmatrix} y \\ w \end{bmatrix} = \begin{bmatrix} \Phi \\ \phi \end{bmatrix} x
• Recursive LS:
  x_1 = (\Phi^T\Phi + \phi^T\phi)^{-1}(\Phi^T y + \phi^T w)
      = x_0 + K_1 (w - \phi x_0),
  K_1 = \frac{(\Phi^T\Phi)^{-1}\phi^T}{1 + \phi(\Phi^T\Phi)^{-1}\phi^T}
  (a rank-one update).
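A small sketch of the rank-one update above (my illustration, with illustrative variable names): keep P = (\Phi^T\Phi)^{-1} and fold in each new measurement row \phi with value w via the Sherman-Morrison identity instead of re-solving from scratch.

    import numpy as np

    def rls_update(x, P, phi, w):
        """One recursive-LS step for a new measurement row phi with value w."""
        Pphi = P @ phi                       # (Phi^T Phi)^{-1} phi^T
        K = Pphi / (1.0 + phi @ Pphi)        # gain K_1
        x_new = x + K * (w - phi @ x)        # x_1 = x_0 + K_1 (w - phi x_0)
        P_new = P - np.outer(K, Pphi)        # Sherman-Morrison rank-one update of the inverse
        return x_new, P_new

    rng = np.random.default_rng(1)
    M, N = 256, 64
    Phi = rng.standard_normal((M, N))
    x_true = rng.standard_normal(N)
    y = Phi @ x_true
    P = np.linalg.inv(Phi.T @ Phi)
    x0 = P @ (Phi.T @ y)                     # batch LS estimate
    phi_new = rng.standard_normal(N)         # a new measurement row arrives
    w_new = phi_new @ x_true
    x1, P = rls_update(x0, P, phi_new, w_new)   # updated estimate without refactoring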
Dynamics with the Kalman filter
• Linear dynamical system:
  y_i = \Phi_i x_i + e_i
  x_{i+1} = F_i x_i + f_i
• The Kalman filter equations solve
  minimize  \sum_{i=1}^{k} \sigma_i\|\Phi_i x_i - y_i\|_2^2 + \lambda_i\|x_i - F_{i-1}x_{i-1}\|_2^2
• The update requires only a few low-rank updates.
Dynamic updating
• Quickly update the solution of the ℓ1 program as the system parameters change:
  y_t = \Phi_t\bar{x}_t + e_t   \to   minimize \|W_t x_t\|_1 + \tfrac{1}{2}\|\Phi_t x_t - y_t\|_2^2
• Variations:
  – Time-varying signal
  – Streaming measurements
  – Iterative reweighting
  – Streaming signals with overlapping measurements
  – Sparse signal in a linear dynamical system (sparse Kalman filter)
Time-varying signals
• System model:  y_1 = \Phi x_1 + e_1
• ℓ1 problem:  minimize \tau\|x\|_1 + \tfrac{1}{2}\|\Phi x - y_1\|_2^2
• Signal varies ("sparse innovations"):  x_1 \to x_2  \Rightarrow  y_1 \to y_2
• New ℓ1 problem:  minimize \tau\|x\|_1 + \tfrac{1}{2}\|\Phi x - y_2\|_2^2
• Homotopy (parameter \epsilon: 0 \to 1):
  minimize \tau\|x\|_1 + \tfrac{1}{2}\|\Phi x - (1-\epsilon)y_1 - \epsilon y_2\|_2^2
• The path from the old solution \hat{x}_1 (\epsilon = 0) to the new solution \hat{x}_2 (\epsilon = 1) is piecewise linear and parameterized by \epsilon.
[A., Romberg, "Dynamic L1 updating", c. 2009]
Time-varying signal
  minimize \tau\|x\|_1 + \tfrac{1}{2}\|\Phi x - (1-\epsilon)y_1 - \epsilon y_2\|_2^2
• Optimality: set the subdifferential of the objective to zero:
  \tau g + \Phi^T(\Phi x^* - (1-\epsilon)y_1 - \epsilon y_2) = 0,
  g = \partial\|x^*\|_1 :  \|g\|_\infty \le 1,  g^T x^* = \|x^*\|_1
• Optimality conditions (must be obeyed by any solution x^* with support \Gamma and sign sequence z):
  \Phi_\Gamma^T(\Phi x^* - (1-\epsilon)y_1 - \epsilon y_2) = -\tau z
  \|\Phi_{\Gamma^c}^T(\Phi x^* - (1-\epsilon)y_1 - \epsilon y_2)\|_\infty < \tau
Time-varying signal
  minimize \tau\|x\|_1 + \tfrac{1}{2}\|\Phi x - (1-\epsilon)y_1 - \epsilon y_2\|_2^2
• Change \epsilon to \epsilon+\delta; the optimality conditions (for any solution x^* with support \Gamma and sign sequence z) become
  \Phi_\Gamma^T(\Phi x^* - (1-\epsilon)y_1 - \epsilon y_2) + \delta\,\Phi_\Gamma^T(\Phi\,\partial x + y_1 - y_2) = -\tau z
  \|\Phi_{\Gamma^c}^T(\Phi x^* - (1-\epsilon)y_1 - \epsilon y_2) + \delta\,\Phi_{\Gamma^c}^T(\Phi\,\partial x + y_1 - y_2)\|_\infty < \tau
  with update direction
  \partial x = \begin{cases} (\Phi_\Gamma^T\Phi_\Gamma)^{-1}\Phi_\Gamma^T(y_2 - y_1) & \text{on } \Gamma \\ 0 & \text{on } \Gamma^c \end{cases}
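A short sketch of the update-direction computation (my illustration): the direction \partial x is supported on \Gamma and comes from a small |\Gamma| x |\Gamma| system, which is what makes each homotopy step inexpensive. In the full algorithm this direction is followed until a support change occurs, the support and sign sequence are updated, and the process repeats until \epsilon = 1.

    import numpy as np

    def homotopy_direction(Phi, y1, y2, support):
        """dx = (Phi_G^T Phi_G)^{-1} Phi_G^T (y2 - y1) on the support Gamma, 0 elsewhere."""
        dx = np.zeros(Phi.shape[1])
        Phi_G = Phi[:, support]                      # columns indexed by the support
        dx[support] = np.linalg.solve(Phi_G.T @ Phi_G, Phi_G.T @ (y2 - y1))
        return dx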
Results
[A., Romberg, "Dynamic L1 updating", c. 2009]

Setup: N = 1024, M = 512, T = m/5, k ~ T/20. Each entry is (nProdAtA, CPU).

Signal type      DynamicX          LASSO            GPSR-BB           FPC_AS
Values = ±1      (23.72, 0.132)    (235, 0.924)     (104.5, 0.18)     (148.65, 0.177)
Blocks           (2.7, 0.028)      (76.8, 0.490)    (17, 0.133)       (53.5, 0.196)
Pcw. Poly.       (13.83, 0.151)    (150.2, 1.096)   (26.05, 0.212)    (66.89, 0.25)
House slices     (26.2, 0.011)     (53.4, 0.019)    (92.24, 0.012)    (90.9, 0.036)

nProdAtA: average number of matrix-vector products with A and A^T.
CPU: average CPU time to solve, with \tau = 0.01\|A^T y\|_\infty.
GPSR: Gradient Projection for Sparse Reconstruction. M. Figueiredo, R. Nowak, S. Wright.
FPC_AS: Fixed-point continuation and active set. Z. Wen, W. Yin, D. Goldfarb, and Y. Zhang.
"If all you have is a hammer, everything looks like a nail."
Variations handled by the same dynamic updating machinery: sparse Kalman filter, reweighted ℓ1, dictionary update, ℓ1 analysis, streaming signal, positivity/affine constraint, Dantzig selector, ℓ1 decoding, sequential measurements.
• Sequential measurements (homotopy parameter \epsilon: 0 \to 1):
  minimize \tau\|x\|_1 + \tfrac{1}{2}\|Ax - y\|_2^2 + \tfrac{1}{2}\|Bx - (1-\epsilon)B\hat{x}_{\text{old}} - \epsilon w\|_2^2
  where w is a dummy vector used to ensure optimality of the previous solution as the system changes.
[A., Romberg, 2010]
Previous work on single measurement update: [Garrigues & El Ghaoui '08; A., Romberg, '08]
• Iterative reweighting (homotopy parameter \epsilon: 0 \to 1; the weights are updated from W to \widetilde{W}):
  minimize \|[(1-\epsilon)W + \epsilon\widetilde{W}]x\|_1 + \tfrac{1}{2}\|\Phi x - y\|_2^2
[A., Romberg, 2012]  (adaptive reweighting in Chapter VI of the thesis)
• A single formulation that covers all of these variations:
  minimize \|Wx\|_1 + \tfrac{1}{2}\|\Phi x - y\|_2^2 + (1-\epsilon)u^T x
"One homotopy to rule them all"
[A., Romberg, 2013]
ℓ1-homotopy
• System model:  y = \Phi\bar{x} + e,  where \bar{x} is sparse.
• Optimization problem to solve:  minimize \|Wx\|_1 + \tfrac{1}{2}\|\Phi x - y\|_2^2
• Instead, use the following versatile homotopy formulation (the optimization problem plus a homotopy part):
  minimize \|Wx\|_1 + \tfrac{1}{2}\|\Phi x - y\|_2^2 + (1-\epsilon)u^T x
  where u is a "dummy" variable that maintains optimality of the starting point at \epsilon = 0.
• Optimality: set the subdifferential of the objective to zero:
  Wg + \Phi^T(\Phi x^* - y) + (1-\epsilon)u = 0,
  g = \partial\|x^*\|_1 :  \|g\|_\infty \le 1,  g^T x^* = \|x^*\|_1
ℓ1-homotopy
  minimize \|Wx\|_1 + \tfrac{1}{2}\|\Phi x - y\|_2^2 + (1-\epsilon)u^T x
• Optimality conditions (must be obeyed by any solution x^* with support \Gamma and sign sequence z):
  \Phi_\Gamma^T(\Phi x^* - y) + (1-\epsilon)u = -Wz
  |\Phi^T(\Phi x^* - y) + (1-\epsilon)u| \le w   (elementwise)
• Change \epsilon to \epsilon+\delta:
  \Phi_\Gamma^T(\Phi x^* - y) + (1-\epsilon)u + \delta(\Phi_\Gamma^T\Phi\,\partial x - u) = -Wz
  |\phi_i^T(\Phi x^* - y) + (1-\epsilon)u_i + \delta(\phi_i^T\Phi\,\partial x - u_i)| \le w_i
  with update direction
  \partial x = \begin{cases} (\Phi_\Gamma^T\Phi_\Gamma)^{-1}u_\Gamma & \text{on } \Gamma \\ 0 & \text{on } \Gamma^c \end{cases}
  and step size
  \delta^* = \min(\delta^+, \delta^-),
  \delta^+ = \min_{i\in\Gamma^c}\left(\frac{w_i - p_i}{d_i}, \frac{-w_i - p_i}{d_i}\right)_+,
  \delta^- = \min_{i\in\Gamma}\left(\frac{-x_i^*}{\partial x_i}\right)_+
  where p_i = \phi_i^T(\Phi x^* - y) + (1-\epsilon)u_i and d_i = \phi_i^T\Phi\,\partial x - u_i are the constraint value and its rate of change.
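To make the step-size rule concrete, here is a sketch of one \epsilon-step of the homotopy above (my illustration, not the thesis code): form the direction \partial x on the support, then take the largest \delta before an off-support constraint becomes active (\delta^+) or an on-support entry shrinks to zero (\delta^-). A full solver would then add or remove the critical index from the support and repeat until \epsilon reaches 1.

    import numpy as np

    def l1_homotopy_step(Phi, y, w, u, x, support, eps):
        """One step of the homotopy for min ||Wx||_1 + 0.5||Phi x - y||^2 + (1-eps) u^T x,
        with w holding the diagonal of W."""
        N = Phi.shape[1]
        on = np.zeros(N, dtype=bool)
        on[support] = True

        dx = np.zeros(N)                                   # update direction on the support
        Phi_G = Phi[:, on]
        dx[on] = np.linalg.solve(Phi_G.T @ Phi_G, u[on])

        p = Phi.T @ (Phi @ x - y) + (1.0 - eps) * u        # constraint values
        d = Phi.T @ (Phi @ dx) - u                         # their rates of change with delta

        def min_pos(v):                                    # smallest strictly positive entry
            v = v[np.isfinite(v) & (v > 1e-12)]
            return v.min() if v.size else np.inf

        with np.errstate(divide="ignore", invalid="ignore"):
            delta_plus = min(min_pos((w[~on] - p[~on]) / d[~on]),
                             min_pos((-w[~on] - p[~on]) / d[~on]))
            delta_minus = min_pos(-x[on] / dx[on])
        delta = min(delta_plus, delta_minus, 1.0 - eps)    # never step past eps = 1

        return x + delta * dx, eps + delta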
The Dantzig selector ℓ1-homotopy
• Dantzig selector:
  Primal:  minimize \|Wx\|_1   subject to  |\Phi^T(\Phi x - y)| \preceq q
  Dual:    maximize -\lambda^T\Phi^T y - \|Q\lambda\|_1   subject to  |\Phi^T\Phi\lambda| \preceq w
• Primal-dual ℓ1-homotopy:
  Primal homotopy:  minimize \|Wx\|_1 + (1-\epsilon)u^T x   subject to  |\Phi^T(\Phi x - y) + (1-\epsilon)v| \preceq q
  Dual homotopy:    maximize -\lambda^T(\Phi^T y - (1-\epsilon)v) - \|Q\lambda\|_1   subject to  |\Phi^T\Phi\lambda + (1-\epsilon)u| \preceq w
Sparse recovery: streaming system
• Signal observations: streaming measurements of the signal, taken by a sequence of measurement matrices, with additive error.
• Sparse representation: the signal is represented over lapped orthogonal transform (LOT) windows, with sparse LOT coefficients.
• The same framework covers a time-varying signal represented with block bases (e.g., DCT) and streaming, disjoint measurements, as well as a time-varying signal represented with lapped (LOT) bases or streaming, overlapping measurements; recovery then operates over a sliding "active interval".
[Figures: LOT representation bases and windows, measurement matrices, measurements, and error over the active interval.]
Sparse recovery: streaming system
• Iteratively estimate the signal over a sliding (active) interval; divide the overlapping system into two parts, so that each problem involves the overlapping system matrix, a sparse coefficient vector \alpha, and an error term.
  Desired problem:   minimize \|W\alpha\|_1 + \tfrac{1}{2}\|\bar{\Phi}\tilde{\Psi}\alpha - \tilde{y}\|_2^2
  Homotopy:          minimize \|W\alpha\|_1 + \tfrac{1}{2}\|\bar{\Phi}\tilde{\Psi}\alpha - \tilde{y}\|_2^2 + (1-\epsilon)u^T\alpha
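The following is a toy sketch of the streaming setting only (my illustration): disjoint blocks, the identity representation in place of the LOT bases, and a generic ISTA solver in place of the ℓ1-homotopy warm start. It shows the sliding-block recursion, not the overlapping-window machinery described above.

    import numpy as np

    def solve_l1(A, y, tau, iters=300):
        """Tiny ISTA solver for min tau ||a||_1 + 0.5 ||A a - y||^2 (stand-in for the homotopy)."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        a = np.zeros(A.shape[1])
        for _ in range(iters):
            z = a - step * (A.T @ (A @ a - y))
            a = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)
        return a

    rng = np.random.default_rng(0)
    P, M, T = 128, 64, 8                       # block length, measurements per block, blocks
    signal = np.zeros(P * T)
    signal[rng.choice(P * T, 40, replace=False)] = rng.standard_normal(40)

    estimate = np.zeros(P * T)
    for t in range(T):                         # slide the active interval one block at a time
        x_block = signal[t * P:(t + 1) * P]
        Phi_t = rng.standard_normal((M, P)) / np.sqrt(M)
        y_t = Phi_t @ x_block
        tau_t = 0.01 * np.max(np.abs(Phi_t.T @ y_t))
        estimate[t * P:(t + 1) * P] = solve_l1(Phi_t, y_t, tau_t)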
Streaming signal recovery - Results
39
(Top-left) Linear chirp signal (zoomed in on the first 2560 samples).
(Top-right) Error in the reconstruction at R = N/M = 4.
(Bottom-left) LOT coefficients. (Bottom-right) Error in the LOT coefficients.
Streaming signal recovery - Results
40
(left) SER at different R from ±1 random measurements at 35 dB SNR
(middle) Count for matrix-vector multiplications
(right) Matlab execution time
SpaRSA: Sparse Reconstruction by Separable Approximation, Wright et al., 2009. http://www.lx.it.pt/~mtf/SpaRSA/
YALL1: Alternating direction algorithms for L1-problems in compressive sensing, Yang et al., 2011. http://yall1.blogs.rice.edu/
Streaming signal recovery - Results
41
(Top-left) Mishmash signal (zoomed in on the first 2560 samples).
(Top-right) Error in the reconstruction at R = N/M = 4.
(Bottom-left) LOT coefficients. (Bottom-right) Error in the LOT coefficients.
Streaming signal recovery - Results
42
(left) SER at different R from ±1 random measurements at 35 dB SNR
(middle) Count for matrix-vector multiplications
(right) Matlab execution time
Sparse recovery: dynamical system
• Signal observations and sparse representation as in the streaming system, now with a linear dynamic model relating consecutive signal intervals.
  Related work: [Vaswani '08; Carmi et al. '09; Angelosante et al. '09; Ziniel et al. '10; Charles et al. '11]
• Recovery problem over the active interval:
  minimize \|W\alpha\|_1 + \tfrac{1}{2}\|\bar{\Phi}\tilde{\Psi}\alpha - \tilde{y}\|_2^2 + \tfrac{1}{2}\|\bar{F}\tilde{\Psi}\alpha - \tilde{q}\|_2^2 + (1-\epsilon)u^T\alpha
Dynamic signal recovery - Results
45
(Top-left) HeaviSine signal (shifted copies) in image
(Top-right) Error in the reconstruction at R=N/M = 4.
(Bottom-left) Reconstructed signal at R=4.
(Bottom-right) Comparison of SER for the L1- and the L2-regularized problems
Dynamic signal recovery - Results
46
(left) SER at different R from ±1 random measurements at 35 dB SNR
(middle) Count for matrix-vector multiplications
(right) Matlab execution time
Dynamic signal recovery - Results
47
(Top-left) Piece-Regular signal (shifted copies) in image
(Top-right) Error in the reconstruction at R=N/M = 4.
(Bottom-left) Reconstructed signal at R=4.
(Bottom-right) Comparison of SER for the L1- and the L2-regularized problems
Dynamic signal recovery - Results
48
(left) SER at different R from ±1 random measurements at 35 dB SNR
(middle) Count for matrix-vector multiplications
(right) Matlab execution time
Part 2: Dynamic models in video
(Frames x_1, x_2, \ldots, x_T linked by forward motion operators F_t and backward motion operators B_t.)
Low-complexity video compression
50
Video compression
Compression is achieved by removing the spatio-temporal redundancies in a video.
[Figure: motion compensation from an I frame to P or B frames, along the spatial and temporal axes.]
Video coding paradigm
Major blocks in the encoder: motion estimation, transform coding, entropy coding.
Conventional coding: a "smart" encoder produces a standard-compliant bitstream for a "dumb" decoder.
Goal here: shift the processing burden from the encoder to the decoder, pairing a low-complexity encoder with a "smart" decoder.
Compressive encoder
• A video frame x with N pixels (an N × 1 vector) is measured as y = Ax, where A is a random M × N sensing matrix and y is the M × 1 measurement vector.
• Encoder complexity is minimal, while all the computational load is shifted to the decoder.
Structure in video (for recovery)
• Frame-by-frame recovery (same as recovery of independent images) exploits spatial structure:
  – Wavelets (DWT)
  – Total variation
• Recovery of multiple frames (the images are temporally correlated) exploits temporal structure:
  – Frame difference
  – Inter-frame motion (frames x_1, \ldots, x_T linked by forward operators F_t and backward operators B_t)
Structure…
• Linear dynamical system (A_i is a random M × N sensing matrix):
  y_i = A_i x_i + e_i               (linear measurements)
  x_i = F_{i-1} x_{i-1} + f_i       (forward motion prediction)
  x_i = B_{i+1} x_{i+1} + b_i       (backward motion prediction)
Accelerated dynamic MRI
57
Accelerated acquisition in MRI?
• How cardiac MRI works: each phase-encoding step samples one row (line) of the Fourier plane.
• A small spatial area is scanned in every heartbeat (e.g., 8 lines per heartbeat), and acquisition takes place during breath-holds over one cardiac cycle.
• The number of lines per frame is the same, so there is a direct tradeoff between temporal resolution and the number of lines per heartbeat segment.
Parallel imaging
• Parallel imaging [SENSE, SMASH, SPACE-RIP, GRAPPA, …]: each frame x_i is observed through C receiver coils,
  y_i = A_i x_i + e_i,   A_i \equiv \begin{bmatrix} F_{\Omega_i} S_i^1 \\ \vdots \\ F_{\Omega_i} S_i^C \end{bmatrix}
  where the A_i stack the partial Fourier sampling operators F_{\Omega_i} applied to the image as seen by each of the C receiver coils (S_i^1, \ldots, S_i^C).
• Stacking all frames gives a block-diagonal system:
  \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_T \end{bmatrix} =
  \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & A_T \end{bmatrix}
  \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_T \end{bmatrix} +
  \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_T \end{bmatrix}
  \equiv   y = Ax + e
MRI data courtesy: Hamilton and Brummer
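The stacked system above is simply a block-diagonal operator. A minimal sketch (my illustration, with made-up sizes) of assembling it explicitly is below; in practice each A_i would be applied implicitly (coil sensitivities followed by a partial FFT) rather than stored as a matrix.

    import numpy as np
    from scipy.sparse import block_diag

    rng = np.random.default_rng(0)
    T, M, N = 4, 100, 400                            # frames, measurements per frame, pixels per frame
    A_list = [rng.standard_normal((M, N)) for _ in range(T)]   # per-frame operators A_i
    x_list = [rng.standard_normal(N) for _ in range(T)]

    A = block_diag(A_list, format="csr")             # y = A x + e with A = blkdiag(A_1, ..., A_T)
    x = np.concatenate(x_list)
    y = A @ x
    assert np.allclose(y[:M], A_list[0] @ x_list[0]) # each block only sees its own frame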
Motion-adaptive Spatio-temporal
Regularization (MASTeR)
• We model temporal variations in the images using
inter-frame motion!
• This way, we are not using a fixed global model; instead, we learn the structure directly from the data.
Motion vectors
60
Motion-adaptive Spatio-temporal
Regularization (MASTeR)
• Spatial structure: wavelets
• Temporal structure: inter-frame motion
• Linear dynamical system model:
  y_i = A_i x_i + e_i               (linear measurements)
  x_i = F_{i-1} x_{i-1} + f_i       (forward motion prediction)
  x_i = B_{i+1} x_{i+1} + b_i       (backward motion prediction)
Video recovery
62
Video reconstruction
• Model: DWT sparsity within each frame x_1, \ldots, x_T, with the frames linked by forward operators F_t and backward operators B_t:
  y_i = A_i x_i + e_i               (linear measurements)
  x_i = F_{i-1} x_{i-1} + f_i       (forward motion prediction)
  x_i = B_{i+1} x_{i+1} + b_i       (backward motion prediction)
• Pseudocode:
  0. Find initial frame estimates.
  Repeat:
    1. Calculate motion from the frame estimates.
    2. Use the motion information to refine the estimates.
• Recovery problem (data fidelity, spatial regularity, forward and backward motion differences):
  minimize over x_1, \ldots, x_T:
  \sum_k \|A_k x_k - y_k\|_2 + \tau_k\|x_k\|_{\Psi_1} + \alpha_k\|F_k x_k - x_{k+1}\|_{\Psi_2} + \beta_k\|B_k x_k - x_{k-1}\|_{\Psi_3}
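A high-level sketch of the alternation in the pseudocode above (my own simplification, not the MASTeR implementation): the "video" is a 1-D sparse frame translated by one sample per step, motion is estimated by circular cross-correlation between the current frame estimates, and quadratic penalties stand in for the \Psi-norm regularizers in the objective.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M, T = 128, 64, 4

    x0 = np.zeros(N)
    x0[rng.choice(N, 8, replace=False)] = 1.0
    frames = [np.roll(x0, t) for t in range(T)]            # toy video: shift by one sample per frame
    A = [rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(T)]
    y = [A[t] @ frames[t] for t in range(T)]

    def recover(shifts, alpha=1.0, lam=0.05, steps=500, lr=0.1):
        """Gradient descent on sum_k 0.5||A_k x_k - y_k||^2 + 0.5*alpha||F_k x_k - x_{k+1}||^2 + 0.5*lam||x_k||^2."""
        X = [A[t].T @ y[t] for t in range(T)]               # step 0: initial frame estimates
        for _ in range(steps):
            G = [A[t].T @ (A[t] @ X[t] - y[t]) + lam * X[t] for t in range(T)]
            for t in range(T - 1):
                r = np.roll(X[t], shifts[t]) - X[t + 1]     # motion-compensated difference F_t x_t - x_{t+1}
                G[t] += alpha * np.roll(r, -shifts[t])
                G[t + 1] -= alpha * r
            X = [X[t] - lr * G[t] for t in range(T)]
        return X

    shifts = [0] * (T - 1)                                  # start by assuming no motion
    for _ in range(3):                                      # alternate: recover frames, re-estimate motion
        X = recover(shifts)
        shifts = [int(np.argmax([np.roll(X[t], s) @ X[t + 1] for s in range(N)]))
                  for t in range(T - 1)]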
Comparison of low-complexity encoders
Test sequences: coastguard, foreman, container, hall.
• CS-MC: ℓ1 recovery with motion compensation
• CS-DIFF: ℓ1 recovery with frame difference
• CS: frame-by-frame recovery
• qJPEG: mask-based DCT thresholding
• ldct: linear DCT (zigzag) thresholding
CS measurements: partial wavelets + noiselets; compression ratio = N/M.
Results (short axis) at acceleration factors R = 4 and R = 8, comparing MASTeR against k-t FOCUSS with ME/MC.
• MASTeR: motion-adaptive spatio-temporal regularization.
• k-t FOCUSS with ME/MC: temporal DFT sparsity; motion residual with respect to a reference frame or temporal average.
• Sampling: 8 low-frequency lines + random sampling.
[Jung et al., "k-t FOCUSS: a general compressed sensing framework for high resolution dynamic MRI," MRM, 2009]

Results (short axis): comparison of MASTeR (full and ROI) with k-t FOCUSS with ME/MC (full and ROI) at R = 4 and R = 8.
Conclusion
68
Dynamic modeling
Improve reconstruction by exploiting
the dynamical signal structure
• Motion-adaptive dynamical
system
1. Low-complexity video
compression
2. Accelerated dynamic MRI
Dynamic updating
Quickly update the solution to
accommodate changes
• ℓ1-homotopy
  – Breaks updating into piecewise linear steps
  – Simple, inexpensive (rank-one) updates from a "warm start" to the final solution
The more we know about the signal (dynamics)
the faster and/or more accurately we can reconstruct
Future directions
• ℓ1-homotopy:
– Theoretical analysis
– Large-scale streaming problems
• Compressive sensing in videos:
– Adaptive sampling
– Distributed cameras
– Computational imaging (lightfield etc.)
• Medical imaging:
– MRI… (maybe hyper-polarized)
– Ultrasound
– EEG
69
Select publications
• Dynamic updating
– M. Asif and J. Romberg, ``Sparse recovery algorithm for streaming signals using L1-
homotopy,'' Submitted to IEEE Trans. Sig. Proc., June 2013. [Arxiv]
– M. Asif and J. Romberg, ``Fast and accurate algorithm for re-weighted L1-norm
minimization,'' Submitted to IEEE Trans. Sig. Proc., July 2012. [Arxiv]
– M. Asif and J. Romberg, ``Dynamic updating for L1 minimization,'' IEEE Journal of
Selected Topics in Signal Processing, 4(2) pp. 421--434, April 2010.
– M. Asif and J. Romberg, ``Sparse signal recovery and dynamic update of the
underdetermined system,'' Asilomar Conf. on Signals, Systems, and Computers, Pacific
Grove, November 2010.
– M. Asif and Justin Romberg, ``Basis pursuit with sequential measurements and time-
varying signals,'' in Proc. Computational Advances in Multi-Sensor Adaptive Processing
(CAMSAP), Aruba, December 2009.
– M. Asif and J. Romberg, ``Dynamic updating for sparse time-varying signals,'' Conference
on inf. sciences and systems (CISS), Baltimore, March 2009.
– M. Asif and J. Romberg, ``Streaming measurements in compressive sensing: L1 filtering,''
Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, CA, October 2008.
• Dynamic model in video
– M. Asif, L. Hamilton, M. Brummer, and J. Romberg, ``Motion-adaptive spatio-temporal
regularization (MASTeR) for accelerated dynamic MRI,'' accepted in Magnetic Resonance
in Medicine, November 2012 [Early view DOI: 10.1002/mrm.24524].
– M. Asif, F. Fernandes, and J. Romberg, ''Low-complexity video compression and
compressive sensing,'' preprint 2012. 70
Select publications
• Sparsity and dynamics
– M. Asif, A. Charles, J. Romberg, and C. Rozell, ``Estimating and dynamic updating of
time-varying signals with sparse variations,'' in Proc. IEEE Int. Conf. on Acoustics, Speech,
and Signal Processing (ICASSP), Prague, Czech Republic, May 2011.
– A. Charles, M. Asif, J. Romberg, and C. Rozell, ``Sparse penalties in dynamical system
estimation,'', Conference on Inf. Sciences and Systems (CISS), Baltimore, Maryland,
March 2011.
• Streaming greedy pursuit
– P. Boufounos and M. Asif, ``Compressive sensing for streaming signals using the
streaming greedy pursuit,'' in Proc. Military Commun. Conf. (MILCOM), San Jose,
California, October 2010.
– M. Asif, D. Reddy, P. Boufounos, and A. Veeraraghavan, ``Streaming compressive sensing
for high-speed periodic videos,'' in Proc. IEEE Int. Conf. on Image Processing (ICIP), Hong
Kong, September 2010.
– P. Boufounos and M. Asif, ``Compressive sampling for streaming signals with sparse
frequency content,'' Conference on Inf. Sciences and Systems (CISS), Princeton, New
Jersey, March 2010.
• Channel protection
– M. Asif, W. Mantzel, and J. Romberg, ``Random channel coding and blind
deconvolution,'' Allerton Conf. on Communication, Control, and Computing, Monticello,
Illinois, October 2009.
– M. Asif, W. Mantzel, and J. Romberg, ``Channel protection: Random coding meets sparse
channels,'' Information Theory Workshop, Taormina, Italy, October 2009.
71
Acknowledgements
• Special thanks to my collaborators
– William Mantzel
– Petros Boufounos
– Felix Fernandes
– Adam Charles and Chris Rozell
– Lei Hamilton and Marijn Brummer
• and my advisor, Justin Romberg
72
Thank you!
Questions?
73
Email: sasif@gatech.edu
Web: http://users.ece.gatech.edu/~sasif/
