A Fast Hadamard Transform for Signals with
Sub-linear Sparsity
Robin Scheibler Saeid Haghighatshoar Martin Vetterli
School of Computer and Communication Sciences
École Polytechnique Fédérale de Lausanne, Switzerland
October 28, 2013
Why the Hadamard transform?

- Historically, a low-computation approximation to the DFT.
- Coding: 1969 Mariner Mars probe.
- Communication: orthogonal codes in WCDMA.
- Compressed sensing: maximally incoherent with the Dirac basis.
- Spectroscopy: design of instruments with lower noise.
- Recent advances in sparse FFT.

[Figures: 16 × 16 Hadamard matrix; Mariner probe]
Fast Hadamard transform

- Butterfly structure similar to the FFT (a minimal implementation is sketched below).
- Time complexity $O(N \log_2 N)$.
- Sample complexity $N$.
+ Universal, i.e. works for all signals.
− Does not exploit signal structure (e.g. sparsity).

Can we do better?
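To make the baseline concrete, here is a minimal sketch (my own illustration, not the authors' code) of the classic in-place butterfly computing the unnormalized Walsh–Hadamard transform in $O(N \log_2 N)$ operations; the function name `fht` is just a placeholder.

```python
import numpy as np

def fht(x):
    """In-place butterfly FHT: returns X with X[k] = sum_m (-1)^popcount(k & m) x[m]."""
    X = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(X):
        for i in range(0, len(X), 2 * h):
            for j in range(i, i + h):
                a, b = X[j], X[j + h]
                X[j], X[j + h] = a + b, a - b
        h *= 2
    return X

x = np.random.randn(8)
X = fht(x)
assert np.allclose(fht(X), len(x) * x)   # the unnormalized WHT is self-inverse up to N
```

The same routine (or a compact recursive equivalent) is reused in the sketches that follow.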
Contribution: Sparse fast Hadamard transform

Assumptions
- The signal is exactly $K$-sparse in the transform domain.
- Sub-linear sparsity regime: $K = O(N^\alpha)$, $0 < \alpha < 1$.
- The support of the signal is uniformly random.

Contribution
An algorithm computing the $K$ non-zero coefficients with:
- Time complexity $O(K \log_2 K \log_2 \frac{N}{K})$.
- Sample complexity $O(K \log_2 \frac{N}{K})$.
- Probability of failure that vanishes asymptotically.
Outline
1. Sparse FHT algorithm
2. Analysis of probability of failure
3. Empirical results
Another look at the Hadamard transform

- Consider the indices of $x \in \mathbb{R}^N$, $N = 2^n$.
- Take the binary expansion of the indices: for $n = 3$, $I = \{0, \dots, 2^3 - 1\}$ becomes $I = \{(0,0,0), \dots, (1,1,1)\}$.
- Represent the signal on a hypercube.
- Take a DFT in every direction.
$$X_{k_0, \dots, k_{n-1}} = \sum_{m_0 = 0}^{1} \cdots \sum_{m_{n-1} = 0}^{1} (-1)^{k_0 m_0 + \cdots + k_{n-1} m_{n-1}} \, x_{m_0, \dots, m_{n-1}},$$
or, treating the indices as binary vectors,

$$X_k = \sum_{m \in \mathbb{F}_2^n} (-1)^{\langle k,\, m \rangle} x_m, \qquad k, m \in \mathbb{F}_2^n, \quad \langle k,\, m \rangle = \sum_{i=0}^{n-1} k_i m_i.$$
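As a quick sanity check of the binary-vector formula (an illustration of mine, not from the paper), the naive $O(N^2)$ evaluation can be compared against a Sylvester-constructed Hadamard matrix:

```python
import numpy as np

n = 3
N = 2 ** n
x = np.random.randn(N)

# <k, m> over F_2 is the parity of the bitwise AND of the integer indices.
naive = np.array([sum((-1) ** bin(k & m).count("1") * x[m] for m in range(N))
                  for k in range(N)])

H2 = np.array([[1, 1], [1, -1]])
H = H2
for _ in range(n - 1):
    H = np.kron(H, H2)        # Sylvester construction of the N x N Hadamard matrix

assert np.allclose(naive, H @ x)
```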
Hadamard property I: downsampling/aliasing

Given $B = 2^b$, a divisor of $N = 2^n$, and $H \in \mathbb{F}_2^{b \times n}$ whose rows are a subset of the rows of the identity matrix,

$$x_{H^T m} \;\xrightarrow{\mathrm{WHT}}\; \sum_{i \in \mathcal{N}(H)} X_{H^T k + i}, \qquad m, k \in \mathbb{F}_2^b.$$

e.g. $H = \begin{bmatrix} 0_{b \times (n-b)} & I_b \end{bmatrix}$ selects the $b$ high-order bits. A numerical check of this property is sketched below.
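A small numerical check of the aliasing property, under my own conventions (unnormalized WHT, bits indexed from the least significant position); the $B/N$ factor below comes from that normalization choice and is not part of the slide's statement:

```python
import numpy as np

def fht(x):
    x = np.asarray(x, dtype=float)
    if len(x) == 1:
        return x
    h = len(x) // 2
    a, c = fht(x[:h]), fht(x[h:])
    return np.concatenate([a + c, a - c])

n, b = 4, 2
N, B = 2 ** n, 2 ** b
x = np.random.randn(N)
X = fht(x)

# H = [0  I_b] keeps the b high-order bits, so x_{H^T m} = x[m << (n - b)].
u = x[np.arange(B) << (n - b)]

# Aliasing: each of the B Hadamard coefficients of u is (up to a B/N factor from the
# unnormalized WHT) the sum of the N/B coefficients of X sharing the same b high-order bits.
assert np.allclose(fht(u), (B / N) * X.reshape(B, N // B).sum(axis=1))
```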
Aliasing induced bipartite graph

[Figure: time-domain samples hashed through 4-point WHTs (checks) connected to the Hadamard-domain coefficients (variables)]

- Downsampling induces an aliasing pattern.
- Different downsamplings produce different patterns.
Genie-aided peeling decoder

A genie tells us
- whether a check is connected to only one variable (a singleton),
- and in that case, the index of that variable.

Peeling decoder algorithm (a toy version is sketched below):
1. Find a singleton check: {X1, X8, X11}.
2. Peel it off.
3. Repeat until nothing is left.
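Here is a toy genie-aided peeling decoder of my own (the variable names X1, X8, X11 and the small graph are made up for illustration); in the real algorithm the genie is replaced by the singleton test described on the following slides.

```python
# Toy genie-aided peeling: each check holds the sum of its connected variables;
# the "genie" is our direct knowledge of the check-to-variable adjacency.
values = {"X1": 1.5, "X8": -2.0, "X11": 0.75}        # unknown non-zero coefficients
checks = {"c0": ["X1"], "c1": ["X1", "X8"], "c2": ["X8", "X11"], "c3": ["X11"]}
observations = {c: sum(values[v] for v in vs) for c, vs in checks.items()}

recovered = {}
remaining = {c: list(vs) for c, vs in checks.items()}
progress = True
while progress:
    progress = False
    for c, vs in remaining.items():
        if len(vs) == 1:                             # singleton check found
            v = vs[0]
            recovered[v] = observations[c]
            # peel: subtract the recovered value from every check containing v
            for c_other, vs_other in remaining.items():
                if v in vs_other:
                    observations[c_other] -= recovered[v]
                    vs_other.remove(v)
            progress = True
            break

assert recovered == values
```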
Hadamard property II: shift/modulation

Theorem (shift/modulation)
Given $p \in \mathbb{F}_2^n$,
$$x_{m+p} \;\xrightarrow{\mathrm{WHT}}\; X_k \,(-1)^{\langle p,\, k \rangle}.$$

Consequence
The signal can be modulated in frequency by manipulating the time-domain samples.
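A quick check of the shift/modulation property (my own sketch; index addition over $\mathbb{F}_2^n$ is bitwise XOR, and the WHT here is unnormalized):

```python
import numpy as np

def fht(x):
    x = np.asarray(x, dtype=float)
    if len(x) == 1:
        return x
    h = len(x) // 2
    a, c = fht(x[:h]), fht(x[h:])
    return np.concatenate([a + c, a - c])

n = 3
N = 2 ** n
x = np.random.randn(N)
p = 5                                        # shift p, viewed as a vector in F_2^n

x_shifted = x[np.arange(N) ^ p]              # x_{m + p}: index addition over F_2^n is XOR
signs = np.array([(-1) ** bin(p & k).count("1") for k in range(N)])
assert np.allclose(fht(x_shifted), signs * fht(x))
```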
How to construct the Genie

[Figure: non-modulated vs. modulated bucket measurements]

- Collision: two variables connected to the same check. Then
$$\frac{X_i (-1)^{\langle p,\, i \rangle} + X_j (-1)^{\langle p,\, j \rangle}}{X_i + X_j} \neq \pm 1$$
(under a mild assumption on the distribution of $X$).
- Singleton: only one variable connected to the check. Then
$$\frac{X_i (-1)^{\langle p,\, i \rangle}}{X_i} = (-1)^{\langle p,\, i \rangle} = \pm 1,$$
so we learn $\langle p,\, i \rangle$!
- $O(\log_2 \frac{N}{K})$ measurements are sufficient to recover the index $i$ (the dimension of the null space of the downsampling matrix $H$). A sketch of this test follows below.
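A simplified sketch of the singleton test (my own illustration, outside the full bucketing machinery): a bucket holding a single coefficient $X_i$ is probed with delays $p = e_0, \dots, e_{n-1}$, and the $\pm 1$ ratios reveal the bits of $i$; with a collision the ratio is generically not $\pm 1$.

```python
import numpy as np

n = 4
N = 2 ** n
rng = np.random.default_rng(0)

i = 13                                       # hidden index of the lone coefficient in the bucket
X_i = rng.standard_normal()

def bucket(p):
    """Bucket value after a time-domain shift by p when only X_i hashes to it."""
    return X_i * (-1) ** bin(p & i).count("1")

U0 = bucket(0)
# ratio bucket(e_j) / bucket(0) = (-1)^{bit j of i}: +1 -> bit 0, -1 -> bit 1
bits = [(1 - int(bucket(1 << j) / U0)) // 2 for j in range(n)]
assert sum(bit << j for j, bit in enumerate(bits)) == i

# With a collision (two coefficients in the bucket) the ratio is generically not +/-1.
X_j, j_idx = rng.standard_normal(), 6
ratio = (X_i * (-1) ** bin(1 & i).count("1")
         + X_j * (-1) ** bin(1 & j_idx).count("1")) / (X_i + X_j)
print("collision ratio:", ratio)
```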
Sparse fast Hadamard transform

Algorithm (the hashing front end is sketched below)
1. Set the number of checks per downsampling to $B = O(K)$.
2. Choose $C$ downsampling matrices $H_1, \dots, H_C$.
3. Compute $C(\log_2 N/K + 1)$ size-$K$ fast Hadamard transforms, each taking $O(K \log_2 K)$.
4. Decode the non-zero coefficients using the peeling decoder.

Performance
- Time complexity: $O(K \log_2 K \log_2 \frac{N}{K})$.
- Sample complexity: $O(K \log_2 \frac{N}{K})$.
- How to construct $H_1, \dots, H_C$? What is the probability of success?
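To make steps 1–3 concrete, here is a hedged sketch of the hashing front end (my own illustration, not the authors' implementation). It assumes a simple construction where each $H_c$ keeps a disjoint block of $b$ identity rows; `fht`, `hash_buckets`, and the block construction are my own names and choices. The peeling decoder with the singleton test from the previous slides then runs on these buckets.

```python
import numpy as np

def fht(x):
    x = np.asarray(x, dtype=float)
    if len(x) == 1:
        return x
    h = len(x) // 2
    a, c = fht(x[:h]), fht(x[h:])
    return np.concatenate([a + c, a - c])

def hash_buckets(x, rows, delays):
    """Subsample x with H (the identity rows listed in `rows`), shift by each delay p,
    and return the B-point FHT of every delayed, subsampled stream."""
    B = 2 ** len(rows)
    m = np.arange(B)
    idx = np.zeros(B, dtype=int)
    for j, r in enumerate(rows):
        idx |= ((m >> j) & 1) << r               # integer index corresponding to H^T m
    return {p: fht(x[idx ^ p]) for p in delays}

# Toy setup: N = 2^9, K = B = 2^3 non-zero coefficients, C = 3 downsamplings,
# each keeping a disjoint block of b = 3 bits (an assumed construction for illustration).
n, b, C = 9, 3, 3
N, B = 2 ** n, 2 ** b
rng = np.random.default_rng(1)
X = np.zeros(N)
X[rng.choice(N, size=B, replace=False)] = rng.standard_normal(B)
x = fht(X) / N                                    # time-domain signal (WHT is self-inverse up to N)

buckets = []
for c in range(C):
    rows = list(range(c * b, (c + 1) * b))
    delays = [0] + [1 << j for j in range(n) if j not in rows]   # log2(N/K) + 1 FHTs each
    buckets.append(hash_buckets(x, rows, delays))
# buckets[c][p][k] holds, up to a B/N factor, the sum of the X_i hashing to check k of
# downsampling c, each weighted by (-1)^{<p, i>}; peeling with the singleton test runs on these.
```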
Very sparse regime

Setting
- $K = O(N^\alpha)$, $0 < \alpha < 1/3$.
- Uniformly random support.
- Study the asymptotic probability of failure as $n \to \infty$.

Downsampling matrices construction
- Achieves the values $\alpha = \frac{1}{C}$, i.e. $b = \frac{n}{C}$.
- Deterministic downsampling matrices $H_1, \dots, H_C$.
Balls-and-bins model

[Figure: uniformly random support model vs. balls-and-bins model]

- Theorem: both constructions are equivalent.
  Proof: by construction, all rows of $H_i$ are linearly independent.
- Reduces to LDPC decoding analysis.
- Error-correcting code design (Luby et al. 2001).
- FFAST (sparse FFT algorithm) (Pawar & Ramchandran 2013).
Extension to less-sparse regime
I K = O(N↵), 2/3  ↵ < 1.
I Balls-and-bins model not equivalent anymore.
I Let ↵ = 1 1
C . Construct H1, . . . , HC,
I By construction: N(Hi)
T
N(Hj) = 0.
SparseFHT – Probability of success

[Figure: empirical probability of success vs. $\alpha$, for $N = 2^{22}$]
SparseFHT vs. FHT

[Figure: runtime in µs vs. $\alpha$, for $N = 2^{15}$, comparing the Sparse FHT with the standard FHT]
Conclusion

Contribution
- A sparse fast Hadamard transform algorithm.
- Time complexity $O(K \log_2 K \log_2 \frac{N}{K})$.
- Sample complexity $O(K \log_2 \frac{N}{K})$.
- Probability of success asymptotically equal to 1.

What's next?
- Investigate the noisy case.
Thanks for your attention!
Code and figures available at
http://lcav.epfl.ch/page-99903.html
References

[1] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Efficient erasure correcting codes," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 569–584, 2001.

[2] S. Pawar and K. Ramchandran, "Computing a k-sparse n-length discrete Fourier transform using at most 4k samples and O(k log k) complexity," arXiv.org, vol. cs.DS, 04-May-2013.
SparseFHT 20 / 20 EPFL

More Related Content

What's hot

Dsp U Lec08 Fir Filter Design
Dsp U   Lec08 Fir Filter DesignDsp U   Lec08 Fir Filter Design
Dsp U Lec08 Fir Filter Designtaha25
 
DSP_2018_FOEHU - Lec 06 - FIR Filter Design
DSP_2018_FOEHU - Lec 06 - FIR Filter DesignDSP_2018_FOEHU - Lec 06 - FIR Filter Design
DSP_2018_FOEHU - Lec 06 - FIR Filter DesignAmr E. Mohamed
 
DSP_FOEHU - Lec 10 - FIR Filter Design
DSP_FOEHU - Lec 10 - FIR Filter DesignDSP_FOEHU - Lec 10 - FIR Filter Design
DSP_FOEHU - Lec 10 - FIR Filter DesignAmr E. Mohamed
 
Dft and its applications
Dft and its applicationsDft and its applications
Dft and its applicationsAgam Goel
 
fft using labview
fft using labviewfft using labview
fft using labviewkiranrockz
 
Current limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov modelsCurrent limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov modelsPierre Jacob
 
DPPs everywhere: repulsive point processes for Monte Carlo integration, signa...
DPPs everywhere: repulsive point processes for Monte Carlo integration, signa...DPPs everywhere: repulsive point processes for Monte Carlo integration, signa...
DPPs everywhere: repulsive point processes for Monte Carlo integration, signa...Advanced-Concepts-Team
 
RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010Christian Robert
 
Towards a stable definition of Algorithmic Randomness
Towards a stable definition of Algorithmic RandomnessTowards a stable definition of Algorithmic Randomness
Towards a stable definition of Algorithmic RandomnessHector Zenil
 
Nyquist criterion for zero ISI
Nyquist criterion for zero ISINyquist criterion for zero ISI
Nyquist criterion for zero ISIGunasekara Reddy
 
An Intuitive Approach to Fourier Optics
An Intuitive Approach to Fourier OpticsAn Intuitive Approach to Fourier Optics
An Intuitive Approach to Fourier Opticsjose0055
 
A novel particle swarm optimization for papr reduction of ofdm systems
A novel particle swarm optimization for papr reduction of ofdm systemsA novel particle swarm optimization for papr reduction of ofdm systems
A novel particle swarm optimization for papr reduction of ofdm systemsaliasghar1989
 
Smooth entropies a tutorial
Smooth entropies a tutorialSmooth entropies a tutorial
Smooth entropies a tutorialwtyru1989
 
Optics Fourier Transform Ii
Optics Fourier Transform IiOptics Fourier Transform Ii
Optics Fourier Transform Iidiarmseven
 

What's hot (18)

Dsp U Lec08 Fir Filter Design
Dsp U   Lec08 Fir Filter DesignDsp U   Lec08 Fir Filter Design
Dsp U Lec08 Fir Filter Design
 
Hm2513521357
Hm2513521357Hm2513521357
Hm2513521357
 
DSP_2018_FOEHU - Lec 06 - FIR Filter Design
DSP_2018_FOEHU - Lec 06 - FIR Filter DesignDSP_2018_FOEHU - Lec 06 - FIR Filter Design
DSP_2018_FOEHU - Lec 06 - FIR Filter Design
 
DSP_FOEHU - Lec 10 - FIR Filter Design
DSP_FOEHU - Lec 10 - FIR Filter DesignDSP_FOEHU - Lec 10 - FIR Filter Design
DSP_FOEHU - Lec 10 - FIR Filter Design
 
Dft and its applications
Dft and its applicationsDft and its applications
Dft and its applications
 
fft using labview
fft using labviewfft using labview
fft using labview
 
Current limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov modelsCurrent limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov models
 
Lecture 9
Lecture 9Lecture 9
Lecture 9
 
10.1.1.2.9988
10.1.1.2.998810.1.1.2.9988
10.1.1.2.9988
 
DPPs everywhere: repulsive point processes for Monte Carlo integration, signa...
DPPs everywhere: repulsive point processes for Monte Carlo integration, signa...DPPs everywhere: repulsive point processes for Monte Carlo integration, signa...
DPPs everywhere: repulsive point processes for Monte Carlo integration, signa...
 
RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010
 
Towards a stable definition of Algorithmic Randomness
Towards a stable definition of Algorithmic RandomnessTowards a stable definition of Algorithmic Randomness
Towards a stable definition of Algorithmic Randomness
 
Nyquist criterion for zero ISI
Nyquist criterion for zero ISINyquist criterion for zero ISI
Nyquist criterion for zero ISI
 
An Intuitive Approach to Fourier Optics
An Intuitive Approach to Fourier OpticsAn Intuitive Approach to Fourier Optics
An Intuitive Approach to Fourier Optics
 
Fourier transforms
Fourier transforms Fourier transforms
Fourier transforms
 
A novel particle swarm optimization for papr reduction of ofdm systems
A novel particle swarm optimization for papr reduction of ofdm systemsA novel particle swarm optimization for papr reduction of ofdm systems
A novel particle swarm optimization for papr reduction of ofdm systems
 
Smooth entropies a tutorial
Smooth entropies a tutorialSmooth entropies a tutorial
Smooth entropies a tutorial
 
Optics Fourier Transform Ii
Optics Fourier Transform IiOptics Fourier Transform Ii
Optics Fourier Transform Ii
 

Viewers also liked

LiFi Visible light Communication technology
LiFi Visible light Communication technologyLiFi Visible light Communication technology
LiFi Visible light Communication technologyhawkisk
 
Soil pollution, health effect of the soil
Soil pollution, health effect of the soilSoil pollution, health effect of the soil
Soil pollution, health effect of the soilJasmine John
 
visible light communication
visible light communicationvisible light communication
visible light communicationHossam Zein
 
Visible light communication
Visible light communicationVisible light communication
Visible light communicationSaginesh Mk
 
Visible light communication
Visible light communicationVisible light communication
Visible light communicationParth Saxena
 
Visible light communication
Visible light communicationVisible light communication
Visible light communicationNaveen Sihag
 

Viewers also liked (11)

03 image transform
03 image transform03 image transform
03 image transform
 
Visible Light Communication
Visible Light CommunicationVisible Light Communication
Visible Light Communication
 
VISIBLE LIGHT COMMUNICATION
VISIBLE LIGHT COMMUNICATIONVISIBLE LIGHT COMMUNICATION
VISIBLE LIGHT COMMUNICATION
 
LiFi Visible light Communication technology
LiFi Visible light Communication technologyLiFi Visible light Communication technology
LiFi Visible light Communication technology
 
Soil pollution, health effect of the soil
Soil pollution, health effect of the soilSoil pollution, health effect of the soil
Soil pollution, health effect of the soil
 
visible light communication
visible light communicationvisible light communication
visible light communication
 
Lifi ppt
Lifi pptLifi ppt
Lifi ppt
 
UWB and applications
UWB and applicationsUWB and applications
UWB and applications
 
Visible light communication
Visible light communicationVisible light communication
Visible light communication
 
Visible light communication
Visible light communicationVisible light communication
Visible light communication
 
Visible light communication
Visible light communicationVisible light communication
Visible light communication
 

Similar to A Fast Hadamard Transform for Signals with Sub-linear Sparsity

Performance analysis of wavelet based blind detection and hop time estimation...
Performance analysis of wavelet based blind detection and hop time estimation...Performance analysis of wavelet based blind detection and hop time estimation...
Performance analysis of wavelet based blind detection and hop time estimation...Saira Shahid
 
FPGA Implementation of Large Area Efficient and Low Power Geortzel Algorithm ...
FPGA Implementation of Large Area Efficient and Low Power Geortzel Algorithm ...FPGA Implementation of Large Area Efficient and Low Power Geortzel Algorithm ...
FPGA Implementation of Large Area Efficient and Low Power Geortzel Algorithm ...IDES Editor
 
hankel_norm approximation_fir_ ijc
hankel_norm approximation_fir_ ijchankel_norm approximation_fir_ ijc
hankel_norm approximation_fir_ ijcVasilis Tsoulkas
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Pierre Jacob
 
Linear response theory and TDDFT
Linear response theory and TDDFT Linear response theory and TDDFT
Linear response theory and TDDFT Claudio Attaccalite
 
EC8553 Discrete time signal processing
EC8553 Discrete time signal processing EC8553 Discrete time signal processing
EC8553 Discrete time signal processing ssuser2797e4
 
signal space analysis.ppt
signal space analysis.pptsignal space analysis.ppt
signal space analysis.pptPatrickMumba7
 
H infinity optimal_approximation_for_cau
H infinity optimal_approximation_for_cauH infinity optimal_approximation_for_cau
H infinity optimal_approximation_for_cauAl Vc
 
"Evaluation of the Hilbert Huang transformation of transient signals for brid...
"Evaluation of the Hilbert Huang transformation of transient signals for brid..."Evaluation of the Hilbert Huang transformation of transient signals for brid...
"Evaluation of the Hilbert Huang transformation of transient signals for brid...TRUSS ITN
 
conference_poster_4
conference_poster_4conference_poster_4
conference_poster_4Jiayi Jiang
 
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGICDESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGICVLSICS Design
 
Integration with kernel methods, Transported meshfree methods
Integration with kernel methods, Transported meshfree methodsIntegration with kernel methods, Transported meshfree methods
Integration with kernel methods, Transported meshfree methodsMercier Jean-Marc
 
EBDSS Max Research Report - Final
EBDSS  Max  Research Report - FinalEBDSS  Max  Research Report - Final
EBDSS Max Research Report - FinalMax Robertson
 
WaveletTutorial.pdf
WaveletTutorial.pdfWaveletTutorial.pdf
WaveletTutorial.pdfshreyassr9
 

Similar to A Fast Hadamard Transform for Signals with Sub-linear Sparsity (20)

Performance analysis of wavelet based blind detection and hop time estimation...
Performance analysis of wavelet based blind detection and hop time estimation...Performance analysis of wavelet based blind detection and hop time estimation...
Performance analysis of wavelet based blind detection and hop time estimation...
 
Kanal wireless dan propagasi
Kanal wireless dan propagasiKanal wireless dan propagasi
Kanal wireless dan propagasi
 
Lecture 3 sapienza 2017
Lecture 3 sapienza 2017Lecture 3 sapienza 2017
Lecture 3 sapienza 2017
 
FPGA Implementation of Large Area Efficient and Low Power Geortzel Algorithm ...
FPGA Implementation of Large Area Efficient and Low Power Geortzel Algorithm ...FPGA Implementation of Large Area Efficient and Low Power Geortzel Algorithm ...
FPGA Implementation of Large Area Efficient and Low Power Geortzel Algorithm ...
 
hankel_norm approximation_fir_ ijc
hankel_norm approximation_fir_ ijchankel_norm approximation_fir_ ijc
hankel_norm approximation_fir_ ijc
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Linear response theory and TDDFT
Linear response theory and TDDFT Linear response theory and TDDFT
Linear response theory and TDDFT
 
EC8553 Discrete time signal processing
EC8553 Discrete time signal processing EC8553 Discrete time signal processing
EC8553 Discrete time signal processing
 
signal space analysis.ppt
signal space analysis.pptsignal space analysis.ppt
signal space analysis.ppt
 
Pres metabief2020jmm
Pres metabief2020jmmPres metabief2020jmm
Pres metabief2020jmm
 
H infinity optimal_approximation_for_cau
H infinity optimal_approximation_for_cauH infinity optimal_approximation_for_cau
H infinity optimal_approximation_for_cau
 
Nc2421532161
Nc2421532161Nc2421532161
Nc2421532161
 
"Evaluation of the Hilbert Huang transformation of transient signals for brid...
"Evaluation of the Hilbert Huang transformation of transient signals for brid..."Evaluation of the Hilbert Huang transformation of transient signals for brid...
"Evaluation of the Hilbert Huang transformation of transient signals for brid...
 
conference_poster_4
conference_poster_4conference_poster_4
conference_poster_4
 
Er24902905
Er24902905Er24902905
Er24902905
 
Lt2419681970
Lt2419681970Lt2419681970
Lt2419681970
 
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGICDESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
DESIGN OF QUATERNARY LOGICAL CIRCUIT USING VOLTAGE AND CURRENT MODE LOGIC
 
Integration with kernel methods, Transported meshfree methods
Integration with kernel methods, Transported meshfree methodsIntegration with kernel methods, Transported meshfree methods
Integration with kernel methods, Transported meshfree methods
 
EBDSS Max Research Report - Final
EBDSS  Max  Research Report - FinalEBDSS  Max  Research Report - Final
EBDSS Max Research Report - Final
 
WaveletTutorial.pdf
WaveletTutorial.pdfWaveletTutorial.pdf
WaveletTutorial.pdf
 

Recently uploaded

College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxDeepakSakkari2
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidNikhilNagaraju
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and usesDevarapalliHaritha
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx959SahilShah
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxvipinkmenon1
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacingjaychoudhary37
 

Recently uploaded (20)

College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
Biology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptxBiology for Computer Engineers Course Handout.pptx
Biology for Computer Engineers Course Handout.pptx
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
 
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur EscortsHigh Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
High Profile Call Girls Nagpur Meera Call 7001035870 Meet With Nagpur Escorts
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and uses
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Application of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptxApplication of Residue Theorem to evaluate real integrations.pptx
Application of Residue Theorem to evaluate real integrations.pptx
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
Introduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptxIntroduction to Microprocesso programming and interfacing.pptx
Introduction to Microprocesso programming and interfacing.pptx
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacing
 

A Fast Hadamard Transform for Signals with Sub-linear Sparsity

  • 10–18. Another look at the Hadamard transform
    - Consider the indices of $x \in \mathbb{R}^N$, $N = 2^n$.
    - Take the binary expansion of the indices: $I = \{0, \dots, 2^3 - 1\}$ becomes $I = \{(0,0,0), \dots, (1,1,1)\}$.
    - Represent the signal on the hypercube and take a DFT in every direction:
      $X_{k_0, \dots, k_{n-1}} = \sum_{m_0 = 0}^{1} \cdots \sum_{m_{n-1} = 0}^{1} (-1)^{k_0 m_0 + \cdots + k_{n-1} m_{n-1}} \, x_{m_0, \dots, m_{n-1}}.$
    - Treating indices as binary vectors, this is
      $X_k = \sum_{m \in \mathbb{F}_2^n} (-1)^{\langle k, m \rangle} x_m, \qquad k, m \in \mathbb{F}_2^n, \qquad \langle k, m \rangle = \sum_{i=0}^{n-1} k_i m_i.$
      (Both forms are checked against the butterfly implementation in the sketch below.)
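To make the definition concrete, here is a small Python sketch (my own illustration, not the authors' released code) that evaluates the $\mathbb{F}_2^n$ inner-product form directly and checks it against the $O(N \log_2 N)$ butterfly; the names wht_definition and fwht are mine.

import numpy as np

def wht_definition(x):
    """Direct evaluation of X_k = sum_m (-1)^{<k, m>} x_m over F_2^n."""
    N = len(x)
    X = np.zeros(N)
    for k in range(N):
        for m in range(N):
            # <k, m> over F_2 is the parity of the bitwise AND of the indices.
            parity = bin(k & m).count("1") & 1
            X[k] += (-1) ** parity * x[m]
    return X

def fwht(x):
    """Butterfly (fast) Walsh-Hadamard transform, O(N log N) operations."""
    X = np.array(x, dtype=float)
    h = 1
    while h < len(X):
        for i in range(0, len(X), 2 * h):
            top = X[i:i + h].copy()
            bot = X[i + h:i + 2 * h].copy()
            X[i:i + h] = top + bot
            X[i + h:i + 2 * h] = top - bot
        h *= 2
    return X

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # N = 2^3, the hypercube example
assert np.allclose(wht_definition(x), fwht(x))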
  • 19–22. Hadamard property I: downsampling/aliasing
    - Given $B = 2^b$, a divisor of $N = 2^n$, and $H \in \mathbb{F}_2^{b \times n}$ whose rows are a subset of the rows of the identity matrix,
      $x_{H^\top m} \;\xrightarrow{\mathrm{WHT}}\; \sum_{i \in \mathcal{N}(H)} X_{H^\top k + i}, \qquad m, k \in \mathbb{F}_2^b.$
    - E.g. $H = \begin{bmatrix} 0_{b \times (n-b)} & I_b \end{bmatrix}$ selects the $b$ high-order bits.
    - [Figure: the property illustrated on the 3-dimensional hypercube.]
    - (A numerical check of the property appears below.)
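The following Python sketch verifies the aliasing property on a small example with $H$ selecting the two high-order bits. It is an illustration under my own conventions: with the unnormalized WHT used here an extra $B/N$ factor appears, which I assume the slide absorbs into its normalization, and the helpers wht, embed and project are mine.

import numpy as np

def wht(v):
    """Brute-force unnormalized WHT: X_k = sum_m (-1)^{popcount(k & m)} v_m."""
    N = len(v)
    return np.array([sum((-1) ** bin(k & m).count("1") * v[m] for m in range(N))
                     for k in range(N)])

def embed(m, sel):
    """H^T m: place bit q of m at index position sel[q]."""
    return sum(((m >> q) & 1) << p for q, p in enumerate(sel))

def project(j, sel):
    """H j: read off the bits of j at the selected positions."""
    return sum(((j >> p) & 1) << q for q, p in enumerate(sel))

n, b = 4, 2
N, B = 1 << n, 1 << b
sel = [2, 3]                      # rows of H: pick the two high-order index bits

rng = np.random.default_rng(1)
x = rng.standard_normal(N)
X = wht(x)                        # full-length Hadamard spectrum

y = np.array([x[embed(m, sel)] for m in range(B)])   # the subsampled signal x_{H^T m}
Y = wht(y)                        # its B-point WHT

for k in range(B):
    alias = sum(X[j] for j in range(N) if project(j, sel) == k)
    assert np.isclose(Y[k], (B / N) * alias)         # aliasing, up to the B/N factor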
  • 23–28. Aliasing-induced bipartite graph
    - Downsampling induces an aliasing pattern; different downsamplings produce different patterns.
    - [Figure: two 4-point WHTs of differently downsampled time-domain signals define check nodes, connected to the Hadamard-domain variables they alias together.]
    - (A toy example of how a sparse support hashes differently under two downsamplings is given below.)
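A toy Python illustration of this point (my own example, with a hypothetical support set): the same non-zero coefficients fall into different check bins under two different bit selections.

from collections import defaultdict

n, b = 6, 2                      # N = 64 coefficients, B = 4 checks per branch
support = [3, 17, 40, 51]        # hypothetical non-zero Hadamard-domain indices

def project(j, sel):
    """Check index H j: read the bits of j at the selected positions."""
    return sum(((j >> p) & 1) << q for q, p in enumerate(sel))

for name, sel in [("H1 (low bits 0,1)", [0, 1]), ("H2 (high bits 4,5)", [4, 5])]:
    bins = defaultdict(list)
    for j in support:
        bins[project(j, sel)].append(j)
    # Checks holding one index are singletons; checks holding several are collisions.
    print(name, dict(bins))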
  • 29–35. Genie-aided peeling decoder
    - A genie tells us whether a check is connected to only one variable (a singleton) and, in that case, also gives the index of that variable.
    - Peeling decoder: 1. find a singleton check (in the example, X1, X8 and X11 are found in turn); 2. peel it off; 3. repeat until nothing is left. Success.
    - (A minimal peeling sketch follows below.)
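A minimal genie-aided peeling sketch in Python, assuming the bipartite graph is known exactly (that is the genie) and that each check holds the plain sum of its variables. The toy graph loosely mirrors the variables X1, X8, X11 named on the slide, but its connections and values are made up.

from collections import defaultdict

def peel(check_edges, check_sums):
    """check_edges: check -> set of variable ids; check_sums: check -> sum of their values."""
    edges = {c: set(v) for c, v in check_edges.items()}
    sums = dict(check_sums)
    recovered = {}
    progress = True
    while progress:
        progress = False
        for c, vars_c in edges.items():
            if len(vars_c) == 1:                 # singleton check
                v = next(iter(vars_c))
                recovered[v] = sums[c]
                # Peel v off every check it participates in.
                for c2, vars_c2 in edges.items():
                    if v in vars_c2:
                        vars_c2.discard(v)
                        sums[c2] -= recovered[v]
                progress = True
                break
    return recovered

# Toy instance: three variables and two branches ("A" and "B") of two checks each.
values = {1: 2.0, 8: -1.0, 11: 0.5}
graph = {("A", 0): {1, 8}, ("A", 1): {11}, ("B", 0): {1}, ("B", 1): {8, 11}}
sums = {c: sum(values[v] for v in vs) for c, vs in graph.items()}
print(peel(graph, sums))                         # recovers all three values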
  • 36–37. Hadamard property II: shift/modulation
    - Theorem (shift/modulation): given $p \in \mathbb{F}_2^n$,
      $x_{m+p} \;\xrightarrow{\mathrm{WHT}}\; X_k \, (-1)^{\langle p, k \rangle}.$
    - Consequence: the signal can be modulated in frequency by manipulating the time-domain samples. (Checked numerically below.)
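A quick numerical check of the property (a sketch; note that the index addition $m + p$ is over $\mathbb{F}_2^n$, i.e. bitwise XOR, and the brute-force wht helper is mine).

import numpy as np

def wht(v):
    N = len(v)
    return np.array([sum((-1) ** bin(k & m).count("1") * v[m] for m in range(N))
                     for k in range(N)])

n = 3
N = 1 << n
rng = np.random.default_rng(2)
x = rng.standard_normal(N)
p = 0b101                                    # the time-domain shift

x_shift = np.array([x[m ^ p] for m in range(N)])
X = wht(x)
X_mod = np.array([X[k] * (-1) ** bin(p & k).count("1") for k in range(N)])
assert np.allclose(wht(x_shift), X_mod)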
  • 38–44. How to construct the genie
    - Compare the non-modulated and modulated check values.
    - Collision (two variables connected to the same check):
      $\dfrac{X_i (-1)^{\langle p, i \rangle} + X_j (-1)^{\langle p, j \rangle}}{X_i + X_j} \neq \pm 1$
      under a mild assumption on the distribution of $X$.
    - Singleton (only one variable connected to the check):
      $\dfrac{X_i (-1)^{\langle p, i \rangle}}{X_i} = (-1)^{\langle p, i \rangle} = \pm 1$, so we learn $\langle p, i \rangle$.
    - $O(\log_2 \frac{N}{K})$ measurements are sufficient to recover the index $i$ (the dimension of the null space of the downsampling matrix $H$).
    - (The test is spelled out in code below.)
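The following Python sketch spells out the singleton test and index recovery for one check. For brevity it simulates the check values directly in the Hadamard domain rather than computing them from shifted, subsampled signals, and it uses one simple choice of shifts, $p = e_q$ for each unselected bit position; both simplifications are mine, not necessarily the paper's exact construction.

import numpy as np

n, b = 6, 2
N, B = 1 << n, 1 << b
sel = [0, 1]                                 # bits measured by H
unsel = [q for q in range(n) if q not in sel]

def project(j, sel):
    return sum(((j >> p) & 1) << q for q, p in enumerate(sel))

def check_value(X, k, p):
    """Value of check k of this branch under time-domain shift p (up to scaling)."""
    return sum(X[j] * (-1) ** bin(p & j).count("1")
               for j in range(N) if project(j, sel) == k)

X = np.zeros(N)
X[0b101011] = 1.7                            # a single coefficient, landing in bin k = 3

k = 3
u0 = check_value(X, k, 0)
ratios = [check_value(X, k, 1 << q) / u0 for q in unsel]

if all(np.isclose(abs(r), 1) for r in ratios):
    # Singleton: the bin index gives the bits at `sel`, the ratio signs give the rest.
    i = sum(((k >> q) & 1) << p for q, p in enumerate(sel))
    for r, q in zip(ratios, unsel):
        i |= (0 if r > 0 else 1) << q
    print("singleton at index", i, "value", u0)   # -> index 43 (= 0b101011), value 1.7
else:
    print("collision")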
  • 45–47. Sparse fast Hadamard transform
    - Algorithm:
      1. Set the number of checks per downsampling to $B = O(K)$.
      2. Choose $C$ downsampling matrices $H_1, \dots, H_C$.
      3. Compute $C(\log_2 \frac{N}{K} + 1)$ size-$K$ fast Hadamard transforms, each costing $O(K \log_2 K)$.
      4. Decode the non-zero coefficients with the peeling decoder.
    - Performance: time complexity $O(K \log_2 K \log_2 \frac{N}{K})$, sample complexity $O(K \log_2 \frac{N}{K})$.
    - Remaining questions: how to construct $H_1, \dots, H_C$, and what is the probability of success? (An end-to-end toy run of the four steps is sketched below.)
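Putting the pieces together, here is an end-to-end toy run of the four steps on a tiny example. This is my own illustrative implementation under the slides' assumptions (exactly sparse signal, deterministic block bit selections, shifts $p = e_q$ for the unselected bits), not the authors' released code; the support, values and tolerances are arbitrary.

import numpy as np

def fwht(v):
    """Unnormalized fast Walsh-Hadamard transform (butterfly, O(N log N))."""
    v = np.array(v, dtype=float)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            top, bot = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = top + bot, top - bot
        h *= 2
    return v

def parity(a):
    """Parity of the bits of a (the F_2 inner product is parity(i & p))."""
    return bin(a).count("1") & 1

def embed(m, sel):
    """H^T m: place bit q of m at index position sel[q]."""
    return sum(((m >> q) & 1) << p for q, p in enumerate(sel))

n, C = 6, 3
N = 1 << n
b = n // C                                  # very sparse regime: alpha = 1/C
B = 1 << b
branches = [list(range(c * b, (c + 1) * b)) for c in range(C)]   # bit selections

# Ground-truth K-sparse Hadamard spectrum (K = 4) and its time-domain signal.
X_true = np.zeros(N)
for i, v in [(3, 1.7), (7, -0.6), (38, 0.9), (60, 1.3)]:
    X_true[i] = v
x = fwht(X_true) / N                        # inverse WHT = forward WHT divided by N

# Steps 1-3: for every branch and shift, a B-point WHT of the shifted, subsampled x.
U = {}
for c, sel in enumerate(branches):
    for p in [0] + [1 << q for q in range(n) if q not in sel]:
        U[(c, p)] = fwht([x[embed(m, sel) ^ p] for m in range(B)])

# Step 4: peeling decoder with the ratio-based singleton test.
X_hat = np.zeros(N)
progress = True
while progress:
    progress = False
    for c, sel in enumerate(branches):
        unsel = [q for q in range(n) if q not in sel]
        for k in range(B):
            # Residual check values after removing already-recovered coefficients.
            res = {}
            for p in [0] + [1 << q for q in unsel]:
                r = U[(c, p)][k]
                for i in np.flatnonzero(X_hat):
                    if all(((int(i) >> s) & 1) == ((k >> q) & 1) for q, s in enumerate(sel)):
                        r -= (B / N) * X_hat[i] * (-1) ** parity(int(i) & p)
                res[p] = r
            u0 = res[0]
            if np.isclose(u0, 0):           # zeroton: nothing (left) in this bin
                continue
            ratios = [res[1 << q] / u0 for q in unsel]
            if not all(np.isclose(abs(r), 1) for r in ratios):
                continue                    # collision: postpone, peeling may clear it later
            i = sum(((k >> q) & 1) << s for q, s in enumerate(sel))
            for r, q in zip(ratios, unsel):
                i |= (0 if r > 0 else 1) << q
            X_hat[i] = u0 * N / B           # undo the B/N aliasing factor
            progress = True

assert np.allclose(X_hat, X_true)
print("recovered", int(np.count_nonzero(X_hat)), "non-zero coefficients")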
  • 48–53. Very sparse regime
    - Setting: $K = O(N^\alpha)$ with $0 < \alpha < 1/3$, uniformly random support; study the asymptotic probability of failure as $n \to \infty$.
    - Downsampling matrix construction: deterministic matrices $H_1, \dots, H_C$, achieving $\alpha = \frac{1}{C}$, i.e. $b = \frac{n}{C}$. (One concrete choice is sketched below.)
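One natural deterministic choice consistent with the slide is to let $H_c$ select the $c$-th block of $b = n/C$ index bits, so the $C$ selections partition the bits; whether this matches the paper's construction in every detail is an assumption on my part.

import numpy as np

def block_downsampling_matrices(n, C):
    """H_c = the c-th block of b = n/C rows of the n x n identity, over F_2."""
    b = n // C
    I = np.eye(n, dtype=int)
    return [I[c * b:(c + 1) * b, :] for c in range(C)]

for c, H in enumerate(block_downsampling_matrices(n=9, C=3)):
    print(f"H{c + 1} selects bits {list(range(c * 3, (c + 1) * 3))}")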
  • 54–67. Balls-and-bins model
    - Theorem: the uniformly random support model and the balls-and-bins model are equivalent. Proof: by construction, all rows of each $H_i$ are linearly independent.
    - The failure analysis therefore reduces to the analysis of LDPC-style peeling decoders:
      - efficient erasure-correcting code design (Luby et al. 2001);
      - FFAST, a sparse FFT algorithm (Pawar & Ramchandran 2013).
  • 68–77. Extension to the less-sparse regime
    - $K = O(N^\alpha)$ with $2/3 \le \alpha < 1$; the balls-and-bins model is no longer equivalent.
    - Let $\alpha = 1 - \frac{1}{C}$ and construct $H_1, \dots, H_C$ such that, by construction, $\mathcal{N}(H_i) \cap \mathcal{N}(H_j) = \{0\}$ for $i \neq j$. (A brute-force check of this property on a small example is given below.)
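A small brute-force check of the null-space property in Python. The matrices used here ($H_c$ keeps every identity row except the $c$-th block of $n/C$ bits, so $\mathcal{N}(H_c)$ is exactly that block of coordinates) are my own illustrative choice with the stated property; the paper's exact matrices may differ.

import itertools
import numpy as np

n, C = 6, 3
blk = n // C
I = np.eye(n, dtype=int)
Hs = [np.delete(I, list(range(c * blk, (c + 1) * blk)), axis=0) for c in range(C)]

def null_space_f2(H):
    """Brute-force null space of H over F_2 (fine for tiny n)."""
    return {tuple(v) for v in itertools.product([0, 1], repeat=H.shape[1])
            if not np.any(H.dot(v) % 2)}

nulls = [null_space_f2(H) for H in Hs]
zero = tuple([0] * n)
for i in range(C):
    for j in range(i + 1, C):
        assert nulls[i] & nulls[j] == {zero}
print("pairwise null-space intersections are trivial")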
  • 78. SparseFHT – probability of success. [Figure: empirical probability of success vs. $\alpha$ for $N = 2^{22}$.]
  • 79. SparseFHT vs. FHT. [Figure: runtime in µs vs. $\alpha$ for $N = 2^{15}$, comparing SparseFHT with the standard FHT.]
  • 80–81. Conclusion
    - Contribution: a sparse fast Hadamard algorithm with time complexity $O(K \log_2 K \log_2 \frac{N}{K})$, sample complexity $O(K \log_2 \frac{N}{K})$, and probability of success asymptotically equal to 1.
    - What's next? Investigate the noisy case.
  • 82. Thanks for your attention! Code and figures available at http://lcav.epfl.ch/page-99903.html
  • 83. References
    [1] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Efficient erasure correcting codes," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 569–584, 2001.
    [2] S. Pawar and K. Ramchandran, "Computing a k-sparse n-length discrete Fourier transform using at most 4k samples and O(k log k) complexity," arXiv preprint, cs.DS, 4 May 2013.