Recovering Lost Sensor Data through Compressed Sensing

Data loss in wireless sensing applications is inevitable, due both to communication impairments and to faulty sensors. We introduce an idea, based on Compressed Sensing (CS), that exploits knowledge of the signal model to recover lost sensor data. In particular, we show that if the signal to be acquired is compressible, CS can be used not only to reduce the acquisition rate but also to improve robustness to losses. This becomes possible because CS employs randomness within the sampling process, so to the receiver, lost data is virtually indistinguishable from randomly sampled data. To ensure performance, all that is required is that the sensor oversample the phenomenon at a rate proportional to the expected loss. In this talk, we give a brief introduction to Compressed Sensing and then illustrate the recovery mechanism we call CS Erasure Coding (CSEC). We show that CSEC is efficient for handling missing data in erasure (lossy) channels, that it parallels the performance of competitive coding schemes, and that it is also computationally cheaper. We support our proposal through extensive performance studies on real-world wireless channels.

  1. Recovering Lost Sensor Data through Compressed Sensing
  Zainul Charbiwala. Collaborators: Younghun Kim, Sadaf Zahedi, Supriyo Chakraborty, Ting He (IBM), Chatschik Bisdikian (IBM), Mani Srivastava
  2-11. The Big Picture
  Lossy communication link: sensor data is lost in transit. How do we recover from this loss?
  • Retransmit the lost packets
  • Proactively encode the data with some protection bits
  • Can we do something better?
  12-21. The Big Picture - Using Compressed Sensing
  Generate compressed measurements; recover from the received compressed measurements (CSEC). How does this work?
  • Use knowledge of the signal model and channel
  • CS uses randomized sampling/projections
  • Random losses look like additional randomness!
  The rest of this talk focuses on describing how, and how well, this works.
  22. Talk Outline
  ‣ A Quick Intro to Compressed Sensing
  ‣ CS Erasure Coding for Recovering Lost Sensor Data
  ‣ Evaluating CSEC's cost and performance
  ‣ Concluding Remarks
  23-24. Why Compressed Sensing?
  Conventional pipeline: Signal -> Physical Sampling -> Compression -> Communication -> Application
  CS pipeline: Signal -> Compressive Sampling -> Communication -> Decoding -> Application
  Shifts computation to a capable server.
  25-31. Transform Domain Analysis
  ‣ We usually acquire signals in the time or spatial domain
  ‣ By looking at the signal in another domain, it may be represented more compactly
  ‣ E.g., a sine wave can be expressed by 3 parameters: frequency, amplitude and phase
  ‣ Or, in this case, by the index of the FFT coefficient and its complex value
  ‣ A sine wave is sparse in the frequency domain
  32-38. Lossy Compression
  ‣ This is known as transform-domain compression
  ‣ The domain in which the signal can be most compactly represented depends on the signal
  ‣ The signal processing world has been coming up with suitable domains for many classes of signals
  ‣ A necessary property for transforms is invertibility
  ‣ It also helps if there are efficient algorithms to transform signals between domains
  ‣ But why is it called lossy compression?
  39-41. Lossy Compression
  ‣ When we transform the signal to the right domain, some coefficients stand out but lots will be near zero
  ‣ The top few coefficients describe the signal "well enough"
  42-45. Lossy Compression
  • JPEG (100%): 407462 bytes, ~2x gain
  • JPEG (10%): 7544 bytes, ~100x gain
  • JPEG (1%): 2942 bytes, ~260x gain
  46-49. Compressing a Sine Wave
  ‣ Assume we're interested in acquiring a single sine wave x(t) in a noiseless environment
  ‣ An infinite-duration sine wave can be expressed using three parameters: frequency f, amplitude a and phase φ
  ‣ Question: what's the best way to find the parameters?
  50-54. Compressing a Sine Wave
  ‣ Technically, to estimate three parameters one needs three good measurements
  ‣ Questions:
  ‣ What are "good" measurements?
  ‣ How do you estimate f, a, φ from three measurements?
  55-60. Compressed Sensing
  ‣ Take three samples z1, z2, z3 of the sine wave at times t1, t2, t3
  ‣ The unknowns (f, a, φ) span a 3D parameter space, and any solution must meet the three constraints:
  z_i = x(t_i) = a sin(2π f t_i + φ),  ∀i ∈ {1, 2, 3}
  ‣ The feasible solution space is much smaller than the full parameter space
  ‣ As the number of constraints grows (from more measurements), the feasible solution space shrinks
  ‣ Exhaustive search over this space reveals the right answer
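To make the exhaustive-search idea concrete, here is a minimal sketch (not from the talk) that grid-searches (f, a, φ) for the combination reproducing three samples of a sine wave; the sample times, grid ranges and resolutions are all assumptions of mine.

```python
import numpy as np

def sine(t, f, a, phi):
    return a * np.sin(2 * np.pi * f * t + phi)

# Three "good" measurements of an unknown sine wave (times chosen arbitrarily)
t = np.array([0.013, 0.271, 0.620])
z = sine(t, f=7.0, a=1.2, phi=0.9)      # ground truth: f=7, a=1.2, phi=0.9

# Exhaustive search: keep the grid point that best satisfies the constraints
best, best_err = None, np.inf
for f in np.arange(1.0, 20.0, 0.5):
    for a in np.arange(0.2, 3.0, 0.2):
        for phi in np.arange(0.0, np.pi, 0.1):
            err = np.sum((sine(t, f, a, phi) - z) ** 2)
            if err < best_err:
                best, best_err = (f, a, phi), err

print(best)   # ~ (7.0, 1.2, 0.9) at this grid resolution
```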
  61-66. Formulating the Problem
  ‣ We could also represent f, a and φ as a very long, but mostly empty, FFT coefficient vector: y = Ψx, where x is the sine wave (amplitude shown by color on the slide) and Ψ is the Fourier transform
  ‣ The non-zero coefficient sits at the bin for frequency f, and its complex value carries the amplitude a and phase φ
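A quick numerical illustration of this sparsity (my own toy example, with an integer number of cycles over the window so the FFT has exactly two non-zero bins, the +/- f pair):

```python
import numpy as np

# A pure sine sampled over a window containing an integer number of cycles
n, f, a, phi = 1024, 37, 1.5, 0.4
t = np.arange(n) / n
x = a * np.sin(2 * np.pi * f * t + phi)

# In the Fourier domain only the +/- f pair of bins is non-zero
y = np.fft.fft(x) / n
print(np.flatnonzero(np.abs(y) > 1e-9))   # -> [ 37 987]  (987 = n - f)
```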
  67-73. Sampling Matrix
  ‣ We could also write out the sampling process in matrix form: z = Φx, where Φ is k×n and produces k measurements
  ‣ For our three measurements, Φ has three non-zero entries at some "good" locations (one per row, each picking out one sample of x)
  74-76. Exhaustive Search
  ‣ Objective: find an estimate of the vector y that meets the constraints and is the most compact (i.e., sparsest) representation of x
  ‣ Our search is now guided by the fact that y is a sparse vector
  ‣ Rewriting the constraints: z = Φx and y = Ψx, so z = ΦΨ⁻¹y
  ‣ The search problem becomes:
  ŷ = arg min_ỹ ||ỹ||_0  s.t.  z = ΦΨ⁻¹ỹ,  where ||y||_0 = |{i : y_i ≠ 0}|
  ‣ This optimization problem is NP-Hard!
  77. l1 Minimization
  ‣ Approximate the l0 norm by the l1 norm:
  ŷ = arg min_ỹ ||ỹ||_1  s.t.  z = ΦΨ⁻¹ỹ,  where ||y||_1 = Σ_i |y_i|
  ‣ This problem can now be solved efficiently using linear programming techniques
  ‣ This approximation was not new
  ‣ The big leap in Compressed Sensing was a theorem showing that, under the right conditions, this approximation is exact!
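As a sketch of how the l1 problem maps onto a linear program (the standard split ỹ = u − v with u, v ≥ 0; the sizes and the Gaussian matrix are my choices, and A here stands in for ΦΨ⁻¹):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, z):
    """Solve min ||y||_1 s.t. A @ y = z as an LP via the split y = u - v."""
    k, n = A.shape
    c = np.ones(2 * n)                    # sum(u) + sum(v) = ||y||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=z, bounds=(0, None))
    return res.x[:n] - res.x[n:]

# Tiny demo: a 3-sparse vector recovered from 30 random projections
rng = np.random.default_rng(0)
n, k, s = 128, 30, 3
y_true = np.zeros(n)
y_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
A = rng.normal(0.0, 1.0 / np.sqrt(k), (k, n))
z = A @ y_true

y_hat = l1_recover(A, z)
print(np.allclose(y_hat, y_true, atol=1e-6))   # True for most seeds
```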
  78-79. The Restricted Isometry Property
  ‣ Rewrite the constraint z = ΦΨ⁻¹ỹ as z = Aỹ
  ‣ For any positive integer constant s, find the smallest δ_s such that
  (1 − δ_s) ||y||₂² ≤ ||Ay||₂² ≤ (1 + δ_s) ||y||₂²
  holds for all s-sparse vectors y (a vector is said to be s-sparse if it has at most s non-zero entries)
  ‣ The closer δ_s(A) is to 0, the better the matrix combination A is at capturing unique features of the signal
  80. CS Recovery Theorem
  Theorem [Candes-Romberg-Tao-05]: Assume δ_s(A) < √2 − 1 for some matrix A. Then the solution ŷ of the l1 minimization problem obeys
  ||ŷ − y||_1 ≤ C_0 ||y − y_s||_1
  ||ŷ − y||_2 ≤ (C_0 / √s) ||y − y_s||_1
  for some small positive constant C_0, where y_s approximates a non-sparse vector y by keeping only its s largest entries. If y is s-sparse, the reconstruction is exact.
  81-84. Gaussian Random Projections
  ‣ Φ Gaussian: entries are independent realizations of N(0, 1/n)
  ‣ Measurement model: z = Φ · Ψ⁻¹ (inverse Fourier transform) · y
  85. Bernoulli Random Projections
  ‣ Φ Bernoulli: entries are realizations of an equiprobable Bernoulli RV on {+1/√n, −1/√n}
  ‣ Same measurement model: z = Φ · Ψ⁻¹ (inverse Fourier transform) · y
  86. Uniform Random Sampling
  ‣ Select samples uniformly at random
  ‣ Same measurement model: z = Φ · Ψ⁻¹ (inverse Fourier transform) · y
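All three ensembles above are easy to construct; a minimal sketch (the sizes are mine, matching the 256×1024 experiments later in the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 1024, 256

# Gaussian: i.i.d. entries from N(0, 1/n)
Phi_gauss = rng.normal(0.0, np.sqrt(1.0 / n), (k, n))

# Bernoulli: equiprobable +/- 1/sqrt(n)
Phi_bern = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(n)

# Uniform random sampling: k rows of the n x n identity, i.e. keep
# k time-domain samples chosen uniformly at random
idx = np.sort(rng.choice(n, size=k, replace=False))
Phi_sample = np.eye(n)[idx]
```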
  87. Per-Module Energy Consumption on Mica
  [Figure 3: Power and duty cycle costs for Compressive Sensing versus Nyquist sampling with a local FFT; energy model parameters taken from the MicaZ [18].]
  ‣ FFT computation cost is higher than the transmission cost of each one-second block; this equates to the achievable duty cycle of the node, and lower values further improve overall energy efficiency
  ‣ The highest energy consumer in CS is the random number generator (randomized sampling draws built from uniform random numbers via an Irwin-Hall distribution)
  ‣ The ADC latency is clearly visible as the dominant component
  88-89. Compressive Sampling
  Traditional: signal x ∈ ℝⁿ -> physical sampling z = I_n x (time-domain samples) -> compression y = Ψz, with Ψ of size n×n (compressed-domain samples)
  CS: signal x ∈ ℝⁿ -> compressive sampling z = Φx, with Φ of size k×n, k < n (randomized measurements) -> decoding ŷ = arg min_ỹ ||ỹ||_1 s.t. z = ΦΨ⁻¹ỹ
  90-93. Handling Missing Data
  Traditional pipeline: z = I_n x, then compression y = Ψz (Ψ is n×n). When the communication channel is lossy, samples go missing:
  • Use retransmissions to recover lost data
  • Or, use error (erasure) correcting codes
  94-98. Handling Missing Data
  With channel coding, the compressed-domain samples are encoded before transmission and decoded at the receiver:
  w = Ωy (Ω is m×n, m > n),  w_l = Cw (C models the erasures),  ŷ = (CΩ)⁺ w_l
  • Compression is done at the application layer
  • Channel coding is done at the physical layer, so it can't exploit signal characteristics
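A toy sketch of this linear channel-coding model (the random Ω and the 20% erasure rate are my assumptions): encode with a tall matrix, erase rows, decode with the pseudoinverse, exactly as ŷ = (CΩ)⁺ w_l above.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 64, 96                     # rate-n/m linear erasure code (m > n)
Omega = rng.normal(size=(m, n))   # encoding matrix

y = rng.normal(size=n)            # compressed-domain samples to protect
w = Omega @ y                     # encoded block

keep = rng.random(m) > 0.2        # channel erases ~20% of the symbols
y_hat = np.linalg.pinv(Omega[keep]) @ w[keep]   # (C Omega)^+ w_l
print(np.allclose(y_hat, y))      # True as long as >= n symbols survive
```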
  99-101. CS Erasure Coding
  Over-sampling in CS is erasure coding!
  Plain CS: z = Φx with Φ of size k×n (k < n); the channel delivers z_l = Cz; decode ŷ = arg min_ỹ ||ỹ||_1 s.t. z_l = CΦΨ⁻¹ỹ
  CSEC: the same system with Φ of size m×n, k < m < n, so that after losses roughly k measurements still arrive and the same decoder succeeds
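A minimal end-to-end CSEC sketch under my own assumptions (memoryless erasures, sparsity in the canonical basis so ΦΨ⁻¹ collapses to a random matrix, and the linprog-based l1 decoder from the earlier slide):

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, z):
    k, n = A.shape
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=z,
                  bounds=(0, None))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(2)
n, k, s, loss = 256, 60, 4, 0.2
m = int(np.ceil(k / (1 - loss)))        # oversample by the expected loss

y_true = np.zeros(n)
y_true[rng.choice(n, s, replace=False)] = 1.0
Phi = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # m x n, with k < m < n
z = Phi @ y_true                        # oversampled measurements

survived = rng.random(m) > loss         # memoryless erasure channel
# To the decoder, the surviving rows are just a smaller random matrix
y_hat = l1_recover(Phi[survived], z[survived])
print(np.allclose(y_hat, y_true, atol=1e-6))
```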
  102-103. Features of CS Erasure Coding
  ‣ No need for an additional channel coding block: redundancy is achieved by oversampling
  ‣ Recovery is resilient to incorrect channel estimates; traditional channel coding fails if redundancy is inadequate
  ‣ Decoding is free if CS was used for compression anyway
  ‣ Intuition: channel coding spreads information out over measurements; compression (source coding) compacts information into few measurements; CSEC spreads information while compacting!
  104-109. Effects of Missing Samples on CS
  z = Φx: missing samples at the receiver are the same as missing rows in the sampling matrix.
  What happens if we over-sample?
  • Can we recover the lost data?
  • How much over-sampling is needed?
  110. Some CS Results
  ‣ Theorem [Rudelson06]: If k samples of a length-n signal are acquired uniformly at random (each sample equiprobable) and reconstruction is performed in the Fourier basis, then w.h.p.
  s ≤ C′ · k / log⁴(n)
  ‣ where s is the sparsity of the signal
  111. Extending CS Results
  ‣ Claim [Charbiwala10]: When m > k samples are acquired uniformly at random and communicated through a memoryless binary erasure channel that drops m − k samples, the received k samples are still equiprobable
  ‣ This implies the bound on the sparsity condition should still hold
  ‣ If the bound is tight, the required over-sampling rate (m − k) equals the loss rate
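The claim is easy to check empirically; a sketch (all sizes are my choices): sample m of n positions uniformly, erase each independently, and confirm that every position survives with the same probability (m/n)(1 − loss).

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, loss, trials = 256, 80, 0.2, 20000

counts = np.zeros(n)
for _ in range(trials):
    pos = rng.choice(n, m, replace=False)    # uniform random sampling
    survived = pos[rng.random(m) > loss]     # memoryless binary erasures
    counts[survived] += 1

# Every position should survive with probability (m/n) * (1 - loss) = 0.25
p = counts / trials
print(p.mean(), p.std())   # ~0.25 with a small spread
```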
  112-114. Evaluating the RIP
  Pipeline: create the CS sampling + domain matrix A = Φ · Ψ⁻¹ (inverse Fourier transform) -> simulate the channel -> compute the RIP constant of the received matrix A′ = C · Φ · Ψ⁻¹
  10³ instances, size 256×1024
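Since computing the exact RIP constant is combinatorial, a Monte Carlo lower bound over random s-sparse vectors is the practical stand-in; a sketch at the slide's 256×1024 size (s and the trial count are my choices):

```python
import numpy as np

def rip_constant_mc(A, s, trials=1000, seed=3):
    """Monte Carlo lower bound on delta_s: max | ||Ay||^2 - 1 | over
    random unit-norm s-sparse vectors y."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    delta = 0.0
    for _ in range(trials):
        y = np.zeros(n)
        idx = rng.choice(n, s, replace=False)
        y[idx] = rng.normal(size=s)
        y /= np.linalg.norm(y)
        delta = max(delta, abs(np.linalg.norm(A @ y) ** 2 - 1.0))
    return delta

k, n, s = 256, 1024, 8
A = np.random.default_rng(4).normal(0.0, 1.0 / np.sqrt(k), (k, n))
print(rip_constant_mc(A, s))   # estimate for the no-loss baseline
```

To mimic the lossy case on these slides, one would drop rows of A (i.e., form A′ = C · A) before calling rip_constant_mc.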
  115-117. RIP Verification in Memoryless Channels
  Fourier basis, random sampling (shading: min - max across instances):
  • Baseline performance, no loss
  • 20% loss: increase in RIP constant
  • 20% oversampling: RIP constant recovers
  118-121. RIP Verification in Bursty Channels
  Fourier basis, random sampling (shading: min - max across instances):
  • Baseline performance, no loss
  • 20% loss: increase in RIP constant and large variation
  • 20% oversampling: RIP constant reduces but doesn't recover
  • Oversampling + interleaving: RIP constant recovers
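A minimal block-interleaver sketch (the depth and block size are my choices): write samples row-wise into a depth×width array and read column-wise, so a burst of consecutive channel erasures lands on samples that are far apart in the original stream.

```python
import numpy as np

def interleave(z, depth):
    # assumes len(z) is a multiple of depth
    return z.reshape(depth, -1).T.ravel()

def deinterleave(zi, depth):
    return zi.reshape(-1, depth).T.ravel()

z = np.arange(12)
zi = interleave(z, 3)           # [0 4 8 1 5 9 2 6 10 3 7 11]
print(zi[2:6])                  # a 4-long burst hits samples [8 1 5 9]
print(np.array_equal(deinterleave(zi, 3), z))   # round trip -> True
```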
  122. Signal Recovery Performance Evaluation
  Pipeline: create signal -> CS sampling -> interleave samples -> lossy channel -> CS recovery -> reconstruction error?
  123-126. In Memoryless Channels
  • Baseline performance, no loss
  • 20% loss: drop in recovery probability
  • 20% oversampling: complete recovery
  • Less than 20% oversampling: recovery does not fail completely
  127-132. In Bursty Channels
  • Baseline performance, no loss
  • 20% loss: drop in recovery probability
  • 20% oversampling: doesn't recover completely (worse than baseline at low sparsity, better than baseline at high sparsity)
  • Oversampling + interleaving: still incomplete recovery
  ‣ Recovery is incomplete because of low interleaving depth
  ‣ Recovery is better at high sparsity because bursty channels deliver bigger packets on average, but with higher variance
  133-136. In Real 802.15.4 Channel
  • Baseline performance, no loss
  • 20% loss: drop in recovery probability
  • 20% oversampling: complete recovery
  • Less than 20% oversampling: recovery does not fail completely
  137-138. Cost of CSEC
  [Bar chart: energy per block (mJ, 0-5), broken down into Rnd, ADC, FFT, Radio TX and RS components, for six schemes: Sense-and-Send (S-n-S, m=256); Sense, Compress (FFT) and Send (C-n-S, m=10); CS at 1/4th rate (m=64); S-n-S with Reed-Solomon (k=320); C-n-S with RS (k=16); CSEC (k=80).]
  139. Summary
  ‣ Oversampling is a valid erasure coding strategy for compressive reconstruction
  ‣ For binary erasure channels, an oversampling rate equal to the loss rate is sufficient (empirical)
  ‣ CS erasure coding can be rate-less, like fountain codes, allowing adaptation to varying channel conditions
  ‣ It can be computationally more efficient than traditional erasure codes
  140. Closing Remarks
  ‣ CSEC spreads information out while compacting
  ‣ No free lunch: the data rate requirement is higher than when good source and channel codes are used independently, but then the computation cost of that approach is higher too
  ‣ CSEC requires knowledge of the signal model; if the signal is non-stationary, the model needs to be updated during recovery, which can also be done via over-sampling
  ‣ CSEC requires knowledge of channel conditions; CS streaming with feedback can provide this
  141. Thank You
