Compressive Sensing

Gabriel Peyré
www.numerical-tours.com
Overview


•Compressive Sensing Acquisition

•Theoretical Guarantees

•Fourier Domain Measurements

•Parameter Selection
Single Pixel Camera (Rice)

[Photo: the single pixel camera hardware, imaging a scene $\tilde f$.]

$$y[i] = \langle \tilde f, \varphi_i \rangle$$

$P$ measurements, $N$ micro-mirrors.

[Figure: reconstructions for $P/N = 1$, $P/N = 0.16$, $P/N = 0.02$.]
CS Hardware Model

CS is about designing hardware: input signals $\tilde f \in L^2(\mathbb{R}^2)$.
Physical hardware resolution limit: target resolution $f \in \mathbb{R}^N$.

$$\tilde f \in L^2 \;\xrightarrow{\text{array resolution}}\; f \in \mathbb{R}^N \;\xrightarrow{\text{micro-mirrors } K}\; y \in \mathbb{R}^P$$

CS hardware: the operator $K$ correlates $f$ with a sequence of random mirror patterns.

[Figure: example mirror patterns and the resulting operator $K$ applied to $f$.]
Sparse CS Recovery

$f_0 \in \mathbb{R}^N$ sparse in an ortho-basis $\Psi$: $f_0 = \Psi x_0$, $x_0 \in \mathbb{R}^N$.

(Discretized) sampling acquisition:
$$y = K f_0 + w = K \Psi(x_0) + w = \Phi x_0 + w$$

$K$ drawn from the Gaussian matrix ensemble:
$$K_{i,j} \sim \mathcal{N}(0, P^{-1/2}) \quad \text{i.i.d.}$$
$\Phi = K\Psi$ is then also drawn from the Gaussian matrix ensemble (since $\Psi$ is orthogonal).

Sparse recovery:
$$\min_{\|\Phi x - y\| \le \|w\|} \|x\|_1$$
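A minimal numerical sketch of this recovery: it solves the Lagrangian proxy $\min_x \frac{1}{2}\|\Phi x - y\|^2 + \lambda\|x\|_1$ of the constrained program above by iterative soft thresholding (ISTA); the sizes $N, P, k$ and the value of $\lambda$ are illustrative, not from the slides.

```python
# Sketch: sparse recovery from Gaussian CS measurements via ISTA
# (Lagrangian form of the slide's constrained l1 program).
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(Phi, y, lam, n_iter=1000):
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - Phi.T @ (Phi @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
N, P, k = 400, 100, 10                       # illustrative sizes
Phi = rng.normal(0.0, P ** -0.5, (P, N))     # K_{i,j} ~ N(0, P^{-1/2}) i.i.d.
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
y = Phi @ x0 + 0.01 * rng.normal(size=P)     # noisy measurements
x_rec = ista(Phi, y, lam=0.01)
print("relative error:", np.linalg.norm(x_rec - x0) / np.linalg.norm(x0))
```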
CS Simulation Example

[Figure: original $f_0$ and its CS reconstruction.]

$\Psi$ = translation invariant wavelet frame.
Overview


•Compressive Sensing Acquisition

•Theoretical Guarantees

•Fourier Domain Measurements

•Parameter Selection
CS with RIP

$\ell^1$ recovery:
$$x^\star \in \operatorname*{argmin}_{\|\Phi x - y\| \le \|w\|} \|x\|_1 \qquad \text{where } y = \Phi x_0 + w$$

Restricted Isometry Constants:
$$\forall\, \|x\|_0 \le k, \quad (1-\delta_k)\|x\|^2 \le \|\Phi x\|^2 \le (1+\delta_k)\|x\|^2$$

Theorem: If $\delta_{2k} \le \sqrt{2} - 1$, then [Candès 2009]
$$\|x_0 - x^\star\| \le \frac{C_0}{\sqrt{k}}\|x_0 - x_k\|_1 + C_1 \|w\|$$
where $x_k$ is the best $k$-term approximation of $x_0$.
Singular Values Distributions

Eigenvalues of $\Phi_I^* \Phi_I$ with $|I| = k$ are essentially in $[a, b]$:
$$a = (1 - \sqrt{\beta})^2 \quad \text{and} \quad b = (1 + \sqrt{\beta})^2 \qquad \text{where } \beta = k/P$$

When $k = \beta P \to +\infty$, the eigenvalue distribution tends to
$$f(\lambda) = \frac{1}{2\pi \beta \lambda}\sqrt{(b - \lambda)_+ (\lambda - a)_+} \qquad \text{[Marchenko-Pastur]}$$

[Figure: empirical eigenvalue distributions and the limit density $f(\lambda)$ for $P = 200$ and $k = 10, 30, 50$.]

Large deviation inequality [Ledoux].

Theorem: If $k \le \dfrac{C}{\log(N/P)}\, P$, then $\delta_{2k} \le \sqrt{2} - 1$ with high probability.
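A quick empirical check of the claim above, with the $P = 200$, $k = 30$ case from the plots: eigenvalues of $\Phi_I^* \Phi_I$ for random Gaussian submatrices should concentrate in $[(1-\sqrt{\beta})^2, (1+\sqrt{\beta})^2]$. The trial count is illustrative.

```python
# Sketch: eigenvalues of Phi_I^* Phi_I vs. the Marchenko-Pastur support [a, b].
import numpy as np

rng = np.random.default_rng(0)
P, k, trials = 200, 30, 200
beta = k / P
a, b = (1 - beta ** 0.5) ** 2, (1 + beta ** 0.5) ** 2
eigs = []
for _ in range(trials):
    Phi_I = rng.normal(0.0, P ** -0.5, (P, k))   # k random Gaussian columns
    eigs.extend(np.linalg.eigvalsh(Phi_I.T @ Phi_I))
eigs = np.array(eigs)
print(f"predicted [a, b] = [{a:.3f}, {b:.3f}]")
print(f"observed range   = [{eigs.min():.3f}, {eigs.max():.3f}]")
```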
Numerics with RIP

Stability constants of $A$:
$$(1 - \delta_1(A))\|\alpha\|^2 \le \|A\alpha\|^2 \le (1 + \delta_2(A))\|\alpha\|^2$$
$\delta_1, \delta_2$: deviations of the smallest / largest eigenvalues of $A^* A$ from 1.

Upper/lower RIC:
$$\delta_k^i = \max_{|I| = k} \delta_i(\Phi_I), \qquad \delta_k = \min(\delta_k^1, \delta_k^2)$$

Monte-Carlo estimation: $\hat\delta_k \le \delta_k$.

[Figure: estimated constants $\hat\delta_{2k}$ as a function of $k$, compared to the $\sqrt{2} - 1$ threshold; $N = 4000$, $P = 1000$.]
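A sketch of this Monte-Carlo estimation, assuming supports $I$ are drawn uniformly at random: since only a few of the $\binom{N}{k}$ submatrices are visited, the estimates are lower bounds $\hat\delta_k \le \delta_k$. The trial count is illustrative.

```python
# Sketch: Monte-Carlo lower bounds on the upper/lower RICs of a Gaussian Phi.
import numpy as np

def ric_estimate(Phi, k, trials, rng):
    P, N = Phi.shape
    d1 = d2 = 0.0
    for _ in range(trials):
        I = rng.choice(N, k, replace=False)              # random support |I| = k
        s = np.linalg.svd(Phi[:, I], compute_uv=False)   # singular values of Phi_I
        d1 = max(d1, 1.0 - s[-1] ** 2)                   # lower stability constant
        d2 = max(d2, s[0] ** 2 - 1.0)                    # upper stability constant
    return d1, d2

rng = np.random.default_rng(0)
P, N = 1000, 4000                                        # sizes as in the figure
Phi = rng.normal(0.0, P ** -0.5, (P, N))
print(ric_estimate(Phi, k=10, trials=100, rng=rng))
```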
Polytope Noiseless Recovery

Counting faces of random polytopes: [Donoho]
All $x_0$ such that $\|x_0\|_0 \le C_{\text{all}}(P/N)\, P$ are identifiable.
Most $x_0$ such that $\|x_0\|_0 \le C_{\text{most}}(P/N)\, P$ are identifiable.

$C_{\text{all}}(1/4) \approx 0.065$
$C_{\text{most}}(1/4) \approx 0.25$

Sharp constants.
No noise robustness.
Computation of "pathological" signals [Dossal, P, Fadili, 2010].

[Figure: "All" and "Most" phase-transition curves compared to the RIP bound, for sparsity levels up to 400.]
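The "most $x_0$" counts can be probed empirically. A sketch with small, illustrative sizes: draw $k$-sparse $x_0$, solve the noiseless basis pursuit $\min \|x\|_1$ s.t. $\Phi x = y$ as a linear program, and record the fraction of exact recoveries as $k$ grows.

```python
# Sketch: empirical identifiability via basis pursuit (LP formulation).
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    # split x = u - v with u, v >= 0; minimize sum(u) + sum(v) s.t. Phi(u - v) = y
    P, N = Phi.shape
    res = linprog(c=np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]), b_eq=y,
                  bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]

rng = np.random.default_rng(0)
P, N, trials = 50, 200, 20                   # small sizes to keep the LPs fast
for k in (2, 5, 10, 15):
    successes = 0
    for _ in range(trials):
        Phi = rng.normal(0.0, P ** -0.5, (P, N))
        x0 = np.zeros(N)
        x0[rng.choice(N, k, replace=False)] = rng.normal(size=k)
        x = basis_pursuit(Phi, Phi @ x0)
        successes += np.linalg.norm(x - x0) < 1e-4
    print(f"k = {k:2d}: recovery rate {successes / trials:.2f}")
```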
Overview


•Compressive Sensing Acquisition

•Theoretical Guarantees

•Fourier Domain Measurements

•Parameter Selection
Tomography and Fourier Measures

[Figure: projections $p_\theta$ of $f$, and the 2-D spectrum $\hat f = \text{FFT2}(f)$ sampled along rays at angles $\theta_k$.]

Fourier slice theorem:
$$\hat p_\theta(\omega) = \hat f(\omega \cos\theta, \omega \sin\theta) \qquad \text{(1-D vs. 2-D Fourier transforms)}$$

Partial Fourier measurements: $\{p_{\theta_k}(t)\}_{0 \le k < K}$, $t \in \mathbb{R}$.
Equivalent to:
$$Kf = (\hat f[\omega])_{\omega \in \Omega}$$
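A sketch of this measurement operator, assuming a uniformly random frequency mask rather than the radial tomography pattern of the figure: with the orthonormal FFT, the adjoint is zero-filling followed by an inverse FFT.

```python
# Sketch: partial Fourier measurements Kf = (hat f[omega])_{omega in Omega}.
import numpy as np

def K(f, mask):
    # keep the 2-D Fourier coefficients selected by the boolean mask Omega
    return np.fft.fft2(f, norm="ortho")[mask]

def K_adj(y, mask):
    # adjoint: zero-fill the unobserved frequencies, inverse FFT
    F = np.zeros(mask.shape, dtype=complex)
    F[mask] = y
    return np.fft.ifft2(F, norm="ortho")

rng = np.random.default_rng(0)
n = 64
mask = np.zeros((n, n), dtype=bool)
mask.flat[rng.choice(n * n, n * n // 8, replace=False)] = True   # |Omega| = N/8
f = rng.normal(size=(n, n))
y = K(f, mask)
# adjoint consistency: <Kf, Kf> should equal <f, K*(Kf)>
print(np.vdot(y, y).real, np.vdot(f, K_adj(y, mask)).real)
```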
Regularized Inversion

Noisy measurements: $\forall\, \omega \in \Omega$, $y[\omega] = \hat f_0[\omega] + w[\omega]$.
Noise: $w[\omega] \sim \mathcal{N}(0, \sigma)$, white noise.

$\ell^1$ regularization:
$$f^\star = \operatorname*{argmin}_f \; \frac{1}{2} \sum_{\omega \in \Omega} |y[\omega] - \hat f[\omega]|^2 + \lambda \sum_m |\langle f, \psi_m \rangle|$$
MRI Imaging

From [Lustig et al.]

MRI Reconstruction

From [Lustig et al.]
Fourier sub-sampling pattern: randomization.

[Figure: High resolution | Low resolution | Linear | Sparsity.]
Structured Measurements

Gaussian matrices: intractable for large $N$.

Random partial orthogonal matrix: $\{\varphi_\omega\}$ orthogonal basis,
$$Kf = (\langle \varphi_\omega, f \rangle)_{\omega \in \Omega} \qquad \text{where } |\Omega| = P \text{ uniformly random.}$$

Fast measurements: (e.g. Fourier basis)

Mutual incoherence:
$$\mu = \sqrt{N} \max_{\omega, m} |\langle \varphi_\omega, \psi_m \rangle| \in [1, \sqrt{N}]$$

Theorem: with high probability on $\Omega$, for $\Phi = K\Psi$:
$$\text{If } k \le \frac{C\, P}{\mu^2 \log(N)^4}, \text{ then } \delta_{2k} \le \sqrt{2} - 1. \qquad \text{[Rudelson, Vershynin, 2006]}$$

Not universal: requires incoherence.
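A sketch computing the mutual incoherence for the most favorable pair in the range above: the 1-D Fourier basis (measurements) against the Dirac canonical basis (sparsity), for which $\mu = 1$.

```python
# Sketch: mutual incoherence mu = sqrt(N) * max |<phi_omega, psi_m>|
# for the Fourier basis vs. the canonical basis (mu = 1, the best case).
import numpy as np

N = 256
F = np.fft.fft(np.eye(N), norm="ortho")      # rows = orthonormal Fourier vectors
Psi = np.eye(N)                              # Dirac sparsity basis
mu = np.sqrt(N) * np.abs(F @ Psi).max()
print(mu)                                    # ~ 1.0
```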
Overview


•Compressive Sensing Acquisition

•Theoretical Guarantees

•Fourier Domain Measurements

•Parameter Selection
Risk Minimization

Estimator: e.g.
$$x^\star(y, \lambda) \in \operatorname*{argmin}_x \frac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$$
$\lambda > 0$ a regularization parameter.

How to choose the value of the parameter $\lambda$?

Risk-based selection of $\lambda$: average to measure the expected quality of $x^\star(y, \lambda)$ w.r.t. $x_0$:
$$R(\lambda) = \mathbb{E}_w\big(\|x^\star(y, \lambda) - x_0\|^2\big)$$

The optimal (theoretical) $\lambda$ minimizes the risk: $\lambda^\star(y) = \operatorname*{argmin}_\lambda R(\lambda)$.
Plug-in estimator: $x^\star(y, \lambda^\star(y))$.

But:
$\mathbb{E}_w$ is not accessible → use one observation.
$x_0$ is not accessible → needs risk estimators.
Can we estimate the risk solely from $x^\star(y, \lambda)$?
Prediction Risk Estimation

Prediction: $\mu_\lambda(y) = \Phi\, x^\star(y, \lambda)$.

Sensitivity analysis: if $\mu_\lambda$ is weakly differentiable,
$$\mu_\lambda(y + \delta) = \mu_\lambda(y) + \partial \mu_\lambda(y) \cdot \delta + O(\|\delta\|^2)$$

Stein Unbiased Risk Estimator:
$$\text{SURE}_\lambda(y) = \|y - \mu_\lambda(y)\|^2 - \sigma^2 P + 2\sigma^2\, \text{df}_\lambda(y)$$
$$\text{df}_\lambda(y) = \operatorname{tr}(\partial \mu_\lambda(y)) = \operatorname{div}(\mu_\lambda)(y)$$

Theorem: [Stein, 1981]
$$\mathbb{E}_w\big(\text{SURE}_\lambda(y)\big) = \mathbb{E}_w\big(\|\Phi x_0 - \mu_\lambda(y)\|^2\big)$$

Other estimators: GCV, BIC, AIC, ...
Generalized SURE: estimate $\mathbb{E}_w\big(\|P_{\ker(\Phi)^\perp}(x_0 - x^\star(y, \lambda))\|^2\big)$
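A sketch of SURE in action, assuming the denoising case $\Phi = \mathrm{Id}$ with soft thresholding as the estimator, and the divergence estimated by a single finite-difference probe (in practice several probes are averaged, per the law of large numbers):

```python
# Sketch: SURE_lambda(y) = ||y - mu(y)||^2 - sigma^2 P + 2 sigma^2 df(y),
# with df(y) estimated by a Monte-Carlo directional derivative.
import numpy as np

def mu(y, lam):
    # toy weakly differentiable estimator: soft thresholding (Phi = Id)
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def sure(y, lam, sigma, rng, eps=1e-4):
    delta = rng.normal(size=y.size)                      # random probe direction
    df = delta @ (mu(y + eps * delta, lam) - mu(y, lam)) / eps
    return np.sum((y - mu(y, lam)) ** 2) - sigma ** 2 * y.size + 2 * sigma ** 2 * df

rng = np.random.default_rng(0)
sigma = 0.1
x0 = np.zeros(1000); x0[:50] = 1.0
y = x0 + sigma * rng.normal(size=x0.size)
lams = np.linspace(0.05, 0.5, 10)
best = min(lams, key=lambda lam: sure(y, lam, sigma, rng))
print("lambda selected by SURE:", best)
```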
Computation for L1 Regularization

Sparse estimator:
$$x^\star(y, \lambda) \in \operatorname*{argmin}_x \frac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$$

Theorem: for all $y$, there exists a solution $x^\star$ such that $\Phi_I$ is injective ($I$ the support of $x^\star$), and [Dossal et al. 2011]
$$\text{df}_\lambda(y) = \operatorname{div}(\Phi x^\star)(y) = \|x^\star\|_0$$
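With this theorem, SURE for the lasso needs no divergence probes: $\text{df}$ is just the support size. A sketch reusing an ISTA solver, with illustrative sizes and $\lambda$ values (none of the constants are from the slides):

```python
# Sketch: choose lambda for the lasso by SURE with df = ||x*||_0.
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso(Phi, y, lam, n_iter=2000):
    L = np.linalg.norm(Phi, 2) ** 2
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = soft(x - Phi.T @ (Phi @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
P, N, sigma = 100, 400, 0.05
Phi = rng.normal(0.0, P ** -0.5, (P, N))
x0 = np.zeros(N); x0[rng.choice(N, 10, replace=False)] = 1.0
y = Phi @ x0 + sigma * rng.normal(size=P)
for lam in (0.005, 0.01, 0.02, 0.05):
    x = lasso(Phi, y, lam)
    df = np.count_nonzero(np.abs(x) > 1e-8)              # df = ||x*||_0
    s = np.sum((y - Phi @ x) ** 2) - sigma ** 2 * P + 2 * sigma ** 2 * df
    print(f"lambda = {lam}: df = {df}, SURE = {s:.4f}")
```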
[Figure: compressed sensing using multi-scale wavelet thresholding. $\Phi \in \mathbb{R}^{P \times N}$ a realization of a random vector, $P = N/4$; $\Psi$: translation invariant (TI) wavelets. Panels: (a) observations $y$; (b), (d) $x^\star(y, \lambda)$ at the optimal $\lambda$; (c) $x_{ML}$. Plot: quadratic loss as a function of the regularization parameter $\lambda$, comparing the projection risk, GSURE, and the true risk.]
Anisotropic Total-Variation

Extension to $\ell^1$ analysis regularization and TV. [Vaiter et al. 2012]

Finite differences gradient: $D = [\partial_1, \partial_2]$.

GSURE computation: for any $z \in \mathbb{R}^P$, $\nu = \nu(z)$ solves the following linear system
$$\begin{pmatrix} \Phi^* \Phi & D_J \\ D_J^* & 0 \end{pmatrix} \begin{pmatrix} \nu \\ \tilde\nu \end{pmatrix} = \begin{pmatrix} \Phi^* z \\ 0 \end{pmatrix}$$

In practice, by the law of large numbers, the expectation is replaced by an empirical mean, and $\nu(z)$ is computed by solving the linear system with a conjugate gradient solver.

Numerical example: super-resolution using (anisotropic) total variation; $\Phi$: vertical sub-sampling.

[Figure: (a) observations $y$; (b) $x^\star(y, \lambda)$ at the optimal $\lambda$. Plot: quadratic loss as a function of $\lambda$, comparing the projection risk, GSURE, and the true risk.]
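A sketch of the finite differences gradient $D = [\partial_1, \partial_2]$ and its adjoint (a negative discrete divergence), the pair needed for anisotropic TV $\|Df\|_1$ and for conjugate gradient solves like the one above; the boundary handling (forward differences, zero at the border) is an assumption, not taken from the slides.

```python
# Sketch: forward-difference gradient D and its adjoint D^* for anisotropic TV.
import numpy as np

def grad(f):
    # D: R^{n x n} -> R^{n x n x 2}, forward differences, zero at the boundary
    g = np.zeros(f.shape + (2,))
    g[:-1, :, 0] = f[1:, :] - f[:-1, :]      # vertical differences (d1)
    g[:, :-1, 1] = f[:, 1:] - f[:, :-1]      # horizontal differences (d2)
    return g

def grad_adj(g):
    # D^*: adjoint of grad (negative discrete divergence)
    f = np.zeros(g.shape[:2])
    f[:-1, :] -= g[:-1, :, 0]; f[1:, :] += g[:-1, :, 0]
    f[:, :-1] -= g[:, :-1, 1]; f[:, 1:] += g[:, :-1, 1]
    return f

rng = np.random.default_rng(0)
f = rng.normal(size=(16, 16))
g = rng.normal(size=(16, 16, 2))
# adjoint check: <D f, g> = <f, D^* g>
print(np.sum(grad(f) * g), np.sum(f * grad_adj(g)))
```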
Conclusion

Sparsity: approximate signals with few atoms in a dictionary.

Compressed sensing ideas:
Randomized sensors + sparse recovery.
Number of measurements ~ signal complexity.
CS is about designing new hardware.

The devil is in the constants:
Worst-case analysis is problematic.
Designing good signal models.
Best Digital Marketing Institute In NOIDABest Digital Marketing Institute In NOIDA
Best Digital Marketing Institute In NOIDA
 
Digital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments UnitDigital Artifact 1 - 10VCD Environments Unit
Digital Artifact 1 - 10VCD Environments Unit
 
A Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptxA Survey of Techniques for Maximizing LLM Performance.pptx
A Survey of Techniques for Maximizing LLM Performance.pptx
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
 

Compressed Sensing and Tomography

  • 1. Compressive Sensing Gabriel Peyré www.numerical-tours.com
• 2. Overview: •Compressive Sensing Acquisition •Theoretical Guarantees •Fourier Domain Measurements •Parameter Selection
• 3. Single Pixel Camera (Rice): input signal $\tilde f$.
• 4. Single Pixel Camera (Rice): measurements $y[i] = \langle \tilde f, \varphi_i \rangle$; $P$ measures from $N$ micro-mirrors.
• 5. Single Pixel Camera (Rice): reconstructions at $P/N = 1$, $P/N = 0.16$, $P/N = 0.02$.
• 6. CS Hardware Model: CS is about designing hardware; input signals $\tilde f \in L^2(\mathbb{R}^2)$. Physical hardware resolution limit: target resolution $f \in \mathbb{R}^N$. Pipeline: $\tilde f \in L^2 \to$ (array resolution) $\to f \in \mathbb{R}^N \to$ (micro-mirrors, $K$) $\to y \in \mathbb{R}^P$; the CS hardware implements $K$.
• 7. CS Hardware Model: same pipeline, illustrating the measurement patterns row by row.
• 8. CS Hardware Model: the measurement patterns define the operator $K$ applied to $f$.
• 9. Sparse CS Recovery: $f_0 \in \mathbb{R}^N$ sparse in an ortho-basis $\Psi$, $f_0 = \Psi x_0$ with coefficients $x_0 \in \mathbb{R}^N$.
• 10. (Discretized) sampling acquisition: $y = K f_0 + w = K \Psi(x_0) + w = \Phi x_0 + w$.
• 11. $K$ drawn from the Gaussian matrix ensemble, $K_{i,j} \sim \mathcal{N}(0, P^{-1/2})$ i.i.d.; then $\Phi = K \Psi$ is also drawn from the Gaussian matrix ensemble.
• 12. Sparse recovery: $\min_{\|\Phi x - y\| \le \|w\|} \|x\|_1$.
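A minimal numpy sketch of this acquisition model (an illustration only, assuming $\Psi = \mathrm{Id}$ so that $x_0$ is sparse in the canonical basis, and reading $P^{-1/2}$ as the standard deviation of $K_{i,j}$, which normalizes the expected squared column norms to 1):

```python
import numpy as np

# Sketch of the discretized acquisition y = K f0 + w with a Gaussian
# measurement ensemble. Assumptions: Psi = Id (x0 sparse in the canonical
# basis); K[i, j] ~ N(0, 1/P) so columns have unit expected squared norm.
N, P, k, sigma = 400, 100, 10, 0.01
rng = np.random.default_rng(0)

K = rng.standard_normal((P, N)) / np.sqrt(P)     # Gaussian ensemble
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = K @ x0 + sigma * rng.standard_normal(P)      # noisy measurements
print(y.shape)                                   # (P,) = (100,)
```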
• 13. CS Simulation Example: original $f_0$; $\Psi$ = translation-invariant wavelet frame.
• 14. Overview: •Compressive Sensing Acquisition •Theoretical Guarantees •Fourier Domain Measurements •Parameter Selection
• 15. CS with RIP. $\ell^1$ recovery: $y = \Phi x_0 + w$, $x^\star \in \operatorname{argmin}_{\|\Phi x - y\| \le \varepsilon} \|x\|_1$ where $\varepsilon \ge \|w\|$. Restricted Isometry Constants: $\forall\, \|x\|_0 \le k$, $(1 - \delta_k)\|x\|^2 \le \|\Phi x\|^2 \le (1 + \delta_k)\|x\|^2$.
• 16. Theorem [Candès 2009]: if $\delta_{2k} \le \sqrt{2} - 1$, then $\|x_0 - x^\star\| \le \frac{C_0}{\sqrt{k}}\|x_0 - x_k\|_1 + C_1 \varepsilon$, where $x_k$ is the best $k$-term approximation of $x_0$.
• 17. Singular Values Distributions: the eigenvalues of $\Phi_I^* \Phi_I$ with $|I| = k$ are essentially in $[a, b]$, where $a = (1 - \sqrt{\beta})^2$, $b = (1 + \sqrt{\beta})^2$ and $\beta = k/P$. When $k = \beta P \to +\infty$, the eigenvalue distribution tends to $f(\lambda) = \frac{1}{2\pi\beta\lambda}\sqrt{(b - \lambda)_+ (\lambda - a)_+}$ [Marcenko-Pastur]. Large deviation inequality [Ledoux]. [Figure: empirical eigenvalue histograms vs. $f(\lambda)$ for $P = 200$ and $k = 10, 30, 50$.]
• 18. Theorem: if $k \le \frac{C P}{\log(N/P)}$, then $\delta_{2k} \le \sqrt{2} - 1$ with high probability.
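The Marcenko-Pastur prediction is easy to check numerically; a minimal sketch (assuming Gaussian entries of variance $1/P$; the parameters $P = 200$, $k = 30$ follow the slide):

```python
import numpy as np

# Empirical check: eigenvalues of Phi_I^T Phi_I for random k-column
# submatrices of a normalized Gaussian Phi concentrate on [a, b].
P, k, trials = 200, 30, 200
rng = np.random.default_rng(1)
beta = k / P
a, b = (1 - np.sqrt(beta)) ** 2, (1 + np.sqrt(beta)) ** 2

eigs = np.concatenate([
    np.linalg.eigvalsh(Phi_I.T @ Phi_I)
    for Phi_I in rng.standard_normal((trials, P, k)) / np.sqrt(P)
])
print(f"empirical range [{eigs.min():.3f}, {eigs.max():.3f}]"
      f" vs MP support [{a:.3f}, {b:.3f}]")

lam = np.linspace(a, b, 201)[1:-1]               # MP density on (a, b)
f = np.sqrt((b - lam) * (lam - a)) / (2 * np.pi * beta * lam)
```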
• 19. Numerics with RIP. Stability constants of $A$: $(1 - \delta_1(A))\|\alpha\|^2 \le \|A\alpha\|^2 \le (1 + \delta_2(A))\|\alpha\|^2$, where $\delta_1, \delta_2$ are governed by the smallest/largest eigenvalues of $A^* A$.
• 20. Upper/lower RIC: $\delta_k^i = \max_{|I| = k} \delta_i(\Phi_I)$ for $i \in \{1, 2\}$. Monte-Carlo estimation over random supports gives an estimate $\hat\delta_k \le \delta_k$ ($N = 4000$, $P = 1000$).
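A sketch of this Monte-Carlo estimation (an illustration only: random supports explore a vanishing fraction of the $\binom{N}{k}$ submatrices, hence the estimate can only under-estimate the true constant):

```python
import numpy as np

# Monte-Carlo lower bound on the RIC delta_k (slide parameters
# N = 4000, P = 1000): track the most extreme eigenvalues of
# Phi_I^T Phi_I over randomly drawn supports I of size k.
N, P, k, trials = 4000, 1000, 10, 200
rng = np.random.default_rng(2)
Phi = rng.standard_normal((P, N)) / np.sqrt(P)

delta_hat = 0.0
for _ in range(trials):
    I = rng.choice(N, size=k, replace=False)
    ev = np.linalg.eigvalsh(Phi[:, I].T @ Phi[:, I])
    delta_hat = max(delta_hat, 1 - ev.min(), ev.max() - 1)
print("Monte-Carlo lower bound on delta_k:", round(delta_hat, 3))
```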
• 21. Polytope Noiseless Recovery: counting faces of random polytopes [Donoho]. All $x_0$ such that $\|x_0\|_0 \le C_{\mathrm{all}}(P/N)\,P$ are identifiable; most $x_0$ such that $\|x_0\|_0 \le C_{\mathrm{most}}(P/N)\,P$ are identifiable. $C_{\mathrm{all}}(1/4) \approx 0.065$, $C_{\mathrm{most}}(1/4) \approx 0.25$. Sharp constants, but no noise robustness. [Figure: phase-transition curves, RIP / All / Most.]
• 22. Computation of "pathological" signals [Dossal, P, Fadili, 2010].
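Identifiability can be tested numerically by solving the noiseless basis pursuit as a linear program; a sketch using scipy's linprog (the split $x = u - v$ with $u, v \ge 0$ is a standard LP reduction, not from the slides, and the sizes below are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def identifiable(Phi, x0, tol=1e-6):
    """Solve min ||x||_1 s.t. Phi x = Phi x0 as an LP via x = u - v,
    u >= 0, v >= 0; return True if basis pursuit recovers x0."""
    P, N = Phi.shape
    res = linprog(np.ones(2 * N), A_eq=np.hstack([Phi, -Phi]),
                  b_eq=Phi @ x0, bounds=(0, None))
    x = res.x[:N] - res.x[N:]
    return np.linalg.norm(x - x0) <= tol * max(np.linalg.norm(x0), 1)

# Empirical check at P/N = 1/4: sparsity below C_most * P ~ 0.25 * P.
rng = np.random.default_rng(4)
N, P, k = 100, 25, 4
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
print(identifiable(Phi, x0))
```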
• 23. Overview: •Compressive Sensing Acquisition •Theoretical Guarantees •Fourier Domain Measurements •Parameter Selection
• 25. Tomography and Fourier Measures: $\hat f = \mathrm{FFT2}(f)$. Fourier slice theorem: $\hat p_\theta(\omega) = \hat f(\omega \cos\theta, \omega \sin\theta)$, relating the 1-D transform of a projection to a slice of the 2-D transform. Partial Fourier measurements $\{p_{\theta_k}(t)\}_{0 \le k < K}$, $t \in \mathbb{R}$, are thus equivalent to $Kf = (\hat f[\omega])_{\omega \in \Omega}$.
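A minimal numpy sketch of the measurement operator $Kf = (\hat f[\omega])_{\omega \in \Omega}$ and its adjoint (assuming, for simplicity, a uniformly random $\Omega$, whereas tomography samples along radial lines):

```python
import numpy as np

# Partial 2-D Fourier measurements as an FFT followed by a mask.
# With norm="ortho" the adjoint of K is zero-filling + inverse FFT.
n = 64
rng = np.random.default_rng(3)
Omega = rng.random((n, n)) < 0.25                # keep ~25% of frequencies

def K(f):
    return np.fft.fft2(f, norm="ortho")[Omega]

def K_adj(y):
    F = np.zeros((n, n), dtype=complex)
    F[Omega] = y
    return np.fft.ifft2(F, norm="ortho")

# Adjoint sanity check: <K f, y> = <f, K* y>
f = rng.standard_normal((n, n))
y = rng.standard_normal(Omega.sum()) + 1j * rng.standard_normal(Omega.sum())
assert np.isclose(np.vdot(K(f), y), np.vdot(f, K_adj(y)))
```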
• 26. Regularized Inversion. Noisy measurements: $\forall\, \omega \in \Omega$, $y[\omega] = \hat f_0[\omega] + w[\omega]$, with white noise $w[\omega] \sim \mathcal{N}(0, \sigma^2)$. $\ell^1$ regularization: $f^\star = \operatorname{argmin}_f \frac{1}{2}\sum_{\omega \in \Omega} |y[\omega] - \hat f[\omega]|^2 + \lambda \sum_m |\langle f, \psi_m \rangle|$.
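An ISTA sketch for this inversion, under the simplifying assumption that the signal is sparse directly in the pixel basis ($\psi_m$ = Diracs) rather than a wavelet frame, so that the proximal step is plain soft thresholding:

```python
import numpy as np

# ISTA for l1-regularized inversion of partial Fourier measurements.
# Step size 1 is safe because K has orthonormal rows (||K* K|| <= 1).
n, lam, n_iter = 64, 0.02, 200
rng = np.random.default_rng(3)
Omega = rng.random((n, n)) < 0.25

def K(f):
    return np.fft.fft2(f, norm="ortho")[Omega]

def K_adj(y):
    F = np.zeros((n, n), dtype=complex)
    F[Omega] = y
    return np.fft.ifft2(F, norm="ortho")

def soft(u, t):                                  # complex soft threshold
    return u * np.maximum(1 - t / np.maximum(np.abs(u), 1e-12), 0)

f0 = np.zeros((n, n))
f0[16:48:8, 16:48:8] = 1.0                       # sparse synthetic image
m = Omega.sum()
y = K(f0) + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

f = np.zeros((n, n), dtype=complex)
for _ in range(n_iter):
    f = soft(f - K_adj(K(f) - y), lam)           # gradient step + prox
print("relative error:", np.linalg.norm(f.real - f0) / np.linalg.norm(f0))
```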
• 27. MRI Imaging, from [Lustig et al.].
• 28. MRI Reconstruction, from [Lustig et al.]: randomization of the Fourier sub-sampling pattern. [Figure: high-resolution vs. low-resolution acquisitions; linear vs. sparsity-based reconstructions.]
• 29. Structured Measurements. Gaussian matrices: intractable for large $N$. Random partial orthogonal matrix: $\{\varphi_\omega\}_\omega$ an orthogonal basis, $Kf = (\langle \varphi_\omega, f \rangle)_{\omega \in \Omega}$ where $|\Omega| = P$ is drawn uniformly at random. Fast measurements (e.g. Fourier basis).
• 30. Mutual incoherence: $\mu = \sqrt{N} \max_{\omega, m} |\langle \varphi_\omega, \psi_m \rangle| \in [1, \sqrt{N}]$.
• 31. Theorem [Rudelson, Vershynin, 2006]: with high probability on $\Omega$, for $\Phi = K\Psi$, if $k \le \frac{C P}{\mu^2 \log(N)^4}$ then $\delta_{2k} \le \sqrt{2} - 1$. Not universal: requires incoherence.
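A quick numerical reading of $\mu$ (a sketch: the Fourier/Dirac pair attains the optimal $\mu = 1$, while a random orthobasis is only logarithmically worse):

```python
import numpy as np

# Mutual incoherence mu = sqrt(N) max |<phi_omega, psi_m>| between the
# Fourier measurement basis and two sparsity bases.
N = 64
F = np.fft.fft(np.eye(N), norm="ortho")         # rows: phi_omega

def mu(measurement_rows, Psi):                  # Psi columns: psi_m
    return np.sqrt(N) * np.abs(measurement_rows @ Psi).max()

print("Fourier vs Dirac:", mu(F, np.eye(N)))    # -> 1.0 (max incoherence)

Q, _ = np.linalg.qr(np.random.default_rng(4).standard_normal((N, N)))
print("Fourier vs random orthobasis:", round(mu(F, Q), 2))
```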
  • 32. Overview •Compressive Sensing Acquisition •Theoretical Guarantees •Fourier Domain Measurements •Parameter Selection
• 33. Risk Minimization. Estimator: e.g. $x_\lambda(y) \in \operatorname{argmin}_x \frac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$.
• 34. $\lambda > 0$ is a regularization parameter; how to choose its value? Risk-based selection: the risk $R(\lambda) = \mathbb{E}_w(\|x_\lambda(y) - x_0\|^2)$ measures the expected quality of $x_\lambda(y)$ with respect to $x_0$. Plug-in estimator: $x_{\lambda^\star(y)}(y)$, where the optimal (theoretical) $\lambda^\star(y) = \operatorname{argmin}_\lambda R(\lambda)$ minimizes the risk.
• 35. But $\mathbb{E}_w$ is not accessible: one must work from a single observation.
• 36. And $x_0$ is not accessible: the risk is unknown since it depends on $x_0$, so one needs risk estimators. Can we estimate the risk solely from $x_\lambda(y)$?
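On synthetic data where $x_0$ is known, the risk curve $R(\lambda)$ can be approximated by brute force, which makes the practical obstruction explicit; a sketch with ISTA as the solver (problem sizes are illustrative assumptions):

```python
import numpy as np

# Oracle sweep of lambda: approximate R(lambda) = E_w ||x_lambda(y) - x0||^2
# by averaging over noise draws. Impossible in practice (x0 unknown),
# which is what motivates the risk estimators below.
rng = np.random.default_rng(5)
N, P, k, sigma = 200, 80, 8, 0.05
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = 1.0

L = np.linalg.norm(Phi, 2) ** 2                  # Lipschitz constant

def ista(y, lam, n_iter=200):
    x = np.zeros(N)
    for _ in range(n_iter):
        x = x - Phi.T @ (Phi @ x - y) / L        # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)  # prox
    return x

lams = np.logspace(-3, 0, 10)
risk = [np.mean([np.sum((ista(Phi @ x0 + sigma * rng.standard_normal(P), lam)
                         - x0) ** 2) for _ in range(10)]) for lam in lams]
print("oracle lambda:", lams[int(np.argmin(risk))])
```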
• 37. Prediction Risk Estimation. Prediction: $\mu_\lambda(y) = \Phi x_\lambda(y)$. Sensitivity analysis: if $\mu_\lambda$ is weakly differentiable, $\mu_\lambda(y + \delta) = \mu_\lambda(y) + \partial\mu_\lambda(y)\cdot\delta + O(\|\delta\|^2)$.
• 38. Stein Unbiased Risk Estimator: $\mathrm{SURE}_\lambda(y) = \|y - \mu_\lambda(y)\|^2 - P\sigma^2 + 2\sigma^2\,\mathrm{df}_\lambda(y)$, where $\mathrm{df}_\lambda(y) = \operatorname{tr}(\partial\mu_\lambda(y)) = \operatorname{div}(\mu_\lambda)(y)$.
• 39. Theorem [Stein, 1981]: $\mathbb{E}_w(\mathrm{SURE}_\lambda(y)) = \mathbb{E}_w(\|\Phi x_0 - \mu_\lambda(y)\|^2)$.
• 40. Other estimators: GCV, BIC, AIC, . . .
• 41. Generalized SURE: estimate $\mathbb{E}_w(\|P_{\ker(\Phi)^\perp}(x_0 - x_\lambda(y))\|^2)$.
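When no closed form for $\mathrm{df}_\lambda$ is available, the divergence can be approximated by finite-difference probing, a standard Monte-Carlo trick that is not on the slides; a sketch on a toy denoising problem ($\Phi = \mathrm{Id}$, $\mu_\lambda$ = soft thresholding):

```python
import numpy as np

# SURE with a Monte-Carlo divergence estimate:
# df(y) = tr(d mu(y)) ~ E_delta <delta, (mu(y + eps*delta) - mu(y)) / eps>
# for Gaussian probe vectors delta.
def sure(y, mu, sigma, eps=1e-4, n_probe=8, seed=0):
    rng = np.random.default_rng(seed)
    mu_y = mu(y)
    df = np.mean([delta @ (mu(y + eps * delta) - mu_y) / eps
                  for delta in rng.standard_normal((n_probe, y.size))])
    return np.sum((y - mu_y) ** 2) - y.size * sigma ** 2 + 2 * sigma ** 2 * df

# Toy usage: denoising (Phi = Id) with soft thresholding as mu_lambda.
sigma, lam = 0.1, 0.15
rng = np.random.default_rng(1)
y = np.concatenate([np.ones(5), np.zeros(95)]) + sigma * rng.standard_normal(100)
mu = lambda u: np.sign(u) * np.maximum(np.abs(u) - lam, 0)
print("SURE estimate of the prediction risk:", sure(y, mu, sigma))
```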
• 42. Computation for L1 Regularization. Sparse estimator: $x_\lambda(y) \in \operatorname{argmin}_x \frac{1}{2}\|y - \Phi x\|^2 + \lambda \|x\|_1$.
• 43. Theorem [Dossal et al. 2011]: for all $y$ there exists a solution $x^\star$ such that $\Phi_I$ is injective (with $I = \operatorname{supp}(x^\star)$), and $\mathrm{df}_\lambda(y) = \operatorname{div}(\Phi x_\lambda)(y) = \|x^\star\|_0$.
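With this theorem, SURE for the $\ell^1$ estimator needs no divergence probing: the degrees of freedom are read off the support size of a well-converged solution. A self-contained sketch (sizes and $\lambda$ are illustrative assumptions):

```python
import numpy as np

# SURE for the l1 estimator using df_lambda(y) = ||x*||_0
# [Dossal et al. 2011], with ISTA as the solver.
rng = np.random.default_rng(7)
N, P, k, sigma, lam = 200, 80, 8, 0.05, 0.05
Phi = rng.standard_normal((P, N)) / np.sqrt(P)
x0 = np.zeros(N)
x0[rng.choice(N, size=k, replace=False)] = 1.0
y = Phi @ x0 + sigma * rng.standard_normal(P)

L = np.linalg.norm(Phi, 2) ** 2
x = np.zeros(N)
for _ in range(2000):                            # run ISTA to convergence
    x = x - Phi.T @ (Phi @ x - y) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0)

df = np.sum(np.abs(x) > 1e-8)                    # df_lambda(y) = ||x*||_0
sure = np.sum((y - Phi @ x) ** 2) - P * sigma ** 2 + 2 * sigma ** 2 * df
print(f"df = {df}, SURE = {sure:.4f}")
```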
• 44. Numerical illustration: compressed sensing using multi-scale wavelet thresholding; $\Phi \in \mathbb{R}^{P \times N}$ a realization of a random matrix, $P = N/4$, $\Psi$: translation-invariant wavelets. [Figure: (a) observations $y$; (b), (d) $x^\star(y, \lambda)$ at the optimal $\lambda$; (c) $x_{\mathrm{ML}}$; quadratic-loss curves (projection risk, GSURE, true risk) against the regularization parameter $\lambda$.]
• 45. Anisotropic Total-Variation: extension to $\ell^1$ analysis regularization and TV [Vaiter et al. 2012]. For any $z \in \mathbb{R}^P$, $\nu = \nu(z)$ solves a block linear system involving $\Phi$ and $D_J$, computed in practice with a conjugate gradient solver; by the law of large numbers, the expectation is replaced by an empirical mean. Finite-difference gradient: $D = [\partial_1, \partial_2]$. Numerical example: super-resolution using (anisotropic) total-variation, $\Phi$: vertical sub-sampling. [Figure: (a) observations $y$; $x^\star(y, \lambda)$ at the optimal $\lambda$; quadratic-loss curves (projection risk, GSURE, true risk) against $\lambda$.]
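A sketch of the finite-difference gradient $D = [\partial_1, \partial_2]$ and its adjoint, the building blocks of the anisotropic TV seminorm $\|Df\|_1$ (the boundary handling below is an assumption, and the $\nu(z)$ linear system of [Vaiter et al. 2012] is not implemented here):

```python
import numpy as np

# Anisotropic finite-difference gradient D = [d1, d2] (forward
# differences, replicated boundary) and its adjoint (= minus divergence).
def D(f):
    d1 = np.diff(f, axis=0, append=f[-1:, :])    # vertical differences
    d2 = np.diff(f, axis=1, append=f[:, -1:])    # horizontal differences
    return np.stack([d1, d2])

def D_adj(g):
    d1, d2 = g
    a = np.zeros_like(d1)
    a[0], a[1:-1], a[-1] = -d1[0], d1[:-2] - d1[1:-1], d1[-2]
    b = np.zeros_like(d2)
    b[:, 0], b[:, 1:-1], b[:, -1] = -d2[:, 0], d2[:, :-2] - d2[:, 1:-1], d2[:, -2]
    return a + b

rng = np.random.default_rng(6)
f, g = rng.standard_normal((8, 8)), rng.standard_normal((2, 8, 8))
assert np.isclose((D(f) * g).sum(), (f * D_adj(g)).sum())  # adjoint check
print("anisotropic TV of f:", round(np.abs(D(f)).sum(), 3))
```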
• 46. Conclusion. Sparsity: approximate signals with few atoms of a dictionary.
• 47. Compressed sensing ideas: randomized sensors + sparse recovery; number of measurements $\sim$ signal complexity; CS is about designing new hardware.
• 48. The devil is in the constants: worst-case analysis is problematic; designing good signal models.