Study: Accelerating Spatially Varying Gaussian Filters

This is my study review.



  1. Accelerating Spatially Varying Gaussian Filters. Jongmin Baek, David E. Jacobs, Stanford University. SIGGRAPH Asia 2010.
  2. Drawback of the bilateral filter: piecewise-flat regions, false edges, and blooming at high-contrast regions. [Figure: bilateral vs. proposed method]
  3. Drawback of the bilateral filter: spurious detail on the road at color edges. [Figure: bilateral vs. proposed method]
  4. Spatially varying Gaussian filters. [Figure: noisy signal y over x; bilateral-filtered vs. spatially-varying-Gaussian-filtered results]
  5. Spatially varying Gaussian filters: can the 2-D Gaussian kernel in x be replaced by a 3-D Gaussian kernel? [Figure: noisy signal, bilateral-filtered and spatially-varying-Gaussian-filtered results, with kernels aligned along the gradient]
  6. Outline: Introduction; Review of Related Work; Acceleration (kernel sampling speeds up the spatially varying Gaussian kernel); Applications; Limitations and Future Work; Conclusion.
  7. Introduction
     • Bilateral filter [Tomasi and Manduchi 98]: blurs pixels spatially while preserving sharp edges
     • Joint bilateral filter, used for upsampling [Eisemann & Durand 04]
     • The bilateral filter is related to non-local means [Buades et al. 05]
     • High-dimensional Gaussian filter [Paris and Durand 06]: merges the spatial (x, y) and range (I, or r, g, b) spaces
     • Importance sampling speeds up the high-dimensional Gaussian filter: bilateral grid [Paris & Durand 06; Chen et al. 07], Gaussian KD-tree [Adams et al. 09], permutohedral lattice [Adams et al. 09]
     • Trilateral filter [Choudhury & Tumblin 03]: kernels along the image gradients, i.e. a spatially varying Gaussian kernel
     • There is no speed-up method yet for the spatially varying Gaussian kernel
  8. Gaussian kernels: isotropic 1-D, 2-D, …, N-D Gaussian kernels; elliptical (anisotropic) Gaussian kernels; distance under a Gaussian kernel.
  9. Gaussian kernels in 1-D, 2-D, …, N-D (isotropic):

     $k_{1D}(u) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{u^2}{2\sigma^2}\right)$

     $k_{2D}(x) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^\top x}{2\sigma^2}\right), \qquad x = (x_1, x_2)$

     $k_{ND}(x) = \frac{1}{(2\pi)^{N/2}\,\sigma^N} \exp\left(-\frac{x^\top x}{2\sigma^2}\right)$
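     As a quick sanity check of these formulas, here is a minimal NumPy sketch (mine, not from the slides) that evaluates the isotropic N-D kernel; the 1-D and 2-D formulas fall out as special cases.

         import numpy as np

         def gaussian_kernel_nd(x, sigma=1.0):
             # k_ND(x) = exp(-x^T x / (2 sigma^2)) / ((2 pi)^(N/2) sigma^N)
             x = np.asarray(x, dtype=float)
             n = x.size
             norm = (2 * np.pi) ** (n / 2) * sigma ** n
             return np.exp(-(x @ x) / (2 * sigma ** 2)) / norm

         print(gaussian_kernel_nd([0.0]))       # 1/sqrt(2 pi) ~ 0.3989 (1-D case)
         print(gaussian_kernel_nd([0.0, 0.0]))  # 1/(2 pi)     ~ 0.1592 (2-D case)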
  10. Elliptical kernels: an anisotropic Gaussian kernel is a radial kernel that has been rotated and scaled. [Figure: radial kernel vs. elliptical kernel at a point p; rotated and scaled kernels]
  11. [Figure: the norm ‖p‖² of a point p under the radial kernel vs. under the elliptical kernel]
  12. Distance of x. Euclidean distance: $\|x\|^2 = x^\top x = \sum_{i=1}^{n} x_i^2$. Mahalanobis distance: $D_M(x)^2 = x^\top \Sigma^{-1} x$, with covariance matrix

     $\Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2 \end{pmatrix}$
  13. Mahalanobis Distance [P. C. Mahalanobis 1936]. For p = (x₁, x₂) in the X₁-X₂ space: what is the distance of p in the Y₁-Y₂ space of an elliptical kernel?

     $D_M(x)^2 = x^\top \Sigma^{-1} x = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{pmatrix}^{-1} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$
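     A small NumPy sketch of the Mahalanobis distance (my illustration; the covariance values are made up) makes the contrast with the Euclidean distance concrete:

         import numpy as np

         def mahalanobis_sq(x, cov):
             # D_M(x)^2 = x^T Sigma^{-1} x for a zero-centered elliptical kernel
             return float(x @ np.linalg.solve(cov, x))

         cov = np.array([[2.0, 0.8],   # [[sigma_1^2, sigma_12],
                         [0.8, 1.0]])  #  [sigma_12,  sigma_2^2]]
         p = np.array([1.0, 1.0])
         print(mahalanobis_sq(p, cov))  # elliptical (Mahalanobis) distance
         print(float(p @ p))            # Euclidean distance, for comparison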
  14. Mahalanobis Distance [P. C. Mahalanobis 1936]. $D_{M_1}(p) = D_{M_2}(p)$ for two kernels M₁, M₂: the distance is determined by the standard deviation of the kernel. [Figure: the same point p under kernels M₁ and M₂]
  15. Mahalanobis Distance [P. C. Mahalanobis 1936]. The distance is determined by the standard deviation of the kernel: project p onto the axes (y₁, y₂) of the kernel, then divide by the standard deviations. [Figure]
  16. Mahalanobis Distance [P. C. Mahalanobis 1936]. Project p onto the axes of the kernel ($x \mapsto y$), then divide by the standard deviations:

     $D_M(y)^2 = y^\top \Lambda^{-1} y = \begin{pmatrix} y_1 & y_2 \end{pmatrix} \begin{pmatrix} 1/\sigma_{y_1}^2 & 0 \\ 0 & 1/\sigma_{y_2}^2 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = D_M(x)^2 = x^\top \Sigma^{-1} x = \begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} \sigma_{x_1}^2 & \sigma_{x_1 x_2} \\ \sigma_{x_1 x_2} & \sigma_{x_2}^2 \end{pmatrix}^{-1} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$
  17. 17. 1   x1  x1x 2   x1  2DM ( x)  x  x x  x1 x2   1    X2  x1x 2  x22   x2  Y2 p  2  x1x 2  1 1x   x1   EE   x  E E   x1x 2  x22 where E is the m atrixof eigen vect of  ,  is the m atrixof the eigen values of . or X1 Y1 1 E  e1 e2 ,     2  E  E 1  e1 e2   [1e1 2 e2 ]  e1 e2  2  that is   1 DM ( x)  x  x x  x E1 E x  y 1 y note : PCA of the kernel is the axes of Y
  18. With $y = E^\top x$, i.e. $y_1 = e_1^\top x$ and $y_2 = e_2^\top x$:

     $\sigma_{y_1}^2 = \mathrm{var}(y_1) = \mathrm{var}(e_{11} x_1 + e_{21} x_2) = e_{11}^2 \sigma_{x_1}^2 + 2 e_{11} e_{21} \sigma_{x_1 x_2} + e_{21}^2 \sigma_{x_2}^2 = e_1^\top \Sigma\, e_1 = e_1^\top E \Lambda E^\top e_1 = \lambda_1$

     and similarly $\sigma_{y_2}^2 = e_2^\top \Sigma\, e_2 = \lambda_2$. Therefore

     $D_M(y)^2 = y^\top \Lambda^{-1} y = \frac{y_1^2}{\sigma_{y_1}^2} + \frac{y_2^2}{\sigma_{y_2}^2}$
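     The derivation can be checked numerically; this NumPy snippet (mine, with an arbitrary covariance) confirms that projecting onto the eigenvectors and dividing by the eigenvalue variances reproduces $x^\top \Sigma^{-1} x$:

         import numpy as np

         cov = np.array([[2.0, 0.8],
                         [0.8, 1.0]])
         lam, E = np.linalg.eigh(cov)             # Sigma = E diag(lam) E^T
         x = np.array([1.0, -0.5])
         y = E.T @ x                              # project onto the kernel axes
         d_direct = x @ np.linalg.solve(cov, x)   # x^T Sigma^{-1} x
         d_pca = np.sum(y ** 2 / lam)             # sum_k y_k^2 / lambda_k
         assert np.isclose(d_direct, d_pca)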
  19. Multivariate Gaussian distribution (the exponent is the squared Mahalanobis distance):

     $k(x) = \frac{1}{(2\pi)^{N/2}\,|\Sigma|^{1/2}} \exp\left(-\frac{x^\top \Sigma^{-1} x}{2}\right)$, where $\Sigma$ is the covariance matrix of $x$.
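     The density is one line of NumPy on top of the previous snippets (again my sketch, not the authors' code):

         import numpy as np

         def gaussian_kernel_cov(x, cov):
             # k(x) = exp(-x^T Sigma^{-1} x / 2) / ((2 pi)^(N/2) |Sigma|^(1/2))
             x = np.asarray(x, dtype=float)
             n = x.size
             norm = (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(cov))
             return np.exp(-0.5 * (x @ np.linalg.solve(cov, x))) / norm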
  20. Bilateral filters (roadmap):
     • Remove noise and keep edges → the kernel is not fixed
     • Can apply a fixed kernel (convolution) → large memory cost
     • Can apply a fixed kernel (convolution) → down-sample, convolve, up-sample
     • Blur on importance samples (leaf nodes)
     • Blur on importance samples (lattice)
     • Spatially varying → anisotropic Gaussian kernel
  21. Bilateral filter. The kernel weight depends on both the spatial distance and the color distance:

     $I'(p) = \frac{1}{K} \sum_q N_{\sigma_s}(\|p - q\|)\, N_{\sigma_r}(|I(p) - I(q)|)\, I(q)$

     $K = \sum_q N_{\sigma_s}(\|p - q\|)\, N_{\sigma_r}(|I(p) - I(q)|)$

     $W = W_S \cdot W_R$: $W_S$ is fixed, but $W_R$ depends on $|I(p) - I(q)|$, so the combined kernel $W_S \cdot W_R$ is not fixed!
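     A direct, brute-force implementation of this formula for a grayscale float image might look as follows (a minimal sketch for clarity, not the accelerated method; parameter values are illustrative):

         import numpy as np

         def bilateral_filter(I, sigma_s=3.0, sigma_r=0.1, radius=6):
             # W = W_S(|p - q|) * W_R(|I(p) - I(q)|), normalized by K
             H, W = I.shape
             out = np.zeros_like(I)
             ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
             Ws = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))  # fixed spatial part
             for y in range(H):
                 for x in range(W):
                     y0, y1 = max(y - radius, 0), min(y + radius + 1, H)
                     x0, x1 = max(x - radius, 0), min(x + radius + 1, W)
                     patch = I[y0:y1, x0:x1]
                     ws = Ws[y0 - y + radius:y1 - y + radius,
                             x0 - x + radius:x1 - x + radius]
                     wr = np.exp(-(patch - I[y, x]) ** 2 / (2 * sigma_r ** 2))
                     w = ws * wr               # not fixed: depends on I(p)
                     out[y, x] = (w * patch).sum() / w.sum()
             return out

     The spatial weight is precomputed once, while the range weight must be rebuilt at every pixel; that is exactly why the combined kernel cannot be applied as a single convolution.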
  22. Bilateral filters (roadmap):
     • Remove noise and keep edges → the kernel is not fixed
     • Can apply a fixed kernel (convolution) → large memory cost
     • Can apply a fixed kernel (convolution) → down-sample, convolve, up-sample
     • Blur on importance samples (leaf nodes): importance sampling on grid, KD-tree, lattice
     • Blur on importance samples (lattice)
     • Spatially varying → anisotropic Gaussian kernel
  23. The Gaussian bilateral filter is a kind of high-dimensional Gaussian filter. For pixels p, q, the weight is a product of a spatial Gaussian and a range Gaussian:

     $\frac{1}{2\pi\sigma_s^2} \exp\left(-\frac{\|p - q\|^2}{2\sigma_s^2}\right) \cdot \frac{1}{\sqrt{2\pi}\,\sigma_r} \exp\left(-\frac{|I_p - I_q|^2}{2\sigma_r^2}\right)$

     ("A Fast Approximation of the Bilateral Filter using a Signal Processing Approach")
  24. The Gaussian bilateral filter is a kind of high-dimensional Gaussian filter (continued). The product collapses into a single Gaussian over the joint space × range vector. With $P = [p\ \ I_p]$ and $Q = [q\ \ I_q]$:

     $\frac{1}{2\pi\sigma_s^2} \exp\left(-\frac{\|p - q\|^2}{2\sigma_s^2}\right) \cdot \frac{1}{\sqrt{2\pi}\,\sigma_r} \exp\left(-\frac{|I_p - I_q|^2}{2\sigma_r^2}\right) = \frac{1}{(2\pi)^{3/2}\,\sigma_s^2\,\sigma_r} \exp\left(-\tfrac{1}{2}(P - Q)^\top \Sigma^{-1} (P - Q)\right)$

     with $\Sigma^{-1} = \mathrm{diag}\left(1/\sigma_s^2,\ 1/\sigma_s^2,\ 1/\sigma_r^2\right)$: a single elliptical Gaussian in space × range.
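     The identity is easy to verify numerically. In this sketch (mine; the values are arbitrary) the normalization constants on both sides agree, so it suffices to compare the exponentials:

         import numpy as np

         sigma_s, sigma_r = 3.0, 0.2
         p, q = np.array([10.0, 4.0]), np.array([12.0, 5.0])
         Ip, Iq = 0.7, 0.55

         # product of the separate spatial and range Gaussians (unnormalized)
         product = (np.exp(-np.sum((p - q) ** 2) / (2 * sigma_s ** 2))
                    * np.exp(-(Ip - Iq) ** 2 / (2 * sigma_r ** 2)))

         # one Gaussian over the joint vector P - Q = (p - q, I_p - I_q)
         PQ = np.array([p[0] - q[0], p[1] - q[1], Ip - Iq])
         Sigma_inv = np.diag([1 / sigma_s ** 2, 1 / sigma_s ** 2, 1 / sigma_r ** 2])
         joint = np.exp(-0.5 * PQ @ Sigma_inv @ PQ)

         assert np.isclose(product, joint)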
  25. [Figure: the bilateral filter recast as Gaussian convolution of higher-dimensional functions (wi, w), followed by division and slicing]
  26. High-dimensional Gaussian filtering. Example: an RGB image with an isotropic Gaussian kernel; a diagonal matrix D := diag{…} generalizes it to an elliptical kernel.
  27. High-dimensional Gaussian filters: a 2-D bilateral kernel in x is equivalent to a 3-D Gaussian kernel. [Figure: noisy signal y over x and the bilateral-filtered result]
  28. Bilateral filters (roadmap):
     • Remove noise and keep edges → the kernel is not fixed
     • Can apply a fixed kernel (convolution) → large memory cost
     • Can apply a fixed kernel (convolution) → down-sample, convolve, up-sample
     • Blur on importance samples (leaf nodes): importance sampling on grid, KD-tree, lattice
     • Blur on importance samples (lattice)
     • Spatially varying → anisotropic Gaussian kernel
  29. Why do we need the bilateral grid? The high-dimensional Gaussian filter costs a lot of memory and run-time. For an image of size w×h and a kernel of width n per dimension:
     • Bilateral: gray image, wh memory, O(wh·n²) run-time; RGB image, 3wh memory, O(3wh·n²) run-time
     • High-dimensional Gaussian: gray image, 256wh memory, O(256wh·n³) run-time; RGB image, 256³wh memory, O(256³wh·n⁵) run-time
     Bilateral grid [Chen et al. 07] (a sketch follows):
     • Down-sample (points {pixel, color} → coarse grid)
     • Gaussian blur on the coarse grid
     • Up-sample (coarse grid → points {pixel, color})
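     A compact grayscale bilateral-grid sketch (assuming intensities in [0, 1]; the grid resolutions and the post-blur sigma of one grid cell are my illustrative choices, not the paper's exact settings):

         import numpy as np
         from scipy.ndimage import gaussian_filter, map_coordinates

         def bilateral_grid_filter(img, sigma_s=16.0, sigma_r=0.1):
             h, w = img.shape
             gh, gw = int(h / sigma_s) + 2, int(w / sigma_s) + 2
             gr = int(1.0 / sigma_r) + 2
             data = np.zeros((gh, gw, gr))
             weight = np.zeros((gh, gw, gr))
             ys, xs = np.mgrid[0:h, 0:w]
             gi = np.round(ys / sigma_s).astype(int)   # down-sample {pixel, color}
             gj = np.round(xs / sigma_s).astype(int)   # onto a coarse 3-D grid
             gk = np.round(img / sigma_r).astype(int)
             np.add.at(data, (gi, gj, gk), img)
             np.add.at(weight, (gi, gj, gk), 1.0)
             data = gaussian_filter(data, 1.0)         # blur on the coarse grid
             weight = gaussian_filter(weight, 1.0)
             coords = np.stack([ys / sigma_s, xs / sigma_s, img / sigma_r])
             num = map_coordinates(data, coords, order=1)    # up-sample (slice)
             den = map_coordinates(weight, coords, order=1)
             return num / np.maximum(den, 1e-8)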
  30. [Figure: the bilateral grid pipeline on the higher-dimensional functions (wi, w): DOWNSAMPLE → Gaussian convolution → UPSAMPLE, then division and slicing]
  31. Bilateral filters (roadmap):
     • Remove noise and keep edges → the kernel is not fixed
     • Can apply a fixed kernel (convolution) → large memory cost
     • Can apply a fixed kernel (convolution) → down-sample, convolve, up-sample
     • Blur on importance samples (leaf nodes): importance sampling on grid, KD-tree, lattice
     • Blur on importance samples (lattice)
     • Spatially varying → anisotropic Gaussian kernel
  32. Importance sampling. The bilateral grid is a kind of importance sampling: a high-dimensional kernel plus sampling on the grid cells (DOWNSAMPLE → Gaussian blur → UPSAMPLE). The Gaussian KD-tree [Adams et al. 09] is a high-dimensional kernel plus sampling on leaf nodes (a toy sketch follows):
     • Splatting: down-sample points onto the leaf nodes
     • Blurring: Gaussian blur on the leaf nodes
     • Slicing: up-sample from the leaf nodes back to the points
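     The splat/blur/slice pattern can be illustrated with a toy stand-in that uses random sample points instead of an actual KD-tree, so it runs in O(N·S) rather than with the paper's fast structure; everything here is my simplification:

         import numpy as np

         def sampled_gaussian_filter(points, values, n_samples=256, sigma=0.1, seed=0):
             rng = np.random.default_rng(seed)
             idx = rng.choice(len(points), size=min(n_samples, len(points)),
                              replace=False)
             samples = points[idx]                              # the "leaf nodes"
             d2 = ((points[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
             w = np.exp(-d2 / (2 * sigma ** 2))                 # point -> sample weights
             sv, sw = w.T @ values, w.sum(axis=0)               # splat values + weights
             ds2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
             b = np.exp(-ds2 / (2 * sigma ** 2))
             sv, sw = b @ sv, b @ sw                            # blur among the samples
             return (w @ sv) / np.maximum(w @ sw, 1e-12)        # slice back to points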
  33. Importance sampling in the Gaussian KD-tree. A high-dimensional Gaussian filter samples the neighborhood of each point; importance sampling with the Gaussian KD-tree evaluates samples as near as possible:

     $V_i' = \sum_j V_j\, s_j \exp\left(-\frac{(p_j - p_i)^\top D\,(p_j - p_i)}{2}\right), \qquad \sum_j s_j = s, \quad p_j\ \text{are leaf nodes}$
  34. Why not apply importance sampling to the Gaussian filter directly? [Figure: a Gaussian filter vs. a high-dimensional Gaussian filter around a point p with neighbors q₁, q₂]
  35. Bilateral filters (roadmap):
     • Remove noise and keep edges → the kernel is not fixed
     • Can apply a fixed kernel (convolution) → large memory cost
     • Can apply a fixed kernel (convolution) → down-sample, convolve, up-sample
     • Blur on importance samples (leaf nodes): importance sampling on grid, KD-tree, lattice
     • Blur on importance samples (lattice)
     • Spatially varying → anisotropic Gaussian kernel
  36. The permutohedral lattice [Adams, Baek, et al. EG 2010]. [Figure: Gaussian KD-tree vs. permutohedral lattice over the signal] The permutohedral lattice is used for high-dimensional Gaussian filtering.
  37. Bilateral filters (roadmap):
     • Remove noise and keep edges → the kernel is not fixed
     • Can apply a fixed kernel (convolution) → large memory cost
     • Can apply a fixed kernel (convolution) → down-sample, convolve, up-sample
     • Blur on importance samples (leaf nodes)
     • Blur on importance samples (lattice)
     • Spatially varying → anisotropic Gaussian kernel
  38. The high-dimensional Gaussian kernel can be made spatially varying along the gradient.
  39. Why do we need a spatially varying kernel? [Figure: filtering vs. the smoothed result]
  40. #1.1: high-dimensional Gaussian filter with an isotropic kernel (radial kernel).
  41. #1.1: high-dimensional Gaussian filter with an isotropic kernel (radial kernel). [Figure: smoothed result]
  42. #1.2: high-dimensional Gaussian filter with an anisotropic kernel (elliptical kernel). Does it smooth? [Figure]
  43. #1.2: high-dimensional Gaussian filter with an anisotropic kernel. [Figure]
  44. #1.2: high-dimensional Gaussian filter with an anisotropic kernel. [Figure]
  45. #1.3: spatially varying Gaussian filter. [Figure: smoothed result]
  46. Why not use an isotropic kernel (radial, ball, …)?
     • Image resolution ≠ color-range resolution
     • We usually apply a small image kernel (3×3, 5×5, …), but what is an appropriate size for the color-range kernel?
     • It depends on the color distribution and the color space
     • Special images, e.g. HDR
  47. The high-dimensional Gaussian kernel can be made spatially varying along the gradient.
  48. Trilateral filter [Choudhury & Tumblin 03]. With $G_1 = \partial I/\partial x$, $G_2 = \partial I/\partial y$, $\Delta x = x_j - x_i$, $\Delta y = y_j - y_i$, the filter is generalized with points $P_i : (x_i, y_i, I(x_i, y_i))$ and vectors $V_i : (I(x_i, y_i), x_i, y_i, 1)$.
  49. Acceleration.
     • Importance sampling → Gaussian KD-tree
     • Dimensionality elevation → M: (x, y) → (x, y, x+y, x−y)
     • Kernel sampling & segmentation → permutohedral lattice
  50. Importance sampling for the kernel integral: if the kernel is isotropic, the per-region sample counts are easy to estimate from the integrals $q_0 = \int_{R_0} G_D(x, \sigma)\,dx$ and $q_1 = \int_{R_1} G_D(x, \sigma)\,dx$. [Figure]
  51. Importance sampling for the kernel integral: if the kernel is anisotropic, estimating $q_0 = \int_{R_0} G_D(x, \sigma)\,dx$ and $q_1 = \int_{R_1} G_D(x, \sigma)\,dx$ is no longer straightforward. [Figure]
  52. Dimensionality elevation for a spatially varying kernel (C ≠ 0).
  53. Dimensionality elevation maps:

     $M : \mathbb{R}^2 \to \mathbb{R}^4, \quad M(x, y) = \left(\frac{x}{\sqrt{2}},\ \frac{y}{\sqrt{2}},\ \frac{x+y}{\sqrt{8}},\ \frac{x-y}{\sqrt{8}}\right)$

     $M : \mathbb{R}^3 \to \mathbb{R}^{11}, \quad M(x, y, z) = (\alpha_1 x,\ \alpha_2 y,\ \alpha_3 z,\ \alpha_4(x+y),\ \alpha_5(x-y),\ \alpha_6(y+z),\ \alpha_7(y-z),\ \alpha_8(x+y+z),\ \alpha_9(x+y-z),\ \alpha_{10}(x-y+z),\ \alpha_{11}(x-y-z))$
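     The R² → R⁴ map is short enough to write out directly; this sketch (mine) also checks one property worth noting, namely that the map preserves Euclidean distances up to a constant factor (√3/2 here), with the extra axes oriented along the ±45° directions:

         import numpy as np

         def elevate_2d(p):
             # M(x, y) = (x/sqrt(2), y/sqrt(2), (x+y)/sqrt(8), (x-y)/sqrt(8))
             x, y = p
             return np.array([x / np.sqrt(2), y / np.sqrt(2),
                              (x + y) / np.sqrt(8), (x - y) / np.sqrt(8)])

         # distance-preserving up to the constant factor sqrt(3)/2
         p, q = np.array([1.0, 2.0]), np.array([-0.5, 3.0])
         assert np.isclose(np.linalg.norm(elevate_2d(p) - elevate_2d(q)),
                           np.sqrt(3) / 2 * np.linalg.norm(p - q))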
  54. Kernel sampling & segmentation. Kernel sampling: let D = {D₁, D₂, …}. Assumption: the kernel is locally constant. While the space of possible kernels is very large (D has O(d_p²) degrees of freedom), D is restricted in practice; call the restricted dimension d_r. Kernel segmentation: plain clustering is not efficient.
  55. Kernel sampling & segmentation (a toy sketch of the assignment step follows):
     • Kernel sampling: regularly but sparsely sample the Gaussian kernels {D_l}
     • Segmentation {S_l}: for each D_l in D, define the segment S_l as the set of points {P_i}, where P_i is an element of S_l only if blurring P_i with D_l is necessary for interpolating its kernel
     • Each segment S_l is filtered separately; the kernel is rotated or sheared so that D_l is diagonal
     [Figure: sparsely sampled kernels D = {D₁, D₂, …, D_n} and segments S₁, S₂, S₃, S₄]
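     The segmentation rule above is figure-driven, so here is only a toy reading of it in NumPy: each point carries an estimated kernel D_i and is assigned to the segment of the nearest sampled kernel D_l. Frobenius distance is my choice of criterion; the paper's actual criterion is the interpolation condition stated above.

         import numpy as np

         def segment_by_kernel(point_kernels, sampled_kernels):
             # point_kernels:   (N, d, d) per-point kernels D_i
             # sampled_kernels: (L, d, d) sparsely sampled kernels {D_l}
             diff = point_kernels[:, None] - sampled_kernels[None, :]
             dist = (diff ** 2).sum(axis=(2, 3))   # squared Frobenius distance
             return dist.argmin(axis=1)            # segment label l per point

     Each resulting segment S_l can then be filtered with its single locally constant kernel D_l, rotated or sheared to diagonal form.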
  56. Kernel sampling & segmentation: sparsely sampled kernels D₁, D₂, D₃, D₄. [Figure]
  57. Kernel sampling & segmentation: segmentation step, S₂ ← S₂ ∪ {P_i}. [Figure]
  58. Kernel sampling & segmentation: segmentation with k = 0 vs. k = 16. [Figure]
  59. Kernel sampling & segmentation: segmentation. [Figure]
  60. Kernel sampling & segmentation: segment S₃. [Figure]
  61. Review of accelerating methods for the spatially varying Gaussian filter:
     • Importance sampling: blur on Gaussian KD-tree leaf nodes; the number of samples is proportional to the ratio of the kernel integrals
     • Dimensionality elevation: elevate the dimension, e.g. M(x, y) = (x/√2, y/√2, (x+y)/√8, (x−y)/√8) or M: R³ → R¹¹, and apply a standard Gaussian KD-tree
     • Kernel sampling: sample kernels for the permutohedral-lattice nodes and blur on the lattice
     [Table: comparison of the proposed methods; costs on the order of αd, αd, and d³]
  62. Applications: tone mapping; sparse range-image up-sampling.
  63. Bilateral tone mapping. Decompose the image into {B, D}:
     • B: base layer (HDR, large-scale variation)
     • D: detail layer (LDR, local texture variations from the base)
     Tone mapping: scale down B and add D back (a sketch of the decomposition follows). Comparison of ways to obtain B with a bilateral-style filter:
     • Bilateral filter: quick, but artifacts
     • Kernel sampling: quick, and close to the trilateral result
     • Trilateral filter: slow
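     A sketch of the decomposition in the style of classic bilateral tone mapping (Durand & Dorsey). The slides name only the base/detail split; the log-domain processing and the compression factor here are standard choices I am assuming, not taken from this paper:

         import numpy as np

         def bilateral_tone_map(hdr, edge_preserving_filter, compression=0.5):
             log_i = np.log10(np.maximum(hdr, 1e-6))
             base = edge_preserving_filter(log_i)   # B: large-scale HDR layer
             detail = log_i - base                  # D: local texture variations
             # compress only the base layer, keep the detail intact
             return 10 ** (compression * base + detail)

     Any edge-preserving filter can be plugged in, e.g. the brute-force bilateral_filter or the bilateral_grid_filter sketched earlier.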
  64. [Figure: bilateral result showing blooming and false edges vs. two kernel-sampling results vs. trilateral]
  65. [Figure: bilateral result showing blooming and false edges vs. two kernel-sampling results vs. trilateral]
  66. Joint bilateral upsampling: use a bilateral kernel (spatial × range) to up-sample image operations performed at low resolution [Kopf et al. 2007].
  67. Sparse range-image upsampling. A range image (depth map) encodes scene geometry as a per-pixel distance map; it is useful for autonomous vehicles, background segmentation, and more. A joint bilateral filter performs the up-sampling under the assumption that similar colors have similar depths (see the sketch below). [Figure: color image, ground-truth depth, bilateral-upsampled depth]
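     A minimal joint bilateral upsampling sketch for a sparse depth map guided by a grayscale image (my simplification of Kopf et al. 2007; the parameter values are illustrative and the loops are unoptimized):

         import numpy as np

         def joint_bilateral_upsample(depth_lo, gray_hi, scale,
                                      sigma_s=2.0, sigma_r=0.1, radius=2):
             H, W = gray_hi.shape
             h, w = depth_lo.shape
             out = np.zeros((H, W))
             for y in range(H):
                 for x in range(W):
                     yl, xl = y / scale, x / scale   # position in the low-res grid
                     num = den = 0.0
                     for j in range(int(yl) - radius, int(yl) + radius + 1):
                         for i in range(int(xl) - radius, int(xl) + radius + 1):
                             if 0 <= j < h and 0 <= i < w:
                                 ws = np.exp(-((j - yl) ** 2 + (i - xl) ** 2)
                                             / (2 * sigma_s ** 2))
                                 gj = gray_hi[min(int(j * scale), H - 1),
                                              min(int(i * scale), W - 1)]
                                 wr = np.exp(-(gray_hi[y, x] - gj) ** 2
                                             / (2 * sigma_r ** 2))  # similar color,
                                 num += ws * wr * depth_lo[j, i]    # similar depth
                                 den += ws * wr
                     out[y, x] = num / max(den, 1e-12)
             return out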
  68. Results of sparse range-image upsampling on the Synthetic1 dataset. [Figure]
  69. Results of sparse range-image upsampling on the Highway2 dataset. [Figure]
  70. Limitations.
     • The time complexity of kernel sampling grows polynomially with d_p, and linearly with the dataset size
     • The number of sampled kernels affects the resulting quality: too few samples cause kernel sampling to degenerate into a spatially invariant Gaussian filter; too many samples create segments with too few points and make the dilation less effective
  71. Conclusion. A flexible scheme for accelerating spatially varying high-dimensional Gaussian filters by segmenting and tiling the image data:
     • results comparable to the trilateral filter
     • faster than the trilateral filter
     • better than the bilateral filter
     Applicable to traditional bilateral-filter applications: tone mapping, sparse image upsampling.
  72. Future Work.
     • Shot noise: shot noise varies with signal strength and is particularly prevalent in areas such as astronomy and medicine, so these fields could make use of a fast photon-shot-noise denoising filter
     • Video denoising: align blur kernels in the space-time volume with object movement
     • Light-field filtering or upsampling: align blur kernels with edges in the ray-space hyper-volume
  73. Q&A
