2. Drawback of the Bilateral Filter
– Piecewise-flat regions, false edges, and blooming at high-contrast regions
(figure: bilateral vs. the proposed method)
3. Drawback of the Bilateral Filter
– Spurious detail on the road at color edges
(figure: bilateral vs. the proposed method)
4. Spatially Varying Gaussian Filters
(figure: a noisy signal y(x), the bilateral-filtered result, and the spatially varying Gaussian-filtered result)
5. Spatially Varying Gaussian Filters
(figure: a noisy signal y(x), the bilateral-filtered result, and the spatially varying Gaussian-filtered result; a 2D Gaussian kernel vs. a 3D Gaussian kernel, with the kernels oriented along the gradient)
6. Outline
Introduction
Review of Related Work
Acceleration
Kernel sampling speeds up the spatially varying Gaussian kernel
Applications
Limitations and Future Work
Conclusion
7. Introduction
Bilateral filter [Tomasi and Manduchi 98]
Blurs pixels spatially while preserving sharp edges
Joint bilateral filter for upsampling [Eisemann & Durand 04]
Bilateral filter ~ non-local means [Buades et al. 05]
High-dimensional Gaussian filter [Paris and Durand 06]
Merges the {spatial (x, y), range (I or r, g, b)} spaces
Importance sampling speeds up the high-dimensional Gaussian filter
Bilateral grid [Paris & Durand 06; Chen et al. 07]
Gaussian KD-tree [Adams et al. 09]
Permutohedral Lattice [Adams et al. 10]
Trilateral filter [Choudhury & Tumblin 03]
Kernel oriented along the image gradient: a spatially varying Gaussian kernel
No speed-up method exists for the spatially varying Gaussian kernel
12. Distance of x
Euclidean distance: $\|x\| = \sqrt{x'x} = \sqrt{\sum_{i=1}^{n} x_i^2}$
Mahalanobis distance: $D_M(x) = \sqrt{x'\,\Sigma^{-1}\,x}$, with
$\Sigma = \begin{pmatrix} \sigma_1^2 & \sigma_{12} & \cdots & \sigma_{1n} \\ \sigma_{21} & \sigma_2^2 & \cdots & \sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_n^2 \end{pmatrix}$
13. Mahalanobis Distance [P. C. Mahalanobis 1936]
(figure: point p = (x1, x2) in the X1-X2 space)
p = (x1, x2) in X1-X2 space. What is the distance of p in Y1-Y2 space with an elliptical kernel?
$D_M(x) = \sqrt{x'\,\Sigma^{-1}\,x} = \sqrt{\begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} \sigma_1^2 & \sigma_{12} \\ \sigma_{12} & \sigma_2^2 \end{pmatrix}^{-1} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}}$
14. Mahalanobis Distance [P. C. Mahalanobis 1936]
(figure: the same point p under two kernels, M1 and M2, with DM1(p) = DM2(p))
The distance is determined by the standard deviation of the kernel.
15. Mahalanobis Distance [P. C. Mahalanobis 1936]
(figure: kernel axes y1 and y2)
The distance is determined by the standard deviation of the kernel.
Project p onto the axes of the kernel, then divide by the standard deviations.
16. Mahalanobis Distance [P. C. Mahalanobis 1936]
(figure: p shown in both the x1-x2 axes and the kernel-aligned y1-y2 axes)
Project p onto the axes of the kernel, then divide by the standard deviations:
$D_M(y) = \sqrt{y'\,\Lambda^{-1}\,y} = \sqrt{\begin{pmatrix} y_1 & y_2 \end{pmatrix} \begin{pmatrix} 1/\sigma_{y_1}^2 & 0 \\ 0 & 1/\sigma_{y_2}^2 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \end{pmatrix}} = \sqrt{\frac{y_1^2}{\sigma_{y_1}^2} + \frac{y_2^2}{\sigma_{y_2}^2}}$
$= D_M(x) = \sqrt{x'\,\Sigma^{-1}\,x}$, with $\Sigma = \begin{pmatrix} \sigma_{x_1}^2 & \sigma_{x_1 x_2} \\ \sigma_{x_1 x_2} & \sigma_{x_2}^2 \end{pmatrix}$
17.
$D_M(x) = \sqrt{x'\,\Sigma^{-1}\,x}$, with $\Sigma = \begin{pmatrix} \sigma_{x_1}^2 & \sigma_{x_1 x_2} \\ \sigma_{x_1 x_2} & \sigma_{x_2}^2 \end{pmatrix} = E \Lambda E'$,
where E is the matrix of eigenvectors of Σ and Λ is the matrix of its eigenvalues:
$E = \begin{pmatrix} e_1 & e_2 \end{pmatrix}, \quad \Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}$
$E \Lambda E' = \begin{pmatrix} e_1 & e_2 \end{pmatrix} \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \begin{pmatrix} e_1 & e_2 \end{pmatrix}' = \begin{pmatrix} \lambda_1 e_1 & \lambda_2 e_2 \end{pmatrix} \begin{pmatrix} e_1 & e_2 \end{pmatrix}'$
That is,
$D_M(x) = \sqrt{x'\,\Sigma^{-1}\,x} = \sqrt{x'\,E\,\Lambda^{-1}\,E'\,x} = \sqrt{y'\,\Lambda^{-1}\,y}$, with $y = E'x$
Note: the PCA of the kernel gives the axes of Y.
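The identity above (Mahalanobis distance computed by projecting onto the kernel's eigen-axes and dividing by the eigenvalues) can be verified numerically. A minimal sketch; the function names are my own, not from the talk:

```python
import numpy as np

def mahalanobis_sq(x, cov):
    """Squared Mahalanobis distance x' Sigma^{-1} x (zero-mean kernel)."""
    return float(x @ np.linalg.inv(cov) @ x)

def mahalanobis_sq_pca(x, cov):
    """Same distance via the eigendecomposition Sigma = E Lambda E':
    project x onto the kernel axes (y = E'x) and divide each squared
    coordinate by the corresponding eigenvalue (variance)."""
    lam, E = np.linalg.eigh(cov)  # eigenvalues, orthonormal eigenvectors
    y = E.T @ x                   # coordinates along the kernel's principal axes
    return float(np.sum(y ** 2 / lam))
```

Both routes give the same number, which is the point of the eigendecomposition slide.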
20. Bilateral filters
• Remove noise and keep edges
• Kernel is not fixed
• Can apply a fixed kernel (convolution)
• Large memory cost
• Can apply a fixed kernel (convolution)
• Down-sample, convolution, up-sample
• Blur on importance samples (leaf nodes)
• Blur on importance samples (lattice)
• Spatially varying
• Anisotropic Gaussian kernel
21. Bilateral Filter
Kernel weighting depends on the position distance and on the color distance: W = WS · WR
$I'(p) = \frac{1}{K} \sum_{q} I(q)\, N_{\sigma_s}(\|p - q\|)\, N_{\sigma_r}(|I(p) - I(q)|)$
$K = \sum_{q} N_{\sigma_s}(\|p - q\|)\, N_{\sigma_r}(|I(p) - I(q)|)$
WS is fixed; WR depends on |I(p) − I(q)|
W = WS · WR is not fixed!
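As a concrete illustration of the formula above, a brute-force 1D bilateral filter can be written in a few lines. This is a sketch; the parameter defaults are my own choices:

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Brute-force 1D bilateral filter: each output sample is a
    normalized sum over neighbors q, weighted by the fixed spatial
    Gaussian W_S and the signal-dependent range Gaussian W_R."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    n = len(signal)
    for p in range(n):
        lo, hi = max(0, p - radius), min(n, p + radius + 1)
        q = np.arange(lo, hi)
        w_s = np.exp(-((q - p) ** 2) / (2 * sigma_s ** 2))                  # W_S: fixed
        w_r = np.exp(-((signal[q] - signal[p]) ** 2) / (2 * sigma_r ** 2))  # W_R: depends on |I(p)-I(q)|
        w = w_s * w_r                                                       # W = W_S * W_R, not fixed
        out[p] = np.sum(w * signal[q]) / np.sum(w)                          # 1/K normalization
    return out
```

On a step edge the range term suppresses neighbors across the edge, which is exactly the "remove noise and keep edges" behavior.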
22. Bilateral filters
• Remove noise and keep edges
• Kernel is not fixed
• Can apply a fixed kernel (convolution)
• Large memory cost
• Can apply a fixed kernel (convolution)
• Down-sample, convolution, up-sample
• Blur on importance samples (leaf nodes) (importance sampling on grid, kd-tree, lattice)
• Blur on importance samples (lattice)
• Spatially varying
• Anisotropic Gaussian kernel
23. Gaussian Bilateral is a Kind of High-Dimensional Gaussian Filter
(figure: pixels p and q in the space and range domains)
$\frac{1}{\sqrt{2\pi}\,\sigma_s} \exp\!\left(-\frac{\|p-q\|^2}{2\sigma_s^2}\right) \cdot \frac{1}{\sqrt{2\pi}\,\sigma_r} \exp\!\left(-\frac{(I_p-I_q)^2}{2\sigma_r^2}\right)$
"A Fast Approximation of the Bilateral Filter using a Signal Processing Approach"
24. Gaussian Bilateral is a Kind of High-Dimensional Gaussian Filter
$\frac{1}{\sqrt{2\pi}\,\sigma_s} \exp\!\left(-\frac{\|p-q\|^2}{2\sigma_s^2}\right) \cdot \frac{1}{\sqrt{2\pi}\,\sigma_r} \exp\!\left(-\frac{(I_p-I_q)^2}{2\sigma_r^2}\right)$
$= \frac{1}{2\pi\,\sigma_s\sigma_r} \exp\!\left(-\frac{1}{2} \begin{pmatrix} (p-q)' & (I_p-I_q)' \end{pmatrix} \begin{pmatrix} 1/\sigma_s^2 & 0 \\ 0 & 1/\sigma_r^2 \end{pmatrix} \begin{pmatrix} p-q \\ I_p-I_q \end{pmatrix}\right)$
With $P = [\,p\ \ I_p\,]'$ and $Q = [\,q\ \ I_q\,]'$:
$= \frac{1}{2\pi\,\sigma_s\sigma_r} \exp\!\left(-\frac{1}{2}\,(P-Q)'\,\Sigma^{-1}\,(P-Q)\right)$
One Gaussian over the points p and q in the joint space × range domain.
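The equivalence on this slide, a spatial Gaussian times a range Gaussian equaling one joint Gaussian with diagonal covariance, can be checked numerically. The function names here are my own:

```python
import numpy as np

def separate_weight(p, q, Ip, Iq, sigma_s, sigma_r):
    """Spatial Gaussian times range Gaussian (unnormalized)."""
    w_s = np.exp(-np.sum((p - q) ** 2) / (2 * sigma_s ** 2))
    w_r = np.exp(-((Ip - Iq) ** 2) / (2 * sigma_r ** 2))
    return float(w_s * w_r)

def joint_weight(p, q, Ip, Iq, sigma_s, sigma_r):
    """One Gaussian over P = [p, Ip]', Q = [q, Iq]' with the
    diagonal inverse covariance diag(1/sigma_s^2, ..., 1/sigma_r^2)."""
    P, Q = np.append(p, Ip), np.append(q, Iq)
    inv_cov = np.diag([1 / sigma_s ** 2] * len(p) + [1 / sigma_r ** 2])
    d = P - Q
    return float(np.exp(-0.5 * d @ inv_cov @ d))
```

The two weights agree to machine precision, which is what lets the bilateral filter be treated as a single high-dimensional Gaussian filter.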
31. Bilateral filters
• Remove noise and keep edges
• Kernel is not fixed
• Can apply a fixed kernel (convolution)
• Large memory cost
• Can apply a fixed kernel (convolution)
• Down-sample, convolution, up-sample
• Blur on importance samples (leaf nodes) (importance sampling on grid, kd-tree, lattice)
• Blur on importance samples (lattice)
• Spatially varying
• Anisotropic Gaussian kernel
32. Importance Sampling
The Bilateral Grid is a kind of importance sampling
High-dimensional kernel + sampling on the grid
DOWNSAMPLE → Gaussian Blur → UPSAMPLE
Gaussian KD-tree [Adams et al. 09]
High-dimensional kernel + sampling on leaf nodes
Splatting (downsample points to leaf nodes)
Blurring (Gaussian blurring on leaf nodes)
Slicing (upsample from leaf nodes to points)
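To make the splat/blur/slice pipeline concrete, here is a toy 1D bilateral-grid filter in the spirit of the grid method above. The grid resolutions and all names are illustrative assumptions, not code from the talk:

```python
import numpy as np

def bilateral_grid_1d(signal, sigma_s=4.0, sigma_r=0.1, ds=4, dr=0.05):
    """Toy 1D bilateral grid: splat samples into a coarse
    (position, intensity) grid, Gaussian-blur the grid, then slice
    each sample back out. Homogeneous weights track the mass per cell."""
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    gx, gy = n // ds + 1, int(round(1.0 / dr)) + 2
    data = np.zeros((gx, gy))
    weight = np.zeros((gx, gy))

    def cell(i, v):  # grid coordinates of sample i with value v (assumed in [0, 1])
        return i // ds, min(max(int(round(v / dr)), 0), gy - 1)

    # Splatting: downsample points onto grid nodes.
    for i, v in enumerate(signal):
        xi, yi = cell(i, v)
        data[xi, yi] += v
        weight[xi, yi] += 1.0

    # Blurring: small separable Gaussian over the grid.
    def blur(a, sigma, axis):
        r = int(3 * sigma) + 1
        k = np.exp(-np.arange(-r, r + 1) ** 2 / (2 * sigma ** 2))
        k /= k.sum()
        return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), axis, a)

    for axis, sig in ((0, sigma_s / ds), (1, sigma_r / dr)):
        data, weight = blur(data, sig, axis), blur(weight, sig, axis)

    # Slicing: upsample back to the original sample positions.
    out = np.empty(n)
    for i, v in enumerate(signal):
        xi, yi = cell(i, v)
        out[i] = data[xi, yi] / max(weight[xi, yi], 1e-12)
    return out
```

Because blurring is applied identically to data and weights, regions whose cells are far apart in the intensity axis do not mix, so edges survive.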
33. Importance Sampling in the Gaussian KD-tree
High-dimensional Gaussian filter: sample the σs neighborhood
Importance sampling with the Gaussian KD-tree: evaluate samples as near as possible
$V'_i = \sum_{j} V_j\, s_j \exp\!\left(-\tfrac{1}{2}\,(p_j - p_i)^T D\,(p_j - p_i)\right)$, where the $s_j$ are sample weights and the $p_j$ are leaf nodes
35. Why not importance sampling on the Gaussian filter?
(figure: samples q1, q2 around p for the Gaussian filter vs. the high-dimensional Gaussian filter)
36. Bilateral filters
• Remove noise and keep edges
• Kernel is not fixed
• Can apply a fixed kernel (convolution)
• Large memory cost
• Can apply a fixed kernel (convolution)
• Down-sample, convolution, up-sample
• Blur on importance samples (leaf nodes) (importance sampling on grid, kd-tree, lattice)
• Blur on importance samples (lattice)
• Spatially varying
• Anisotropic Gaussian kernel
37. The Permutohedral Lattice [Adams, Baek et al. EG 2010]
(figure: a signal filtered with the Gaussian KD-tree vs. the Permutohedral Lattice)
Permutohedral Lattice for the high-dimensional Gaussian filter
38. Bilateral filters
• Remove noise and keep edges
• Kernel is not fixed
• Can apply a fixed kernel (convolution)
• Large memory cost
• Can apply a fixed kernel (convolution)
• Down-sample, convolution, up-sample
• Blur on importance samples (leaf nodes)
• Blur on importance samples (lattice)
• Spatially varying
• Anisotropic Gaussian kernel
39. The High-Dimensional Gaussian Kernel can be spatially varying along the gradient
40. Why do we need a spatially varying kernel?
(figure: filtering alternatives and the smoothed result)
47. Why not use an isotropic kernel? (radial, ball, …)
Image resolution ≠ color-range resolution
We usually apply a small image kernel: 3x3, 5x5, …
But what is an appropriate size for the color-range kernel?
It depends on the color distribution and the color space
Special images, e.g. HDR
48. The High-Dimensional Gaussian Kernel can be spatially varying along the gradient
49. Trilateral Filter [Choudhury & Tumblin 03]
G1 = ∂I/∂x, G2 = ∂I/∂y
Δx = xj – xi, Δy = yj – yi
Generalized as
Pi : (xi, yi, I(xi, yi)) → Vi : (I(xi, yi), xi, yi, 1)
54. Dimensionality Elevation
$M : \mathbb{R}^2 \to \mathbb{R}^4$
$M : (x, y) \mapsto \left( x/\sqrt{2},\; y/\sqrt{2},\; (x+y)/\sqrt{8},\; (x-y)/\sqrt{8} \right)$
$M : \mathbb{R}^3 \to \mathbb{R}^{11}$
$M : (x, y, z) \mapsto \left( \alpha_1 x,\; \alpha_2 y,\; \alpha_3 z,\; \alpha_4(x+y),\; \alpha_5(x-y),\; \alpha_6(y+z),\; \alpha_7(y-z),\; \alpha_8(x+y+z),\; \alpha_9(x+y-z),\; \alpha_{10}(x-y+z),\; \alpha_{11}(x-y-z) \right)$
55. Kernel Sampling & Segmentation
Kernel sampling
Let D = {D1, D2, … }
Assumptions
The kernel is locally constant
While the space of possible kernels is very large (D has O(dp²) degrees of freedom), D is restricted to dr degrees of freedom
Kernel segmentation
Clustering is not efficient
56. Kernel Sampling & Segmentation
Segment
Regularly and sparsely sample the Gaussian kernels {Dl}
Segmentation {Sl}
For each Dl belonging to D, define the segment Sl as the set of points {Pi} satisfying:
Pi is an element of Sl only if blurring Pi with Dl is necessary for interpolating Dl
Each segment Sl is filtered separately
The kernel is rotated or sheared so that Dl is diagonal
(figure: sparsely sampled kernels D = {D1, D2, … Dn} and segments S1, S2, S3, S4)
62. Review of Accelerating Methods for the Spatially Varying Gaussian Filter
Importance sampling
Blur the Gaussian KD-tree leaf nodes
#Samples proportional to the ratio of the integral of the kernel
Dimensionality elevation
Elevate the dimension and apply the standard Gaussian KD-tree
$M : (x, y) \mapsto \left( x/\sqrt{2},\; y/\sqrt{2},\; (x+y)/\sqrt{8},\; (x-y)/\sqrt{8} \right)$
$M : \mathbb{R}^3 \to \mathbb{R}^{11}$
Kernel sampling
Sample kernels at the Permutohedral Lattice nodes
Blur the Permutohedral Lattice nodes
Comparison of the proposed methods (complexities: αd, αd, d³)
64. Bilateral Tone Mapping
Decompose the image into {B, D}
B: base layer, for the HDR content
D: detail layer, for the LDR content (local texture variations from the base)
Tone mapping
Scale down B, then add D
Comparison of ways to obtain B:
Bilateral tone mapping: quick, but with artifacts
Kernel sampling: quick, and approximates the Trilateral result
Trilateral: slow
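The decomposition above can be sketched in a few lines, following the classic bilateral tone-mapping recipe. Here `base_filter` stands in for any of the edge-preserving filters compared on the slide, and `target_contrast` is a name of my own:

```python
import numpy as np

def tone_map(log_lum, base_filter, target_contrast=2.0):
    """Decompose log luminance into a base layer B (large-scale HDR
    variation, from an edge-preserving filter) and a detail layer
    D = log_lum - B; scale down only B, then add D back."""
    B = base_filter(log_lum)
    D = log_lum - B
    scale = target_contrast / max(B.max() - B.min(), 1e-12)  # scale down B
    return scale * B + D  # tone-mapped log luminance
```

With the bilateral filter as `base_filter` this is bilateral tone mapping; kernel sampling or the trilateral filter can be dropped in the same way.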
67. Joint Bilateral Upsampling
Use a bilateral kernel to up-sample the results of image operations performed at low resolution [Kopf et al. 2007]
(figure: spatial and range components)
68. Sparse Range Image Upsampling
Range image (depth map)
Encodes scene geometry as a per-pixel distance map
Useful for autonomous vehicles, background segmentation, …
Joint bilateral filter
Up-sampling
Similar colors have similar depths
(figure: color image, ground-truth depth, bilateral-upsampled depth)
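A toy 1D joint bilateral upsampler in the spirit of [Kopf et al. 2007]; the signature and defaults are illustrative assumptions, not the paper's code:

```python
import numpy as np

def joint_bilateral_upsample_1d(low, guide, factor, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-resolution solution (e.g. sparse depth) to the
    resolution of a high-resolution guide signal: spatial weights are
    evaluated in low-res coordinates, range weights on the guide, so
    values only propagate across similar guide colors."""
    low = np.asarray(low, dtype=float)
    guide = np.asarray(guide, dtype=float)
    n = len(guide)
    q = np.arange(len(low))               # low-res sample positions
    q_hi = np.minimum(q * factor, n - 1)  # their high-res coordinates
    out = np.empty(n)
    for p in range(n):
        pl = p / factor                   # p in low-res coordinates
        w_s = np.exp(-((q - pl) ** 2) / (2 * sigma_s ** 2))
        w_r = np.exp(-((guide[q_hi] - guide[p]) ** 2) / (2 * sigma_r ** 2))
        w = w_s * w_r
        out[p] = np.sum(w * low) / np.sum(w)
    return out
```

Because the range weight comes from the guide, the upsampled depth edge snaps to the color edge, which is the "similar color has similar depth" assumption on the slide.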
71. Limitations
Time complexity of kernel sampling
Polynomial in dp
Linear in the dataset size
#SampledKernels affects the resulting quality
Too few samples cause kernel sampling to degenerate into a spatially invariant Gaussian filter
Too many samples create segments with too few points and make the dilation less effective
72. Conclusion
A flexible scheme for accelerating spatially varying high-dimensional Gaussian filters
Segmenting & tiling image data
Comparable results to the Trilateral filter
Faster than the Trilateral filter
Better than the Bilateral filter
Applicable to traditional bilateral filter applications
Tone mapping, sparse image upsampling
73. Future Work
Shot noise
Shot noise varies with signal strength and is particularly prevalent in areas such as astronomy and medicine, so these areas could make use of a fast photon-shot denoising filter
Video denoising
Align blur kernels in the space-time volume with object movement
Light field filtering or upsampling
Align blur kernels with edges in the ray-space hyper-volume