CVPR2010: Advanced ITinCVPR in a Nutshell: part 4: additional slides


  1. 1. (title slide)
  2. 2. Outline: probability density function (pdf) estimation using isocontours/isosurfaces; application to image registration; application to image filtering; circular/spherical density estimation in Euclidean space.
  3. 3. Existing density estimators: histograms, kernel density estimates, mixture models. Parameter selection (bin width / bandwidth / number of components) is done by sample-based methods. Bias/variance tradeoff: a large bandwidth gives high bias, a low bandwidth gives high variance. Key message: do not treat a signal as a mere set of samples.
  4. 4. Continuous image representation: a function I(x,y) obtained from the pixel values using some interpolant. Trace out isocontours of the intensity at several intensity values.
  5. 5. P(α < I < α + Δα) ∝ area of the brown region between the level curves at intensity α and the level curves at intensity α + Δα.
  6. 6. Assume a uniform density on (x,y). Then
        p_I(α) = (1/|Ω|) d/dα ∫_{I(x,y)≤α} dx dy = (1/|Ω|) ∫_{I(x,y)=α} du / √(I_x² + I_y²),
     a random-variable transformation from (x,y) to (I,u); integrate out the dummy variable u along the level set of intensity I to get the density. Every point in the image domain contributes to the density. Published in CVPR 2006, PAMI 2009.
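The formula above suggests a direct numerical recipe: trace the level set at each α on a linear interpolant and accumulate du/|∇I| along it. A minimal sketch, using scikit-image's marching-squares contour tracer as a stand-in for the interpolant (function and parameter names are illustrative, not from the paper):

```python
import numpy as np
from skimage.measure import find_contours  # marching squares on a linear interpolant

def isocontour_density(img, levels):
    """Approximate p_I(alpha) ~ (1/|Omega|) * integral over {I = alpha} of du/|grad I|.

    Segment lengths from marching squares stand in for du, and |grad I| is
    sampled at the segment midpoints.
    """
    gy, gx = np.gradient(img.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2) + 1e-12      # guard against flat regions
    area = img.shape[0] * img.shape[1]             # |Omega|
    density = []
    for alpha in levels:
        total = 0.0
        for contour in find_contours(img, alpha):  # list of (N, 2) arrays of (row, col)
            mids = 0.5 * (contour[:-1] + contour[1:])
            seg_len = np.linalg.norm(np.diff(contour, axis=0), axis=1)
            g = grad_mag[mids[:, 0].astype(int), mids[:, 1].astype(int)]
            total += np.sum(seg_len / g)
        density.append(total / area)
    return np.array(density)
```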
  7. 7. (figure)
  8. 8. P(α1 < I1 < α1 + Δα1, α2 < I2 < α2 + Δα2) ∝ area of the black region between the level curves at intensity α1 in I1 and at intensity α2 in I2.
        p_{I1,I2}(α1, α2) = (1/|Ω|) Σ_C 1 / |∇I1(x,y) ∇I2(x,y) sin(θ(x,y))|,
        C = {(x,y) | I1(x,y) = α1, I2(x,y) = α2}, θ(x,y) = angle between the gradients at (x,y).
     Relationships between geometric and probabilistic entities.
  9. 9. A similar density estimator was developed by Kadir and Brady (BMVC 2005) independently of us. Similar idea; several differences in implementation, motivation, derivation of results, and applications.
  10. 10. p_I(α) = (1/|Ω|) ∫_{I(x,y)=α} du / |∇I(x,y)|
          p_{I1,I2}(α1, α2) = (1/|Ω|) Σ_C 1 / |∇I1(x,y) ∇I2(x,y) sin(θ(x,y))|, C = {(x,y) | I1(x,y) = α1, I2(x,y) = α2}.
       Densities (derivatives of the cumulative) do not exist where image gradients are zero, or where the two image gradients run parallel. There, compute cumulative interval measures P(α < I < α + Δ) instead.
  11. 11. (figure)
  12. 12. (figure: standard histograms vs. the isocontour method at 32, 64, 128, 256, 512, and 1024 bins)
  13. 13. (figure)
  14. 14. Randomized/digital approximation to area calculation: a strict lower bound on the accuracy of the isocontour method, for a fixed interpolant; computationally more expensive than the isocontour method.
  15. 15. (figure: joint density estimate with 128 × 128 bins)
  16. 16. Choice of interpolant. Simplest one: a linear interpolant on each half-pixel (level curves are line segments). Low-order polynomial interpolants: high bias, low variance. High-order polynomial interpolants: low bias, high variance.
  17. 17. With a polynomial interpolant, the accuracy of the estimated density improves as the signal is sampled at finer resolution. Assumptions on the signal lead to a better interpolant: a bandlimited analog signal, Nyquist-sampled to a digital signal, is accurately reconstructed by the sinc interpolant (Whittaker-Shannon sampling theorem).
  18. 18. Outline: probability density function (pdf) estimation using isocontours; application to image registration; application to image filtering; circular/spherical density estimation in Euclidean space.
  19. 19. Registration: given two images of an object, find the geometric transformation that "best" aligns one with the other, w.r.t. some image similarity measure. Mutual information: a well-known image similarity measure, Viola and Wells (IJCV 1995) and Maes et al. (TMI 1997). Insensitive to illumination changes, hence useful in multimodality image registration.
  20. 20. MI(I1, I2) = H(I1) − H(I1 | I2) = H(I1) + H(I2) − H(I1, I2), where
          H(I1) = −Σ_{α1} p1(α1) log p1(α1),
          H(I2) = −Σ_{α2} p2(α2) log p2(α2),
          H(I1, I2) = −Σ_{α1} Σ_{α2} p12(α1, α2) log p12(α1, α2).
       Marginal probabilities p1(α1), p2(α2); joint probability p12(α1, α2); marginal entropy H(I1); joint entropy H(I1, I2); conditional entropy H(I1 | I2).
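Given any discretized joint pmf (however it was estimated), the entropy bookkeeping above is mechanical. A minimal sketch, not the isocontour estimator itself:

```python
import numpy as np

def mutual_information(p12):
    """MI(I1, I2) = H(I1) + H(I2) - H(I1, I2) from a joint pmf over intensity bins.

    p12: 2-D array of joint probabilities p12[a1, a2], summing to 1.
    Uses the natural log, so the result is in nats.
    """
    p1 = p12.sum(axis=1)          # marginal of I1
    p2 = p12.sum(axis=0)          # marginal of I2

    def entropy(p):
        p = p[p > 0]              # convention: 0 log 0 = 0
        return -np.sum(p * np.log(p))

    return entropy(p1) + entropy(p2) - entropy(p12.ravel())
```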
  21. 21. Hypothesis: if the alignment between the images is optimal, then the mutual information is maximum. p12(i, j), H(I1, I2), and MI(I1, I2) are all functions of the geometric transformation relating I1 and I2.
  22. 22. (figure: results at noise levels σ = 0.05, σ = 0.2, σ = 0.7)
  23. 23. (figure: σ = 0.05, with 32 bins and 128 bins; PVI = partial volume interpolation, Maes et al., TMI 1997)
  24. 24. (figure: PD slice; T2 slice; warped T2 slice; warped and noisy T2 slice). Brute-force search for the maximum of MI.
  25. 25. (figure: MI with standard histograms vs. MI with our method, σ = 0.7; parameters of the affine transformation: θ = 30, s = t = −0.3, φ = 0)
  26. 26. Registration errors at 32 bins (each cell: average, variance):
          Method                | Error in θ | Error in s | Error in t
          Histograms (bilinear) | 3.7, 18.1  | 0.7, 0     | 0.43, 0.08
          Isocontours           | 0, 0.06    | 0, 0       | 0, 0
          PVI                   | 1.9, 8.5   | 0.56, 0.08 | 0.49, 0.1
          Histograms (cubic)    | 0.3, 49.4  | 0.7, 0     | 0.2, 0
          2DPointProb           | 0.3, 0.22  | 0, 0       | 0, 0
  27. 27. Outline: probability density function (pdf) estimation using isocontours; application to image registration; application to image filtering; circular/spherical density estimation in Euclidean space.
  28. 28. Anisotropic neighborhood filters (kernel-density-based filters), grayscale images:
          Î(a, b) = Σ_{(x,y)∈N(a,b)} K(I(x,y) − I(a,b); σ) I(x,y) / Σ_{(x,y)∈N(a,b)} K(I(x,y) − I(a,b); σ),
       where K is a decreasing function (typically a Gaussian), the parameter σ controls the degree of anisotropy of the smoothing, and N(a,b) is a neighborhood around the central pixel (a, b).
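For reference, the discrete filter above takes only a few lines. A minimal sketch with a Gaussian K; the function name and parameter defaults are illustrative:

```python
import numpy as np

def neighborhood_filter(img, radius=3, sigma=0.1):
    """One pass of the grayscale anisotropic neighborhood filter: each pixel
    becomes a K-weighted average of its neighbors, with weights driven by
    intensity differences (img assumed float, scaled to [0, 1])."""
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for a in range(H):
        for b in range(W):
            y0, y1 = max(a - radius, 0), min(a + radius + 1, H)
            x0, x1 = max(b - radius, 0), min(b + radius + 1, W)
            patch = img[y0:y1, x0:x1].astype(float)
            # K(I(x,y) - I(a,b); sigma), Gaussian case
            w = np.exp(-((patch - img[a, b]) ** 2) / (2 * sigma ** 2))
            out[a, b] = np.sum(w * patch) / np.sum(w)
    return out
```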
  29. 29. Anisotropic neighborhood filters, problems: sensitivity to the parameter σ; sensitivity to the size of the neighborhood; no account taken of gradient information.
  30. 30. Anisotropic neighborhood filters, problems: they treat pixels as independent samples.
  31. 31. Continuous image representation: interpolate in between the pixel values, replacing the sum
          Î(a, b) = Σ_{(x,y)∈N(a,b)} K(I(x,y) − I(a,b); σ) I(x,y) / Σ_{(x,y)∈N(a,b)} K(I(x,y) − I(a,b); σ)
       by the integral
          Î(a, b) = ∫∫_{N(a,b)} I(x,y) K(I(x,y) − I(a,b); σ) dx dy / ∫∫_{N(a,b)} K(I(x,y) − I(a,b); σ) dx dy.
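One crude way to approximate the integral form is to evaluate the interpolant on a grid much finer than the pixel grid. The isocontour areas on the next slides compute this limit exactly; the sketch below only conveys the idea (assuming SciPy; `up` is an illustrative oversampling factor):

```python
import numpy as np
from scipy.ndimage import zoom  # order=1 gives a bilinear interpolant

def continuous_filter_pixel(img, a, b, radius=3, sigma=0.1, up=8):
    """Approximate the integral filter at pixel (a, b): densely sample the
    bilinear interpolant of the neighborhood, then take the K-weighted mean."""
    H, W = img.shape
    y0, y1 = max(a - radius, 0), min(a + radius + 1, H)
    x0, x1 = max(b - radius, 0), min(b + radius + 1, W)
    # dense samples of I(x, y) over N(a, b); each sample stands in for dx dy
    fine = zoom(img[y0:y1, x0:x1].astype(float), up, order=1)
    w = np.exp(-((fine - img[a, b]) ** 2) / (2 * sigma ** 2))  # Gaussian K
    return np.sum(w * fine) / np.sum(w)
```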
  32. 32. Continuous image representation: the area between the isocontours at intensities α and α + Δ, divided by the area of the neighborhood, equals Pr(α < intensity < α + Δ | N(a,b)).
  33. 33. In the limit, the integral filter becomes a sum over intensity levels:
          Î(a, b) = lim_{Δ→0} Σ_α Pr(α < I < α + Δ | N(a,b)) · α · K(α − I(a,b); σ) / lim_{Δ→0} Σ_α Pr(α < I < α + Δ | N(a,b)) · K(α − I(a,b); σ),
       where each Pr(·) is an area between isocontours divided by the neighborhood area. The areas between isocontours contribute the weights for averaging. Published in EMMCVPR 2009.
  34. 34. Extension to RGB images:
          (R̂(a,b), Ĝ(a,b), B̂(a,b)) = Σ_α Pr(α < (R,G,B) < α + Δ) α K(α − (R,G,B); σ) / Σ_α Pr(α < (R,G,B) < α + Δ) K(α − (R,G,B); σ),
       where α is a vector of three intensity levels. The joint probability of (R, G, B) is the area of overlap of isocontour pairs from the R, G, and B images.
  35. 35. Mean-shift framework. • A clustering method developed by Fukunaga & Hostetler (IEEE Trans. Inf. Theory, 1975). • Applied to image filtering by Comaniciu and Meer (PAMI 2003). • Involves an independent update of each pixel by maximization of a local estimate of the probability density of the joint spatial and intensity parameters.
  36. 36. Mean-shift framework: one step of the mean-shift update around (a, b, c), where c = I(a,b).
       1. (â, b̂, ĉ) := Σ_{(x,y)∈N(a,b)} (x, y, I(x,y)) w(x,y) / Σ_{(x,y)∈N(a,b)} w(x,y), where (â, b̂) is the updated center of the neighborhood, ĉ is the updated intensity value, and
          w(x,y) := exp(−(x − a)²/σs² − (y − b)²/σs² − (I(x,y) − c)²/σr²).
       2. (a, b, c) ⇐ (â, b̂, ĉ).
       3. Repeat steps (1) and (2) till (a, b, c) stops changing.
       4. Set I(a, b) ⇐ ĉ.
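A minimal sketch of this per-pixel update in the joint (x, y, intensity) space; the function name and parameter defaults are illustrative:

```python
import numpy as np

def mean_shift_pixel(img, a, b, radius=3, sigma_s=3.0, sigma_r=0.1,
                     max_iter=50, tol=1e-4):
    """Run steps 1-3 of the slide for one pixel (a, b) and return the converged
    intensity c; step 4 assigns it back into the output image."""
    H, W = img.shape
    a_c, b_c, c = float(a), float(b), float(img[a, b])
    for _ in range(max_iter):
        # neighborhood N around the current (moving) center
        y0, y1 = max(int(round(a_c)) - radius, 0), min(int(round(a_c)) + radius + 1, H)
        x0, x1 = max(int(round(b_c)) - radius, 0), min(int(round(b_c)) + radius + 1, W)
        ys, xs = np.mgrid[y0:y1, x0:x1]
        vals = img[y0:y1, x0:x1].astype(float)
        # w(x,y) = exp(-(x-a)^2/sigma_s^2 - (y-b)^2/sigma_s^2 - (I-c)^2/sigma_r^2)
        w = np.exp(-((ys - a_c) ** 2 + (xs - b_c) ** 2) / sigma_s ** 2
                   - (vals - c) ** 2 / sigma_r ** 2)
        a_n = (ys * w).sum() / w.sum()
        b_n = (xs * w).sum() / w.sum()
        c_n = (vals * w).sum() / w.sum()
        if abs(a_n - a_c) + abs(b_n - b_c) + abs(c_n - c) < tol:
            return c_n
        a_c, b_c, c = a_n, b_n, c_n
    return c
```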
  37. 37. Our method in the mean-shift setting. (figure: the three channels I(x,y), X(x,y) = x, Y(x,y) = y)
  38. 38. Our method in the mean-shift setting. Facets of the tessellation induced by the isocontours and the pixel grid: (Xk, Yk) = centroid of facet k; Ik = intensity (from the interpolated image) at (Xk, Yk); Ak = area of facet k.
          (X̂(a,b), Ŷ(a,b), Î(a,b)) = Σ_{k∈N(a,b)} (Xk, Yk, Ik) Ak K(‖(Xk, Yk, Ik) − (X(a,b), Y(a,b), I(a,b))‖; σ) / Σ_{k∈N(a,b)} Ak K(‖(Xk, Yk, Ik) − (X(a,b), Y(a,b), I(a,b))‖; σ).
  39. 39. Experimental setup, grayscale images. • Piecewise-linear interpolation used for our method in all experiments. • For our method, K = pillbox kernel: K(z; σ) = 1 if |z| ≤ σ, K(z; σ) = 0 if |z| > σ. • For discrete mean shift, K = Gaussian. • Parameters used: neighborhood radius ρ = 3, σ = 3. • Noise model: Gaussian noise of variance 0.003 (on a scale of 0 to 1).
  40. 40. (figure: original image; noisy image; denoised with isocontour mean shift; denoised with Gaussian-kernel mean shift)
  41. 41. (figure: original image; noisy image; denoised with isocontour mean shift; denoised with standard mean shift)
  42. 42. Experiments on color images. • Pillbox kernels for our method. • Gaussian kernels for discrete mean shift. • Parameters used: neighborhood radius ρ = 6, σ = 6. • Noise model: independent Gaussian noise on each channel with variance 0.003 (on a scale of 0 to 1).
  43. 43. Experiments on color images. • Independent piecewise-linear interpolation on the R, G, B channels in our method. • Smoothing of the R, G, B values done by coupled updates using joint probabilities.
  44. 44. (figure: original image; noisy image; denoised with isocontour mean shift; denoised with Gaussian-kernel mean shift)
  45. 45. (figure: original image; noisy image; denoised with isocontour mean shift; denoised with Gaussian-kernel mean shift)
  46. 46. Observations. • Discrete kernel mean shift performs poorly with small neighborhoods and small values of σ. • Why? The small-sample-size problem for kernel density estimation. • The isocontour-based method performs well even in this scenario (the number of isocontours/facets far exceeds the number of pixels). • Large σ or large neighborhoods are not always necessary for smoothing.
  47. 47. Observations. • Superior behavior observed when comparing isocontour-based neighborhood filters with standard neighborhood filters for the same parameter set and the same number of iterations.
  48. 48. Outline: probability density function (pdf) estimation using isocontours; application to image registration; application to image filtering; circular/spherical density estimation in Euclidean space.
  49. 49. Examples of unit-vector data:
       1. Chromaticity vectors of color values: (r, g, b) = (R, G, B) / √(R² + G² + B²).
       2. Hue (from the HSI color scheme) obtained from the RGB values: θ = arctan(√3 (G − B) / (2R − G − B)).
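Both unit-vector representations are one-liners. A small sketch (arctan2 is used here for numerical robustness; otherwise the formulas are as on the slide):

```python
import numpy as np

def chromaticity(rgb):
    """Unit chromaticity vector (r, g, b) = (R, G, B) / sqrt(R^2 + G^2 + B^2)."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb / np.linalg.norm(rgb)

def hue(rgb):
    """Hue angle theta = arctan(sqrt(3) (G - B) / (2R - G - B))."""
    R, G, B = map(float, rgb)
    return np.arctan2(np.sqrt(3.0) * (G - B), 2.0 * R - G - B)
```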
  50. 50. Convert RGB values to unit vectors: vi = (ri, gi, bi) = (Ri, Gi, Bi) / √(Ri² + Gi² + Bi²). Estimate the density of the unit vectors:
          p(v) = (1/N) Σ_{i=1}^N K(v; κ, vi), with K(v; κ, u) = Cp(κ) e^{κ vᵀu},
       the von Mises-Fisher (voMF) kernel, where κ is the concentration parameter and Cp(κ) the normalization constant. Other popular kernels: Watson, cosine. voMF mixture models: Banerjee et al. (JMLR 2005).
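A minimal sketch of the voMF kernel density estimate on the sphere S² (p = 3), where the normalization constant is C₃(κ) = κ / (4π sinh κ); note sinh overflows for very large κ, so this is for moderate concentrations:

```python
import numpy as np

def vmf_kde(v, samples, kappa):
    """p(v) = (1/N) sum_i C_3(kappa) exp(kappa * v . v_i) on the unit sphere.

    v: query unit vector, shape (3,); samples: unit vectors, shape (N, 3).
    """
    c3 = kappa / (4.0 * np.pi * np.sinh(kappa))  # normalization for p = 3
    dots = samples @ v                            # v_i . v for all i
    return c3 * np.mean(np.exp(kappa * dots))
```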
  51. 51. Estimate the density of RGB using KDE/mixture models, e.g.
          p(R, G, B) = (1/N) Σ_{i=1}^N (1/Z) exp(−((R − Ri)² + (G − Gi)² + (B − Bi)²) / σ²).
       Density of (magnitude, chromaticity) via random-variable transformation: p(m, v) = m² p(R, G, B), with m = √(R² + G² + B²). Density of chromaticity: integrate out the magnitude, p(v) = ∫₀^∞ m² p(R, G, B) dm, or condition on m = 1; either way the Gaussian kernels turn into voMF-like kernels exp(κi vᵀvi), with wi = (Ri, Gi, Bi), mi = ‖wi‖, vi = wi/mi, and κi = 2mi/σ². Projected normal estimator: Watson, "Statistics on spheres", 1983; Small, "The statistical theory of shape", 1995; Bishop, "Neural networks for pattern recognition", 2006. Variable-bandwidth voMF KDE. What's new? The notion that all estimation can proceed in Euclidean space.
  52. 52. Estimate the density of RGB using KDE/mixture models. Use a random-variable transformation to get the density of HSI (hue, saturation, intensity). Integrate out S and I to get the density of hue.
  53. 53. Consistency between densities of Euclidean and unit-vector data (in terms of random-variable transformation/conditioning). Potential to use the large body of literature available for statistics of Euclidean data (examples: the Fast Gauss Transform, Greengard et al., SIAM Sci. Computing 1991; Duraiswami et al., IJCV 2003). Model selection can be done in Euclidean space.
