2.3.3 Line and curve detection

Lines and curves are important features in computer
vision because they define the contours of objects in
the image.

Taking the result of edge detection as input, we try to link up the edge pixels and detect the presence of lines and curves.
Line detection - Hough transform (HT)

A line y = mx + n can be represented by a parameter pair (m, n), which is one point in the parameter space.

You can rewrite the line equation as n = -mx + y. For a fixed point p = [x, y]T, letting m and n vary traces out a line in the parameter space, representing all possible lines passing through p.
A line containing N edge points p1, …, pN is identified
in the parameter space by the intersection of N lines
associated with p1, …, pN.

[Figure: edge points p1, p2 in the image plane map to lines in the (m, n) parameter space; the lines intersect at (m', n'), the parameters of the line through both points.]
As m and n vary, the corresponding entry in the parameter space is incremented by 1. Ideally, every entry covered by the lines in the parameter space will have a count of 1, except the entry (m', n'), which has a count of N.

Therefore, the HT is a voting algorithm: detecting a line amounts to searching for a peak in the parameter space.


There is a problem with the parameter space formed by (m, n): the slope m is unbounded (a vertical line has infinite slope), so the space cannot be quantized into a finite accumulator array.
[Figure: discrete image I[i, j] with row index i and column index j, alongside the quantized (ρ, θ) parameter space with ρ ∈ [0, √(M² + N²)] and θ ∈ [0°, 360°).]


In implementation, we usually adopt the polar
representation ρ = j cosθ - i sinθ.
Line detection algorithm:
• input: edge detection result (M x N binary image)
• quantize ρ and θ to create the parameter space, ρ ∈ [0, √(M² + N²)], θ ∈ [0, 2π]
• initialize all entries of the parameter space to zero
• for each edge point [i, j], increment by 1 each entry that satisfies the polar representation, e.g. for each quantized value of θ, find the closest quantized value of ρ
• find the local maxima (ρ, θ), each with a count greater than a user-defined threshold τ
The Hough transform can be implemented using the MATLAB function hough.
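A minimal sketch of the whole detector, assuming the Image Processing Toolbox (coins.png is one of MATLAB's demo images; the peak count 5 and the threshold fraction 0.5 are illustrative choices):

I = imread('coins.png');                  % grayscale demo image
BW = edge(I, 'canny');                    % binary edge map, the HT input
[H, theta, rho] = hough(BW);              % vote accumulation in (rho, theta)
P = houghpeaks(H, 5, 'Threshold', 0.5*max(H(:)));   % peaks above threshold
lines = houghlines(BW, theta, rho, P);    % back-project peaks to segments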


Advantages:
• It is possible to detect a line even when some of its points are missing.
• False edge points are unlikely to contribute to the
  same set of parameters and generate a false peak in
  the parameter space.
Disadvantages:
• The HT can be generalized to detect a curve y = f(x, a), where a = [a1, …, aP]T; the parameter space is then P-dimensional. However, search time increases rapidly with the number of parameters.
• Non-target shapes can produce spurious peaks in
  parameter space, e.g. low-curvature curves can be
  detected as lines.
Another approach to detecting a line is to fit the edge pixels to a line model – the model fitting approach.

For a generic line ax + by + c = 0, find the parameter vector [a, b, c]T which results in a line going as near as possible to each edge point. A solution is reached when the total distance between the line and the edge points is minimum – a least squares problem.
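A minimal sketch of a total least-squares fit, assuming the edge-point coordinates are given in column vectors x and y (the variable names are illustrative): the line normal [a, b]T is the eigenvector of the 2 x 2 scatter matrix of the centered points with the smallest eigenvalue, which minimizes the sum of squared perpendicular distances.

xc = x - mean(x);  yc = y - mean(y);     % center the edge points
[V, D] = eig([xc yc]' * [xc yc]);        % 2 x 2 scatter matrix
[~, k] = min(diag(D));                   % smallest eigenvalue
ab = V(:, k);                            % line normal [a; b]
c = -(ab(1)*mean(x) + ab(2)*mean(y));    % line passes through the centroid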
Ellipse fitting

Many objects contain circular shapes, which usually appear as ellipses in images. Therefore, ellipse detectors are useful tools for computer vision.

According to the model fitting approach, find the best
ellipse that can fit the edge points.
Assume that the set of edge points pi = [xi, yi]T, i = 1, …, N belongs to a single arc of an ellipse

    xTa = ax² + bxy + cy² + dx + ey + f = 0

where x = [x², xy, y², x, y, 1]T and a = [a, b, c, d, e, f]T.

Find the parameter vector a, associated with the ellipse that fits p1, …, pN best in the least squares sense:

    min_a ∑i (xiTa)²,  i = 1, …, N
To avoid the trivial solution a = 0 and to force the solution to be an ellipse, impose

    b² − 4ac = aTCa = −1

where

    C = [  0   0  −2   0   0   0
           0   1   0   0   0   0
          −2   0   0   0   0   0
           0   0   0   0   0   0
           0   0   0   0   0   0
           0   0   0   0   0   0 ]

C is called the constraint matrix. C is rank-deficient.
The problem becomes

    min_a aTXTXa = min_a aTSa

where X is called the design matrix

    X = [ x1²  x1y1  y1²  x1  y1  1
          x2²  x2y2  y2²  x2  y2  1
          …
          xN²  xNyN  yN²  xN  yN  1 ]

and S = XTX is called the scatter matrix.
Using Lagrange multipliers, the problem can be solved as the generalized eigenvalue problem

    Sa = λCa.

Ellipse detection algorithm:
• normalize the edge points
• build X
• compute S
• build C
• compute the generalized eigenvalues and eigenvectors of Sa = λCa; a is the eigenvector corresponding to the only negative λ

Use the MATLAB function eig to compute eigenvalues
and eigenvectors.
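A minimal sketch of these steps, assuming the edge-point coordinates are in column vectors x and y (point normalization is omitted for brevity; in exact arithmetic exactly one generalized eigenvalue is negative):

X = [x.^2, x.*y, y.^2, x, y, ones(size(x))];   % design matrix
S = X' * X;                                    % scatter matrix
C = zeros(6);                                  % constraint matrix
C(1,3) = -2;  C(3,1) = -2;  C(2,2) = 1;        % encodes b^2 - 4ac
[V, D] = eig(S, C);                            % generalized problem Sa = lambda*Ca
lam = diag(D);
k = find(lam < 0 & isfinite(lam), 1);          % the only negative lambda
a = V(:, k);                                   % ellipse parameters [a b c d e f]'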
[Figures: original image, edge detection result, and ellipse detection result.]
2.3.4 Color

Humans use color information to distinguish objects.
Color model
Color can be represented in 3 bytes – one byte for each of Red, Green and Blue (RGB). An arbitrary color in the visible spectrum can be encoded by combining the encodings of the three primary colors. The RGB color model is additive and corresponds well to monitors.
(255, 0, 0) red
(0, 255, 0) green
(255, 255, 0) yellow
(0, 0, 0) black
(255, 255, 255) white
Some computer vision algorithms can perform better
using normalized colors.

    normalized red      r = R / (R + G + B)
    normalized green    g = G / (R + G + B)
    normalized blue     b = B / (R + G + B)
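A minimal sketch (peppers.png is a standard MATLAB demo image; eps guards against division by zero on black pixels):

rgb = im2double(imread('peppers.png'));   % M x N x 3, values in [0, 1]
s = sum(rgb, 3) + eps;                    % R + G + B per pixel
r = rgb(:,:,1) ./ s;                      % normalized red
g = rgb(:,:,2) ./ s;                      % normalized green
b = rgb(:,:,3) ./ s;                      % normalized blue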
Another color model is called Hue-Saturation-Intensity (HSI). It has the advantage that color information (chromaticity), represented by H and S, is separated from intensity. Hue describes the tone of the color; saturation provides a measure of its purity.

    I = (R + G + B) / 3

    cos H = (2R − G − B) / (2 √((R − G)² + (R − B)(G − B)))

    S = 1 − 3 min(R, G, B) / (R + G + B)
Normalize the RGB components to [0, 1] first. Subtract H from 360° when B/I > G/I. H is not defined when S is zero; S is not defined when I is zero.
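A minimal sketch of these formulas for a single pixel, assuming scalars R, G, B already normalized to [0, 1] and not all equal (otherwise H and S are undefined, as noted above):

I = (R + G + B) / 3;
S = 1 - 3 * min([R, G, B]) / (R + G + B);
H = acosd((2*R - G - B) / (2 * sqrt((R - G)^2 + (R - B)*(G - B))));
if B/I > G/I, H = 360 - H; end            % subtract from 360 deg in this case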
Conversion between RGB and the closely related Hue-Saturation-Value (HSV) model can be implemented using the MATLAB functions rgb2hsv and hsv2rgb.


An RGB color image in MATLAB corresponds to a
3D matrix of dimensions M x N x 3.

img = imread(filename);     % 'img' avoids shadowing the built-in function image
[height, width, channels] = size(img);
Histogram
A histogram counts the number of pixels of each kind
(e.g. grey level, color).
Create a histogram by reading the image pixels one by one and incrementing the appropriate bin of the histogram. For a color image, you can create 3 histograms, one for each of the 3 color components, e.g. RGB. Each histogram may have 256 bins.
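A minimal sketch of the per-channel histograms (imhist is from the Image Processing Toolbox; peppers.png is a demo image):

img = imread('peppers.png');        % M x N x 3 uint8 RGB image
hR = imhist(img(:,:,1), 256);       % 256-bin histogram of the red channel
hG = imhist(img(:,:,2), 256);       % green channel
hB = imhist(img(:,:,3), 256);       % blue channel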
A histogram can be used to determine how similar a test image T is to a reference image R.

Assume both histograms hT and hR have K bins.
    intersection = ∑i min(hT[i], hR[i])

    match = ∑i min(hT[i], hR[i]) / ∑i hR[i]

where the sums run over i = 1, …, K.

The match value indicates how much of the color content of the reference image is present in the test image. It is relatively invariant to translation, rotation and scale changes of the image.
Sometimes, you may want to compute a dissimilarity
measure
    distance = ∑i |hT[i] − hR[i]|,  i = 1, …, K
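A minimal sketch, assuming hT and hR are K x 1 histogram vectors (e.g. from imhist above):

intersection = sum(min(hT, hR));    % histogram intersection
match = intersection / sum(hR);     % fraction of reference content found
distance = sum(abs(hT - hR));       % L1 dissimilarity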




Noise filtering techniques and edge detectors can be
extended to color images under the componentwise
paradigm.
2.3.5 Texture
A texture feature can be a powerful descriptor of an image. It gives us information about the spatial arrangement of the colors or intensities in an image.




[Figure: two visually different textures with the same histogram – 50% black, 50% white.]
Texture is commonly found in natural scenes and man-made objects.




However, there is no universally agreed-upon definition of texture.
There are two main approaches to describe texture
properties:

Structural approach – texture is a set of primitive texture
elements (texels) in some regular or repeated
relationship.

Statistical approach – texture is a quantitative measure
of the arrangement of colors or intensities in a region.

The first approach can work well for man-made, regular
patterns. The second approach is more general and easier
to compute and is used more often in practice.
Edgeness
The number of edge pixels in a given region indicates
the busyness of that region.
    edgeness per unit area = |p| / N
where |p| is the number of edge pixels in a region of N
pixels.
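A minimal sketch, assuming BW is a binary edge map (e.g. from edge) and mask is a logical mask selecting the region's N pixels:

edgeness = nnz(BW & mask) / nnz(mask);   % |p| / N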
To include both gradient magnitude and gradient
orientation

   Edge-based histogram = (hmag, horient)

where hmag is the normalized histogram of gradient
magnitude of that region, and horient is the normalized
histogram of gradient orientation of that region.
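A minimal sketch, assuming a grayscale image I and a logical region mask as above (imgradient is from the Image Processing Toolbox; the 16-bin count is an illustrative choice):

[gmag, gdir] = imgradient(I);             % gradient magnitude, direction (deg)
hmag = histcounts(gmag(mask), 16);        % magnitude histogram
hmag = hmag / sum(hmag);                  % normalize to sum to 1
horient = histcounts(gdir(mask), 16);     % orientation histogram
horient = horient / sum(horient);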
Co-occurrence matrices
A co-occurrence matrix is a 2D array C in which both
the rows and the columns represent a set of image
values (intensities, colors). The value Cd[i, j] indicates
how many times value i co-occurs with value j in some
designated spatial relationship. The spatial relationship is represented by a vector d = (dr, dc): the pixel with value j is displaced from the pixel with value i by dr rows and dc columns.
    image I:              C(0,1):    j = 0   1   2

    1  1  0  0            i = 0    [   4   0   2 ]
    1  1  0  0            i = 1    [   2   2   0 ]
    0  0  2  2            i = 2    [   0   0   2 ]
    0  0  2  2

It is common to normalize the co-occurrence matrix so that each entry can be considered as a probability:

    Nd[i, j] = Cd[i, j] / ∑i ∑j Cd[i, j]
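A minimal sketch reproducing C(0,1) for this example (values 0–2 map to MATLAB indices 1–3; the inner loop bound implements the displacement d = (0, 1)):

I = [1 1 0 0; 1 1 0 0; 0 0 2 2; 0 0 2 2];
C = zeros(3);                             % 3 distinct values: 0, 1, 2
for r = 1:size(I,1)
    for c = 1:size(I,2)-1                 % neighbor one column to the right
        C(I(r,c)+1, I(r,c+1)+1) = C(I(r,c)+1, I(r,c+1)+1) + 1;
    end
end
N = C / sum(C(:));                        % normalized co-occurrence matrix

For grayscale images, the Image Processing Toolbox function graycomatrix computes the same kind of matrix.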
Numeric features that represent the texture more compactly can be computed from the co-occurrence matrix.

    Energy = ∑i ∑j Nd[i, j]²

    Entropy = −∑i ∑j Nd[i, j] log2 Nd[i, j]

    Contrast = ∑i ∑j (i − j)² Nd[i, j]

    Homogeneity = ∑i ∑j Nd[i, j] / (1 + |i − j|)
    Correlation = ∑i ∑j (i − μi)(j − μj) Nd[i, j] / (σi σj)

where μi, μj are the means and σi, σj are the standard deviations of the row and column sums

    Nd[i] = ∑j Nd[i, j]
    Nd[j] = ∑i Nd[i, j]
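A minimal sketch computing all five features from a normalized co-occurrence matrix N (indices are taken as 0-based here, matching the example above):

[nr, nc] = size(N);
[jg, ig] = meshgrid(0:nc-1, 0:nr-1);             % index grids for j and i
energy = sum(N(:).^2);
entropy = -sum(N(N > 0) .* log2(N(N > 0)));      % skip zero entries
contrast = sum(sum((ig - jg).^2 .* N));
homogeneity = sum(sum(N ./ (1 + abs(ig - jg))));
mu_i = sum(sum(ig .* N));   mu_j = sum(sum(jg .* N));
sig_i = sqrt(sum(sum((ig - mu_i).^2 .* N)));
sig_j = sqrt(sum(sum((jg - mu_j).^2 .* N)));
correlation = sum(sum((ig - mu_i) .* (jg - mu_j) .* N)) / (sig_i * sig_j);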
Summary

♦ Hough transform

♦ ellipse fitting

♦ color models

♦ histogram

♦ texture features
