DIGITAL IMAGE PROCESSING
Image Segmentation:
Thresholding
What is digital image processing?
 An image may be defined as a two-dimensional function f(x,y), where
(x,y) ---- the coordinates of a point in the plane, and
f ---- the intensity (gray level or color) at the position (x,y).
Digital image
A digital image is obtained by digitizing the function f and the coordinates (x,y) so that they take discrete values.
Usually f, x, and y are all finite.
Pixel
A point in a digital image is called a pixel.
Digital Image Processing
Digital image processing can be regarded as the discipline of transforming one image into another.
What is segmentation?
 Dividing the image into different regions.
 Separating objects from the background and giving them individual
labels (ID numbers)
 The purpose of image segmentation is to partition an
image into meaningful regions with respect to a
particular application
Why segmentation?
 Usually image segmentation is an initial and vital step in a
series of processes aimed at overall image understanding
 Segmentation is generally the first stage in any attempt to analyze or
interpret an image automatically.
 Segmentation bridges the gap between low-level image
processing and high-level image processing.
 Some kind of segmentation technique is found in any
application involving the detection, recognition, and
measurement of objects in images.
Low level image processing
 Low level image processing is mainly concerned with
extracting descriptions from images (that are usually
represented as images themselves).
 The analysis usually does not know anything about what
objects are actually in the scene, nor where the scene is
relative to the observer.
 There may be multiple, largely independent
descriptions, such as edge fragments, spots, reflectances,
line fragments, etc.
Low level image processing
 For example, if one were looking at an image of a coffee
mug on a desk, the low-level descriptions would make
explicit where the mug edges were, where the specular
highlights were on the mug surface, and what the colours on
the mug were. Because such descriptions are still tied to the
image, they would apply everywhere in the image, not just
to the mug.
High level image processing
 High-level features are something that we can directly
see and recognize, like object classification, recognition,
segmentation and so on.
 High-level algorithms are mostly in the machine learning
domain.
 These algorithms are concerned with the interpretation
or classification of a scene as a whole.
 Things like body pose classification, face detection,
classification of human actions, object detection and
recognition and so on.
High level image processing
 These algorithms are concerned with training a system to
recognize or classify something. You then provide it with
some unknown input that it has never seen before, and its
job is either to determine what is happening in the
scene or to locate a region of interest where it detects an
action that the system is trained to look for.
 You would have some sort of pre-processing stage
where you have some high-level system that determines
salient areas in the scene where something important is
happening. You would then apply low-level feature
detection algorithms in this localized area.
Why segmentation?
 The role of segmentation is crucial in most tasks requiring image
analysis.
 The success or failure of the task is often a direct consequence of the success or
failure of segmentation.
 Accurate segmentation of objects of interest in an image greatly
facilitates further analysis of these objects. For example, it allows us
to:
 Count the number of objects of a certain type.
 Measure geometric properties (e.g., area, perimeter)
of objects in the image.
 Study properties of an individual object (intensity,
texture, etc.)
Segmentation – difficulty
 Segmentation is often the most difficult problem to solve
in image analysis.
 There is no universal solution!
 A reliable and accurate segmentation of an image is, in
general, very difficult to achieve by purely automatic
means
Targeted Segmentation
 Segmentation is an ill-posed problem...
What is a correct segmentation of this image?
Targeted Segmentation
 ...unless we specify a segmentation target.
Targeted Segmentation
 A segmentation can also be defined as a mapping from the
set of pixels to some application dependent target set, e.g.
 {Object, Background}
 {Humans, Other objects}
 {1,2,3,4,...}
 {Healthy tissue, Tumors}
 To perform accurate segmentation, we (or our algorithms)
need to somehow know how to differentiate between
different elements of the
target set.
Segmentation Algorithms
 Segmentation algorithms are based on one of two basic
properties of color, gray values, or texture:
 Similarity
 Partition an image into regions that are similar according to
predefined criteria.
 Discontinuity
 Detecting boundaries of regions based on local discontinuity in
intensity.
Segmentation Algorithms
 We will study four types of algorithms:
 Thresholding
 Based on pixel intensities (shape of histogram is often used for
automation).
 Edge-based
 Detecting edges that separate regions from each other.
 Region-based
 Grouping similar pixels (with e.g. region growing, split & merge).
 Watershed segmentation
 Find regions corresponding to local minima in intensity.
Thresholding
 One of the most widely used methods for image segmentation.
 Single-value thresholding can be expressed mathematically as:
g(x, y) = \begin{cases} 1, & \text{if } f(x,y) > T \\ 0, & \text{if } f(x,y) \le T \end{cases}
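A minimal sketch of this operation, assuming a NumPy grayscale array (the names image and T are illustrative):

```python
import numpy as np

def single_threshold(image: np.ndarray, T: float) -> np.ndarray:
    """Single-value thresholding: g(x,y) = 1 where f(x,y) > T, else 0."""
    return (image > T).astype(np.uint8)
```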
Thresholding Example
 Imagine a poker playing robot that needs to visually
interpret the cards in its hand
Original Image Thresholded Image
But Be Careful
 If you get the threshold wrong, the results can be
disastrous
Threshold Too Low Threshold Too High
Thresholding - methods
 Global threshold
The same value is used for the whole image.
 Optimal global threshold
Based on the shape of the current image histogram.
Searching for valleys, Gaussian distribution etc.
 Local (or dynamic) threshold
The image is divided into non-overlapping sections,
which are thresholded one by one.
Global Thresholding
 Partition the image histogram using a single global threshold
 Success strongly depends on how well the histogram can be
partitioned
 We choose a threshold T midway between the two gray-value
distributions.
How to find a global threshold
 The basic global threshold, T, is calculated
 as follows:
1. Select an initial estimate for T (typically the
average grey level in the image)
2. Segment the image using T to produce two groups
of pixels:
G1 : pixels with grey levels >T
G2 : pixels with grey levels ≤ T
3. Compute the average grey levels of pixels in G1 to
give μ1 and G2 to give μ2
How to find a global threshold
4. Compute a new threshold value:
T = \frac{\mu_1 + \mu_2}{2}
5. Repeat steps 2 – 4 until the difference in T between
successive iterations is less than a predefined limit \Delta T.
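A sketch of the iterative procedure above in NumPy; the stopping limit eps and the fallback for an empty group are implementation assumptions, not part of the original statement:

```python
import numpy as np

def basic_global_threshold(image: np.ndarray, eps: float = 0.5) -> float:
    """Iteratively estimate a global threshold T (steps 1-5 above)."""
    T = float(image.mean())                 # step 1: initial estimate (mean grey level)
    while True:
        G1 = image[image > T]               # step 2: pixels with grey level > T
        G2 = image[image <= T]              # step 2: pixels with grey level <= T
        mu1 = G1.mean() if G1.size else T   # step 3: class means (fall back to T if a
        mu2 = G2.mean() if G2.size else T   #         group happens to be empty)
        T_new = 0.5 * (mu1 + mu2)           # step 4: new threshold
        if abs(T_new - T) < eps:            # step 5: stop when the change is small
            return T_new
        T = T_new
```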
Optimum Global threshold
 Otsu’s Method
 Based on a very simple idea: Find the threshold that
minimizes the weighted within-class variance.
 This turns out to be the same as maximizing the between-
class variance.
 Operates directly on the gray level histogram
Otsu’s Method
 {0,1,2,…,L-1} , L means gray level intensity
 Select a threshold , and use it to
classify : intensity in the range and :
( ) ,0 1
T k k k L
   
1
C [0, ]
k 2
C [ 1, 1]
k L
 
 
1
2
2
G
0
=
L
G i
i
i m p





1
0
( )
k
i
i
P k p


1
2 1
1
( ) 1 ( )
L
i
i k
P k p P k

 
  

,
1 1 2 2 G
P m P m m
  1 2
+ =1
P P
,
0 1 2 1
... L
MN n n n n 
    
i
n
M*N = total number of pixel.
= number of pixels with intensity i
it is global variance.
Otsu’s Method
 The between-class variance is
\sigma_B^2(k) = P_1 P_2 (m_1 - m_2)^2 = \frac{\bigl(m_G P_1(k) - m(k)\bigr)^2}{P_1(k)\bigl(1 - P_1(k)\bigr)},
where m(k) = \sum_{i=0}^{k} i\,p_i is the cumulative mean up to level k.
 The normalized quantity
\eta(k) = \frac{\sigma_B^2(k)}{\sigma_G^2}, \qquad 0 \le \eta(k^*) \le 1,
is a measure of separability between the classes.
 The optimum threshold k^* maximizes the between-class variance:
\sigma_B^2(k^*) = \max_{0 \le k \le L-1} \sigma_B^2(k)
 The segmented image is
g(x, y) = \begin{cases} 1, & \text{if } f(x,y) > k^* \\ 0, & \text{if } f(x,y) \le k^* \end{cases}
for x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1.
Otsu’s Method
 The criterion function is the ratio of the between-class variance to the
total (global) variance:
\eta = \sigma_B^2 / \sigma_G^2
 All possible thresholds are evaluated in this way, and the
one that maximizes \eta is chosen as the optimal threshold.
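A compact sketch of Otsu's method following the formulas above, assuming an 8-bit image (L = 256) stored as a NumPy array; the histogram binning and the zero-division guard are implementation choices:

```python
import numpy as np

def otsu_threshold(image: np.ndarray, L: int = 256) -> int:
    """Return the threshold k* that maximizes the between-class variance."""
    hist, _ = np.histogram(image, bins=L, range=(0, L))
    p = hist / hist.sum()                    # normalized histogram p_i
    P1 = np.cumsum(p)                        # class probability P1(k)
    m = np.cumsum(np.arange(L) * p)          # cumulative mean m(k)
    mG = m[-1]                               # global mean m_G
    sigma_B2 = np.zeros(L)                   # between-class variance sigma_B^2(k)
    valid = (P1 > 0) & (P1 < 1)              # avoid division by zero at the extremes
    sigma_B2[valid] = (mG * P1[valid] - m[valid]) ** 2 / (P1[valid] * (1.0 - P1[valid]))
    return int(np.argmax(sigma_B2))          # k* (first maximum if there are ties)
```

The segmented image is then obtained exactly as in the single-threshold sketch earlier, with T = k*.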
Adaptive Thresholding
 Partitioning
 An approach to handling situations in which single-value
thresholding will not work is to divide an image into sub-images
and threshold these individually.
 Since the threshold for each pixel depends on its
location within the image, this technique is said to be
adaptive.
Adaptive Thresholding - partitioning
 As can be seen, success is mixed.
 The approach works when the objects of interest and the
background occupy regions of reasonably comparable
size within each sub-image; if not, it will fail.
 However, we can further subdivide the troublesome sub-images
for more success.
Adaptive Thresholding Example (cont…)
 These images show the
troublesome parts of the
previous problem further
subdivided
 After this subdivision, successful thresholding can be
achieved (a code sketch of the partitioning approach follows).
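A sketch of the partitioning idea, assuming the image dimensions divide evenly into the chosen grid (any remainder rows and columns are ignored here) and reusing the otsu_threshold helper from the previous sketch:

```python
import numpy as np

def partitioned_threshold(image: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Divide the image into rows x cols sub-images and threshold each independently."""
    out = np.zeros(image.shape, dtype=np.uint8)
    h, w = image.shape
    dh, dw = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            tile = image[r * dh:(r + 1) * dh, c * dw:(c + 1) * dw]
            k = otsu_threshold(tile)         # a separate threshold per sub-image
            out[r * dh:(r + 1) * dh, c * dw:(c + 1) * dw] = (tile > k).astype(np.uint8)
    return out
```

Troublesome sub-images can be handled by calling the function again on them with a finer grid.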
Adaptive Thresholding
based on local image properties
 Let \sigma_{xy} and m_{xy} denote the standard deviation and the mean value
of the set of pixels contained in a neighborhood S_{xy} centered at (x, y).
 The segmented image is
g(x, y) = \begin{cases} 1, & \text{if } Q(\text{local parameters}) \text{ is true} \\ 0, & \text{if } Q(\text{local parameters}) \text{ is false} \end{cases}
 where
Q(\sigma_{xy}, m_{xy}) = \begin{cases} \text{true}, & \text{if } f(x,y) > a\,\sigma_{xy} \text{ AND } f(x,y) > b\,m_{xy} \\ \text{false}, & \text{otherwise} \end{cases}
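A sketch of the local-statistics rule above, using box filters from scipy.ndimage to estimate m_xy and sigma_xy over a square neighborhood; the neighborhood size and the constants a and b are illustrative values, not taken from the slides:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_property_threshold(image: np.ndarray, size: int = 15,
                             a: float = 2.0, b: float = 0.5) -> np.ndarray:
    """g(x,y) = 1 where f(x,y) > a*sigma_xy AND f(x,y) > b*m_xy, else 0."""
    f = image.astype(np.float64)
    m = uniform_filter(f, size)                 # local mean m_xy over S_xy
    var = uniform_filter(f * f, size) - m * m   # local variance E[f^2] - (E[f])^2
    sigma = np.sqrt(np.maximum(var, 0.0))       # local standard deviation sigma_xy
    return ((f > a * sigma) & (f > b * m)).astype(np.uint8)
```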
Adaptive Thresholding
Using moving average
 It is based on computing a moving average along the scan lines of an
image.
 Let z_{k+1} denote the intensity of the point encountered at step k+1 in the
scanning sequence, and let n denote the number of points used in computing
the average.
 The moving average at the new point is
m(k+1) = \frac{1}{n} \sum_{i=k+2-n}^{k+1} z_i = m(k) + \frac{1}{n}\bigl(z_{k+1} - z_{k+1-n}\bigr),
with initial value m(1) = z_1 / n.
 Segmentation is performed with the threshold T_{xy} = b\,m_{xy}, where b is a
constant and m_{xy} is the moving average at point (x, y).
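A sketch of moving-average thresholding along the scan lines; the zigzag scan direction, the window size n, and the constant b are assumptions made for illustration:

```python
import numpy as np

def moving_average_threshold(image: np.ndarray, n: int = 20,
                             b: float = 0.5) -> np.ndarray:
    """Set g = 1 where the intensity exceeds b times the moving average of the
    last n intensities encountered along a zigzag scan of the rows."""
    h, w = image.shape
    f = image.astype(np.float64).copy()
    f[1::2, :] = f[1::2, ::-1]              # reverse every other row (zigzag scan)
    z = f.ravel()                           # the scanning sequence z_1, z_2, ...
    csum = np.cumsum(np.insert(z, 0, 0.0))  # prefix sums of the sequence
    m = np.empty_like(z)
    for k in range(z.size):                 # m(k+1) = (1/n) * sum of the last n samples;
        lo = max(0, k + 1 - n)              # fewer than n are available at the start,
        m[k] = (csum[k + 1] - csum[lo]) / n # matching the initial value m(1) = z_1 / n
    g = (z > b * m).astype(np.uint8).reshape(h, w)
    g[1::2, :] = g[1::2, ::-1]              # undo the zigzag ordering
    return g
```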

Editor's Notes

  • #26 Optimum global thresholding can be viewed as a statistical decision problem.
  • #27 Smoothing makes the intensity distribution smoother; edge detection, on the other hand, makes intensity transitions clearer.
  • #32 Here the image is scanned straight down, the mean and standard deviation of the surrounding pixels are computed, and segmentation is then performed; 3) is a special case.