Threshold selection for image segmentation by region approach
Parijat Sinha (Non-member)
This paper examines different image segmentation techniques. Two images have been
considered for this experiment. The first image shows rice grains scattered randomly on a
contrasting background. The second is a spot image developed using commercially available
software. Results have been obtained using a program written in Visual BASIC 6.0 by the author.
The output images have been compiled into tables.
The computer vision branch of the field of artificial intelligence is concerned with developing
algorithms for analyzing the content of an image. The two main approaches that have emerged
are statistical pattern recognition and neural networks. Statistical pattern recognition is
the most widely used approach.
Statistical pattern recognition assumes that the image may contain one or more objects and that
each object belongs to one of several predetermined types, categories, or pattern classes. Given a
digitized image containing several objects, the pattern recognition process consists of three main
phases.
The first phase is called image segmentation or object isolation, in which each object is found
and its image is isolated from the rest of the scene.
The second phase is called feature extraction. This is where the objects are measured. A
measurement is the value of some quantifiable property of an object. A feature is a function of
one or more measurements, computed so that it quantifies some significant characteristic of the
object.
The third phase of pattern recognition is classification. Its output is merely a decision regarding
the class to which each object belongs. Each object is recognized as being of one particular type
and the recognition is implemented as a classification process. Each object is assigned to one of
several pre-established groups (classes) that represent all the possible types of objects expected to
exist in the image.
Figure: the three phases of pattern recognition (input image → segmentation → feature
extraction → classification → object type).
The image segmentation process
Image segmentation can be defined as a process that partitions a digital image into disjoint (non-
overlapping) regions. A region is a connected set of pixels—that is, a set in which all pixels are
adjacent or touching. The formal definition of connectedness is as follows: between any two
pixels in a connected set, there exists a connected path wholly within the set, where a connected
path is a path that always moves between neighboring pixels. Thus in a connected set, a
connected path can be traced between any two pixels without ever leaving the set.
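The connected-path definition above can be tested directly. A minimal sketch (the binary mask representation and 4-neighbor adjacency are my own assumptions, not the paper's) that decides whether two pixels lie in the same region by breadth-first flood fill:

```python
from collections import deque

def same_region(mask, p, q):
    """Return True if pixels p and q lie in the same connected set.

    mask is a 2-D list of 0/1 values; a region is a connected set of
    1-pixels. Uses 4-neighbor (edge-adjacent) connectivity.
    """
    rows, cols = len(mask), len(mask[0])
    if not (mask[p[0]][p[1]] and mask[q[0]][q[1]]):
        return False  # one of the pixels is not in any region
    seen = {p}
    queue = deque([p])
    while queue:  # breadth-first flood fill starting from p
        r, c = queue.popleft()
        if (r, c) == q:
            return True  # a connected path from p reached q
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and mask[nr][nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

If the fill reaches q without ever leaving the set of 1-pixels, a connected path exists, exactly as the definition requires.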
When a human observer views a scene, the processing that takes place in the visual system
essentially segments the scene for him or her. This is done so effectively that one sees not a
complex scene, but rather something one thinks of as a collection of objects. With digital
processing, however, the objects in an image are laboriously isolated by breaking up the image
into sets of pixels, each of which is the image of one object.
Image segmentation can be approached from three different philosophical perspectives. In the
region approach, one assigns each pixel to a particular object or region. In the
boundary approach, one attempts only to locate the boundaries that exist between the regions.
In the edge approach, one seeks to identify edge pixels and then link them together to form the
required boundaries. All three approaches are useful for visualizing the problem.
A histogram uses a bar graph to profile the occurrences of each gray level present in an image.
An image with poor contrast has a histogram with its gray levels grouped closely together in
one part of the scale. The cure for low-contrast images is "histogram equalization".
Equalization causes a histogram with a mountain grouped closely together to spread out into a
flat, or equalized, histogram.
The mathematical transform for the histogram equalization algorithm is derived as follows.
Consider an image c with a poor histogram. The as-yet-unknown function f transforms the image
c into an image b with a flat histogram:
b(x, y) = f[c(x, y)]
Let p1(a) be the probability of finding a pixel with the value a in the image, area1 the number
of pixels in the image, and H1(a) the histogram of the image. The probability density function
of a pixel value a is
p1(a) = (1/area1) * H1(a)
The cumulative distribution function (CDF) is the sum of the probability density functions up
to the value a:
P1(a) = (1/area1) * ∑(i=0..a) H1(i)
Hc(a) is the histogram of the original image c and Dm is the number of gray levels in the new
image b. The desired histogram equalization function f(a) is given as
f(a) = Dm * (1/area1) * ∑(i=0..a) Hc(i)
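The derivation above translates almost line for line into code. A minimal sketch, under the assumption that Dm is taken as the maximum output gray level (levels - 1) so results stay in range:

```python
def equalize(image, levels=256):
    """Histogram-equalize a gray-scale image (a 2-D list of ints in
    [0, levels - 1]) using f(a) = Dm * (1/area1) * sum of H(i), i <= a."""
    hist = [0] * levels
    for row in image:
        for a in row:
            hist[a] += 1                      # H1(a)
    area = len(image) * len(image[0])         # area1
    dm = levels - 1  # Dm taken as the maximum output gray level (assumption)
    mapping, running = [], 0
    for h in hist:
        running += h                          # cumulative histogram up to a
        mapping.append(round(dm * running / area))
    return [[mapping[a] for a in row] for row in image]
```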
Histogram equalization followed by smoothing is often considered the initial step in the image
segmentation process. But with some images the equalized image may appear worse than the
original. Even in a properly scanned image, histogram equalization can introduce noise into
what were uniform areas of the image.
The following photographs show the effect of histogram equalization on the photograph and its
histogram. The histograms are of the lower-left part of the image, over a 20*20 area. As can
be seen, equalization has introduced noise into the image. Thus a prudent choice should be
made before opting for histogram equalization of images.
Figures: original image and equalized image; histogram for the original image and histogram
for the equalized image.
Image segmentation by thresholding
Thresholding is a particularly useful region approach technique for scenes containing solid
objects resting upon a contrasting background. It is computationally simple and always defines
disjoint regions with closed, connected boundaries.
When using a threshold rule for image segmentation, one assigns all pixels at or above the
threshold gray level to the object. All pixels with gray level below the threshold fall outside the
object. The boundary is then that set of interior points each of which has at least one neighbor
outside the object.
Thresholding works well if the objects of interest have uniform gray level and rest upon a
background of different but uniform gray level.
In the simplest implementation of boundary location by thresholding, the value of the threshold
gray level is held constant throughout the image. If the background gray level is reasonably
constant throughout, and if the objects all have approximately equal contrast above the
background, then a fixed global threshold will usually work well, provided that the threshold gray
level is properly selected.
A global threshold of 128 on the gray scale has been used to obtain the binary image of the
rice grains.
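The threshold rule above reduces to a single comparison per pixel. A minimal sketch, with 128 as the default level to match the global threshold used for the rice-grain image:

```python
def global_threshold(image, t=128):
    """Assign pixels at or above the threshold to the object (1); all
    pixels below the threshold fall outside the object (0)."""
    return [[1 if a >= t else 0 for a in row] for row in image]
```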
Optimal threshold selection
Unless the object in the image has extremely steep sides, the exact value of the threshold gray
level can have a considerable effect on the boundary position and overall size of the extracted
object.
This means that subsequent size measurements are sensitive to the threshold gray level. Thus an
optimal or at least consistent method is needed to establish the threshold.
The two most common techniques are described below.
Histogram peak technique
This technique finds the two peaks in the histogram corresponding to the background and object
of the image. It sets the threshold halfway between the two peaks. When the histograms are
searched for peaks, a minimum peak spacing should be enforced to ensure that the two highest
peaks are separated. Another item to watch carefully is determining which peak corresponds to
the background and which corresponds to the object.
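The peak technique can be sketched as follows; the min_spacing parameter is an assumed way of enforcing the peak-spacing check the text mentions:

```python
def peak_threshold(hist, min_spacing=10):
    """Histogram peak technique: locate the two highest peaks that are at
    least min_spacing gray levels apart, then set the threshold halfway
    between them."""
    # Gray levels ordered by decreasing pixel count
    order = sorted(range(len(hist)), key=lambda a: hist[a], reverse=True)
    peak1 = order[0]
    # Enforce the peak-spacing rule when picking the second peak
    peak2 = next(a for a in order[1:] if abs(a - peak1) >= min_spacing)
    return (peak1 + peak2) // 2
```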
Histogram valley technique
This technique uses the peaks of the histogram, but concentrates on the valleys between them.
Instead of setting the midpoint arbitrarily halfway between the two peaks, the valley technique
searches between the two peaks to find the lowest valley.
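A sketch of the valley technique under the same assumed peak-spacing rule:

```python
def valley_threshold(hist, min_spacing=10):
    """Histogram valley technique: find the two highest sufficiently
    separated peaks, then search between them for the lowest valley."""
    order = sorted(range(len(hist)), key=lambda a: hist[a], reverse=True)
    peak1 = order[0]
    peak2 = next(a for a in order[1:] if abs(a - peak1) >= min_spacing)
    lo, hi = sorted((peak1, peak2))
    # The threshold sits at the gray level with the fewest pixels
    return min(range(lo + 1, hi), key=lambda a: hist[a])
```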
In many cases the background gray level is not constant, and the contrast of objects varies
within the image. A threshold that works well in one area of the image might then work poorly
in other areas. In such cases it is convenient to use a threshold gray level that is a slowly
varying function of position in the image.
The author implemented the adaptive technique by dividing the whole image into 10*10 squares
(and 5*5 squares in some cases), then calculating a threshold for each square individually
using the histogram peak technique for optimal threshold calculation.
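The tiling scheme can be sketched as below. Note that the per-square rule here is a simplified min/max midpoint substituted for the author's per-square histogram peak technique, purely to keep the sketch short and self-contained:

```python
def adaptive_threshold(image, tile=10):
    """Divide the image into tile*tile squares and binarize each square
    with its own threshold. The author derived each square's threshold
    with the histogram peak technique; this sketch substitutes a simpler
    per-square rule (midpoint of the square's min and max gray levels)."""
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for r0 in range(0, rows, tile):
        for c0 in range(0, cols, tile):
            block = [image[r][c]
                     for r in range(r0, min(r0 + tile, rows))
                     for c in range(c0, min(c0 + tile, cols))]
            t = (min(block) + max(block)) / 2   # per-square threshold
            for r in range(r0, min(r0 + tile, rows)):
                for c in range(c0, min(c0 + tile, cols)):
                    out[r][c] = 1 if image[r][c] >= t else 0
    return out
```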
Figures: results on the rice-grain image (normal and enhanced versions).
- Global thresholding applied to the normal image: some rice grains are left out in this process.
- Global thresholding applied to the enhanced image: a greater number of rice grains are visible in the lower part, but at the same time some noise has been introduced.
- Adaptive thresholding applied to the enhanced image, considering 5*5 squares: all the grains are visible but the noise content is too high, making the image useless.
- Adaptive thresholding applied to the normal image, considering 5*5 squares: all the grains are visible but the noise level is still too high.
- Adaptive thresholding applied to a smoothed image, considering 5*5 squares: the noise is still quite high.
- Adaptive thresholding applied to the normal image, considering 10*10 squares: some grains look distorted, but it is the best image obtained, with little noise.
Another commonly used method sets the threshold at the point where a given percentage of the
pixels in the histogram has been accumulated. The histogram of the edge detector output is
calculated and its values are summed starting from zero. When this sum exceeds a given
percentage of the total, the threshold value has been found. A good percentage for most edge
detectors and images is 50%. This method was applied on 10*10 squares under the adaptive
thresholding scheme. The image was thresholded at different percentages and the results are
as shown.
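This percentage method reduces to a running sum over the histogram. A minimal sketch:

```python
def percentage_threshold(hist, percent=50):
    """Sum histogram counts starting from gray level zero; the threshold
    is the level at which the running sum first exceeds the given
    percentage of the total pixel count."""
    total = sum(hist)
    cutoff = total * percent / 100
    running = 0
    for a, h in enumerate(hist):
        running += h
        if running > cutoff:
            return a
    return len(hist) - 1  # degenerate histogram: fall back to the top level
```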
Figures: image using 40% thresholding and image using 70% thresholding.
The noise content is too high in the first image, and an attempt to reduce it by increasing the
threshold results in a loss of detail.
The analysis of spots
Suppose an image B(x, y) contains a single spot. By definition, this image contains a point
(x0, y0) of maximum gray level. If polar coordinates centered upon (x0, y0) are established,
the image satisfies
Bp(r1, theta) >= Bp(r2, theta) if r2 > r1
B(x, y) is called a monotone spot if equality is not allowed in the above relation. This means
that the gray level strictly decreases along a line extending out in any direction from the
center point.
An important special case occurs if all contours of a monotone spot are circles centered on the
center point. This special case is called a concentric circular spot (CCS). To a good
approximation, this usually describes the noise-free image of stars in a telescope, certain cells in a
microscope and many other important things.
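The CCS condition can be checked numerically: after locating the point of maximum gray level, the gray level must never increase as the distance from that point grows. A sketch (the exact, tolerance-free comparison assumes a noise-free spot, as the text does):

```python
import math

def is_concentric_spot(image):
    """Check the CCS condition: gray level is a nonincreasing function of
    distance from the maximum point (equal radii carry equal values in an
    ideal concentric circular spot)."""
    rows, cols = len(image), len(image[0])
    # Center (x0, y0): the point of maximum gray level
    r0, c0 = max(((r, c) for r in range(rows) for c in range(cols)),
                 key=lambda p: image[p[0]][p[1]])
    # Order every pixel by its radius from the center
    by_radius = sorted((math.hypot(r - r0, c - c0), image[r][c])
                       for r in range(rows) for c in range(cols))
    values = [v for _, v in by_radius]
    return all(x >= y for x, y in zip(values, values[1:]))
```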
The author has used an image of a CCS developed using Adobe Photoshop 6.0. The results are
shown below.

Figures: results on the spot image.
- The spot developed using Adobe Photoshop 6.0.
- Spot profile function: a scan line along the center showing the variation in intensity.
- Edge detection using the Roberts edge detector: the spot can be made out.
- Edge detection using a contrast-based edge detector: very feeble broken lines are visible.
- Adaptive thresholding applied to the image, thresholding done at 75%.
- Adaptive thresholding applied to the image, thresholding done at 60%.
- Adaptive thresholding applied to the image, thresholding done at 50%.
- Adaptive thresholding applied to the image, thresholding done using the histogram peak technique.
- Global thresholding using gray level 128.
- Output obtained using other detectors.
As is evident, a spot is best analyzed using adaptive thresholding or by studying the spot
profile function. Most other edge detectors fail to give a satisfactory result.