This document provides an overview of digital image processing. It begins with definitions of key terms like digital image, pixels, and image file formats. It then outlines the main stages of digital image processing including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. It also discusses the history and applications of digital image processing in fields like medicine, astronomy, law enforcement, and more. Finally, it describes the typical components of an image processing system such as image sensors, specialized hardware, computer, software, storage, displays, and networking.
9. "One picture is worth more than ten thousand words"
Visual information is more powerful than textual information.
10. Introduction
• Digital images play an important role in daily-life applications such as
– Satellite television
– Magnetic resonance imaging
– Geographical information systems and astronomy
• The term digital image processing generally refers to the manipulation/processing of an image by means of a processor/computer.
11. What is an Image?
An image is a two-dimensional function that represents a measure of some characteristic, such as brightness or colour, of a viewed scene.
It can be many things: a photograph, a CAD drawing, a painting, a scene, a memory, and so on.
An image is a projection of a 3D scene onto a 2D projection plane.
12. Different Types of Images
• An analog image can be mathematically represented as a continuous range of values giving position and intensity, e.g. the television image.
• A digital image is a two-dimensional discrete signal (a matrix of small picture elements, or pixels). A number of software tools and algorithms are applied to manipulate such images.
13. Why Do We Need Image Processing?
• It is motivated by three major applications:
– To improve pictorial information for human interpretation
– Image processing for autonomous machine applications
– Efficient storage and transmission
24. • These methods mainly employ different image processing techniques to enhance the pictorial information for human interpretation and analysis.
• A typical application of this kind of technique is noise filtering.
26. • To extract a description or features that can be used for further processing by a digital computer. Such processing can be applied in industrial machine vision, for product assembly and inspection.
• It can be used for automated target detection and tracking.
• It can also be used for processing aerial and satellite images, for weather prediction, crop assessment and many other applications.
• It can be used for fingerprint recognition.
34. • We want to process the image to reduce the space required to store it, or, if we want to transmit it, to be able to send it over a low-bandwidth channel.
35. History of Digital Image Processing
• Early 1920s: One of the first applications of digital imaging was in the newspaper industry
– The Bartlane cable picture transmission service
– Images were transferred by submarine cable between London and New York
– Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
Early digital image
36. • In the early 1920s image processing techniques were already in use: digital images were used to transmit newspaper pictures between London and New York.
• These digital pictures were carried by submarine cable in what became known as the Bartlane system.
• Transmitting digital images via submarine cable required the transmitting side to have a facility for digitising the image, and the receiving side a facility for reproducing it.
• The pictures were reproduced by telegraphic printers.
37. History of DIP (cont…)
• 1921: Improvements to the Bartlane system resulted in higher-quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images
Improved digital image
38. • In this case, instead of using the telegraphic printer at the receiver, the codes of the digital images were perforated on a tape, and photographic printing was carried out using those tapes.
• There are two images; the second is the image shown on the earlier slide.
• The first image was produced using this photographic printing process. The improvement, both in tonal quality and in resolution, is quite evident.
39. History of DIP (cont…)
• 1929: Further improvements to the Bartlane system resulted in higher-quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images
Improved digital image — early 15-tone digital image
40. History of DIP (cont…)
• 1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
– 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
– Such techniques were used in other space missions, including the Apollo landings
A picture of the moon taken by the Ranger 7 probe minutes before landing
41. History of DIP (cont…)
• 1970s: Digital image processing begins to be used in medical applications
– 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in Medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans
Typical head slice CAT image
42. History of DIP (cont…)
• 1980s – today: The use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in all kinds of areas
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human–computer interfaces
44. Examples: The Hubble Telescope
• Launched in 1990, the Hubble telescope can take images of very distant objects
• However, an incorrect mirror made many of Hubble's images useless
• Image processing techniques were used to fix this
46. Examples: Medicine
• Take a slice from an MRI scan of a canine heart and find the boundaries between types of tissue
– Image with grey levels representing tissue density
– Use a suitable filter to highlight edges
Original MRI image of a dog heart · Edge-detection image
47. Examples: GIS
• Geographic Information Systems
– Digital image processing techniques are used extensively to manipulate satellite imagery
– Terrain classification
– Meteorology
48. Examples: GIS (cont…)
• Night-Time Lights of the World data set
– Global inventory of human settlement
– Not hard to imagine the kind of analysis that might be done using this data
49. Examples: Industrial Inspection
• Human operators are expensive, slow and unreliable
• Make machines do the job instead
• Industrial vision systems are used in all kinds of industries
50. Examples: Law Enforcement
• Image processing techniques are used extensively by law enforcers
– Number plate recognition for speed cameras/automated toll systems
– Fingerprint recognition
– Enhancement of CCTV images
52. Image Representation
A digital image is composed of M rows and N columns of pixels, each storing a value.
Pixel values are most often grey levels in the range 0–255 (black to white).
53. A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.
54. Definition
• An image may be defined as a two-dimensional function f(x, y)
• where x and y are spatial coordinates
• Digital image processing refers to processing digital images by means of a digital computer
55. Several fields deal with images
• Computer Graphics: the creation of images.
• Image Processing: the enhancement or other manipulation of an image – the result of which is usually another image.
• Computer Vision: the analysis of image content.
56. What is DIP?
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes:
– Low-level process — input: image; output: image. Examples: noise removal, image sharpening
– Mid-level process — input: image; output: attributes. Examples: object recognition, segmentation
– High-level process — input: attributes; output: understanding. Examples: scene understanding, autonomous navigation
57. Three types of computerised process
• Low-level: inputs and outputs are images. Primitive operations such as image preprocessing to reduce noise, contrast enhancement and image sharpening
• Mid-level: inputs may be images; outputs are attributes extracted from those images. Segmentation, description of objects, classification of individual objects
• High-level: image analysis
60. Basics about Pixels
In digital imaging, a pixel, or pel (picture element), is a single point in a raster image, or the smallest addressable screen element in a display device; it is the smallest unit of a picture that can be represented or controlled. Each pixel has its own address, which corresponds to its coordinates.
61. Basics about Pixels (contd…)
A megapixel (MP or Mpx) is one million pixels. The term is used not only for the number of pixels in an image, but also to express the number of image sensor elements of digital cameras or the number of display elements of digital displays. For example, a camera with an array of 2048 × 1536 sensor elements is commonly said to have "3.1 megapixels" (2048 × 1536 = 3,145,728).
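The megapixel arithmetic from the slide can be checked directly (a trivial sketch using the slide's own 2048 × 1536 example):

```python
# Sensor with 2048 x 1536 elements, as in the slide's example.
width, height = 2048, 1536
pixels = width * height            # total sensor elements
megapixels = pixels / 1_000_000    # one megapixel = one million pixels

print(pixels)                      # 3145728
print(round(megapixels, 1))        # 3.1
```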
75. Key Stages in Digital Image Processing
[Stage diagram] Starting from the problem domain: Image Acquisition, Image Enhancement, Image Restoration, Morphological Processing, Segmentation, Representation & Description, Object Recognition, Colour Image Processing, Image Compression
76. Image Acquisition
• It is the first process
• The input image will be in digital form
• It involves acquiring the source from real-world images and storing them in the computer for further pre-processing, such as scaling.
77. Key Stages in Digital Image Processing: Image Acquisition
[Stage diagram as in slide 75]
78. Image Enhancement
• The process of manipulating an image so that the result is more suitable than the original for a specific application
• Enhancement techniques are problem-oriented
• That is, a method which is suitable for enhancing X-ray images may not be the best approach for enhancing satellite images
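One common enhancement technique that illustrates the idea is linear contrast stretching. This is a minimal sketch (my own example, not a method prescribed by the slides; the function name is illustrative):

```python
import numpy as np

def contrast_stretch(img):
    """Linearly map the image's grey levels onto the full 0-255 range."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                      # flat image: nothing to stretch
        return img.astype(np.uint8)
    return np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)

# A low-contrast 2x2 patch: grey levels squeezed into 50..80.
dim = np.array([[50, 60], [70, 80]], dtype=np.uint8)
print(contrast_stretch(dim))   # levels 50..80 spread over 0..255
```

Whether this stretch is "suitable" depends on the application, which is exactly the problem-oriented nature of enhancement noted above.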
79. Key Stages in Digital Image Processing: Image Enhancement
[Stage diagram as in slide 75]
80. Image Restoration
• Also deals with improving the appearance of an image.
• Restoration techniques are based on mathematical or probabilistic models of image degradation,
• whereas enhancement is based on human subjective preferences.
81. Key Stages in Digital Image Processing: Image Restoration
[Stage diagram as in slide 75]
82. Morphological Processing
• Deals with tools for extracting image components that are useful in the representation and description of shape, such as boundaries, skeletons, etc.
• Inputs are images; outputs are attributes extracted from those images
83. Key Stages in Digital Image Processing: Morphological Processing
[Stage diagram as in slide 75]
84. Segmentation
• Partitioning of an image into its constituent parts or objects
• It is one of the most difficult stages in DIP
• The more accurate the segmentation, the more likely recognition is to succeed
85. Key Stages in Digital Image Processing: Segmentation
[Stage diagram as in slide 75]
86. Representation & Description
• Representation takes the output of the segmentation stage, which is usually raw pixel data constituting either the boundary of a region or the region itself
• Description, also called feature selection, deals with extracting attributes that yield some quantitative information of interest or are basic for differentiating one class of objects from another
88. Object Recognition
• The process that assigns a label (e.g. "vehicle") to an object based on its descriptors.
• A pattern class is a family of patterns that share some common properties
• Pattern classes are denoted ω1, ω2, …, ωW, where W is the number of pattern classes
• Recognition groups patterns into their separate classes
89. Key Stages in Digital Image Processing: Object Recognition
[Stage diagram as in slide 75]
90. Image Compression
• Deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.
Colour Image Processing
• The use of colour images has been gaining importance because of the significant increase in the use of digital images over the internet
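As a toy illustration of how compression reduces storage, here is a run-length encoder for one row of pixels. This is my own minimal sketch of one simple lossless scheme, not a method the slides prescribe:

```python
def rle_encode(pixels):
    """Run-length encode a 1-D sequence of pixel values as (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

row = [255, 255, 255, 0, 0, 255]
print(rle_encode(row))   # [(255, 3), (0, 2), (255, 1)]
```

Long runs of identical grey levels (common in binary or cartoon-like images) compress well; noisy images do not, which is why practical codecs use more sophisticated techniques.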
91. Key Stages in Digital Image Processing: Image Compression
[Stage diagram as in slide 75]
92. Components of an Image Processing System
[Diagram] A typical general-purpose DIP system comprises: image sensors, specialised image processing hardware, a computer, image processing software, mass storage, image displays, hardcopy devices and a network, all addressing the problem domain.
93. Components of an Image Processing
System
1. Image Sensors
Two elements are required to acquire digital
images. The first is the physical device that is
sensitive to the energy radiated by the object
we wish to image (Sensor). The second,
called a digitizer, is a device for converting
the output of the physical sensing device into
digital form.
94. 2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus
hardware that performs other primitive operations, such as an
arithmetic logic unit (ALU), which performs arithmetic and
logical operations in parallel on entire images.
This type of hardware sometimes is called a front-end
subsystem, and its most distinguishing characteristic is speed.
In other words, this unit performs functions that require fast
data throughputs that the typical main computer cannot
handle.
95. 3. Computer
The computer in an image processing system is a general-
purpose computer and can range from a PC to a
supercomputer.
In dedicated applications, sometimes specially designed
computers are used to achieve a required level of
performance.
96. 4. Image Processing Software
Software for image processing consists of specialized modules
that perform specific tasks.
A well-designed package also includes the capability for the
user to write code that, as a minimum, utilizes the specialized
modules.
97. 5. Mass Storage Capability
Mass storage capability is a must in image processing applications. An image of size 1024 × 1024 pixels requires one megabyte of storage space if the image is not compressed.
Digital storage for image processing applications falls into three principal categories:
1. Short-term storage for use during processing
2. Online storage for relatively fast recall
3. Archival storage, characterised by infrequent access
98. 6. Image Displays
The displays in use today are mainly color
(preferably flat screen) TV monitors.
Monitors are driven by the outputs of the
image and graphics display cards that are an
integral part of a computer system.
99. 7. Hardcopy Devices
Used for recording images; they include laser printers, film cameras, heat-sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks.
100. 8. Networking
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.
In dedicated networks this typically is not a problem, but communications with remote sites via the internet are not always as efficient.
101. Image Sampling and Quantisation
• To create a digital image, we need to convert the continuous sensed data into digital form. This involves two processes:
– Sampling: digitising the coordinate values.
– Quantisation: digitising the amplitude values.
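The two steps can be sketched on a one-dimensional signal (a minimal illustration of the idea; the sine signal, sample count and bit depth are arbitrary choices of mine, not from the slides):

```python
import numpy as np

# Sampling: evaluate a "continuous" signal f(t) at equally spaced points.
f = lambda t: np.sin(2 * np.pi * t)          # stand-in continuous signal
t = np.linspace(0, 1, 8, endpoint=False)     # 8 equally spaced sample coordinates
samples = f(t)                               # amplitudes still continuous-valued

# Quantisation: map each amplitude onto one of L = 2**k discrete levels.
k = 2                                        # 2 bits -> 4 levels
levels = 2 ** k
quantised = np.round((samples + 1) / 2 * (levels - 1)).astype(int)
print(quantised)    # every value is now an integer in 0..levels-1
```

For an image the same thing happens in two spatial coordinates at once: sampling fixes the pixel grid, quantisation fixes the grey levels.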
102. Image Sampling And Quantisation
A digital sensor can only measure a limited
number of samples at a discrete set of energy
levels
Quantisation is the process of converting a
continuous analogue signal into a digital
representation of this signal
103. Analogue vs Digital
• Think of the volume knob on your stereo
• Records vs CDs vs MP3s
Remember that a digital image is always only an approximation of a real-world scene.
104. Image Sampling & Quantisation
To create a digital image, we need to convert continuous sensed data into digital form.
• This involves two processes: sampling and quantisation
• The basic idea behind sampling and quantisation is illustrated in Fig. 3.1.
105. Fig 3.1 Generating a digital image (a) Continuous image. (b) A scan line from
A to B in the continuous image. (c) Sampling & quantisation. (d) Digital scan
line.
106. • Figure 3.1(a) shows a continuous image, f(x, y), that we want to convert to digital form.
• To convert it to digital form, we have to sample the function in both coordinates and in amplitude.
• An image may be continuous with respect to the x- and y-coordinates and also in amplitude.
107. • The one-dimensional function shown in Fig. 3.1(b) is a plot of amplitude (grey level) values of the continuous image along the line segment AB in Fig. 3.1(a).
• To sample this function, we take equally spaced samples along line AB, as shown in Fig. 3.1(c).
• The location of each sample is given by a vertical tick mark in the bottom part of the figure.
108. • The samples are shown as small white squares superimposed on the function. The set of these discrete locations gives the sampled function.
• However, the values of the samples still span (vertically) a continuous range of grey-level values.
• In order to form a digital function, the grey-level values must also be converted (quantised) into discrete quantities.
109. Fig. 3.2 (a) Continuous image projected onto a sensor array.
(b) Result of image sampling and quantisation
110. Representing Digital Images
• Let f(s, t) represent a continuous image function of two continuous variables s and t.
• The result of sampling and quantisation is a matrix of real numbers.
• The values of the coordinates at the origin are (x, y) = (0, 0).
• The next coordinate values along the first row are (x, y) = (0, 1).
• The notation (0, 1) is used to signify the 2nd sample along the 1st row.
114. • It is advantageous to use a more traditional matrix notation to denote a digital image and its elements.
Fig. 3.5 A digital image
115. • The number of bits required to store a digitised image is
b = M × N × k
where M and N are the number of rows and columns, respectively, and k is the number of bits per pixel.
• The number of grey levels is an integer power of 2:
L = 2^k, where k = 1, 2, …, 24
• It is common practice to refer to the image as a "k-bit image"
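The formula b = M × N × k can be checked against the 1024 × 1024, one-megabyte figure quoted on slide 97 (a trivial sketch; the helper name is mine):

```python
def storage_bits(M, N, k):
    """Bits needed to store an uncompressed image of M x N pixels at k bits each."""
    return M * N * k

# 1024 x 1024 image with k = 8 bits per pixel (L = 2**8 = 256 grey levels):
bits = storage_bits(1024, 1024, 8)
print(bits // 8)       # 1048576 bytes = 1 megabyte
```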
116. Spatial and Intensity Resolution
• The spatial resolution of an image is the physical size of a pixel in that image; i.e., the area in the scene that is represented by a single pixel.
• Dense sampling produces a high-resolution image with many pixels, each representing a small part of the scene.
• Coarse sampling produces a low-resolution image with few pixels, each representing a relatively large part of the scene.
124. Interpolation
• A basic tool used extensively in tasks such as zooming, shrinking, rotating and geometric corrections.
• Interpolation is the process of using known data to estimate values at unknown locations.
• Bilinear interpolation uses the four nearest neighbours to estimate the intensity at a given location.
• Bicubic interpolation involves the sixteen nearest neighbours of a point.
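Bilinear interpolation, as described above, can be sketched directly from its definition (a minimal illustration; the function name and the 2 × 2 patch are mine):

```python
def bilinear(img, x, y):
    """Estimate the intensity at a non-integer location (x, y) from the
    four nearest pixels; img is a list of rows."""
    x0, y0 = int(x), int(y)            # top-left of the surrounding 2x2 block
    x1, y1 = x0 + 1, y0 + 1
    dx, dy = x - x0, y - y0            # fractional offsets in [0, 1)
    return (img[x0][y0] * (1 - dx) * (1 - dy) +
            img[x1][y0] * dx * (1 - dy) +
            img[x0][y1] * (1 - dx) * dy +
            img[x1][y1] * dx * dy)

patch = [[0, 100],
         [100, 200]]
print(bilinear(patch, 0.5, 0.5))   # 100.0, the centre of the four pixels
```

Zooming an image amounts to evaluating such estimates at every new pixel location.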
126. Neighborhoods of a Pixel
• A pixel P at location (x, y) has two horizontal and two vertical neighbors
• This set of four pixels is called the 4-neighborhood of P, N4(P)
• Each of these neighbors is at unit distance from P
• If P is a boundary pixel it will have fewer neighbors

             (x-1, y)
  (x, y-1)   P (x, y)   (x, y+1)
             (x+1, y)
127. Diagonal and 8-Neighbors
• A pixel P has four diagonal neighbors, ND(P)
• The points of N4(P) and ND(P) together are called the 8-neighbors of P: N8(P) = N4(P) ∪ ND(P)
• If P is a boundary pixel then both ND(P) and N8(P) will have fewer pixels

  (x-1, y-1)             (x-1, y+1)
             P (x, y)
  (x+1, y-1)             (x+1, y+1)
130. Outline
• In this lecture, we consider several important relationships between pixels in a digital image:
– Neighborhood
– Adjacency
– Connectivity
– Paths
– Regions and boundaries
131. Metric and topological properties of digital images
• A digital image consists of picture elements of finite size; these pixels carry information about the brightness of a particular location in the image.
• Pixels are arranged into a rectangular sampling grid.
• Such a digital image is represented by a two-dimensional matrix whose elements are natural numbers corresponding to the quantisation levels of the brightness scale.
132. Neighbors of a Pixel
• A pixel p at coordinates (x, y) has two horizontal and two vertical neighbors whose coordinates are given by:
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is one unit distance from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border of the image.

             (x, y-1)
  (x-1, y)   p (x, y)   (x+1, y)
             (x, y+1)
133. Neighbors of a Pixel
• The four diagonal neighbors of p have coordinates:
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p).
These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p).
As before, some of the points in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image.

  (x-1, y-1)             (x+1, y-1)
             p (x, y)
  (x-1, y+1)             (x+1, y+1)

  (x-1, y-1)   (x, y-1)   (x+1, y-1)
  (x-1, y)     p (x, y)   (x+1, y)
  (x-1, y+1)   (x, y+1)   (x+1, y+1)
134. Adjacency and Connectivity
• Let V be a set of intensity values used to define adjacency and connectivity.
• In a binary image, V = {1} if we are referring to adjacency of pixels with value 1.
• In a gray-scale image the idea is the same, but V typically contains more elements, for example V = {180, 181, 182, …, 200}.
• If the possible intensity values are 0 to 255, V can be any subset of these 256 values.
135. Types of Adjacency
1. 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
2. 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
3. m-adjacency (mixed adjacency)
137. Types of Adjacency
• Mixed adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.
• For example:
138. Types of Adjacency
• In this example, note that to connect two pixels (finding a path between them):
– In the 8-adjacency sense, you can find multiple paths between two pixels
– While in m-adjacency, you can find only one path between two pixels
• So m-adjacency eliminates the multiple-path connections generated by 8-adjacency.
• Two subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2. "Adjacent" means 4-, 8- or m-adjacency.
142. Labeling of Connected Components
• Scan the image pixel by pixel from left to right and top to bottom
• There are two types of connectivity of interest:
– 4-connectivity
– 8-connectivity
• Equivalent labeling
143. Steps: 4-connected components
• Let p be the pixel currently being scanned
• If pixel p has value 0, move on to the next pixel
• If pixel p has value 1, examine the pixels above (top) and to the left:
– If top and left are both 0, assign a new label to p
– If only one of them is 1, assign its label to p
– If both of them are 1 and have
• the same label, assign that label to p
• different labels, assign the label of top to p and note that the two labels are equivalent
• Sort all pairs of equivalent labels and assign each equivalence class a single label
145. Steps: 8-connected components
• The steps are the same as for 4-connected components
• But the pixels considered are the 4 previously scanned neighbours (top-left, top, top-right and left)
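The 4-connected scan described on slide 143 can be sketched as a classic two-pass labelling. This is a minimal illustration (the function name and the simple parent-dictionary equivalence table are my own choices):

```python
def label_4_connected(img):
    """Two-pass 4-connected component labelling of a binary image
    (a list of rows of 0/1 values)."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}                      # equivalence table: label -> parent label

    def find(a):                     # resolve a label to its representative
        while parent[a] != a:
            a = parent[a]
        return a

    next_label = 1
    for i in range(rows):            # first pass: scan left-to-right, top-to-bottom
        for j in range(cols):
            if img[i][j] == 0:
                continue
            top = labels[i - 1][j] if i > 0 else 0
            left = labels[i][j - 1] if j > 0 else 0
            if top == 0 and left == 0:           # no labelled neighbour: new label
                parent[next_label] = next_label
                labels[i][j] = next_label
                next_label += 1
            elif top and left and top != left:   # both labelled: record equivalence
                labels[i][j] = top
                parent[find(left)] = find(top)
            else:                                # exactly one label (or both equal)
                labels[i][j] = top or left

    for i in range(rows):            # second pass: replace labels by representatives
        for j in range(cols):
            if labels[i][j]:
                labels[i][j] = find(labels[i][j])
    return labels

img = [[1, 1, 0],
       [0, 1, 0],
       [1, 0, 1]]
print(label_4_connected(img))   # three separate 4-connected components
```

The 8-connected variant would additionally examine the top-left and top-right neighbours, as slide 145 notes.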
147. A Digital Path
• A digital path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), …, (xn, yn), where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n
• n is the length of the path
• If (x0, y0) = (xn, yn), the path is closed.
• We can specify 4-, 8- or m-paths depending on the type of adjacency specified.
148. A Digital Path
• Returning to the previous example:
In figure (b) the paths between the top-right and bottom-right pixels are 8-paths, and the path between the same two pixels in figure (c) is an m-path.
149. Connectivity
• Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S.
• For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S. If S has only one connected component, then S is called a connected set.
150. Region and Boundary
• Region: let R be a subset of pixels in an image. We call R a region of the image if R is a connected set.
• Boundary: the boundary (also called the border or contour) of a region R is the set of pixels in the region that have one or more neighbors that are not in R.
151. Region and Boundary
If R happens to be an entire image, then its boundary is defined as the set of pixels in the first and last rows and columns of the image.
This extra definition is required because an image has no neighbors beyond its borders.
Normally, when we refer to a region, we are referring to a subset of an image, and any pixels in the boundary of the region that happen to coincide with the border of the image are included implicitly as part of the region boundary.
152. Distance Measures
• For pixels p, q and z, with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function if:
(a) D(p, q) ≥ 0, with D(p, q) = 0 iff p = q   (identity)
(b) D(p, q) = D(q, p)   (symmetry)
(c) D(p, z) ≤ D(p, q) + D(q, z)   (triangle inequality)
153. Distance Measures
• The Euclidean distance between p and q is defined as:
De(p, q) = [(x − s)² + (y − t)²]^(1/2)
Pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centred at (x, y).
154. Distance Measures
• The D4 distance (also called city-block distance) between p and q is defined as:
D4(p, q) = |x − s| + |y − t|
Pixels having a D4 distance from (x, y) less than or equal to some value r form a diamond centred at (x, y).
155. Distance Measures
Example: the pixels with distance D4 ≤ 2 from (x, y) form the following contours of constant distance:

        2
      2 1 2
    2 1 0 1 2
      2 1 2
        2

The pixels with D4 = 1 are the 4-neighbors of (x, y).
156. Distance Measures
• The D8 distance (also called chessboard distance) between p and q is defined as:
D8(p, q) = max(|x − s|, |y − t|)
Pixels having a D8 distance from (x, y) less than or equal to some value r form a square centred at (x, y).
[Figure: p at (x, y), q at (s, t); D8 = max(D8(a), D8(b)), the larger of the horizontal and vertical offsets]
158. Distance Measures
• The Dm distance is defined as the shortest m-path between the points.
In this case, the distance between two pixels will depend on the values of the pixels along the path, as well as the values of their neighbors.
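The three coordinate-based distances defined above can be sketched directly from their formulas (the function names d_e, d_4 and d_8 are mine, mirroring the notation De, D4 and D8; Dm is omitted because it depends on pixel values along the path, not on coordinates alone):

```python
def d_e(p, q):
    """Euclidean distance De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_4(p, q):
    """City-block distance D4(p, q) = |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    """Chessboard distance D8(p, q) = max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4
```

Note that D4 ≥ De ≥ D8 for any pair of pixels, which matches the diamond, disk and square contour shapes described on the previous slides.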