Slide 3 of 36
UNIT I: Introduction to Image Processing
• Fundamentals of Image Processing and Image Transforms: basic steps of an image processing system, sampling and quantization of an image, basic relationships between pixels
• Image Transforms: 2-D Discrete Fourier Transform, Discrete Cosine Transform (DCT), Wavelet Transforms: Continuous Wavelet Transform, Discrete Wavelet Transform
Slide 4 of 36
Contents
This lecture will cover:
– Motivation
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State-of-the-art examples of digital image processing
– Key stages in digital image processing
Slide 5 of 36
What is a Digital Image?
A digital image is a representation of a two-dimensional image as a finite set of digital values, called picture elements or pixels.
Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Slide 6 of 36
What is a Digital Image? (cont…)
Pixel values typically represent gray levels, colours, heights, opacities, etc.
Remember: digitization implies that a digital image is an approximation of a real scene.
Slide 7 of 36
What is a Digital Image? (cont…)
Common image formats include:
– 1 sample per point (B&W or grayscale)
– 3 samples per point (red, green, and blue)
– 4 samples per point (red, green, blue, and "alpha", a.k.a. opacity)
For most of this course we will focus on grey-scale images.
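These formats map naturally onto array shapes: one 2-D array for grayscale, and an extra channel axis for colour. A minimal sketch using NumPy (the library choice and the tiny 4x6 size are ours, not the slides'):

```python
import numpy as np

h, w = 4, 6                               # a tiny 4x6 example image

# 1 sample per point: grayscale, one 8-bit value per pixel
gray = np.zeros((h, w), dtype=np.uint8)

# 3 samples per point: red, green, and blue channels
rgb = np.zeros((h, w, 3), dtype=np.uint8)
rgb[..., 0] = 255                         # a pure-red image

# 4 samples per point: RGB plus an alpha (opacity) channel
rgba = np.zeros((h, w, 4), dtype=np.uint8)
rgba[..., 3] = 255                        # fully opaque

print(gray.shape, rgb.shape, rgba.shape)  # (4, 6) (4, 6, 3) (4, 6, 4)
```

The last axis is the per-point sample count from the list above.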
Slide 8 of 36
What is Digital Image Processing?
Digital image processing focuses on two major tasks:
– Improvement of pictorial information for human interpretation
– Processing of image data for storage, transmission, and representation for autonomous machine perception
There is some argument about where image processing ends and fields such as image analysis and computer vision begin.
Slide 9 of 36
What is DIP? (cont…)
The continuum from image processing to computer vision can be broken up into low-, mid- and high-level processes:
– Low-level process: input is an image, output is an image. Examples: noise removal, image sharpening.
– Mid-level process: input is an image, output is a set of attributes. Examples: object recognition, segmentation.
– High-level process: input is a set of attributes, output is an understanding. Examples: scene understanding, autonomous navigation.
In this course we will stop at mid-level processes.
Slide 10 of 36
History of Digital Image Processing
Early 1920s: One of the first applications of digital imaging was in the newspaper industry
– The Bartlane cable picture transmission service
– Images were transferred by submarine cable between London and New York
– Pictures were coded for cable transfer and reconstructed at the receiving end on a telegraph printer
Early digital image
Slide 11 of 36
History of DIP (cont…)
Mid to late 1920s: Improvements to the Bartlane system resulted in higher-quality images
– New reproduction processes based on photographic techniques
– Increased number of tones in reproduced images
Improved digital image; early 15-tone digital image
Slide 12 of 36
History of DIP (cont…)
1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing
– 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe
– Such techniques were used in other space missions, including the Apollo landings
A picture of the moon taken by the Ranger 7 probe minutes before impact
Slide 13 of 36
History of DIP (cont…)
1970s: Digital image processing begins to be used in medical applications
– 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in Medicine for the invention of tomography, the technology behind Computerised Axial Tomography (CAT) scans
Typical head-slice CAT image
Slide 14 of 36
History of DIP (cont…)
1980s to today: The use of digital image processing techniques has exploded, and they are now used for all kinds of tasks in all kinds of areas:
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human–computer interfaces
Slide 15 of 36
Examples: Image Enhancement
One of the most common uses of DIP techniques: improve quality, remove noise, etc.
Slide 16 of 36
Examples: The Hubble Telescope
Launched in 1990, the Hubble telescope can take images of very distant objects.
However, a flaw in its mirror made many of Hubble's images useless.
Image processing techniques were used to fix this.
Slide 18 of 36
Examples: Medicine
Take a slice from an MRI scan of a canine heart, and find the boundaries between types of tissue:
– Image with gray levels representing tissue density
– Use a suitable filter to highlight edges
Original MRI image of a dog heart; edge-detection image
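The slide leaves "a suitable filter" unspecified; as one common choice (ours, not the slides'), the Sobel kernels respond strongly wherever gray levels change, which is exactly the tissue-boundary behaviour described above:

```python
import numpy as np

def filter2d(img, kernel):
    """Naive sliding-window filtering ('valid' mode), enough for a demo."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernels: respond to horizontal and vertical intensity changes
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T

# A toy "tissue density" image: dark region on the left, bright on the right
img = np.zeros((5, 6))
img[:, 3:] = 10.0

gx = filter2d(img, sobel_x)
gy = filter2d(img, sobel_y)
edges = np.hypot(gx, gy)   # gradient magnitude: large only at the boundary
```

The result is zero inside each uniform region and large only along the vertical boundary between them.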
Slide 19 of 36
Examples: GIS
Geographic Information Systems
– Digital image processing techniques are used extensively to manipulate satellite imagery
– Terrain classification
– Meteorology
Slide 20 of 36
Examples: GIS (cont…)
Night-Time Lights of the World data set
– Global inventory of human settlement
– Not hard to imagine the kind of analysis that might be done using this data
Slide 21 of 36
Examples: Industrial Inspection
Human operators are expensive, slow and unreliable, so make machines do the job instead.
Industrial vision systems are used in all kinds of industries.
Can we trust them?
Slide 22 of 36
Examples: PCB Inspection
Printed Circuit Board (PCB) inspection
– Machine inspection is used to determine that all components are present and that all solder joints are acceptable
– Both conventional imaging and x-ray imaging are used
Slide 23 of 36
Examples: Law Enforcement
Image processing techniques are used extensively by law enforcers:
– Number-plate recognition for speed cameras/automated toll systems
– Fingerprint recognition
– Enhancement of CCTV images
Slide 24 of 36
Examples: HCI
Try to make human–computer interfaces more natural:
– Face recognition
– Gesture recognition
Does anyone remember the user interface from "Minority Report"?
These tasks can be extremely difficult.
Slide 25 of 36
Colour Fundamentals (cont…)
Chromatic light spans the electromagnetic spectrum from approximately 400 to 700 nm.
As we mentioned before, human colour vision is achieved through 6 to 7 million cones in each eye.
Slide 26 of 36
8/19/2023
Imaging Types
• Gamma-ray imaging (nuclear medicine)
• X-ray imaging (diagnosis)
• Imaging in the ultraviolet band (industrial inspection, astronomical observation)
• Imaging in the visible & infrared bands
• Imaging in the microwave band
The dominant application of imaging in the microwave band is radar. The unique feature of imaging radar is its ability to collect data over virtually any region at any time, regardless of weather or ambient lighting conditions.
• Imaging in the radio band
• Imaging modalities using non-EM spectrum bands
Slide 28 of 36
• Image Sensors:
Image sensors sense the intensity, amplitude, coordinates and other features of the image and pass the result to the image processing hardware. This stage includes the problem domain.
• Image Processing Hardware:
Dedicated hardware that processes the data obtained from the image sensors and passes the result to a general-purpose computer.
• Computer:
The computer used in an image processing system is a general-purpose computer of the kind we use in daily life.
• Image Processing Software:
Software that includes all the mechanisms and algorithms used in the image processing system.
• Mass Storage:
Stores the pixels of the images during processing.
• Hard Copy Device:
Once the image is processed, it is stored in a hard copy device. This can be a pen drive or any external storage device.
• Image Display:
The monitor or display screen that shows the processed images.
• Network:
The connection between all of the above elements of the image processing system.
Slide 29 of 36
Fundamental Steps in DIP
The material covered in the book falls into two broad categories:
– Methods whose input and output are images
– Methods whose input is an image and whose output is a set of attributes
This is depicted in the diagram. The diagram doesn't imply that every process is applied to every image, just that any of them can be applied for a specific purpose.
Slide 51 of 36
BASIC RELATIONSHIP BETWEEN PIXELS
• The word pixel is based on a contraction of pix ("pictures") and el (for "element"); similar formations with el for "element" include the words voxel and texel.
• In digital imaging, a pixel (or picture element) is a single point in an image.
• The pixel is the smallest addressable screen element; it is the smallest unit of a picture that can be controlled.
• Each pixel has its own address. The address of a pixel corresponds to its coordinates.
• Pixels are normally arranged in a 2-dimensional grid, and are often represented using dots or squares.
• Each pixel is a sample of an original image; more samples typically provide a more accurate representation of the original.
• The intensity of each pixel is variable.
• In colour image systems, a colour is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black.
Slide 52 of 36
Bits per Pixel
• The number of distinct colors that can be represented by a pixel depends on the number of bits per pixel (bpp).
• A 1 bpp image uses 1 bit for each pixel, so each pixel can be either on or off. Each additional bit doubles the number of colors available, so a 2 bpp image can have 4 colors, and a 3 bpp image can have 8 colors:
• 1 bpp, 2^1 = 2 colors (monochrome)
• 2 bpp, 2^2 = 4 colors
• 3 bpp, 2^3 = 8 colors
• 8 bpp, 2^8 = 256 colors
• 16 bpp, 2^16 = 65,536 colors ("Highcolor")
• 24 bpp, 2^24 ≈ 16.8 million colors ("Truecolor")
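The doubling rule above is simply 2 raised to the bit depth, which is quick to check:

```python
def num_colors(bpp):
    """Number of distinct values representable with `bpp` bits per pixel."""
    return 2 ** bpp

for bpp in (1, 2, 3, 8, 16, 24):
    print(f"{bpp:2d} bpp -> {num_colors(bpp):,} colors")
# 24 bpp gives 16,777,216 colors, the "16.8 million" of Truecolor
```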
Slide 53 of 36
• For color depths of 15 or more bits per pixel, the depth is normally the sum of the bits allocated to each of the red, green, and blue components. Highcolor, usually meaning 16 bpp, normally has five bits for red and blue, and six bits for green, as the human eye is more sensitive to errors in green than in the other two primary colors.
• For applications involving transparency, the 16 bits may be divided into five bits each of red, green, and blue, with one bit left for transparency.
• A 24-bit depth allows 8 bits per component. On some systems, a 32-bit depth is available: this means that each 24-bit pixel has an extra 8 bits to describe its opacity (for purposes of combining with another image).
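The 5-6-5 Highcolor layout described above can be sketched with a little bit-packing; the red-high, blue-low field order here is a common convention (RGB565), not something the slides specify:

```python
def pack_rgb565(r, g, b):
    """Pack 5-bit red, 6-bit green, 5-bit blue into one 16-bit value."""
    assert 0 <= r < 32 and 0 <= g < 64 and 0 <= b < 32
    return (r << 11) | (g << 5) | b

def unpack_rgb565(pixel):
    """Recover the (r, g, b) fields from a 16-bit 5-6-5 pixel."""
    return (pixel >> 11) & 0x1F, (pixel >> 5) & 0x3F, pixel & 0x1F

p = pack_rgb565(31, 63, 0)       # brightest red + green, no blue
assert unpack_rgb565(p) == (31, 63, 0)
```

Note that green gets the extra bit (64 levels vs 32), matching the eye's greater sensitivity to green errors.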
Slide 54 of 36
NEIGHBORS OF A PIXEL
• A pixel p at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by:
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
This set of pixels, called the 4-neighbors of p, is denoted by N4(p). Each pixel is one unit distance from (x, y), and some of the neighbors of p lie outside the digital image if (x, y) is on the border of the image. The four diagonal neighbors of p have coordinates
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p).
Slide 55 of 36
These points, together with the 4-neighbors, are called the 8-neighbors of p, denoted by N8(p). As before, some of the points in ND(p) and N8(p) fall outside the image if (x, y) is on the border of the image.
The boundary (also called border or contour) of a region R is the set of pixels in the region that have one or more neighbors that are not in R. If R happens to be an entire image (which we recall is a rectangular set of pixels), then its boundary is defined as the set of pixels in the first and last rows and columns of the image.
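The three neighbor sets can be written down directly from their coordinate definitions; a small sketch, including the border check mentioned above:

```python
def n4(x, y):
    """4-neighbors of (x, y): horizontal and vertical."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbors of (x, y)."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbors: union of the 4-neighbors and the diagonal neighbors."""
    return n4(x, y) | nd(x, y)

def in_image(neighbors, width, height):
    """Discard neighbors that fall outside the image bounds."""
    return {(x, y) for (x, y) in neighbors if 0 <= x < width and 0 <= y < height}

assert len(n8(5, 5)) == 8
assert in_image(n4(0, 0), 10, 10) == {(1, 0), (0, 1)}   # corner pixel
```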
Slide 56 of 36
ADJACENCY AND CONNECTIVITY
• Let V be the set of gray-level values used to define adjacency. In a binary image, V = {1}. In a gray-scale image, the idea is the same, but V typically contains more elements, for example V = {180, 181, 182, …, 200}.
• If the possible intensity values are 0 to 255, V can be any subset of these 256 values.
• Three types of adjacency:
– 4-adjacency: two pixels p and q with values from V are 4-adjacent if q is in the set N4(p)
– 8-adjacency: two pixels p and q with values from V are 8-adjacent if q is in the set N8(p)
– m-adjacency: two pixels p and q with values from V are m-adjacent if
(i) q is in N4(p), or
(ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixel whose values are from V
• Mixed (m-) adjacency is a modification of 8-adjacency. It is introduced to eliminate the ambiguities that often arise when 8-adjacency is used.
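With the neighbor sets in hand, 4- and 8-adjacency reduce to set-membership tests. A minimal sketch (representing the image as a dict from coordinates to values and V as a Python set are our choices for the demo):

```python
def n4(x, y):
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def n8(x, y):
    return n4(x, y) | {(x + 1, y + 1), (x + 1, y - 1),
                       (x - 1, y + 1), (x - 1, y - 1)}

def adjacent4(p, q, img, V):
    """p, q are 4-adjacent if both values are in V and q is in N4(p)."""
    return img[p] in V and img[q] in V and q in n4(*p)

def adjacent8(p, q, img, V):
    """p, q are 8-adjacent if both values are in V and q is in N8(p)."""
    return img[p] in V and img[q] in V and q in n8(*p)

# Tiny binary image as a dict from (x, y) to value, with V = {1}
img = {(0, 0): 1, (1, 0): 1, (1, 1): 1, (0, 1): 0}
V = {1}
assert adjacent4((0, 0), (1, 0), img, V)        # horizontal neighbors
assert not adjacent4((0, 0), (1, 1), img, V)    # diagonal: not 4-adjacent
assert adjacent8((0, 0), (1, 1), img, V)        # but 8-adjacent
```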
Slide 57 of 36
Distance Measures
For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function or metric if
(a) D(p, q) ≥ 0 (D(p, q) = 0 iff p = q),
(b) D(p, q) = D(q, p), and
(c) D(p, z) ≤ D(p, q) + D(q, z).
The Euclidean distance between p and q is defined as
De(p, q) = [(x − s)^2 + (y − t)^2]^(1/2)
Pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).
Slide 58 of 36
• The D4 distance (also called city-block distance) between p and q is defined as:
D4(p, q) = |x − s| + |y − t|
Pixels having a D4 distance from (x, y) less than or equal to some value r form a diamond centered at (x, y).
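Both metrics can be checked directly against their definitions, including the diamond shape of the D4 "disk":

```python
import math

def d_euclidean(p, q):
    """De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)"""
    (x, y), (s, t) = p, q
    return math.hypot(x - s, y - t)

def d4(p, q):
    """City-block distance: |x - s| + |y - t|"""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q))   # 5.0 (the classic 3-4-5 triangle)
print(d4(p, q))            # 7

# The points with D4 <= 2 from the origin trace out a diamond of 13 pixels
diamond = {(x, y) for x in range(-2, 3) for y in range(-2, 3)
           if d4((0, 0), (x, y)) <= 2}
```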
Slide 73 of 36
DISCRETE COSINE TRANSFORM (DCT)
The discrete cosine transform (DCT) helps separate the image into parts (or spectral sub-bands) of differing importance (with respect to the image's visual quality). The DCT is similar to the discrete Fourier transform: it transforms a signal or image from the spatial domain to the frequency domain.
The general equation for a 1-D DCT of N data items is
F(u) = (2/N)^(1/2) Λ(u) Σ_{i=0..N−1} f(i) cos[π u (2i + 1) / (2N)]
where Λ(u) = 1/√2 for u = 0 and Λ(u) = 1 otherwise, and the corresponding inverse 1-D DCT transform is simply F⁻¹(u).
Slide 74 of 36
• The general equation for a 2-D DCT of an N by M image is defined as:
F(u, v) = (2/√(N M)) Λ(u) Λ(v) Σ_{i=0..N−1} Σ_{j=0..M−1} f(i, j) cos[π u (2i + 1) / (2N)] cos[π v (2j + 1) / (2M)]
with Λ as before, and the corresponding inverse 2-D DCT transform is simply F⁻¹(u, v).
The basic operation of the DCT is as follows:
– The input image is N by M;
– f(i, j) is the intensity of the pixel in row i and column j;
– F(u, v) is the DCT coefficient in row u and column v of the DCT matrix;
– For most images, much of the signal energy lies at low frequencies; these appear in the upper-left corner of the DCT;
– Compression is achieved since the lower-right values represent higher frequencies, and are often small enough to be neglected with little visible distortion;
– The DCT input is an 8 by 8 array of integers. This array contains each pixel's gray-scale level; 8-bit pixels have levels from 0 to 255.
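A direct (unoptimized) implementation of the 1-D formula above, as a sketch; it also demonstrates the "energy at low frequencies" claim, since a constant signal ends up entirely in the u = 0 coefficient:

```python
import math

def dct_1d(f):
    """DCT-II of sequence f:
    F(u) = sqrt(2/N) * Lambda(u) * sum_i f(i) * cos(pi*u*(2i+1)/(2N)),
    with Lambda(0) = 1/sqrt(2) and Lambda(u) = 1 otherwise."""
    N = len(f)
    F = []
    for u in range(N):
        lam = 1 / math.sqrt(2) if u == 0 else 1.0
        s = sum(f[i] * math.cos(math.pi * u * (2 * i + 1) / (2 * N))
                for i in range(N))
        F.append(math.sqrt(2 / N) * lam * s)
    return F

# A constant signal concentrates all its energy in the u = 0 ("DC") term;
# the higher-frequency coefficients come out numerically zero.
F = dct_1d([1.0, 1.0, 1.0, 1.0])
```

For real images one would apply this along rows and then columns of each 8x8 block, which is exactly the separable 2-D form above.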
Slide 77 of 36
• We can also apply a wavelet transform differently.
• Suppose we apply a wavelet transform to an image by rows, then by columns, but using our transform at one scale only.
• This technique will produce a result in four quarters: the top left will be a half-sized version of the image, and the other quarters will be high-pass filtered images.
• These quarters will contain horizontal, vertical, and diagonal edges of the image.
• We then apply a one-scale DWT to the top-left quarter, creating smaller images, and so on. This is called the nonstandard decomposition, and is illustrated in the figure.
Slide 78 of 36
• Steps for performing a one-scale wavelet transform are given below:
• Step 1: Convolve the image rows with the low-pass filter.
• Step 2: Convolve the columns of the result of step 1 with the low-pass filter and rescale this to half its size by sub-sampling.
• Step 3: Convolve the columns of the result of step 1 with the high-pass filter and again sub-sample to obtain an image of half the size.
• Step 4: Convolve the original image rows with the high-pass filter.
• Step 5: Convolve the columns of the result of step 4 with the low-pass filter and rescale this to half its size by sub-sampling.
• Step 6: Convolve the columns of the result of step 4 with the high-pass filter and again sub-sample to obtain an image of half the size.
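The six steps above can be sketched with the simplest wavelet, the Haar filter pair (pairwise averages as the low-pass, pairwise differences as the high-pass); the slides don't name a specific filter, so this choice is ours:

```python
import numpy as np

def haar_rows(img):
    """One-scale Haar filtering along rows: low-pass (pair averages) and
    high-pass (pair differences), each sub-sampled to half the width."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    return lo, hi

def haar_dwt_2d(img):
    """One-scale 2-D DWT following steps 1-6: filter rows, then columns
    (via transposition), sub-sampling by two each time."""
    row_lo, row_hi = haar_rows(img)     # steps 1 and 4: rows
    ll, lh = haar_rows(row_lo.T)        # steps 2 and 3: columns of row_lo
    hl, hh = haar_rows(row_hi.T)        # steps 5 and 6: columns of row_hi
    return ll.T, lh.T, hl.T, hh.T       # approximation + three detail bands

img = np.full((4, 4), 4.0)              # a constant 4x4 test image
ll, lh, hl, hh = haar_dwt_2d(img)
# A constant image has no edges, so all detail (high-pass) bands are zero
# and the LL band is a half-sized copy of the image.
```

The four returned quarters are exactly the LL, LH, HL and HH bands of the nonstandard decomposition described on the previous slide.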
Slide 81 of 36
• An example of a discrete wavelet transform on an image is shown in the figure above. On the left is the original image data, and on the right are the coefficients after a single pass of the wavelet transform. The low-pass data is the recognizable portion of the image in the upper-left corner. The high-pass components are almost invisible because image data contains mostly low-frequency information.
Slide 82 of 36
Summary
We have looked at:
– What is a digital image?
– What is digital image processing?
– History of digital image processing
– State-of-the-art examples of digital image processing
– Key stages in digital image processing
– Sampling & quantization
Next time we will start to see how it all works…