2. CONTENTS
01 INTRODUCTION
What Is Image, Image Processing, Image Processing Steps & Types of Image Processing
02 DIGITAL IMAGE PROCESSING
What Is a Digital Image, Grey Level & Pixel
03 IMAGE ENHANCEMENT & IMAGE SEGMENTATION
The Different Types of Image Enhancement & Image Segmentation
04 CLASSIFICATION
Image Classification and Its Types: Supervised & Unsupervised
4. INTRODUCTION
Image : An image is the optical appearance of an object produced through mirrors or lenses.
Image Processing : Image processing is a method to perform some operations on an image,
in order to get an enhanced image or to extract some useful information from it.
It is a type of signal processing in which the input is an image and the output may be an image or
characteristics/features associated with that image. Nowadays, image processing is among the most
rapidly growing technologies, and it forms a core research area within the engineering and computer
science disciplines.
5. Image Processing Steps
Image processing basically includes the following three steps:
1. Importing the image via image acquisition tools.
2. Analysing and manipulating the image.
3. Output, in which the result can be an altered image or a report based on image analysis.
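The three steps above can be sketched in Python. The image here is synthesized as a NumPy array purely for illustration rather than acquired from a camera or file:

```python
import numpy as np

# Step 1: "acquire" an image. A real pipeline would load a file
# (e.g. with an image library); here we synthesize an 8-bit
# greyscale image as a NumPy array of rows x columns.
image = np.linspace(0, 255, 16, dtype=np.uint8).reshape(4, 4)

# Step 2: analyse and manipulate the image -- here, invert the intensities.
processed = 255 - image

# Step 3: output -- either the altered image or a report based on analysis.
report = {
    "shape": processed.shape,
    "min": int(processed.min()),
    "max": int(processed.max()),
    "mean": float(processed.mean()),
}
print(report)
```

The inversion stands in for any manipulation step; in practice this slot holds enhancement, filtering, or feature extraction.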
6. Types of Image Processing
There are two types of methods used for image processing, namely:
1. Analogue Image Processing
2. Digital Image Processing
Analogue Image Processing : Analogue image processing can be used for hard copies
such as printouts and photographs. Image analysts use various fundamentals of interpretation
while applying these visual techniques.
Digital Image Processing : Digital image processing techniques help in the manipulation of
digital images using computers. The three general phases that all types of data undergo with
the digital technique are pre-processing, enhancement and display, and information extraction.
8. Introduction Of Digital Image Processing
• Digital Image : A digital image is a representation of a real image as a set of numbers that can be stored and
handled by a digital computer. In order to translate the image into numbers, it is divided into small areas
called pixels (picture elements).
• For each pixel, the imaging device records a number, or a small set of numbers, that describe some property
of this pixel, such as its brightness (the intensity of the light) or its color. The numbers are arranged in an array
of rows and columns that correspond to the vertical and horizontal positions of the pixels in the image.
• Digital Image Processing : Digital image processing techniques help in the manipulation of digital images
using computers. The three general phases that all types of data undergo with the digital technique are
pre-processing, enhancement and display, and information extraction.
9. Grey level / Grey value
• The minimum grey level is 0. The maximum grey level depends on the digitisation depth of the
image. For an 8-bit-deep image it is 255. In a binary image a pixel can only take on either the
value 0 or the value 255. In contrast, in a greyscale or colour image a pixel can take on any value
between 0 and 255.
In a colour image the grey level of each pixel can be calculated using the following formula:
Grey level = 0.299 * red component + 0.587 * green component + 0.114 * blue component
A grey level histogram indicates how many pixels of an image share the same grey level.
The x-axis shows the grey levels (e.g. from 0 to 255), the y-axis shows their frequency in the
image. This information can be used to calculate a threshold.
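As a sketch, the grey-level formula and the histogram can be computed with NumPy on a tiny hypothetical colour image (the pixel values are made up for illustration):

```python
import numpy as np

# A tiny 2x2 colour image: each pixel holds (red, green, blue) components.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# Grey level = 0.299 * red + 0.587 * green + 0.114 * blue (formula above).
grey = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
        + 0.114 * rgb[..., 2]).astype(np.uint8)

# Grey-level histogram: how many pixels share each value 0..255.
hist = np.bincount(grey.ravel(), minlength=256)

# One simple threshold choice derived from the histogram: the mean grey level.
threshold = grey.mean()
print(grey)
print(threshold)
```

The pure-red pixel maps to grey level 76, pure green to 149, pure blue to 29, and white to 255, matching the weights in the formula.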
10. PIXEL
• In digital imaging, a pixel, pel, or picture element is a physical point in a raster image, or the
smallest addressable element in an all points addressable display device; so it is the smallest
controllable element of a picture represented on the screen.
The shape of a pixel is not always square; it is inherent to the screen. Since the number of
pixels on a screen (its maximum resolution) is fixed, an image or video of a different resolution
must be resampled to be viewed: when a picture of higher resolution is displayed on a screen of
lower resolution, its pixels shrink (are scaled down) to fit the screen.
12. IMAGE ENHANCEMENT
• Image enhancement is the process of adjusting digital images so that the results are more
suitable for display or further image analysis. For example, you can remove noise, sharpen, or
brighten an image, making it easier to identify key features.
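One simple enhancement, contrast stretching, can be sketched as follows (the image values are made up for illustration):

```python
import numpy as np

# A dim, low-contrast greyscale image (intensities huddle in 50..100).
img = np.array([[50, 60, 70],
                [80, 90, 100]], dtype=np.uint8)

# Contrast stretching: linearly map [min, max] onto the full range [0, 255].
# This brightens the image and spreads the intensities, making features
# easier to distinguish -- a basic enhancement operation.
lo, hi = img.min(), img.max()
stretched = ((img.astype(np.float64) - lo) / (hi - lo) * 255).astype(np.uint8)
print(stretched)
```

After stretching, the darkest pixel becomes 0 and the brightest 255, with the rest spread proportionally in between.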
14. IMAGE SEGMENTATION
• In computer vision, image segmentation is the process of partitioning a digital image into multiple
segments. The goal of segmentation is to simplify and/or change the representation of an image
into something that is more meaningful and easier to analyze.
• Let’s understand image segmentation using a simple example. Consider the below image:
15. There’s only one object here – a dog. We can build a straightforward cat-dog classifier model and
predict that there’s a dog in the given image. But what if we have both a cat and a dog in a single
image?
In that instance, we can train a multi-label classifier. But there is another caveat – we won't
know the location of either animal/object in the image.
That’s where image localization comes into the picture (no pun intended!). It helps us to identify
the location of a single object in the given image. In case we have multiple objects present, we
then rely on the concept of object detection (OD). We can predict the location along with the
class for each object using OD.
16. Before detecting the objects and even before classifying the image, we need to understand what the
image consists of. Enter – Image Segmentation.
So how does image segmentation work?
We can divide or partition the image into various parts called segments. It’s not a great idea to
process the entire image at the same time as there will be regions in the image which do not
contain any information. By dividing the image into segments, we can make use of the important
segments for processing the image. That, in a nutshell, is how image segmentation works.
An image is a collection or set of different pixels.
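A minimal sketch of this idea is threshold-based segmentation, assuming a greyscale image with a bright object on a dark background (the pixel values and the threshold below are chosen by hand for illustration; Otsu's method would choose the threshold automatically):

```python
import numpy as np

# A greyscale image with a bright "object" on a dark background.
img = np.array([[10,  12,  11,  13],
                [10, 200, 210,  12],
                [11, 205, 220,  10],
                [12,  11,  10,  13]], dtype=np.uint8)

# Threshold segmentation: pixels above the threshold form one segment
# (the object); the rest form another (the background).
threshold = 100
mask = img > threshold          # boolean pixel-wise mask

object_pixels = int(mask.sum())
print(mask.astype(np.uint8))
print(object_pixels)  # 4 pixels belong to the object segment
```

Only the object segment then needs further processing, which is exactly the point made above: the background region carries no useful information here.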
17. Object detection builds a bounding box corresponding to each class in the image. But it tells us nothing
about the shape of the object. We only get the set of bounding box coordinates. We want to get more
information – this is too vague for our purposes.
Image segmentation creates a pixel-wise mask for each object in the image. This technique gives us a
far more granular understanding of the object(s) in the image.
18. The Different Types of Image Segmentation
• We can broadly divide image segmentation techniques into two types. Consider the below images:
In image 1, every pixel belongs to a particular class (either background or person). Also, all the pixels belonging to a
particular class are represented by the same color (background as black and person as pink). This is an example of
semantic segmentation.
Image 2 has also assigned a particular class to each pixel of the image. However, different objects of the same class
have different colors (Person 1 as red, Person 2 as green, background as black, etc.). This is an example of instance
segmentation.
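The distinction can be sketched with two hypothetical label masks (the layouts are invented for illustration):

```python
import numpy as np

# Semantic mask: every pixel gets a class id (0 = background, 1 = person).
# Both people share the same id -- the mask only says "this pixel is a person".
semantic = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0],
                     [0, 1, 1, 0]])

# Instance mask: pixels of the same class but different objects get
# different ids (1 = person 1, 2 = person 2); background stays 0.
instance = np.array([[0, 1, 1, 0],
                     [0, 1, 1, 0],
                     [0, 0, 0, 0],
                     [0, 2, 2, 0]])

print(np.unique(semantic))  # background and one "person" class
print(np.unique(instance))  # background plus two separate person instances
```

The pixel positions are identical in both masks; only the labelling scheme differs, which is exactly the semantic-vs-instance distinction.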
20. CLASSIFICATION
Classification is one of the most widely used techniques in machine learning, with a broad array of applications,
including sentiment analysis, ad targeting, spam detection, risk assessment, medical diagnosis and image
classification. The core goal of classification is to predict a category or class y from some inputs x.
Image Classification:
Image classification refers to the task of extracting information classes from a multiband raster image.
The resulting raster from image classification can be used to create thematic maps. Depending on the interaction
between the analyst and the computer during classification,
there are two types of classification:
1. Supervised &
2. Unsupervised.
21. Supervised classification :
Supervised classification uses the spectral signatures obtained from training samples to classify an image. With the
assistance of the Image Classification toolbar, you can easily create training samples to represent the classes you
want to extract. You can also easily create a signature file from the training samples, which is then used by the
multivariate classification tools to classify the image.
Unsupervised classification :
Unsupervised classification finds spectral classes (or clusters) in a multiband image without the analyst’s intervention.
The Image Classification toolbar aids in unsupervised classification by providing access to the tools to create the
clusters, capability to analyze the quality of the clusters, and access to classification tools.
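A minimal sketch of supervised classification is a minimum-distance classifier: each pixel is assigned to the class whose spectral signature (the mean band values from its training samples) is nearest. The two-band signatures and class names below are hypothetical:

```python
import numpy as np

# Spectral signatures: mean value per band, as would be derived from
# training samples (hypothetical two-band data, made-up class names).
signatures = {
    "water":      np.array([20.0, 10.0]),
    "vegetation": np.array([40.0, 90.0]),
    "urban":      np.array([80.0, 60.0]),
}

def classify(pixel):
    """Assign the pixel to the class whose signature is nearest in band space."""
    return min(signatures, key=lambda c: np.linalg.norm(pixel - signatures[c]))

pixel = np.array([38.0, 85.0])
print(classify(pixel))  # nearest signature: vegetation
```

Unsupervised classification would instead cluster the pixel vectors (e.g. with k-means) without any signatures, leaving the analyst to name the resulting spectral classes afterwards.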
22. In the following example, the Image Classification toolbar was used to classify a Landsat TM satellite image.
The following raw satellite image is a four-band Landsat TM image of the northern area of Cincinnati, Ohio.
Input Landsat TM image
23. Using the toolbar, five land-use classes were defined from the satellite
image: Commercial/Industrial, Residential, Cropland, Forest, and Pasture.
Training samples
24. The quality of the training samples was analyzed using the training sample evaluation tools in Training Sample
Manager.
Evaluating training samples
25. Using the Image Classification toolbar and Training Sample Manager, it was determined that the training samples
were representative of the area and statistically separable. Therefore, a maximum likelihood classification was
performed from the toolbar. The classified image was then cleaned up to create the final land-use map shown below.
Output classified land use map
26. EXAMPLE OF IMAGE CLASSIFICATION
01 Input Landsat TM image
02 Training samples
03 Evaluating training samples
04 Output classified land-use map