Image processing 
is the study of any algorithm that takes an image as input and returns an image 
as output. 
WHY DO WE NEED IMAGE PROCESSING?… 
Since the digital image is "invisible", it must be prepared for viewing 
on one or more output devices (laser printer, monitor, etc.) 
The digital image can be optimized for the application by enhancing or altering the 
appearance of structures within it (based on: body part, diagnostic task, viewing 
preferences, etc.) 
It might be possible to analyze the image in the computer and provide cues to the 
radiologists to help detect important/suspicious structures (e.g. Computer-Aided 
Diagnosis, CAD)
Components of an Image Processing System
Sensors 
Two elements are required to acquire digital images: 
Physical device: sensitive to the energy radiated by the object we wish to image. 
Digitizer: converts the output of the physical sensing device into digital form. 
Specialized image processing hardware: 
Digitizer + hardware. 
Hardware: performs arithmetic and logic (ALU) operations on an entire image. 
Computer: 
An image processing system ranging from a PC to a supercomputer. 
Image Processing Software: 
Specialized modules performing specific tasks
Mass Storage: 
Short-term storage for use during processing. 
On-line storage for relatively fast recall. 
Archival storage, characterized by infrequent access. 
Image Displays: 
Flat screen, TV, Monitors, LCD, LED, 3D displays. 
Hardcopy: 
Laser printers, camera film, heat-sensitive devices, inkjet units, and digital units 
such as optical disks and CD-ROMs.
Image processing software
1-KS400 
One of the commercial image processing software packages used for some of these 
applications is the KONTRON Imaging System KS400. 
It is very powerful and convenient to use. 
However, the KS400 software has one essential disadvantage: 
it does not allow direct access from a macro to single pixels in the image. 
In practice this means that the functionality of the software is limited to its 
set of standard functions. 
In principle, the problem can be overcome by using the Free Programming KONTRON 
Software Development Kit, which is supplied with KS400.
2-Image processing/analysis software : 
The software, written for UNIX, consists of a set of low-level image 
processing/analysis functions including: 
Sun raster file image (RAS) reading/writing; 
automatic and manual image thresholding; 
gray-scale and binary morphology; 
fractal analysis of contours using 'hand and dividers' method; 
fractal analysis of percolation networks; 
image correlation.
3-Image processing with MathCAD and 
Matlab. 
The MatLab and MathCAD environments are ideally suited to image processing. 
Both programs have Image Processing Toolboxes which provide a powerful and 
flexible environment for image processing and analysis, and both were used 
to perform different calculations on images. 
-There are several advantages. 
One of them is the ability to have direct access to any portion of the available 
information, which in general is not possible with many commercial image 
analysis systems.
Image processing algorithms
1-FINDING THE NEAREST COLOUR: 
1. Start at the first pixel in the image. 
2. Get the actual color of the pixel. 
3. Find the nearest color of this pixel from the available palette. 
4. Replace the pixel with the nearest color. 
5. Have we reached the end of the image? If so, stop here. 
6. Move on to the next pixel. 
7. Go back to step 2.
There are a few methods for finding the nearest color. 
The one that I am going to explain here is called the Euclidean distance method. 
The algorithm for this is as follows: 
1. Set the minimum distance to a value higher than the highest possible. 
2. Start at the first color in the palette. 
3. Find the difference between each RGB value of the actual color and the current palette color. 
4. Calculate the Euclidean distance. 
5. If the distance is smaller than the minimum distance, set the minimum distance to the smaller value and note the current palette color. 
6. Have we reached the end of the palette? If so, stop here. 
7. Move on to the next color in the palette. 
8. Go back to step 3.
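The two procedures above can be sketched in Python. This is a hedged illustration: `nearest_palette_colour` and `reduce_image` are hypothetical helper names, and the squared distance is compared instead of its square root, which selects the same colour.

```python
def nearest_palette_colour(pixel, palette):
    """Return the palette entry nearest to pixel (steps 1-8 above).

    pixel and palette entries are (R, G, B) tuples.
    """
    best, best_dist = None, float("inf")   # step 1: minimum above any possible value
    for colour in palette:                 # steps 2, 6-8: walk the palette
        # steps 3-4: squared Euclidean distance over the RGB channels
        dist = sum((a - b) ** 2 for a, b in zip(pixel, colour))
        if dist < best_dist:               # step 5: remember the closest colour so far
            best, best_dist = colour, dist
    return best

def reduce_image(image, palette):
    """Replace every pixel with its nearest palette colour (the first algorithm)."""
    return [[nearest_palette_colour(p, palette) for p in row] for row in image]

palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
image = [[(250, 10, 10), (5, 5, 5)], [(10, 240, 20), (200, 200, 200)]]
print(reduce_image(image, palette))
```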
2-ERROR DIFFUSION: 
-Error diffusion works by comparing the actual color 
of a pixel against its nearest color and taking the 
difference between them. 
-This difference is known as the error. 
- Portions of the error are split between neighbouring pixels, causing the error 
to be diffused; hence the name "error diffusion".
The simplest form of error diffusion can be 
shown as:
-With this form of error diffusion half of the error from the current pixel (represented 
by the black dot) is diffused to the pixel on the right and half to the pixel below. 
-In a color image error diffusion should be applied to the red, green and blue channels 
separately. 
- An important point to note here is that the total amount of error diffused should 
never exceed a value of 1. 
-It is also important to ensure that when a portion of the error is diffused to 
neighbouring pixels it does not cause invalid values (e.g. go below 0 or above 255). 
- Should a value go outside of the valid range then it should be truncated (e.g. -10 
would be truncated to 0 and 260 would be truncated to 255).
-The quality of this type of error diffusion however is pretty poor and very few 
people would actually use it. 
-A better one, and arguably the most famous, is Floyd-Steinberg error diffusion.
With Floyd-Steinberg error diffusion the error is distributed amongst 
more neighbouring pixels providing a much nicer overall dithering 
effect. 
Performing the colour reduction with Floyd-Steinberg error diffusion 
will produce the following results: 
images reduced to 8 colors with error diffusion
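As a sketch, a single-channel (grayscale) Floyd-Steinberg dither with the standard 7/16, 3/16, 5/16, 1/16 weights might look like this; for a colour image the same routine would run once per RGB channel, as noted above.

```python
def floyd_steinberg(gray, levels=(0, 255)):
    """Dither a grayscale image (list of rows, values 0-255) to the given
    palette levels, diffusing the quantization error with the
    Floyd-Steinberg weights (which sum to 16/16 = 1)."""
    h, w = len(gray), len(gray[0])
    img = [[float(v) for v in row] for row in gray]   # working copy with accumulated error
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = min(levels, key=lambda v: abs(v - old))   # nearest palette value
            out[y][x] = new
            err = old - new
            # diffuse portions of the error to the unprocessed neighbours
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x - 1 >= 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# mid-gray dithers to an alternating black/white pattern
dithered = floyd_steinberg([[128] * 4 for _ in range(4)])
print(dithered)
```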
3-GREYSCALE CONVERSION: 
To convert a particular color into greyscale we need to work out the intensity or 
brightness of that color. 
The formula for this (the mean method) is: grey = (R + G + B) / 3. 
Another method of greyscale conversion, which takes into account the human 
perception of color, uses different weights for the red, green and blue components.
• To illustrate the above points let’s have a look at an image of 
some colored bars: 
• Let’s convert this image to greyscale using the mean method: 
• Converting the image to greyscale using the weighted method 
will give us the following:
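The two conversions can be sketched as follows. The weighted variant assumes the common luma weights 0.299, 0.587, 0.114 (the slide's own formula image did not survive), which reflect the eye's higher sensitivity to green.

```python
def grey_mean(r, g, b):
    """Mean method: simple average of the three channels."""
    return (r + g + b) // 3

def grey_weighted(r, g, b):
    """Weighted method: assumed luma weights 0.299, 0.587, 0.114."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(grey_mean(255, 0, 0), grey_weighted(255, 0, 0))   # pure red: 85 vs 76
print(grey_mean(0, 255, 0), grey_weighted(0, 255, 0))   # pure green: 85 vs 150
```

Note how the mean method gives pure red and pure green the same grey level, while the weighted method makes green noticeably brighter.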
4-BRIGHTNESS ADJUSTMENT: 
Brightness adjustment works by adding the desired change in brightness to each of 
the red, green and blue color components. 
The value of brightness will usually be in the range of -255 to +255 for a 24 bit 
palette. 
Negative values will darken the image and, conversely, positive values will 
brighten the image.
images which have had the brightness adjusted by -128 
(darkened) and +128 (brightened):
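A minimal sketch of the adjustment, truncating out-of-range results as discussed in the error-diffusion section:

```python
def clamp(v, lo=0, hi=255):
    """Truncate a channel value into the valid 0..255 range."""
    return max(lo, min(hi, v))

def adjust_brightness(pixel, amount):
    """Add amount (-255..+255) to each RGB component, truncating to 0..255."""
    return tuple(clamp(c + amount) for c in pixel)

print(adjust_brightness((100, 200, 30), 128))   # brightened: (228, 255, 158)
print(adjust_brightness((100, 200, 30), -128))  # darkened: (0, 72, 0)
```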
5-CONTRAST ADJUSTMENT: 
The first step is to calculate a contrast correction factor which is given by the 
following formula where the value C is the desired level of contrast: 
The next step is to perform the actual contrast adjustment itself. The 
following formula shows the adjustment in contrast being made to the red 
component of a color 
The value of contrast will be in the range of -255 to +255. 
Negative values will decrease the amount of contrast and, conversely, positive 
values will increase the amount of contrast.
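The slide's formula images did not survive conversion, so the sketch below assumes a commonly used correction factor, F = (259 × (C + 255)) / (255 × (259 − C)), which matches the stated −255..+255 range (F = 1 at C = 0), and applies it per channel as F × (value − 128) + 128.

```python
def contrast_factor(c):
    """Assumed correction factor for contrast level c in -255..+255."""
    return (259 * (c + 255)) / (255 * (259 - c))

def adjust_contrast(pixel, c):
    """Pull each channel away from (or towards) the midpoint 128, then truncate."""
    f = contrast_factor(c)
    return tuple(max(0, min(255, round(f * (v - 128) + 128))) for v in pixel)

print(adjust_contrast((100, 150, 200), 128))   # more contrast
print(adjust_contrast((100, 150, 200), -128))  # less contrast
```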
6-GAMMA CORRECTION: 
Gamma can be described as the relationship between an input and the resulting output. 
For the scope of this article the input will be the RGB intensity values of an image. 
The relationship in this case between the input and output is that the output is 
proportional to the input raised to the power of gamma: output = input ^ gamma. 
For gamma correction, the input value is raised to the power of the inverse of 
gamma: corrected = input ^ (1 / gamma). 
The range of values used for gamma will depend on the application, but 
personally I tend to use a range of 0.01 to 7.99.
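Both formulas can be sketched on 0..255 channel values by normalizing to 0..1 first (an implementation choice assumed here, since exponentiation only behaves usefully on normalized values):

```python
def gamma_apply(value, gamma):
    """Output proportional to input raised to the power gamma."""
    return round(255 * (value / 255) ** gamma)

def gamma_correct(value, gamma):
    """Gamma correction: input raised to the power 1/gamma."""
    return round(255 * (value / 255) ** (1 / gamma))

print(gamma_apply(128, 2.2))    # 56: mid-gray pushed darker
print(gamma_correct(128, 2.2))  # 186: mid-gray pushed brighter
```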
7-COLOUR INVERSION AND SOLARISATION : 
-Color inversion, also known as the negative effect, is one of the easiest effects 
to achieve in image processing. 
-Color inversion is achieved by subtracting each RGB color value from the 
maximum possible value (usually 255).
Another effect related to color inversion is the solarise effect. 
The difference between the solarise effect and color inversion is that with the 
solarise effect only color values above or below a set threshold are inverted. 
The images below show the inversion of colours below a threshold of 128 in the 
first instance, and the inversion of colours above a threshold of 128 in the 
second.
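Both effects in a minimal sketch (`below=True` inverts values under the threshold, `below=False` inverts values over it):

```python
def invert(pixel):
    """Negative effect: subtract each channel from the maximum value 255."""
    return tuple(255 - c for c in pixel)

def solarise(pixel, threshold=128, below=True):
    """Invert only channels below (or, with below=False, above) the threshold."""
    def f(c):
        if (c < threshold) if below else (c > threshold):
            return 255 - c
        return c
    return tuple(f(c) for c in pixel)

print(invert((10, 128, 250)))                     # (245, 127, 5)
print(solarise((10, 128, 250), 128, below=True))  # only the 10 is inverted
```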
Image processing 
functions/algorithms 
Image processing applications mainly focus on improving the visual 
appearance of images to a human viewer and preparing them for measurement of the 
features and structures present. 
The measurement of images generally requires that features be well defined, 
either by edges or unique brightness, colour, texture, or some combination of 
these factors.
Spatial filtering 
An image can be filtered to remove a band of spatial frequencies, 
such as high frequencies or low frequencies. High frequencies are present 
wherever rapid brightness transitions occur. 
Spatial filtering operations include high pass, low pass and edge detection filters.
Sharpening 
The main aim of image sharpening is to highlight fine detail in the image, 
or to enhance detail that has been blurred due to noise or other effects. 
The Laplacian is often used for this purpose; however, image sharpening 
can also be interpreted in the frequency domain. 
 Sharpening emphasizes edges in the image and makes them easier to 
see and recognize. 
In addition, the difference between each pixel and its neighbours also 
influences the sharpening effect.
Blurring 
The visual effect of a low pass filter is image blurring. 
This is because sharp brightness transitions are attenuated into small 
brightness transitions. 
The result has less detail and appears blurry. 
 Blurring aims to diminish the effects of camera noise, unauthentic pixel values or 
missing pixel values. 
Two techniques are most often used for blurring: 
-neighbourhood averaging (Gaussian filters). 
-edge preserving (median filters).
A Gaussian blur filter modifies each pixel by looking at its 
neighbours and computing a "weighted average" of their values, 
with more weight being given to closer neighbours. 
The standard Gaussian blur filter found in most image 
processing programs is isotropic; it blends pixel values equally in 
all directions. 
For median filtering on the other hand, the outcome is that 
pixels with outlying values are forced to become more like their 
neighbours but at the same time edges are preserved.
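Both techniques can be sketched on a small grayscale image (lists of lists). The 3×3 Gaussian-approximation kernel and the copy-the-border handling are illustrative choices, not the slide's own implementation:

```python
def convolve3x3(img, kernel):
    """Weighted average of each interior pixel's 3x3 neighbourhood
    (border pixels are copied unchanged for simplicity)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    k = sum(sum(r) for r in kernel)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(kernel[j][i] * img[y - 1 + j][x - 1 + i]
                    for j in range(3) for i in range(3))
            out[y][x] = round(s / k)
    return out

def median3x3(img):
    """Median filter: outlying pixels are forced towards their neighbours,
    while step edges survive unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y - 1 + j][x - 1 + i]
                            for j in range(3) for i in range(3))
            out[y][x] = window[4]   # middle of the 9 sorted values
    return out

GAUSS = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # centre weighted most
noisy = [[10, 10, 10, 10], [10, 255, 10, 10], [10, 10, 10, 10], [10, 10, 10, 10]]
print(median3x3(noisy))            # the lone 255 outlier is removed entirely
print(convolve3x3(noisy, GAUSS))   # the outlier is only smeared out
```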
Edge Detection 
Edge detection is an image processing technique for finding 
the boundaries of objects within images. 
It works by detecting discontinuities in brightness. 
Edge detection is used for image segmentation and data 
extraction in areas such as image processing, computer vision, 
and machine vision.
Edge Detection Techniques 
Sobel Operator 
Roberts cross operator: 
The Roberts Cross operator performs a simple, quick to compute, 
2-D spatial gradient measurement on an image.
Prewitt’s operator: 
Prewitt operator [5] is similar to the Sobel operator and is used 
for detecting vertical and horizontal edges in images.
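A sketch of the Sobel operator at a single pixel: convolve with the horizontal and vertical kernels and combine the two gradients as sqrt(Gx² + Gy²).

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def sobel_magnitude(img, y, x):
    """Gradient magnitude at an interior pixel of a grayscale image."""
    gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
             for j in range(3) for i in range(3))
    return math.hypot(gx, gy)

# a vertical step edge: dark left half, bright right half
step = [[0, 0, 255, 255] for _ in range(4)]
flat = [[5] * 4 for _ in range(4)]
print(sobel_magnitude(step, 1, 1))   # strong response on the edge
print(sobel_magnitude(flat, 1, 1))   # zero in a flat region
```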
Laplacian operator: 
 The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative 
of an image. 
The Laplacian is often applied to an image that has first been 
smoothed with something approximating a Gaussian Smoothing filter 
in order to reduce its sensitivity to noise. 
The operator normally takes a single gray level image as input and 
produces another gray level image as output.
 Canny Edge Detection Algorithm 
Canny's intention was to improve on the many edge detectors already available at the 
time he started his work. 
The first and most obvious criterion is a low error rate. 
The second criterion is that the edge points be well localized. 
A third criterion is to have only one response to a single edge. 
Hysteresis is used to track along the remaining pixels that have not been 
suppressed. 
Hysteresis uses two thresholds: if the magnitude is below the low threshold, it 
is set to zero (made a non-edge); if the magnitude is above the high threshold, 
it is made an edge; and if the magnitude is between the two thresholds, it is set 
to zero unless there is a path from this pixel to a pixel with a gradient above 
the high threshold.
Advantages and Disadvantages of Edge Detectors
Introduction: face recognition systems
•Due to increasing concern about security issues around the world, there is 
growing interest in how accurately computer systems can identify faces. The 
number of security systems and applications in this area has therefore grown 
and evolved significantly, and the algorithms used range from simple to complex. 
•This raises a key question: 
•Is the accuracy of a given face recognition algorithm sufficient for such 
applications?
Stages of face recognition systems in 
general
Mechanism of automated face 
recognition system 
step1 
Build the training set: a collection of face images 
submitted to the system in advance, from which the facial-recognition 
memory is formed and features are extracted.
step2 
• Find the feature vector within the training set that most closely resembles 
the feature vector extracted from the test image, i.e. the image whose owner's 
identity we wish to determine through the identification system. 
A feature vector is an array derived from the original picture; it represents 
the important and fundamental values of the original image, and thereby reduces 
the size of the images to a compact representation.
Step 3 
Within this system, to identify a person we pass an image of that 
person to the system; this image is called the test image. 
In order to extract the feature vectors of the images within this project, 
we rely on the PCA algorithm.
Principal Component Analysis 
(PCA) algorithm.
PCA algorithm 
•The PCA algorithm is considered one of the most successful techniques 
used in the fields of image recognition and image compression. 
•This algorithm was implemented in the MATLAB program. 
The main goal in the PCA algorithm 
• lies in reducing the large dimensionality of the data space to a 
smaller-dimensional space. Usually the new space is a feature space 
(containing the basic and important features of the data in the 
original space).
Identify problems and large-dimensional 
systems 
Such systems deal with large-dimensional spaces. 
Many improvements can be made by transforming the existing data into data in a 
space with fewer dimensions. We thus achieve dimensionality reduction from the 
original space with large dimensions to a new space with smaller dimensions.
Suppose, for example, that we have a vector belonging to an N-dimensional space. 
We reduce its dimensionality by mapping it to another vector in a K-dimensional 
space, where K < N.
Within this context, PCA calculates a linear transformation which maps the 
existing data in the higher-dimensional space to a representation of the same 
information within a lower-dimensional subspace. 
•In other words, the optimum transformation is the one for which the resulting 
error value is smallest.
Complete the task of face recognition using 
PCA 
Represent each object within the training group in a lower-dimensional 
space using the PCA algorithm. 
Assume that the face to be recognized is given as a one-dimensional vector.
Steps of facial recognition 
Step 1: represent the image as a single one-dimensional vector. 
Step 2: normalize the vector by subtracting the average face. 
Step 3: find the nearest face within the training group to the face to be 
identified, i.e. the one with the smallest error er. 
Step 4: if er < Tr, where Tr is a threshold, the face has been identified 
as that training face.
Motion detection
Motion detection is the first essential process in the extraction of 
information regarding moving objects, and serves as a foundation for 
functional areas such as tracking, classification, recognition, and 
so on. 
Motion Alarm 
It is pretty easy to add a motion alarm feature to all of these motion 
detection algorithms. 
Literature Survey 
We review several classes of algorithms used in motion detection, 
including optical flow algorithms, two complementary background 
estimation techniques, and the frame difference method. 
Method of operation
Detection of Independent Motion
In this project we investigated two complementary methods for the 
detection of moving objects by a moving observer. The first is based 
on the fact that, in a rigid environment, the projected velocity at any 
point in the image is constrained to lie on a 1-D locus in velocity 
space, known as the constraint ray whose parameters depend only 
on the observer motion. 
If the observer motion is known, an independently moving object 
can, in principle, be detected because its projected velocity is 
unlikely to fall on this locus. We show how this principle can be 
adapted to use partial information about the motion field and 
observer motion that can be rapidly computed from real image 
sequences.
The second method utilizes the fact that the apparent motion of a fixed point due to 
smooth observer motion changes slowly, while the apparent motion of many moving 
objects such as animals or maneuvering vehicles may change rapidly. 
The motion field at a given time can thus be used to place constraints on the future 
motion field which, if violated, indicate the presence of an autonomously maneuvering 
object. 
In both cases, the qualitative nature of the constraints allows the methods to be used 
with the inexact motion information typically available from real image sequences. 
We have produced implementations of the methods that run in real time on a parallel 
pipelined image processing system. 
The pictures show two examples of the real-time system in operation. In the first, the 
camera is mounted on a robot platform that is moving vertically with respect to the 
room. In the second, the camera is mounted on a vehicle that is being driven towards the 
road on which the detected vehicle is moving.
Motion Recognition
In this project, we investigated whether robustly computable 
motion features can be used directly as a means of recognition. We 
have designed, implemented, and tested a general framework for 
detecting and recognizing both distributed motion activity, on the 
basis of temporal texture, and complexly moving compact objects, on 
the basis of their activity.
Motion-Detection Steps 
Motion-detection is a two-step process. 
Step1 
identifies the objects that have moved between the two frames (using 
difference and threshold filter). The difference between each corresponding 
pixel of two frames is calculated; and the pixels with difference greater than 
the specified threshold are marked in foreground color (white). All the other 
pixels are marked in background color (black). So, the output of Step1 will be 
a binary-image with only two colors (black and white). 
The intensity value (brightness level) of the pixels is used to calculate the 
difference. The intensity value of each pixel in a grayscale image will be in the 
range 0 to 255. If RGB images are used, then the grayscale intensity value can 
be calculated as ((R + G + B) / 3).
Step2 
identifies the significant movements and filters out noise that is 
wrongly identified as motion (using an erosion filter). The erosion 
filter removes pixels that are not surrounded by enough 
neighboring pixels. Essentially, this shrinks the 
objects, thereby removing noisy pixels, 
i.e. stand-alone pixels. The binary image (the output of Step 1) is scanned 
pixel by pixel; if not enough pixels in the current window are 
turned on, then the entire window is turned off, i.e. ignored as 
noise.
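The two steps can be sketched as follows; the frame sizes, threshold, and the minimum-neighbour count for the erosion filter are illustrative choices:

```python
def motion_mask(frame1, frame2, threshold):
    """Step 1: difference + threshold. Pixels whose grayscale difference
    exceeds the threshold become foreground (1); others background (0)."""
    return [[1 if abs(a - b) > threshold else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(frame1, frame2)]

def erode(mask, min_neighbours=4):
    """Step 2: erosion. Keep a foreground pixel only if enough of its 8
    neighbours are also on; stand-alone noise pixels are turned off."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            n = sum(mask[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))) - 1
            if n >= min_neighbours:
                out[y][x] = 1
    return out

prev = [[10] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
for y in range(1, 4):          # a 3x3 moving object...
    for x in range(1, 4):
        curr[y][x] = 200
curr[0][4] = 90                # ...plus one lone noisy pixel
mask = motion_mask(prev, curr, threshold=30)
print(erode(mask))             # the object core survives; the noise pixel is gone
```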
Image processing in Hardware
Theories 
Field-Programmable Gate Array (FPGA) 
A Field-Programmable Gate Array is a programmable large-scale 
integrated circuit (LSI). 
Finite State Machine (FSM) 
Finite State Machine is a behavioral model which consists of a 
finite number of states and transitions between the states.
Gray-level Co-occurrence Matrix Statistics Image Generation 
GLCM statistics images are images which represent the statistical 
uniqueness of textures in an image. Many statistics images are generated. 
Gray-level Co-occurrence Matrix (GLCM) 
GLCM is a square matrix which counts the number of times that 
patterns of two scaled values are found while examining pairs of pixels 
through an image. 
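A GLCM and one derived statistic (energy, i.e. the sum of squared normalized entries) can be sketched in software before committing to hardware; the offset (dx, dy) = (1, 0) and the 4-level toy image are illustrative:

```python
def glcm(img, levels, dx=1, dy=0):
    """Build a levels x levels co-occurrence matrix: entry [a][b] counts how
    often gray level a occurs with level b at the offset (dx, dy)."""
    h, w = len(img), len(img[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                m[img[y][x]][img[y2][x2]] += 1
    return m

def glcm_energy(m):
    """One GLCM statistic: energy = sum of squared normalized entries."""
    total = sum(sum(r) for r in m)
    return sum((v / total) ** 2 for r in m for v in r)

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
m = glcm(img, levels=4)
print(m)
print(round(glcm_energy(m), 4))
```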
Static Random Access Memory (SRAM) 
SRAM is an electronic memory which is capable of storing data as 
long as there is the power supply for the device.
Timing Diagram of the SRAM Write Operation
Experiments 
The main objective of this experiment is to find the best algorithm to 
be implemented in the GLCM statistics image generation operation. 
Materials and Equipments 
1. Executable files of each algorithm which implements timer functions 
in it 
2. An Open Dragon image whose size is 3822 × 2560 pixels 
3. A Computer with these specifications 
a. CPU: Intel Pentium 4 1.6 GHz 
b. Motherboard: IBM, Intel i845 
c. RAM: DDR 640 MB, 133 MHz
hardware devices 
1. Prototyping Board: Design Gateway True PCI 
2. SRAM: AMIC LP621024D 
3. Bi-directional Buffer: 150Ω Resistors 
System Components 
There are 13 modules in the system. They are: 
1. Memory Unit 
2. Process Controller 
3. Memory Controller 
4. Arbiter 
5. Center Indexer 
6. Square Fetcher 
7. Square Buffer 
8. GLCM Builder 
9. Address Decoder 
10. Matrix Voter 
11. Matrix Integrator 
12. Clock Divider 
13. pciif32 
Designs
Matrix Integrator 
The Matrix Integrator was designed to calculate the three GLCM 
statistic values by summing all calculated positions of the GLCM 
matrix held in the Memory Unit. 
Block structure showing ports of Matrix 
Integrator
Clock Divider 
Clock Divider is responsible for dividing the frequency of a 
clock signal into another frequency. 
Block structure showing ports of Clock Divider
Image Processing in Hardware is a project concentrating on 
designing a coprocessor for the computer system to compute the 
computationally-intensive part of operations in digital image 
processing.
Thank you for your attention
Done by: 
Eng. Amal Ahmed Almathani 
Eng. Ansam Mansour 
Eng. Eftikhar Ali Alamri 
Eng. Safia Moqbel 
Eng. Somia Abdalhmeed

image_enhancement-NDVI-5.pptximage_enhancement-NDVI-5.pptx
image_enhancement-NDVI-5.pptxGemedaBedasa
 
Intensity Enhancement in Gray Level Images using HSV Color Coding Technique
Intensity Enhancement in Gray Level Images using HSV Color Coding TechniqueIntensity Enhancement in Gray Level Images using HSV Color Coding Technique
Intensity Enhancement in Gray Level Images using HSV Color Coding TechniqueIRJET Journal
 

Similar to Image Processing Techniques & Algorithms (20)

Comparative between global threshold and adaptative threshold concepts in ima...
Comparative between global threshold and adaptative threshold concepts in ima...Comparative between global threshold and adaptative threshold concepts in ima...
Comparative between global threshold and adaptative threshold concepts in ima...
 
IMAGE ENHANCEMENT IN CASE OF UNEVEN ILLUMINATION USING VARIABLE THRESHOLDING ...
IMAGE ENHANCEMENT IN CASE OF UNEVEN ILLUMINATION USING VARIABLE THRESHOLDING ...IMAGE ENHANCEMENT IN CASE OF UNEVEN ILLUMINATION USING VARIABLE THRESHOLDING ...
IMAGE ENHANCEMENT IN CASE OF UNEVEN ILLUMINATION USING VARIABLE THRESHOLDING ...
 
Digital.cc
Digital.ccDigital.cc
Digital.cc
 
h.pdf
h.pdfh.pdf
h.pdf
 
ANALYSIS OF IMAGE ENHANCEMENT TECHNIQUES USING MATLAB
ANALYSIS OF IMAGE ENHANCEMENT TECHNIQUES USING MATLABANALYSIS OF IMAGE ENHANCEMENT TECHNIQUES USING MATLAB
ANALYSIS OF IMAGE ENHANCEMENT TECHNIQUES USING MATLAB
 
Digital image processing - Image Enhancement (MATERIAL)
Digital image processing  - Image Enhancement (MATERIAL)Digital image processing  - Image Enhancement (MATERIAL)
Digital image processing - Image Enhancement (MATERIAL)
 
Using Image Acquisition Is The Input Text Document
Using Image Acquisition Is The Input Text DocumentUsing Image Acquisition Is The Input Text Document
Using Image Acquisition Is The Input Text Document
 
Image enhancement lecture
Image enhancement lectureImage enhancement lecture
Image enhancement lecture
 
project presentation-90-MCS-200003.pptx
project presentation-90-MCS-200003.pptxproject presentation-90-MCS-200003.pptx
project presentation-90-MCS-200003.pptx
 
X-Ray Image Enhancement using CLAHE Method
X-Ray Image Enhancement using CLAHE MethodX-Ray Image Enhancement using CLAHE Method
X-Ray Image Enhancement using CLAHE Method
 
Using A Application For A Desktop Application
Using A Application For A Desktop ApplicationUsing A Application For A Desktop Application
Using A Application For A Desktop Application
 
IRJET- Low Light Image Enhancement using Convolutional Neural Network
IRJET-  	  Low Light Image Enhancement using Convolutional Neural NetworkIRJET-  	  Low Light Image Enhancement using Convolutional Neural Network
IRJET- Low Light Image Enhancement using Convolutional Neural Network
 
Blind Source Camera Identification
Blind Source Camera Identification Blind Source Camera Identification
Blind Source Camera Identification
 
JonathanWestlake_ComputerVision_Project1
JonathanWestlake_ComputerVision_Project1JonathanWestlake_ComputerVision_Project1
JonathanWestlake_ComputerVision_Project1
 
Digital image processing Tool presentation
Digital image processing Tool presentationDigital image processing Tool presentation
Digital image processing Tool presentation
 
Q01761119124
Q01761119124Q01761119124
Q01761119124
 
Modified Contrast Enhancement using Laplacian and Gaussians Fusion Technique
Modified Contrast Enhancement using Laplacian and Gaussians Fusion TechniqueModified Contrast Enhancement using Laplacian and Gaussians Fusion Technique
Modified Contrast Enhancement using Laplacian and Gaussians Fusion Technique
 
IJ-M&M08.ppt
IJ-M&M08.pptIJ-M&M08.ppt
IJ-M&M08.ppt
 
image_enhancement-NDVI-5.pptx
image_enhancement-NDVI-5.pptximage_enhancement-NDVI-5.pptx
image_enhancement-NDVI-5.pptx
 
Intensity Enhancement in Gray Level Images using HSV Color Coding Technique
Intensity Enhancement in Gray Level Images using HSV Color Coding TechniqueIntensity Enhancement in Gray Level Images using HSV Color Coding Technique
Intensity Enhancement in Gray Level Images using HSV Color Coding Technique
 

Recently uploaded

Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Association for Project Management
 
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...Nguyen Thanh Tu Collection
 
Q-Factor General Quiz-7th April 2024, Quiz Club NITW
Q-Factor General Quiz-7th April 2024, Quiz Club NITWQ-Factor General Quiz-7th April 2024, Quiz Club NITW
Q-Factor General Quiz-7th April 2024, Quiz Club NITWQuiz Club NITW
 
An Overview of the Calendar App in Odoo 17 ERP
An Overview of the Calendar App in Odoo 17 ERPAn Overview of the Calendar App in Odoo 17 ERP
An Overview of the Calendar App in Odoo 17 ERPCeline George
 
How to Manage Buy 3 Get 1 Free in Odoo 17
How to Manage Buy 3 Get 1 Free in Odoo 17How to Manage Buy 3 Get 1 Free in Odoo 17
How to Manage Buy 3 Get 1 Free in Odoo 17Celine George
 
Comparative Literature in India by Amiya dev.pptx
Comparative Literature in India by Amiya dev.pptxComparative Literature in India by Amiya dev.pptx
Comparative Literature in India by Amiya dev.pptxAvaniJani1
 
DBMSArchitecture_QueryProcessingandOptimization.pdf
DBMSArchitecture_QueryProcessingandOptimization.pdfDBMSArchitecture_QueryProcessingandOptimization.pdf
DBMSArchitecture_QueryProcessingandOptimization.pdfChristalin Nelson
 
4.9.24 Social Capital and Social Exclusion.pptx
4.9.24 Social Capital and Social Exclusion.pptx4.9.24 Social Capital and Social Exclusion.pptx
4.9.24 Social Capital and Social Exclusion.pptxmary850239
 
Congestive Cardiac Failure..presentation
Congestive Cardiac Failure..presentationCongestive Cardiac Failure..presentation
Congestive Cardiac Failure..presentationdeepaannamalai16
 
Objectives n learning outcoms - MD 20240404.pptx
Objectives n learning outcoms - MD 20240404.pptxObjectives n learning outcoms - MD 20240404.pptx
Objectives n learning outcoms - MD 20240404.pptxMadhavi Dharankar
 
4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptxmary850239
 
Shark introduction Morphology and its behaviour characteristics
Shark introduction Morphology and its behaviour characteristicsShark introduction Morphology and its behaviour characteristics
Shark introduction Morphology and its behaviour characteristicsArubSultan
 
The role of Geography in climate education: science and active citizenship
The role of Geography in climate education: science and active citizenshipThe role of Geography in climate education: science and active citizenship
The role of Geography in climate education: science and active citizenshipKarl Donert
 
PART 1 - CHAPTER 1 - CELL THE FUNDAMENTAL UNIT OF LIFE
PART 1 - CHAPTER 1 - CELL THE FUNDAMENTAL UNIT OF LIFEPART 1 - CHAPTER 1 - CELL THE FUNDAMENTAL UNIT OF LIFE
PART 1 - CHAPTER 1 - CELL THE FUNDAMENTAL UNIT OF LIFEMISSRITIMABIOLOGYEXP
 
MS4 level being good citizen -imperative- (1) (1).pdf
MS4 level   being good citizen -imperative- (1) (1).pdfMS4 level   being good citizen -imperative- (1) (1).pdf
MS4 level being good citizen -imperative- (1) (1).pdfMr Bounab Samir
 
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (GLOB...
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (GLOB...BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (GLOB...
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (GLOB...Nguyen Thanh Tu Collection
 
Tree View Decoration Attribute in the Odoo 17
Tree View Decoration Attribute in the Odoo 17Tree View Decoration Attribute in the Odoo 17
Tree View Decoration Attribute in the Odoo 17Celine George
 

Recently uploaded (20)

Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
 
Introduction to Research ,Need for research, Need for design of Experiments, ...
Introduction to Research ,Need for research, Need for design of Experiments, ...Introduction to Research ,Need for research, Need for design of Experiments, ...
Introduction to Research ,Need for research, Need for design of Experiments, ...
 
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
 
prashanth updated resume 2024 for Teaching Profession
prashanth updated resume 2024 for Teaching Professionprashanth updated resume 2024 for Teaching Profession
prashanth updated resume 2024 for Teaching Profession
 
Q-Factor General Quiz-7th April 2024, Quiz Club NITW
Q-Factor General Quiz-7th April 2024, Quiz Club NITWQ-Factor General Quiz-7th April 2024, Quiz Club NITW
Q-Factor General Quiz-7th April 2024, Quiz Club NITW
 
An Overview of the Calendar App in Odoo 17 ERP
An Overview of the Calendar App in Odoo 17 ERPAn Overview of the Calendar App in Odoo 17 ERP
An Overview of the Calendar App in Odoo 17 ERP
 
How to Manage Buy 3 Get 1 Free in Odoo 17
How to Manage Buy 3 Get 1 Free in Odoo 17How to Manage Buy 3 Get 1 Free in Odoo 17
How to Manage Buy 3 Get 1 Free in Odoo 17
 
Comparative Literature in India by Amiya dev.pptx
Comparative Literature in India by Amiya dev.pptxComparative Literature in India by Amiya dev.pptx
Comparative Literature in India by Amiya dev.pptx
 
DBMSArchitecture_QueryProcessingandOptimization.pdf
DBMSArchitecture_QueryProcessingandOptimization.pdfDBMSArchitecture_QueryProcessingandOptimization.pdf
DBMSArchitecture_QueryProcessingandOptimization.pdf
 
4.9.24 Social Capital and Social Exclusion.pptx
4.9.24 Social Capital and Social Exclusion.pptx4.9.24 Social Capital and Social Exclusion.pptx
4.9.24 Social Capital and Social Exclusion.pptx
 
Congestive Cardiac Failure..presentation
Congestive Cardiac Failure..presentationCongestive Cardiac Failure..presentation
Congestive Cardiac Failure..presentation
 
Objectives n learning outcoms - MD 20240404.pptx
Objectives n learning outcoms - MD 20240404.pptxObjectives n learning outcoms - MD 20240404.pptx
Objectives n learning outcoms - MD 20240404.pptx
 
4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx
 
Shark introduction Morphology and its behaviour characteristics
Shark introduction Morphology and its behaviour characteristicsShark introduction Morphology and its behaviour characteristics
Shark introduction Morphology and its behaviour characteristics
 
The role of Geography in climate education: science and active citizenship
The role of Geography in climate education: science and active citizenshipThe role of Geography in climate education: science and active citizenship
The role of Geography in climate education: science and active citizenship
 
PART 1 - CHAPTER 1 - CELL THE FUNDAMENTAL UNIT OF LIFE
PART 1 - CHAPTER 1 - CELL THE FUNDAMENTAL UNIT OF LIFEPART 1 - CHAPTER 1 - CELL THE FUNDAMENTAL UNIT OF LIFE
PART 1 - CHAPTER 1 - CELL THE FUNDAMENTAL UNIT OF LIFE
 
MS4 level being good citizen -imperative- (1) (1).pdf
MS4 level   being good citizen -imperative- (1) (1).pdfMS4 level   being good citizen -imperative- (1) (1).pdf
MS4 level being good citizen -imperative- (1) (1).pdf
 
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (GLOB...
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (GLOB...BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (GLOB...
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (GLOB...
 
Paradigm shift in nursing research by RS MEHTA
Paradigm shift in nursing research by RS MEHTAParadigm shift in nursing research by RS MEHTA
Paradigm shift in nursing research by RS MEHTA
 
Tree View Decoration Attribute in the Odoo 17
Tree View Decoration Attribute in the Odoo 17Tree View Decoration Attribute in the Odoo 17
Tree View Decoration Attribute in the Odoo 17
 

Image Processing Techniques & Algorithms

  • 1.
  • 2. Image processing is the study of any algorithm that takes an image as input and returns an image as output. WHY DO WE NEED IMAGE PROCESSING? Since the digital image is “invisible”, it must be prepared for viewing on one or more output devices (laser printer, monitor, etc.). The digital image can be optimized for the application by enhancing or altering the appearance of structures within it (based on body part, diagnostic task, viewing preferences, etc.). It may also be possible to analyze the image in the computer and provide cues that help the radiologist detect important or suspicious structures (e.g. Computer-Aided Diagnosis, CAD).
  • 3. Components of an Image Processing System
  • 4. Sensors Two elements are required to acquire digital images. Physical device: sensitive to the energy radiated by the object we wish to image. Digitizer: converts the output of the physical sensing device into digital form. Specialized image processing hardware: the digitizer plus hardware that performs arithmetic and logic operations (an ALU) over the entire image. Computer: image processing systems range from the PC to the supercomputer. Image processing software: specialized modules performing specific tasks.
  • 5. Mass storage: short-term storage for use during processing; on-line storage for relatively fast recall; archival storage, characterized by infrequent access. Image displays: flat screens, TVs, monitors, LCD, LED and 3D displays. Hardcopy: laser printers, camera film, heat-sensitive devices, inkjet units, and digital units such as optical discs and CD-ROM.
  • 7. 1-KS400  One of the commercial image processing software packages used for some of the applications is the KONTRON Imaging System KS400.  It is very powerful and convenient to use, but the KS400 software has one essential disadvantage:  it does not allow direct access from a macro to single pixels in the image.  In practice this means that the functionality of the software is limited to its set of standard functions.  In principle, the problem can be overcome by using the Free Programming KONTRON Software Development Kit, which is supplied with KS400.
  • 8. 2-Image processing/analysis software: The software, written for UNIX, consists of a set of low-level image processing/analysis functions including: Sun raster file (RAS) image reading/writing; automatic and manual image thresholding; gray-scale and binary morphology; fractal analysis of contours using the 'hand and dividers' method; fractal analysis of percolation networks; and image correlation.
  • 9. 3-Image processing with MathCAD and MATLAB. The MATLAB and MathCAD environments are ideally suited to image processing. Both programs have image processing toolboxes which provide a powerful and flexible environment for image processing and analysis, and both were used to perform different calculations on images. There are several advantages; one of them is the ability to have direct access to any portion of the available information, which in general is not possible with many commercial image analysis systems.
  • 11. 1-FINDING THE NEAREST COLOUR:
1. Start at the first pixel in the image.
2. Get the actual color of the pixel.
3. Find the nearest color to this pixel from the available palette.
4. Replace the pixel with the nearest color.
5. Have we reached the end of the image? If so, stop here.
6. Move on to the next pixel.
7. Go back to step 2.
  • 12.  There are a few methods for finding the nearest color. The one explained here is the Euclidean distance method. The algorithm is as follows:
1. Set the minimum distance to a value higher than the highest possible distance.
2. Start at the first color in the palette.
3. Find the difference between each RGB value of the actual color and the current palette color.
4. Calculate the Euclidean distance.
5. If the distance is smaller than the minimum distance, set the minimum distance to this smaller value and note the current palette color.
6. Have we reached the end of the palette? If so, stop here.
7. Move on to the next color in the palette.
8. Go back to step 3.
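The two procedures above can be sketched together in Python. This is a minimal illustration, not the original implementation; the palette and image below are made-up examples, and distances are compared in squared form, which preserves the ordering of true Euclidean distances.

```python
def nearest_colour(pixel, palette):
    """Return the palette colour with the smallest Euclidean distance to pixel."""
    best, best_dist = None, float("inf")
    for colour in palette:
        # Squared Euclidean distance over R, G, B (same ordering as the
        # true distance, so the square root can be skipped).
        dist = sum((a - b) ** 2 for a, b in zip(pixel, colour))
        if dist < best_dist:
            best, best_dist = colour, dist
    return best

def quantise(image, palette):
    """Replace every pixel with its nearest palette colour."""
    return [[nearest_colour(px, palette) for px in row] for row in image]

palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
image = [[(250, 10, 10), (20, 20, 20)],
         [(10, 240, 30), (200, 200, 210)]]
print(quantise(image, palette))
# [[(255, 0, 0), (0, 0, 0)], [(0, 255, 0), (255, 255, 255)]]
```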
  • 13.
  • 14.
  • 15. 2-ERROR DIFFUSION: Error diffusion works by comparing the actual color of a pixel against its nearest color and taking the difference between them. This difference is known as the error. Portions of the error are split between neighbouring pixels, causing the error to be diffused; hence the name “error diffusion”.
  • 16. The simplest form of error diffusion can be shown as:
  • 17. With this form of error diffusion, half of the error from the current pixel (represented by the black dot) is diffused to the pixel on the right and half to the pixel below. In a color image, error diffusion should be applied to the red, green and blue channels separately. An important point to note is that the total amount of error diffused should never exceed a value of 1. It is also important to ensure that when a portion of the error is diffused to neighbouring pixels it does not produce invalid values (e.g. below 0 or above 255). Should a value go outside the valid range, it should be truncated (e.g. -10 would be truncated to 0 and 260 to 255).
  • 18. The quality of this type of error diffusion, however, is pretty poor, and very few people would actually use it. A better scheme, and arguably the most famous, is Floyd-Steinberg error diffusion.
  • 19. With Floyd-Steinberg error diffusion the error is distributed amongst more neighbouring pixels, providing a much nicer overall dithering effect. Performing the colour reduction with Floyd-Steinberg error diffusion produces the following results: images reduced to 8 colors with error diffusion.
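The scheme described above can be sketched for a single-channel greyscale image. This is an illustrative implementation, not code from the slides; the 7/16, 3/16, 5/16 and 1/16 weights are the standard Floyd-Steinberg coefficients, and diffused values are clamped to the valid 0-255 range as recommended earlier.

```python
def floyd_steinberg(image, levels=2):
    """Dither a greyscale image (list of rows, values 0-255) in place."""
    h, w = len(image), len(image[0])
    step = 255 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = image[y][x]
            new = round(old / step) * step          # nearest allowed level
            image[y][x] = new
            err = old - new
            # Standard Floyd-Steinberg weights: 7/16 right, 3/16 down-left,
            # 5/16 down, 1/16 down-right.
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    # Clamp so diffused error never creates invalid values.
                    image[ny][nx] = min(255, max(0, image[ny][nx] + err * wgt))
    return image

print(floyd_steinberg([[100, 100], [100, 100]]))  # every value is now 0 or 255
```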
  • 20. 3-GREYSCALE CONVERSION: To convert a particular color to greyscale we need to work out the intensity or brightness of that color. The simplest formula is the mean of the three components: grey = (R + G + B) / 3. Another method of greyscale conversion, which takes into account the human perception of color, uses different weights for the red, green and blue components.
  • 21. • To illustrate the above points let’s have a look at an image of some colored bars: • Let’s convert this image to greyscale using the mean method: • Converting the image to greyscale using the weighted method will give us the following:
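Both conversion methods above fit in a few lines. A sketch, assuming the commonly used 0.299/0.587/0.114 weights for the weighted method (the slide's own formula images are not reproduced here):

```python
def grey_mean(r, g, b):
    """Mean method: simple average of the three channels."""
    return (r + g + b) // 3

def grey_weighted(r, g, b):
    """Weighted (luminosity) method, using the common 0.299/0.587/0.114 weights."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# Pure green looks bright to the human eye; the weighted method reflects that.
print(grey_mean(0, 255, 0))      # 85
print(grey_weighted(0, 255, 0))  # 150
```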
  • 22. 4-BRIGHTNESS ADJUSTMENT: Brightness is adjusted by adding the desired change in brightness to each of the red, green and blue color components. The brightness value will usually be in the range -255 to +255 for a 24-bit palette. Negative values darken the image and, conversely, positive values brighten it.
  • 23. images which have had the brightness adjusted by -128 (darkened) and +128 (brightened):
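A minimal sketch of the brightness adjustment described above, including the clamping needed to keep channel values in range; the pixel values are made-up examples:

```python
def clamp(v):
    """Keep a channel value inside the valid 0-255 range."""
    return max(0, min(255, v))

def adjust_brightness(pixel, amount):
    """Add `amount` (-255..+255) to each of the R, G, B components."""
    return tuple(clamp(c + amount) for c in pixel)

print(adjust_brightness((100, 200, 30), 128))   # (228, 255, 158) - brightened
print(adjust_brightness((100, 200, 30), -128))  # (0, 72, 0) - darkened
```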
  • 24. 5-CONTRAST ADJUSTMENT: The first step is to calculate a contrast correction factor from the value C, the desired level of contrast. The next step is to perform the actual contrast adjustment itself, applying it to each of the red, green and blue components of a color. The value of contrast will be in the range -255 to +255. Negative values decrease the amount of contrast and, conversely, positive values increase it.
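The slide's formula images are missing, so the sketch below uses one widely used formulation of the correction factor, F = 259(C + 255) / (255(259 - C)), applied symmetrically about the channel midpoint 128; treat this as an assumption rather than the slide's exact formula.

```python
def clamp(v):
    """Round and keep a channel value inside the valid 0-255 range."""
    return max(0, min(255, int(round(v))))

def adjust_contrast(pixel, c):
    """c in -255..+255; negative reduces contrast, positive increases it."""
    # Correction factor F: equals 1 at c = 0, grows rapidly as c nears +255.
    f = (259 * (c + 255)) / (255 * (259 - c))
    # Move each channel away from (or towards) the midpoint 128.
    return tuple(clamp(f * (ch - 128) + 128) for ch in pixel)

print(adjust_contrast((120, 128, 140), 0))    # (120, 128, 140) - unchanged
print(adjust_contrast((120, 128, 140), 128))  # (104, 128, 164) - stretched
```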
  • 25. 6-GAMMA CORRECTION: Gamma can be described as the relationship between an input and the resulting output. For the scope of this article the input is the RGB intensity values of an image. The relationship between input and output is that the output is proportional to the input raised to the power of gamma. For gamma correction, the input value is instead raised to the power of the inverse of gamma. The range of values used for gamma will depend on the application, but personally I tend to use a range of 0.01 to 7.99.
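A sketch of gamma correction on a single 0-255 channel value, assuming the usual normalise, raise to 1/gamma, rescale formulation:

```python
def gamma_correct(value, gamma):
    """Apply gamma correction to a single 0-255 channel value.

    The channel is normalised to 0..1, raised to 1/gamma (the inverse
    power counteracts a display's `output = input ** gamma` response),
    then scaled back to 0..255.
    """
    return round(255 * (value / 255) ** (1 / gamma))

print(gamma_correct(64, 2.2))   # gamma > 1 brightens mid-tones
print(gamma_correct(64, 0.45))  # gamma < 1 darkens them
```

Note that 0 and 255 map to themselves for any gamma, so only the mid-tones move.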
  • 26. 7-COLOUR INVERSION AND SOLARISATION: Color inversion, also known as the negative effect, is one of the easiest effects to achieve in image processing. It is achieved by subtracting each RGB color value from the maximum possible value (usually 255).
  • 27. Another effect related to color inversion is the solarise effect. The difference is that with the solarise effect only color values above or below a set threshold are inverted. The images below show the inversion of colours below a threshold of 128 in the first instance, and the inversion of colours above a threshold of 128 in the second.
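Both effects reduce to one line each; a minimal sketch with made-up pixel values:

```python
def invert(pixel):
    """Negative effect: subtract each channel from the maximum value, 255."""
    return tuple(255 - c for c in pixel)

def solarise_below(pixel, threshold=128):
    """Solarise: invert only the channel values below the threshold."""
    return tuple(255 - c if c < threshold else c for c in pixel)

print(invert((10, 130, 255)))          # (245, 125, 0)
print(solarise_below((10, 130, 255)))  # (245, 130, 255) - only 10 inverted
```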
  • 28. Image processing functions/algorithms Image processing applications mainly focus on improving the visual appearance of images to a human viewer and preparing them for measurement of the features and structures present. The measurement of images generally requires that features be well defined, either by edges or by unique brightness, colour, texture, or some combination of these factors.
  • 29. Spatial filtering An image can be filtered to remove a band of spatial frequencies, such as high frequencies or low frequencies. High frequencies are present wherever rapid brightness transitions occur. Spatial filtering operations include high-pass, low-pass and edge detection filters.
  • 30. Sharpening The main aim of image sharpening is to highlight fine detail in the image or to enhance detail that has been blurred due to noise or other effects. The Laplacian is often used for this purpose; however, image sharpening can also be interpreted in the frequency domain.  Sharpening emphasizes edges in the image and makes them easier to see and recognize. In addition, the differences between each pixel and its neighbours influence the sharpening effect.
  • 31. Blurring The visual effect of a low-pass filter is image blurring. This is because the sharp brightness transitions are attenuated into small brightness transitions, so the result has less detail and looks blurry.  Blurring aims to diminish the effects of camera noise, spurious pixel values or missing pixel values. Two techniques are most commonly used for blurring: neighbourhood averaging (Gaussian filters) and edge-preserving filtering (median filters).
  • 32. A Gaussian blur filter modifies each pixel by looking at its neighbours and computing a "weighted average" of their values, with more weight being given to closer neighbours. The standard Gaussian blur filter found in most image processing programs is isotropic; it blends pixel values equally in all directions. With median filtering, on the other hand, pixels with outlying values are forced to become more like their neighbours while edges are preserved.
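The two blurring techniques can be sketched on a greyscale image. This is a simplification: a 3x3 box average stands in for a true Gaussian kernel, and border pixels are left unchanged for brevity.

```python
def _neighbours(img, x, y):
    """The 3x3 neighbourhood of an interior pixel, as a flat list."""
    return [img[j][i] for j in (y - 1, y, y + 1) for i in (x - 1, x, x + 1)]

def box_blur(img):
    """3x3 neighbourhood average (a crude stand-in for a Gaussian kernel)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(_neighbours(img, x, y)) // 9
    return out

def median_blur(img):
    """3x3 median: suppresses outliers while preserving edges better."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sorted(_neighbours(img, x, y))[4]
    return out

img = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]  # one noisy pixel
print(box_blur(img)[1][1])     # 37 - the spike is smeared out
print(median_blur(img)[1][1])  # 10 - the spike is removed entirely
```

The output illustrates the trade-off on the slide: averaging spreads an outlier into its neighbourhood, while the median discards it.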
  • 33. Edge Detection Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness. Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision.
  • 34.
  • 35. Edge Detection Techniques Sobel operator. Roberts cross operator: the Roberts cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image.
  • 36. Prewitt’s operator: Prewitt operator [5] is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images.
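A sketch of applying one such operator (Sobel) at a single interior pixel, using the standard Gx and Gy kernels and the common |Gx| + |Gy| approximation of the gradient magnitude; swapping in the Prewitt kernels (1s in place of the 2s) works the same way.

```python
# Standard Sobel kernels for horizontal (GX) and vertical (GY) gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Approximate gradient magnitude |Gx| + |Gy| at an interior pixel."""
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            v = img[y + j - 1][x + i - 1]
            gx += GX[j][i] * v
            gy += GY[j][i] * v
    return abs(gx) + abs(gy)

# Vertical edge: left half dark, right half bright.
img = [[0, 0, 255, 255]] * 3
print(sobel_magnitude(img, 1, 1))            # 1020: strong response at the edge
print(sobel_magnitude([[7] * 3] * 3, 1, 1))  # 0: no response in a flat region
```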
  • 37. Laplacian operator:  The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image. The Laplacian is often applied to an image that has first been smoothed with something approximating a Gaussian smoothing filter in order to reduce its sensitivity to noise. The operator normally takes a single gray-level image as input and produces another gray-level image as output.
  • 38.
  • 39.  Canny Edge Detection Algorithm Canny's intention was to improve on the many edge detectors already available at the time he started his work. The first and most obvious criterion is a low error rate. The second criterion is that the edge points be well localized. A third criterion is to have only one response to a single edge. Hysteresis is used to track along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds: if the magnitude is below the low threshold, the pixel is set to zero (made a non-edge); if the magnitude is above the high threshold, it is made an edge; and if the magnitude falls between the two thresholds, it is set to zero unless there is a path from this pixel to a pixel with a gradient above the high threshold. Advantages and Disadvantages of Edge Detectors
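The double-threshold hysteresis step described above can be sketched as a flood fill from the strong pixels. This illustrates only that one step of the idea, not Canny's full pipeline (no Gaussian smoothing or non-maximum suppression), and the magnitudes below are made-up values.

```python
def hysteresis(mag, low, high):
    """Keep strong pixels (>= high) plus weak pixels (>= low) connected to them."""
    h, w = len(mag), len(mag[0])
    # Seed with the strong edges.
    edges = [[mag[y][x] >= high for x in range(w)] for y in range(h)]
    stack = [(x, y) for y in range(h) for x in range(w) if edges[y][x]]
    while stack:  # grow edges along chains of weak pixels
        x, y = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (0 <= nx < w and 0 <= ny < h
                        and not edges[ny][nx] and mag[ny][nx] >= low):
                    edges[ny][nx] = True
                    stack.append((nx, ny))
    return edges

mag = [[0,  60, 0],
       [0,  80, 0],
       [0, 200, 0]]  # weak chain (60, 80) attached to a strong pixel (200)
print(hysteresis(mag, low=50, high=100))  # the whole middle column survives
```

An isolated weak pixel with no path to a strong one is discarded, which is exactly the behaviour the slide describes.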
  • 40.
  • 42. Due to increasing concern about security issues around the world, there is growing interest in how accurately computer systems can identify faces. The number of security systems and applications in this area is therefore increasing, the field has evolved significantly, and the algorithms used range from the simple to the complex. This raises many questions, among them: is the accuracy of a face recognition algorithm sufficient for such applications?
  • 43. Stages of face recognition systems in general
  • 44. Mechanism of an automated face recognition system. Step 1: form the training set and extract its features. The training set is a collection of photos submitted to the system in advance.
  • 45. step2 • Find the most feature vector closest resemblance within the training set to beam features extracted from the image provided to the test - any image desired to know the identity of the owner through identification system feature vector An eclectic arrays of arrays original picture, and represent important values and fundamental part of the original images, and thereby reduced the size of images to represent the epitome ray images.
  • 46.
  • 47. Step 3 Here Within this system, we want to know the identity of the person and discrimination, and pass through the image of that person to the system, called in this case this image provided to get to know them in a test image. In order to draw the features vector images within this project, we will rely on the PCA algorithm to extract features of X-process feature vector images.
  • 48. Principal Component Analysis (PCA) algorithm.
  • 49. PCA algorithm PCA is considered one of the most successful techniques used in the fields of image recognition and image compression. In this project the algorithm is used within the MATLAB program. The main goal of the PCA algorithm is to reduce the large dimensionality of the data space to a smaller feature space containing the basic and important features of the data in the original space.
  • 50. Problems of large-dimensional systems Many systems must deal with large-dimensional spaces. Significant improvements can be made by transferring the existing data to a space with fewer dimensions, i.e. a dimensionality reduction from the original large-dimensional space to a new space with smaller dimensions.
  • 51. Suppose, for example, we have a vector in an N-dimensional space. Dimensionality reduction maps it to another vector in a K-dimensional space, where K < N.
  • 52. Within this context, PCA calculates a linear transformation which maps the data in the higher-dimensional space onto a lower-dimensional subspace that retains its information.
  • 53. In other words, the optimal transformation is the one for which this value (the reconstruction error) is smallest.
  • 54. Completing the task of face recognition using PCA: each face within the training set is represented in the lower-dimensional space obtained with the PCA algorithm.
  • 55. Assuming the face to be recognized is represented as a one-dimensional vector, the face recognition steps are:
Step 1: represent the image as a single one-dimensional vector.
Step 2: normalize the vector by subtracting the average face from it.
Step 3: find the nearest face within the training set to the face to be identified, i.e. the one with the smallest error er.
Step 4: if er < Tr, where Tr is a threshold, the face is identified as that training face.
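The PCA-based recognition pipeline above can be sketched end to end. This is a toy illustration with made-up 4-pixel "faces": it finds only the first principal component (K = 1) by power iteration rather than a full eigendecomposition, which is enough to show the mean-subtraction, projection and nearest-neighbour steps.

```python
def mean_vec(vecs):
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def first_component(data, iters=200):
    """Power iteration on the covariance of mean-centred data (K = 1)."""
    d = len(data[0])
    w = [1.0] * d
    for _ in range(iters):
        # Multiply (covariance * w) without forming the covariance matrix.
        w_new = [0.0] * d
        for v in data:
            c = dot(v, w)
            for i in range(d):
                w_new[i] += c * v[i]
        norm = dot(w_new, w_new) ** 0.5
        w = [x / norm for x in w_new]
    return w

# Hypothetical 4-pixel "face" images, flattened to one-dimensional vectors.
train = [[10, 10, 200, 200], [12, 8, 198, 205], [200, 200, 10, 10]]
mu = mean_vec(train)                       # the "average face"
centred = [sub(v, mu) for v in train]      # Step 2: subtract the average face
w = first_component(centred)
features = [dot(v, w) for v in centred]    # each face becomes one number

test = [11, 9, 201, 199]                   # close to the first two faces
f = dot(sub(test, mu), w)
nearest = min(range(len(train)), key=lambda i: abs(features[i] - f))
print("nearest training face:", nearest)   # Step 4 would compare this error to Tr
```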
  • 57. Motion detection is the first essential process in the extraction of information regarding moving objects, and it supports functional areas such as tracking, classification and recognition. Motion alarm: it is pretty easy to add a motion-alarm feature to all of these motion detection algorithms. Literature survey: we review several classes of algorithm used in motion detection, including optical flow algorithms, two complementary background estimation techniques, and the frame difference method. Method of operation
  • 58.
  • 59.
  • 60.
  • 62. In this project we investigated two complementary methods for the detection of moving objects by a moving observer. The first is based on the fact that, in a rigid environment, the projected velocity at any point in the image is constrained to lie on a 1-D locus in velocity space, known as the constraint ray whose parameters depend only on the observer motion. If the observer motion is known, an independently moving object can, in principle, be detected because its projected velocity is unlikely to fall on this locus. We show how this principle can be adapted to use partial information about the motion field and observer motion that can be rapidly computed from real image sequences.
  • 63. The second method utilizes the fact that the apparent motion of a fixed point due to smooth observer motion changes slowly, while the apparent motion of many moving objects, such as animals or maneuvering vehicles, may change rapidly. The motion field at a given time can thus be used to place constraints on the future motion field which, if violated, indicate the presence of an autonomously maneuvering object. In both cases, the qualitative nature of the constraints allows the methods to be used with the inexact motion information typically available from real image sequences. We have produced implementations of the methods that run in real time on a parallel pipelined image processing system. The pictures show two examples of the real-time system in operation. In the first, the camera is mounted on a robot platform that is moving vertically with respect to the room. In the second, the camera is mounted on a vehicle that is being driven towards the road on which the detected vehicle is moving.
  • 65. In this project, we investigated whether robustly computable motion features can be used directly as a means of recognition. We have designed, implemented, and tested a general framework for detecting and recognizing both distributed motion activity on the basis of temporal texture, and complexly moving, compact objects on the basis of their activity.
  • 66. Motion-Detection Steps Motion detection is a two-step process. Step 1 identifies the objects that have moved between the two frames (using a difference and threshold filter). The difference between each corresponding pixel of the two frames is calculated, and the pixels with a difference greater than the specified threshold are marked in the foreground color (white). All the other pixels are marked in the background color (black). So, the output of Step 1 is a binary image with only two colors (black and white). The intensity value (brightness level) of each pixel is used to calculate the difference. The intensity value of each pixel in a grayscale image is in the range 0 to 255. If RGB images are used, the grayscale intensity value can be calculated as (R + G + B) / 3.
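Step 1 can be sketched as follows. This is a simplified NumPy version; the function name and default threshold are assumptions:

```python
import numpy as np

def motion_mask(frame1, frame2, threshold=25):
    """Step 1 of the motion detector: difference + threshold filter.

    frame1, frame2 : RGB frames as uint8 arrays of shape (H, W, 3)
    threshold      : intensity-difference threshold (assumed value)
    Returns a binary image: 255 (white) where motion, 0 (black) elsewhere.
    """
    # Grayscale intensity as (R + G + B) / 3, in float to avoid uint8 overflow
    g1 = frame1.astype(np.float32).mean(axis=2)
    g2 = frame2.astype(np.float32).mean(axis=2)
    diff = np.abs(g1 - g2)
    # Pixels whose difference exceeds the threshold become foreground (white)
    return np.where(diff > threshold, 255, 0).astype(np.uint8)
```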
  • 67. Step 2 identifies the significant movements and filters out noise that is wrongly identified as motion (using an erosion filter). The erosion filter removes pixels that are not surrounded by a sufficient number of neighboring pixels. Essentially, this shrinks the objects, thereby removing the noisy, i.e. stand-alone, pixels. The binary image (the output of Step 1) is scanned pixel by pixel; if not enough pixels in the current window are turned on, the entire window is turned off, i.e. ignored as noise.
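Step 2's erosion-style filter might look like this minimal sketch; the window size and the minimum neighbor count are illustrative assumptions:

```python
import numpy as np

def erode(binary, window=3, min_neighbors=5):
    """Step 2: suppress noise with a simple erosion-style filter.

    binary        : output of Step 1, values 0 (black) or 255 (white)
    window        : side length of the square neighborhood per pixel
    min_neighbors : minimum 'on' pixels required in the window for the
                    center pixel to survive (assumed value)
    """
    h, w = binary.shape
    r = window // 2
    out = np.zeros_like(binary)
    on = (binary > 0).astype(np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            # Count 'on' pixels in the window; turn the pixel off as noise
            # when the neighborhood is not sufficiently populated
            if on[y - r:y + r + 1, x - r:x + r + 1].sum() >= min_neighbors:
                out[y, x] = 255
    return out
```

An isolated white pixel has at most one 'on' pixel in its window and is therefore dropped, while pixels inside a solid region survive.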
  • 69. Theories Field-Programmable Gate Array (FPGA) A Field-Programmable Gate Array is a large-scale integrated circuit (LSI) which is programmable. Finite State Machine (FSM) Finite State Machine is a behavioral model which consists of a finite number of states and transitions between the states.
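As a toy illustration of the FSM definition above (unrelated to the project's actual hardware controllers), a finite state machine reduces to a finite set of states plus a transition table; all state and symbol names here are hypothetical:

```python
def run_fsm(transitions, start, inputs):
    """Minimal finite state machine: consume input symbols one at a time,
    following a transition table mapping (state, symbol) -> next state.

    transitions : dict {(state, symbol): next_state}
    start       : initial state
    inputs      : iterable of input symbols
    Returns the state reached after consuming all inputs.
    """
    state = start
    for sym in inputs:
        state = transitions[(state, sym)]
    return state

# Hypothetical two-state controller: IDLE moves to BUSY on 'req',
# BUSY returns to IDLE on 'done'; all other pairs keep the current state
table = {
    ("IDLE", "req"): "BUSY", ("IDLE", "done"): "IDLE",
    ("BUSY", "req"): "BUSY", ("BUSY", "done"): "IDLE",
}
```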
  • 70. Gray-level Co-occurrence Matrix Statistics Image Generation Statistics images are images which represent the statistical uniqueness of textures in an image; many statistics images are generated. Gray-level Co-occurrence Matrix (GLCM) A GLCM is a square matrix which records the number of times each pattern of two scaled gray values is found while examining pairs of pixels through an image. Static Random Access Memory (SRAM) SRAM is an electronic memory which is capable of storing data as long as there is a power supply for the device.
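The GLCM definition can be sketched directly in software. The pixel-pair offset and the number of gray levels are assumptions here (the project computes this in hardware):

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Build a gray-level co-occurrence matrix for one pixel-pair offset.

    image  : 2-D array of gray values already scaled to 0..levels-1
    levels : number of scaled gray values (the matrix is levels x levels)
    dx, dy : offset between the two pixels of each pair (assumed (1, 0))
    """
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = image.shape
    # Visit every pixel whose offset partner is still inside the image
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            # Count how often the value pair (i, j) occurs at this offset
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m
```

Texture statistics (energy, contrast, homogeneity, and so on) are then computed from the normalized matrix entries.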
  • 71. Timing Diagram of the SRAM Write Operation
  • 72. Experiments The main objective of this experiment is to find the best algorithm to implement in the GLCM statistics image generation operation. Materials and Equipment 1. Executable files of each algorithm, with timer functions implemented 2. An Open Dragon image of 3822 by 2560 pixels 3. A computer with these specifications a. CPU: Intel Pentium 4, 1.6 GHz b. Motherboard: IBM, Intel i845 c. RAM: DDR 640 MB, 133 MHz
  • 73. Hardware devices 1. Prototyping Board: Design Gateway True PCI 2. SRAM: AMIC LP621024D 3. Bi-directional Buffer: 150Ω Resistors System Components There are 13 modules in the system: 1. Memory Unit 2. Process Controller 3. Memory Controller 4. Arbiter 5. Center Indexer 6. Square Fetcher 7. Square Buffer 8. GLCM Builder 9. Address Decoder 10. Matrix Voter 11. Matrix Integrator 12. Clock Divider 13. pciif32 Designs
  • 74. Matrix Integrator The Matrix Integrator was designed to calculate the three GLCM statistic values by summing over all calculated positions of the GLCM matrix stored in the Memory Unit. Block structure showing ports of Matrix Integrator
  • 75. Clock Divider Clock Divider is responsible for dividing the frequency of a clock signal into another frequency. Block structure showing ports of Clock Divider
  • 76. Image Processing in Hardware is a project concentrating on designing a coprocessor for the computer system to compute the computationally intensive part of operations in digital image processing.
  • 77. Thank you for your attention
  • 78. Done by: Eng. Amal Ahmed Almathani, Eng. Ansam Mansour, Eng. Eftikhar Ali Alamri, Eng. Safia Moqbel, Eng. Somia Abdalhmeed