Edge detection is used to identify points in a digital image where the image brightness changes sharply. The key steps are smoothing to reduce noise, enhancing edges through differentiation, thresholding to determine important edges, and localization to find edge positions. Common methods include using the first derivative to find gradients and zero-crossings of the second derivative. Operators like Prewitt and Sobel approximate derivatives with small pixel masks. Edge detection is useful for computer vision tasks by extracting important image features.
1. Notes on Image Processing
EDGE DETECTION OF IMAGES
BY: Mohammed Kamel
Supervisor: Prof. R. M. Farouk
DEPARTMENT OF MATHEMATICS & COMPUTER SCIENCE
2. What is edge detection?
Edges are places in the image with strong intensity contrast, i.e., a sharp difference of
intensity between two adjacent regions; equivalently, edges are pixels where the image
brightness changes abruptly.
Example:
(a) (b)
Figure 1.1
Edge detection: (a) shows the [original image] whose edges will be found, (b)
shows the edges of that image found using the [Canny edge detection] method.
3. Why is Edge Detection Useful?
-Important features can be extracted from the edges of an image (e.g., corners, lines).
-These features are used by higher-level computer vision algorithms (e.g., recognition).
-Lane and direction detection.
Figure 1.2
Figure 1.2: lane and direction detection for navigation.
4. Effect of Illumination (error)
Reflection:
(a) (b) (c) (d)
Figure 1.3
Figure 1.3: illumination error: (a) original image, (b) actual edge detection of (a), (c) the same
original image but with a light source, (d) how the light source affects the detected edges.
5. Edge Descriptors
Edge direction: perpendicular to the direction of maximum intensity change (i.e., the edge normal).
Edge strength: related to the local image contrast along the normal.
Edge position: the image position at which the edge is located.
Figure 1.4
Figure 1.4: how the edge direction and normal are represented in an image.
6. Modeling Intensity Changes:
Step edge: the image intensity abruptly changes from one value on one side of the
discontinuity to a different value on the opposite side. It is also called a step
function or hard edge.
Ramp edge: a step edge where the intensity change is not instantaneous but occurs
over a finite distance.
7. Modeling Intensity Changes (cont'd)
Ridge edge: the image intensity abruptly changes value but then returns to the starting
value within some short distance (usually generated by lines).
Roof edge: a ridge edge where the intensity change is not instantaneous but occurs over
a finite distance (usually generated by the intersection of two surfaces).
8. Main Steps in Edge Detection
(1) Smoothing: suppress as much noise as possible without destroying true edges; it brings
neighbouring pixel values closer together, making the image smoother. It is
also called [Blur].
(2) Enhancement: apply differentiation to enhance the quality of edges (sharpening), i.e.,
apply some kind of filter that responds strongly at edges.
(3) Thresholding: determine which edge pixels should be discarded as noise and which should be
retained (i.e., threshold the edge magnitude).
(4) Localization: determine the exact edge location.
** (Sub-pixel resolution): some applications may require estimating the location of an
edge to better than the spacing between pixels.
9. Edge Detection Using Derivative
Often, points that lie on an edge are detected by:
(1) Detecting the local maxima or minima of the first derivative.
(2) Detecting the zero-crossings of the second derivative.
10. Edge Detection Using First Derivative
1D functions (not centered at x):
f'(x) = lim_{h→0} [f(x + h) − f(x)] / h ≈ f(x + 1) − f(x)   (h = 1)   Mask: [-1 1]
Centered at x:   Mask M = [-1, 0, 1]
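As a minimal sketch, the h = 1 forward difference and its [-1 1] mask can be applied to a 1-D intensity profile with NumPy (the function name here is illustrative):

```python
import numpy as np

# Forward difference f'(x) ~ f(x + 1) - f(x), i.e. the mask [-1 1].
def first_derivative(f):
    f = np.asarray(f, dtype=float)
    return f[1:] - f[:-1]  # result is one sample shorter than the input

# A step edge produces a single strong response exactly at the jump.
profile = np.array([10, 10, 10, 20, 20, 20], dtype=float)
print(first_derivative(profile))  # [ 0.  0. 10.  0.  0.]
```

The peak in the output marks the position of the step.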
Edge Detection Using Second Derivative
Approximate finding maxima/minima of the gradient magnitude by finding places where:
d²f/dx² = 0
We can't always find discrete pixels where the second derivative is exactly ((zero)), so look for
zero-crossings instead. (See below.)
f''(x) = lim_{h→0} [f'(x + h) − f'(x)] / h ≈ f'(x + 1) − f'(x)
       = f(x + 2) − 2f(x + 1) + f(x)   (h = 1)
Replace x + 1 with x (i.e., centered at x):
f''(x) ≈ f(x + 1) − 2f(x) + f(x − 1)   Mask: [1 -2 1]
11. -Four cases of zero-crossing: {+,-}, {+, 0,-}, {-, +}, {-, 0, +}
-Slope of zero-crossing {a, -b} is: |a+b|
-To detect "strong" zero-crossings, threshold the slope.
[Plot: f(x) is a step profile; f'(x) ≈ f(x+1) − f(x) approximates f'(x) at x + 1/2;
f''(x) ≈ f(x+1) − 2f(x) + f(x−1) approximates f''(x) at x.]
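The [1 -2 1] mask and the zero-crossing detection can be sketched in NumPy (a minimal illustration; the helper names are ours, and the {+, 0, -} cases with an exact zero between are not handled):

```python
import numpy as np

# Centered second derivative f''(x) ~ f(x + 1) - 2 f(x) + f(x - 1), mask [1 -2 1].
def second_derivative(f):
    f = np.asarray(f, dtype=float)
    return f[2:] - 2 * f[1:-1] + f[:-2]

# A zero-crossing {a, -b} is a sign change; its slope is |a + b|.
def zero_crossings(d2, min_slope=0.0):
    sign_change = np.sign(d2[:-1]) * np.sign(d2[1:]) < 0
    slope = np.abs(d2[:-1] - d2[1:])  # |a - (-b)| = |a + b|
    return np.where(sign_change & (slope > min_slope))[0]

profile = np.array([10, 10, 10, 20, 20, 20], dtype=float)
d2 = second_derivative(profile)
print(d2)                       # [  0.  10. -10.   0.]
print(zero_crossings(d2, 5.0))  # [1] -- the crossing sits at the step edge
```

Thresholding `min_slope` keeps only the "strong" crossings, as described above.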
12. Smoothing
We will try to suppress as much noise as possible without smoothing away the meaningful
edges. Smoothing brings neighbouring pixel values closer together, making the image look
softer. It is also known as [Blur].
Blurring is different from zooming: when you zoom an image using pixel replication with an
increasing zoom factor, you see a blurred-looking image with less detail, but this is not true
blurring. In zooming you add new pixels to the image, which increases the overall number of
pixels, whereas in blurring the number of pixels of the image remains the same.
Smoothing can be achieved in many ways. The common types of filters used to
perform blurring are: [1] Mean filter (Box filter).
[2] Weighted average filter.
[3] Gaussian filter.
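As a minimal sketch of the first of these, a 3x3 mean (box) filter over the valid region (in practice one would use a library routine such as scipy.ndimage.uniform_filter):

```python
import numpy as np

# 3x3 mean (box) filter: every output pixel is the average of its 3x3
# neighbourhood, pulling neighbouring values closer together.
def box_filter(img):
    img = np.asarray(img, dtype=float)
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + 3, j:j + 3].mean()
    return out

img = np.zeros((5, 5))
img[2, 2] = 9.0            # a single noise spike
print(box_filter(img))     # the spike is spread out across every 3x3 window
```

Note how the isolated spike is suppressed; a true edge would be softened but not removed.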
13. Smoothing (cont'd)
Noise: (unwanted signals)
Noise is a random variation of brightness or color information in an image; it is undesirable,
spurious, and extraneous information added when the image is captured.
Effects of noise:
Consider a single row or column of the image (plotting intensity as a function of position gives
a signal).
15. Derivative theorem of convolution
∂/∂x (h * f) = (∂h/∂x) * f
(i.e., saves one operation)
16. Thresholding:
Determine which edge pixels should be discarded as noise and which should be retained (i.e.,
threshold the edge magnitude). For example: pick a threshold value, then set the larger
magnitudes to 1 and the smaller magnitudes to 0.
-Reduce the number of false edges by applying a threshold T:
-all values below T are changed to 0.
-Selecting a good value for T is difficult:
- Some false edges will remain if T is too low.
- Some edges will disappear if T is too high.
- Some edges will disappear due to softening of the edge contrast by shadows.
An improved scheme uses two thresholds, 'Low' and 'High':
*If the gradient at a pixel is above 'High', declare it an 'edge pixel'.
*If the gradient at a pixel is below 'Low', declare it a 'non-edge pixel'.
*If the gradient at a pixel is between 'Low' and 'High', declare it an 'edge pixel' if and
only if it is connected to an 'edge pixel' directly or via pixels between 'Low' and 'High'.
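The Low/High connectivity rule can be sketched as a naive iterative implementation (the function name is ours; np.roll wraps around at the borders, so a careful version would pad instead):

```python
import numpy as np

# Pixels >= high are edges; pixels in [low, high) become edges only if they
# are 8-connected to an edge pixel, grown one ring at a time until stable.
def hysteresis(mag, low, high):
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    while True:
        grown = edges.copy()
        for di in (-1, 0, 1):          # dilate by one pixel in 8 directions
            for dj in (-1, 0, 1):
                grown |= np.roll(np.roll(edges, di, axis=0), dj, axis=1)
        new_edges = edges | (weak & grown)
        if (new_edges == edges).all():
            return edges
        edges = new_edges

mag = np.array([[10., 4., 1.],
                [ 1., 4., 1.],
                [ 1., 1., 1.]])
print(hysteresis(mag, low=3, high=8))  # both 4s attach to the strong 10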
18. Edge Detection Using First Derivative (Gradient)
The first derivative of an image can be computed using the gradient:
Grad(f) = [ ∂f/∂x , ∂f/∂y ]
Gradient Representation
The gradient is a vector which has magnitude and direction:
Magnitude(grad(f)) = √( (∂f/∂x)² + (∂f/∂y)² )
Direction(grad(f)) = tan⁻¹( (∂f/∂y) / (∂f/∂x) )
*Magnitude: indicates edge strength.
*Direction: indicates the gradient direction, i.e., perpendicular to the edge direction.
19. Approximate Gradient
Approximate the gradient using finite differences:
∂f/∂x = lim_{h→0} [f(x + h, y) − f(x, y)] / h ≈ f(x + 1, y) − f(x, y)   (h_x = 1)
∂f/∂y = lim_{h→0} [f(x, y + h) − f(x, y)] / h ≈ f(x, y + 1) − f(x, y)   (h_y = 1)
Cartesian vs pixel coordinates:
- j corresponds to the x-direction.
- i corresponds to the y-direction.
∂f/∂x ≈ f(i, j + 1) − f(i, j)
∂f/∂y ≈ f(i, j) − f(i + 1, j)
20. Sensitive to horizontal edges! (Grad in the Y direction):
∂f/∂y ≈ [f(x₃, y₃) − f(x₂, y₂)] / (y₃ − y₂) = [f(x, y + Dy) − f(x, y)] / Dy
      = f(x, y + 1) − f(x, y)     (y₃ = y₂ + Dy, y₂ = y, x₃ = x₂ = x, Dy = 1)
Sensitive to vertical edges! (Grad in the X direction):
∂f/∂x ≈ [f(x₂, y₁) − f(x₁, y₁)] / (x₂ − x₁) = [f(x + Dx, y) − f(x, y)] / Dx
      = f(x + 1, y) − f(x, y)     (x₂ = x₁ + Dx, x₁ = x, y₂ = y₁ = y, Dx = 1)
[Figure: one panel shows an edge in the x-direction, the other an edge in the y-direction.]
21. We can implement ∂f/∂x and ∂f/∂y using the following masks:
∂f/∂x: mask [-1 1]   — a good approximation at (x + 1/2, y)
∂f/∂y: mask [-1 1]ᵀ  — a good approximation at (x, y + 1/2)
-A different approximation of the gradient:
∂f/∂x (x, y) ≈ f(x, y) − f(x + 1, y + 1)
∂f/∂y (x, y) ≈ f(x + 1, y) − f(x, y + 1)
22. Another Approximation
Consider the arrangement of pixels about the pixel (i, j) in a 3 x 3 neighborhood:
a0    a1    a2
a7  [i,j]   a3
a6    a5    a4
The partial derivatives can be computed by:
Mx = (a2 + c·a3 + a4) − (a0 + c·a7 + a6)
My = (a6 + c·a5 + a4) − (a0 + c·a1 + a2)
The constant c sets the emphasis given to pixels closer to the center of the mask.
Roberts Operator
∂f/∂x = f(i, j) − f(i + 1, j + 1)
∂f/∂y = f(i + 1, j) − f(i, j + 1)
This approximation can be implemented by the following masks:
Mx =  1  0      My =  0  1
      0 -1           -1  0
(Note: Mx and My are approximations at (i + 1/2, j + 1/2).)
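The Roberts cross differences can be sketched in vectorised NumPy (the helper name is ours; the two responses are combined into a gradient magnitude):

```python
import numpy as np

# Roberts operator: Gx = f(i, j) - f(i+1, j+1), Gy = f(i+1, j) - f(i, j+1),
# computed over the valid region, then combined as sqrt(Gx^2 + Gy^2).
def roberts(img):
    img = np.asarray(img, dtype=float)
    gx = img[:-1, :-1] - img[1:, 1:]
    gy = img[1:, :-1] - img[:-1, 1:]
    return np.hypot(gx, gy)

img = np.zeros((4, 4))
img[:, 2:] = 10.0          # vertical step edge between columns 1 and 2
print(roberts(img))        # responds only along column 1
```

Both diagonal differences fire at the step, giving a magnitude of √200 ≈ 14.14 there and zero elsewhere.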
23. Prewitt Operator
Setting c = 1, we get the Prewitt operator (Mx and My are approximations at (i, j)):
       -1  0  1              1  1  1
Mx =   -1  0  1      My =    0  0  0
       -1  0  1             -1 -1 -1
Flow chart:
Image → [average smoothing in y: [1 1 1]ᵀ] and [derivative filtering in x: [-1 0 1]] → Edges in x
Image → [average smoothing in x: [1 1 1]] and [derivative filtering in y: [1 0 -1]ᵀ] → Edges in y
24. Sobel Operator
Setting c = 2, we get the Sobel operator:
       -1  0  1              1  2  1
Mx =   -2  0  2      My =    0  0  0
       -1  0  1             -1 -2 -1
Flow chart:
Image → [average smoothing in y: [1 2 1]ᵀ] and [derivative filtering in x: [-1 0 1]] → Edges in x
Image → [average smoothing in x: [1 2 1]] and [derivative filtering in y: [1 0 -1]ᵀ] → Edges in y
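The separability in the flow charts can be checked directly: each Sobel mask is the outer product of a [1 2 1] smoothing vector and a [-1 0 1] derivative vector (a small verification sketch):

```python
import numpy as np

smooth = np.array([1, 2, 1])    # weighted average (c = 2)
deriv = np.array([-1, 0, 1])    # centered derivative

Mx = np.outer(smooth, deriv)    # smooth down the columns, differentiate in x
My = np.outer(-deriv, smooth)   # differentiate in y, smooth along the rows

print(Mx)   # [[-1 0 1] [-2 0 2] [-1 0 1]]
print(My)   # [[ 1 2 1] [ 0 0 0] [-1 -2 -1]]
```

Replacing `smooth` with [1, 1, 1] yields the Prewitt pair (c = 1) in the same way.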
26. Edge Detection Steps Using Gradient
A) Smooth the input image: F(x, y) = f(x, y) * G(x, y)
B) Fx = f(x, y) * Mx(x, y)
C) Fy = f(x, y) * My(x, y)
D) Magnitude(x, y) = √(Fx² + Fy²)
E) Direction(x, y) = tan⁻¹(Fy / Fx)
F) If Magnitude(x, y) ≥ T, then (x, y) is a possible edge point.
(a) (b) (c) (d) (e)
Figure 1.6
Figure 1.6: (a) the [original image], (b) the image filtered with [Sobel 3x3], (c) [Sobel
3x3 grayscale], (d) the image filtered with [Prewitt 3x3], and (e) [Prewitt 3x3 grayscale].
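Steps B through F can be sketched end to end with the Sobel pair (smoothing is omitted for brevity; our `convolve2d` is a plain valid-region correlation, which at most flips the sign of a response and leaves the magnitude unchanged):

```python
import numpy as np

def convolve2d(img, mask):
    # Valid-region sliding-window correlation, loop-based for clarity.
    kh, kw = mask.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * mask).sum()
    return out

def gradient_edges(img, T):
    Mx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    My = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)
    Fx = convolve2d(img, Mx)                 # step B
    Fy = convolve2d(img, My)                 # step C
    mag = np.hypot(Fx, Fy)                   # step D: edge strength
    direction = np.arctan2(Fy, Fx)           # step E: edge direction
    return mag >= T, mag, direction          # step F: possible edge points

img = np.zeros((5, 6))
img[:, 3:] = 10.0                            # vertical step edge
edges, mag, _ = gradient_edges(img, T=20)
print(edges)                                 # True only around the step
```

The magnitude reaches 40 in the two output columns straddling the step and is zero elsewhere, so thresholding at T = 20 keeps exactly the edge.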
27. Marr-Hildreth Edge Detector
The derivative operators presented so far are not very useful because they are very
sensitive to noise. To filter the noise before enhancement, Marr and Hildreth proposed a
Gaussian filter combined with the Laplacian for edge detection: the Laplacian of
Gaussian (LoG).
The fundamental steps of the LoG edge detector are:
[1] Smooth the image with a Gaussian filter:
S = G * I
where S is the smoothed image, I is the original image, and G is the Gaussian filter:
G(x, y) = (1 / (2πσ²)) · e^(−(x² + y²) / (2σ²))
[2] Apply the Laplacian:
∇²S = ∂²S/∂x² + ∂²S/∂y²
Note: [∇ is used for the gradient or derivative; ∇² is used for the Laplacian.]
28. Marr-Hildreth Edge Detector (cont'd)
[3] Deriving the Laplacian of Gaussian (LoG):
∇²S = ∇²(G * I) = (∇²G) * I
∇²G(x, y) = ((x² + y² − 2σ²) / (2πσ⁶)) · e^(−(x² + y²) / (2σ²))
(The Laplacian is also used in mechanics, electromagnetics, wave theory, quantum mechanics,
and the Laplace equation.)
[4] Find zero-crossings:
(a) Scan along each row and record an edge point at the location of each zero-crossing.
(b) Repeat the above step along each column.
-Four cases of zero-crossing: {+,-}, {+, 0,-}, {-, +}, {-, 0, +}
-Slope of zero-crossing {a, -b} is: |a+b|
-To detect "strong" zero-crossings, threshold the slope.
The Separability of LoG
Similar to the separability of the Gaussian filter, the two-dimensional Gaussian can be
separated into two one-dimensional Gaussians.
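Sampling the LoG formula above on a grid gives a usable mask (a sketch; the size and sigma are chosen arbitrarily for illustration):

```python
import numpy as np

# Laplacian of Gaussian sampled on a (2r+1) x (2r+1) grid, from
# LoG(x, y) = (x^2 + y^2 - 2*sigma^2) / (2*pi*sigma^6) * exp(-(x^2+y^2)/(2*sigma^2)).
def log_kernel(size, sigma):
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    r2 = x ** 2 + y ** 2
    return ((r2 - 2 * sigma ** 2) / (2 * np.pi * sigma ** 6)
            * np.exp(-r2 / (2 * sigma ** 2)))

k = log_kernel(9, 1.4)
# Centre is the negative well of the "Mexican hat"; the far surround is positive.
print(k[4, 4] < 0, k[0, 0] > 0)
```

Convolving an image with `k` and then finding zero-crossings implements steps [1]-[4] in one pass.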
29. Laplacian of Gaussian Filtering
[Block diagram: the image is convolved with Gxx(x)·G(y) and with G(x)·Gyy(y), and the two
results are summed to give ∇²S, using only one-dimensional filters.]
30. Canny Edge Detector
Canny has shown that the first derivative of the Gaussian closely approximates the operator that
optimizes the product of signal-to-noise ratio and localization (i.e., an analysis based on "step
edges" corrupted by "Gaussian noise").
J. Canny, A Computational Approach to Edge Detection, IEEE Trans. Pattern Analysis and
Machine Intelligence, 8:679-714, 1986.
31. Steps of Canny edge detector:
[1] Compute fx and fy:
fx = ∂/∂x (G * f) = (∂G/∂x) * f = Gx * f
fy = ∂/∂y (G * f) = (∂G/∂y) * f = Gy * f
where G(x, y) is the Gaussian function,
Gx(x, y) is the derivative of G(x, y) with respect to x: Gx(x, y) = ∂G(x, y)/∂x,
Gy(x, y) is the derivative of G(x, y) with respect to y: Gy(x, y) = ∂G(x, y)/∂y.
[2] Compute the gradient magnitude and direction: Magnitude(i, j) = √(fx² + fy²)
[3] Apply non-maxima suppression.
[4] Apply hysteresis thresholding/edge linking.
32. Canny Edge Detector Steps
1. Noise Reduction: since edge detection is susceptible to noise in the image, the first step is to
remove the noise with a 5x5 Gaussian filter.
2. Finding the Intensity Gradient of the Image: the smoothed image is then filtered with a Sobel
kernel in both the horizontal and vertical directions to get the first derivatives in
the horizontal and vertical directions. From these two images, we can find the
edge gradient and direction for each pixel as follows:
Edge Gradient G = √(Gx² + Gy²)        Angle θ = tan⁻¹(Gy / Gx)
3. Non-maxima suppression: after getting the gradient magnitude and direction, a full scan of
the image is done to remove any unwanted pixels which may not
constitute the edge. For this, every pixel is checked to see if it is
a local maximum in its neighborhood in the direction of the gradient.
33. Figure: point A is on the edge (in the vertical direction).
The gradient direction is normal to the edge. Points B and C are in the gradient direction, so
point A is checked against points B and C to see if it forms a local maximum. If so, it is
considered for the next stage; otherwise, it is suppressed (set to zero). In short, the result is a
binary image with "thin edges".
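The check of a pixel against its two neighbours along the gradient (B and C above) can be sketched with the direction quantised to four bins (a simplified illustration covering interior pixels only; the function name is ours):

```python
import numpy as np

# Non-maxima suppression: keep a pixel only if its magnitude is >= both
# neighbours along the gradient direction, quantised to 0/45/90/135 degrees.
def non_max_suppress(mag, angle):
    out = np.zeros_like(mag)
    ang = (np.rad2deg(angle) + 180) % 180       # fold into [0, 180)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:           # gradient ~horizontal
                b, c = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                       # ~45 degrees
                b, c = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                      # ~vertical
                b, c = mag[i - 1, j], mag[i + 1, j]
            else:                                # ~135 degrees
                b, c = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= b and mag[i, j] >= c:
                out[i, j] = mag[i, j]            # local maximum: keep
    return out

mag = np.array([[0., 0., 0., 0.],
                [2., 3., 1., 0.],
                [0., 0., 0., 0.]])
print(non_max_suppress(mag, np.zeros_like(mag)))  # only the 3 survives
```

The thick horizontal response collapses to the single local maximum, producing the "thin edges" described above.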
4. Hysteresis thresholding: (hysteresis: a lag or momentum factor).
A single threshold T gives:
E(x, y) = 1 if f(x, y) ≥ T for some threshold T, and 0 otherwise.
- This can only select "strong" edges and does not guarantee "continuity".
- For "maybe" edges, decide in favour of an edge if a neighboring pixel is a strong edge.
34. Hysteresis thresholding uses two thresholds:
-a low threshold t_l and a high threshold t_h (usually t_h = 2·t_l):
f(x, y) ≥ t_h            definitely an edge
t_l ≤ f(x, y) < t_h      maybe an edge, depends on context
f(x, y) < t_l            definitely not an edge
(a) (b) (c) (d) (e)
Figure 1.7
Figure 1.7: (a) the [original image], (b) the [grayscale image], (c) the [gradient
magnitude], (d) the [thresholded gradient magnitude], and (e) the result of
[non-maxima suppression (thinning)].
35. Criteria for Optimal Edge Detection
(A) Good detection: Minimize the probability of false positives (i.e., spurious edges).
Minimize the probability of false negatives (i.e., missing real edges).
(B) Good localization: Detected edges must be as close as possible to the true edges.
(C) Single response: Minimize the number of local maxima around the true edge.