The success of an image recognition procedure depends on the quality of the detected edges. The
aim of this research is to investigate and evaluate edge detection techniques applied to
noisy images at different scales. The Sobel, Prewitt, and Canny edge detection algorithms are
evaluated on artificially generated images using two comparison criteria: edge quality (EQ) and map
quality (MQ). The results demonstrate that these criteria can serve as an aid for
further analysis and arbitration in finding the best edge detector for a given image.
Comparative Analysis of Common Edge Detection Algorithms using Pre-processing... (IJECEIAES)
Edge detection is the process of segmenting an image by detecting discontinuities in brightness. Several standard segmentation methods have been widely used for edge detection. However, due to the inherent quality of real images, these methods prove ineffective when applied without any pre-processing. In this paper, an image pre-processing approach is adopted to obtain parameters that help the standard edge detection methods perform better. The proposed pre-processing approach applies median filtering to reduce noise in the image before edge detection is carried out. The standard edge detection methods are then applied to the pre-processed image, and simulation results show that the pre-processing step enhances the performance of each standard edge detection method it is paired with.
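As a rough illustration of the pre-processing idea in this abstract (median filtering before edge detection), the following NumPy sketch applies a hand-rolled median filter to a noisy synthetic image before a simple gradient-threshold edge step. The filter size, threshold, and test image are arbitrary choices for the demo, not values from the paper.

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter by stacking shifted copies of the image."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    shifts = [padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(k) for j in range(k)]
    return np.median(np.stack(shifts), axis=0)

def simple_edges(img, thresh=30.0):
    """Mark pixels whose horizontal or vertical intensity jump exceeds thresh."""
    gx = np.abs(np.diff(img.astype(float), axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img.astype(float), axis=0, prepend=img[:1, :]))
    return (np.maximum(gx, gy) > thresh).astype(np.uint8)

# Noisy step image: left half dark, right half bright, plus salt-and-pepper noise.
rng = np.random.default_rng(0)
img = np.zeros((32, 32)); img[:, 16:] = 200.0
noisy = img.copy()
noise_mask = rng.random(img.shape) < 0.05
noisy[noise_mask] = rng.choice([0.0, 255.0], size=noise_mask.sum())

edges_raw = simple_edges(noisy)                    # noise produces spurious edges
edges_filtered = simple_edges(median_filter(noisy))  # pre-filtering suppresses them
```

The point of the comparison is that the filtered map keeps the true step edge while dropping most of the isolated noise responses.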
Abstract: Edge detection is a fundamental tool used in most image processing applications. We propose a simple, fast and efficient technique for detecting edges, identifying and locating sharp discontinuities in an image and its boundaries. We find that the proposed method, called LookUp Table, performs well and requires the least computational time compared to conventional edge detection techniques. The paper also presents a comparative performance evaluation of various conventional edge detection techniques. Keywords: Edge detectors, Lookup table.
Enhanced Optimization of Edge Detection for High Resolution Images Using Veri... (ijcisjournal)
Edge detection plays a crucial role in image processing and segmentation, where a set of algorithms aims to identify the portions of a digital image at which brightness changes sharply or, more formally, has discontinuities. The contours produced by edge detection also help in object detection and recognition. Image edges can be detected using two attributes: the gradient and the Laplacian. In our paper, we propose a system that utilizes the Canny and Sobel operators for edge detection, a gradient (first-order derivative) approach, implemented in the Verilog Hardware Description Language using Xilinx ISE Design Suite 14.2, and compare the results with those of a previous paper in Matlab. Performing edge detection in Verilog significantly reduces processing time and filters out unneeded information while preserving the important structural properties of an image. This edge detection can be used to detect vehicles in traffic jams and in medical imaging systems for analysing MRI and X-ray images.
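The Sobel operator mentioned above approximates the image gradient with a pair of 3x3 kernels. A minimal NumPy sketch of the gradient-magnitude computation (a software illustration only, not the paper's Verilog design; the kernel definitions are the standard Sobel masks, the test image is made up):

```python
import numpy as np

# Standard Sobel kernels: horizontal and vertical first-derivative approximations.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def correlate2d(img, kernel):
    """Valid-region 2-D correlation (sign flip vs. convolution does not
    affect the gradient magnitude)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_magnitude(img):
    gx = correlate2d(img.astype(float), SOBEL_X)
    gy = correlate2d(img.astype(float), SOBEL_Y)
    return np.hypot(gx, gy)   # gradient magnitude sqrt(gx^2 + gy^2)

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # vertical step edge
mag = sobel_magnitude(img)                 # responds only along the step
```

A hardware version pipelines exactly this windowed multiply-accumulate, which is why the Verilog implementation is so much faster than an interpreted one.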
Analysis and Detection of Image Forgery Methodologies (ijsrd.com)
"Forgery" is a subjective word. An image can become a forgery based upon the context in which it is used. An image altered for fun or someone who has taken a bad photo, but has been altered to improve its appearance cannot be considered a forgery even though it has been altered from its original capture. The other side of forgery are those who perpetuate a forgery for gain and prestige. They create an image in which to dupe the recipient into believing the image is real and from this they are able to gain payment and fame. Detecting these types of forgeries has become serious problem at present. To determine whether a digital image is original or doctored is a big challenge. To find the marks of tampering in a digital image is a challenging task. Now these marks of tampering can be done by various operations such as rotation, scaling, JPEG compression, Gaussian noise etc. called as attacks. There are various methods proposed in this field in recent years to detect above mentioned attacks. This paper provides a detailed analysis of different approaches and methodologies used to detect image forgery. It is also analysed that block-based features methods are robust to Gaussian noise and JPEG compression and the key point-based feature methods are robust to rotation and scaling.
Automatic rectification of perspective distortion from a single image using p... (ijcsa)
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in the field of computer vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and making a perspective correction that is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image, and a method based on plane homography and transformation is used to make the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters; the frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
Edge detection is one of the most frequent processes in digital image processing. One of its uses is detecting road damage from crack paths, which can be checked using the Canny algorithm. This paper proposes a mobile application to detect cracks in the road, with a customized threshold function in the requests to produce useful and accurate edge detection. The experimental results show that the use of the threshold function in the Canny algorithm improves the detection of road damage.
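The thresholding stage a Canny-style pipeline ends with is double thresholding plus hysteresis linking: strong responses are kept outright, and weak responses survive only if connected to a strong one. A small NumPy sketch of that linking step, with made-up magnitudes and threshold values chosen for the demo:

```python
import numpy as np

def hysteresis_threshold(mag, low, high):
    """Keep strong pixels (>= high) plus weak pixels (>= low) that are
    4-connected to a strong pixel, as in Canny's final linking step."""
    strong = mag >= high
    weak = mag >= low
    keep = strong.copy()
    changed = True
    while changed:                      # grow the strong set through weak pixels
        changed = False
        grown = keep.copy()
        grown[1:, :] |= keep[:-1, :]    # dilate one step in each direction
        grown[:-1, :] |= keep[1:, :]
        grown[:, 1:] |= keep[:, :-1]
        grown[:, :-1] |= keep[:, 1:]
        grown &= weak                   # only weak pixels may join
        grown |= keep
        if grown.sum() != keep.sum():
            keep = grown
            changed = True
    return keep

mag = np.array([
    [0, 0, 0, 0, 0],
    [0, 5, 3, 3, 0],   # a strong pixel (5) linked to weak neighbours (3)
    [0, 0, 0, 3, 0],   # this weak pixel chains back to the strong one
    [0, 3, 0, 0, 0],   # this weak pixel is isolated and gets dropped
], dtype=float)
edges = hysteresis_threshold(mag, low=2, high=4)
```

Tuning `low` and `high` is exactly the kind of customized threshold choice the abstract describes for road-crack images.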
A Fuzzy Set Approach for Edge Detection (CSCJournals)
Image segmentation is one of the most studied problems in image analysis, computer vision, and pattern recognition. Edge detection is a discontinuity-based approach used for image segmentation. In this paper, an edge detection method using fuzzy sets is proposed, where an image is considered a fuzzy set and pixels are taken as its elements. The fuzzy approach converts the color image to a partially segmented image; finally, an edge detector is convolved over the partially segmented image to obtain an edged image. The approach is implemented using MATLAB 7.11 (R2010b). For qualitative and quantitative comparison, BSD (Berkeley Segmentation Database) images are used for experimentation. The performance parameters used are PSNR (dB) and the performance ratio (PR) of true to false edges. It is shown that the proposed approach performs better than Canny's edge detection algorithm under almost all scenarios, reducing false edge detection and double edges.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
2-Dimensional Wavelet pre-processing to extract IC-Pin information for disarr... (IOSR Journals)
Abstract: Due to the rising ratio of processing power to cost, it is now possible to replace the manual inspection methods used in the IC (Integrated Circuit) industry with image-processing-based automated methods that detect a broken pin of an IC mounted on a PCB during manufacturing, making the process faster, easier and cheaper. In this paper an accurate and fast automatic detection method is used, in which top-view camera shots of PCBs are processed using 2-dimensional discrete wavelet pre-processing before applying edge detection. A comparison with conventional edge detection methods such as Sobel, Prewitt and Canny without 2-D DWT is also performed. Keywords: 2-dimensional wavelets, Edge detection, Machine vision, Image processing, Canny.
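One simple form the 2-D discrete wavelet pre-processing mentioned above can take is a single-level Haar decomposition, whose detail sub-bands already highlight pin-like vertical and horizontal structure. The sketch below uses an averaging normalization (rather than the orthonormal 1/sqrt(2) convention) and a synthetic step image; both are arbitrary choices for the demo, not details from the paper.

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform.
    Returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = img.astype(float)
    # Along rows: pairwise average (low-pass) and difference (high-pass).
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0
    # Along columns: repeat on both row outputs.
    ll = (lo[::2, :] + lo[1::2, :]) / 2.0   # coarse approximation
    lh = (lo[::2, :] - lo[1::2, :]) / 2.0   # horizontal-edge detail
    hl = (hi[::2, :] + hi[1::2, :]) / 2.0   # vertical-edge detail
    hh = (hi[::2, :] - hi[1::2, :]) / 2.0   # diagonal detail
    return ll, lh, hl, hh

img = np.zeros((8, 8)); img[:, 3:] = 8.0   # vertical step, like an IC-pin edge
ll, lh, hl, hh = haar_dwt2(img)            # the step shows up in the HL band
```

Edge detection can then be run on the half-resolution LL band, or the detail bands can be thresholded directly, before comparing against Sobel, Prewitt, and Canny on the raw image.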
Matching algorithm performance analysis for autocalibration method of stereo ... (TELKOMNIKA JOURNAL)
Stereo vision is one of the interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, from which depth is estimated. Camera calibration is the most important step in stereo vision: it produces the intrinsic parameters of each camera, which are needed to compute a good disparity map. In general, calibration is done manually using a chessboard pattern, but this is an exhausting task; self-calibration is required to overcome this problem, and it in turn requires a robust matching algorithm to find key features between images as references. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process: SIFT, SURF, and ORB. The results show that SIFT performs better than the other methods.
Fingerprint Image Compression using Sparse Representation and Enhancement wit... (Editor IJCATR)
A technique for enhancing decompressed fingerprint images using the Wiener2 filter is proposed. Compression is first performed by sparse representation. Compressing fingerprints is necessary to reduce memory consumption and to transfer fingerprint images efficiently, which is essential for applications such as access control and forensics. In this technique, a dictionary is first constructed from patches of fingerprint images; a fingerprint is then selected, and its coefficients are obtained and encoded to yield the compressed fingerprint. When the fingerprint is reconstructed, however, it is affected by noise, so the Wiener2 filter is used to remove the noise from the image. The ridge and bifurcation counts are extracted from the decompressed and enhanced fingerprints. The experimental results show that the enhanced fingerprint image preserves more bifurcations than the decompressed fingerprint image. Future work can consider preserving ridges.
Face Recognition Using Neural Network Based Fourier Gabor Filters & Random Pr... (CSCJournals)
Face detection and recognition have many applications in a variety of fields such as authentication, security, video surveillance and human interaction systems. In this paper, we present a neural network system for face recognition. A feature vector based on Fourier Gabor filters is used as the input to our classifier, a Back Propagation Neural Network (BPNN). Since the input vector of the network has a large dimension, we investigate the use of Random Projection as a dimensionality reduction method to reduce its feature subspace. Theory and experiment indicate the robustness of our solution.
A Detailed Analysis on Feature Extraction Techniques of Panoramic Image Stitc... (IJEACS)
Image stitching is a technique used to obtain a high-resolution panoramic image. In this technique, distinct images taken from different views and angles are combined to produce a panoramic image. In the fields of computer graphics, photography and computer vision, image stitching techniques are active research areas. To obtain a stitched image, knowledge of the geometric relations among the multiple image coordinate systems is required [1]. First, image stitching is performed based on feature keypoint matches; the final image, including the seam, is then blended using an image blending technique. In this paper we address multiple distinct techniques, including invariant features such as the Scale Invariant Feature Transform and Speeded Up Robust Features, and corner techniques such as the Harris corner detector, that are useful in resolving the issues involved in stitching images.
Vhdl implementation for edge detection using log gabor filter for disease det... (eSAT Journals)
Abstract: Edge detection is the first and an essential step in image processing. Detected edges play a very important role in tasks such as image enhancement, object detection, and focus-area selection. In medical applications, conditions such as tonsillitis, tumors and fractures can be detected in their early stages by detecting the edges of the diseased region. There are many approaches to edge detection, but most may be grouped into three categories: first-order gradient, second-order, and optimal edge detection. Sobel edge detection is a gradient-based method for finding the edges of an image, and it provides the additional advantage of better noise sensitivity compared with other edge detection methods. Here the Log-Gabor filter is used for best-contrast ridges, efficient noise reduction and improved image edges. Most image processing tools such as LabVIEW are not suited to strong real-time constraints, so a hardware FPGA implementation is used to overcome this problem. The proposed model for disease detection is designed on the LabVIEW platform with the NI Vision Assistant tool 14.0. Keywords: Edge Detection, Sobel Operator, Log-Gabor Filter, LabVIEW 14.0.
Improving image resolution through the cra algorithm involved recycling proce... (csandit)
Image processing concepts are widely used in medical fields. Digital images are prone to a variety of types of noise; noise is the result of errors in the image acquisition process that produce pixel values which do not reflect the true intensities of the real scene. Many researchers are working on the analysis and processing of multi-dimensional images, and since previous work has not resolved the problem, performance work continues. In this paper we contribute a novel research work on the analysis and performance improvement of image resolution. We propose the Concede Reconstruction Algorithm (CRA) with an involved recycling process to reduce the remaining problems in the improvement part of image processing. The CRA algorithm has received a good response from researchers.
Mobile User Authentication Based On User Behavioral Pattern (MOUBE) (CSCJournals)
Smart devices are equipped with multiple authentication techniques yet remain prone to attacks, since all of these techniques require explicit user intervention. The purpose of this paper is to capture user behavior in order to use it as an implicit authentication technique. We introduce a novel authentication model to be used complementarily to existing models; in particular, the user's context, the duration of usage of each application, and the occurrence time are examined and modeled using a cubic spline function as an authentication technique. A software system composed of two components has been implemented on the Android platform. Preliminary results show a 76% accuracy rate in determining the rightful owner of the device.
Automated Diagnosis of Malaria in Tropical Areas Using 40X Microscopic Images... (CSCJournals)
This paper proposes a new algorithm to measure parasitemia caused by Plasmodium falciparum using optical images of capillary blood smears from Sub-Saharan Africa. The approach improves on a previous noteworthy method by developing a two-tone adaptive median filter and Sauvola segmentation. The analysis was performed on a database of 100X and 40X microscopic images collected in real time from patients' blood in some of Cameroon's laboratories. The results were very satisfactory, with the detection of infected cells on 40X images achieving a sensitivity of 81.58%, a specificity of 97.11% and an accuracy of 96.71%. Compared to previous work, these values represent a real improvement in most performance criteria for both 100X and 40X images. This is a significant step toward automated malaria diagnosis in tropical areas. The method proved robust for both magnification ranges, the analysis time is markedly reduced at 40X magnification, and information previously inaccessible to visual laboratory measurement can now be obtained. The low cost of the system therefore opens the possibility of a cost-effective method of diagnosing malaria in low- and middle-income countries.
approach is an improvement of the previous noteworthy method by developing a two-tone
adaptive median filter and Sauvola segmentation. The analysis was performed on the database
of 100X and 40X microscopic images originating from real time collection of patients’ blood of
some Cameroon’s laboratories. The obtained results were very satisfactory with a detection of
infected cells on 40X images of a sensitivity at 81.58%, a specificity at 97.11% and an accuracy
at 96.71%. Compared to the previous works, the values portray a real improvement in most
performance’s criteria for 100X images and 40X images as well. There is a significant step of
automated Malaria diagnosis in tropical areas and in the performance assessed. The results of
the method gave more strength for the two magnification ranges. The time spent in analysis is
obviously reduced with 40X magnification and the inaccessibility of the information by the visual
laboratory measurement is henceforth genuinely assaulted. The low cost of the system opens
therefore the possibility of a cost effective method of diagnosing malaria in low and middleincome
countries
Implementation of RSA Algorithm with Chinese Remainder Theorem for Modulus N ...CSCJournals
Cryptography has several important aspects in supporting the security of the data, which
guarantees confidentiality, integrity and the guarantee of validity (authenticity) data. One of the
public-key cryptography is the RSA cryptography. The greater the size of the modulus n, it will be
increasingly difficult to factor the value of n. But the flaws in the RSA algorithm is the time
required in the decryption process is very long. Theorem used in this research is the Chinese
Remainder Theorem (CRT). The goal is to find out how much time it takes RSA-CRT on the size
of modulus n 1024 bits and 4096 bits to perform encryption and decryption process and its
implementation in Java programming. This implementation is intended as a means of proof of
tests performed and generate a cryptographic system with the name "RSA and RSA-CRT Text
Security". The results of the testing algorithm is RSA-CRT 1024 bits has a speed of
approximately 3 times faster in performing the decryption. In testing the algorithm RSA-CRT 4096
bits, the conclusion that the decryption process is also effective undertaken more rapidly.
However, the flaws in the key generation process and the RSA 4096 bits RSA-CRT is that the
time needed is longer to generate the keys.
Gene Selection for Patient Clustering by Gaussian Mixture ModelCSCJournals
Clustering is the basic composition of data analysis, which also plays a significant role in
microarray analysis. Gaussian mixture model (GMM) based clustering is very popular approach
for clustering. However, GMM approach is not so popular for patients/samples clustering based
on gene expression data, because gene expression datasets usually contains the large number
(m) of genes (variables) in presence of a few (n) samples observations, and consequently the
estimates of GMM parameters are not possible for patient clustering, because there does not
exists the inverse of its covariance matrix due to m>n. To conquer these problems, we propose to
apply a few ‘q’ top DE (differentially expressed) genes (i.e., q<n /><m) between two or more
patient classes, which are selected proportionally from all DE gene’s groups. Here, the fact
behind our proposal that the EE (equally expressed) genes between two or more classes have no
significant contribution to the minimization of misclassification error rate (MER). For selecting few
top DE genes, at first, we clustering genes (instead of patients/samples) by GMM approach. Then
we detect DE and EE gene clusters (groups) by our proposed rule. Then we select q (few) top DE
genes from different DE gene clusters by the rule of proportional to cluster size. Application of
such a few ‘q’ number of top DE genes overcomes the inverse problem of covariance matrix in
the estimation process of GMM’s parameters, and ultimately for gene expression data
(patient/sample) clustering. The performance of the proposed method is investigated using both
simulated and real gene expression data analysis. It is observed that the proposed method
improves the performance over the traditional GMM approaches in both situations.
Dynamic Multi Levels Java Code Obfuscation Technique (DMLJCOT)CSCJournals
Several obfuscation tools and software are available for Java programs but larger part of these
software and tools just scramble the names of the classes or the identifiers that stored in a
bytecode by replacing the identifiers and classes names with meaningless names. Unfortunately,
these tools are week, since the java, compiler and java virtual machine (JVM) will never load and
execute scrambled classes. However, these classes must be decrypted in order to enable JVM
loaded them, which make it easy to intercept the original bytecode of programs at that point, as if
it is not been obfuscated. In this paper, we presented a dynamic obfuscation technique for java
programs. In order to deter reverse engineers from de-compilation of software, this technique
integrates three levels of obfuscation, source code, lexical transformation and the data
transformation level in which we obfuscate the data structures of the source code and byte-code
transformation level. By combining these levels, we achieved a high level of code confusion,
which makes the understanding or decompiling the java programs very complex or infeasible.
The proposed technique implemented and tested successfully by many Java de-compilers, like
JV, CAVJ, DJ, JBVD and AndroChef. The results show that all decompiles are deceived by the
proposed obfuscation technique
The availability of advanced capturing and computation techniques delivered the way to study on
trajectory data, which denote the mobility of a variety of moving objects, such as people, vehicles
and animals. Trajectory data mining is of the research trend in data mining research to cope with
the current demand of trajectory data analysis providing profit rich applications. Data clustering is
one of the best techniques to group the community data. In this paper efforts are made to review
the trendy research being done in trajectory data mining. The review is three fold, surveying the
literature on location and community based trajectory mining, trajectory data bases and trajectory
querying. The review explored the trajectory data mining framework. This review can help outline
the field of trajectory data mining, providing a quick outlook of this field to the community.
Trajectory clustering methods are discussed. The opportunities and applications of cluster based
trajectory data mining are presented. The method of similarity based community clustering is
adopted to go with the future work.
Robust Analysis of Multibiometric Fusion Versus Ensemble Learning Schemes: A ...CSCJournals
Identification of person using multiple biometric is very common approach used in existing user
validation of systems. Most of multibiometric system depends on fusion schemes, as much of the
fusion techniques have shown promising results in literature, due to the fact of combining multiple
biometric modalities with suitable fusion schemes. However, similar type of practices are found in
ensemble of classifiers, which increases the classification accuracy while combining different
types of classifiers. In this paper, we have evaluated comparative study of traditional fusion
methods like feature level and score level fusion with the well-known ensemble methods such as
bagging and boosting. Precisely, for our frame work experimentations, we have fused face and
palmprint modalities and we have employed probability model - Naive Bayes (NB), neural
network model - Multi Layer Perceptron (MLP), supervised machine learning algorithm - Support
Vector Machine (SVM) classifiers for our experimentation. Nevertheless, machine learning
ensemble approaches namely, Boosting and Bagging are statistically well recognized. From
experimental results, in biometric fusion the traditional method, score level fusion is highly
recommended strategy than ensemble learning techniques.
Performance Evaluation of Image Edge Detection Techniques
Maher I. Rajab
International Journal of Computer Science and Security (IJCSS), Volume (10) : Issue (5) : 2016
Maher I. Rajab mirajab@uqu.edu.sa
College of Computer & Information Systems
Computer Engineering Department
Umm Al-Qura University
Makkah 21955, Saudi Arabia
Abstract
The success of an image recognition procedure is related to the quality of the edges marked. The
aim of this research is to investigate and evaluate edge detection techniques when applied to
noisy images at different scales. Sobel, Prewitt, and Canny edge detection algorithms are
evaluated using artificially generated images and comparison criteria: edge quality (EQ) and map
quality (MQ). The results demonstrated that the use of these criteria can be utilized as an aid for
further analysis and arbitration to find the best edge detector for a given image.
Keywords: Image Edge Detection, Gaussian Noise, Gaussian Smoothing.
1. INTRODUCTION
Edge extraction is usually the first step in segmentation because it effectively detects the
boundaries of objects; these boundaries are commonly called "edges" [1]. Edge extraction is used mostly
in image recognition, classification, or interpretation procedures because it provides a compressed
amount of information for processing [2]. The best detector type is judged by
studying the edge maps relative to each other through statistical evaluation [3]. Singh et al. [4]
concluded that the Sobel, Prewitt, Roberts, and Canny detectors provide low-quality edge maps
compared to the Laplacian of Gaussian (LoG); their comparison was made on the basis of two
parameters, PSNR and MSE.
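The PSNR and MSE criteria mentioned above can be sketched in a few lines of NumPy (a minimal illustration only, not the code used in [4]; the step-edge test image and noise level are assumptions for the example):

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between two equally sized 8-bit images."""
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    return float(np.mean((ref - tst) ** 2))

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    error = mse(reference, test)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / error)

# Synthetic example: a vertical step edge plus additive Gaussian noise
rng = np.random.default_rng(0)
ref = np.zeros((64, 64), dtype=np.uint8)
ref[:, :32] = 64
ref[:, 32:] = 192                      # mid-gray levels avoid clipping the noise
noisy = ref.astype(np.float64) + rng.normal(0.0, 10.0, ref.shape)
noisy = np.clip(noisy, 0, 255).astype(np.uint8)

print(mse(ref, noisy))   # close to the noise variance, sigma**2 = 100
print(psnr(ref, noisy))  # about 28 dB for sigma = 10
```

A lower MSE (equivalently, a higher PSNR) indicates an output closer to the reference image.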
This paper is organized into five sections as follows:
Section one introduces the topics of edge detection and scale-space analysis. Section two
reviews the theory and mathematical background used to assess the analysis of the
techniques and procedures followed throughout this work. Section three describes the images
that are used, including a specifically generated reference image, along with the other
related methods adopted to achieve the aims of this research. Section four summarizes
the results found when evaluating the success rate of the investigated edge detectors. Finally,
section five presents conclusions and suggestions for future work.
2. THEORY
In this section, the essential derivative concept for edge detectors in 2-D images (Section 2.2)
is introduced, along with a suggested approach that has been used in the edge detector
implementations.
2.1 Edge Detection
Firstly, an "edge point" represents an abrupt change in the image brightness function, which occurs
at boundaries; each edge point has a magnitude and a direction, i.e. it is a vector quantity [5]. The
edge magnitude represents the edge intensity, or strength, of the edge point, and the edge direction
depicts the tangent of the edge direction at that point.
Since the image varies in two dimensions, edge variations are accordingly calculated in two
directions, i.e. the horizontal direction i and the vertical direction j. Therefore, two partial derivative
components are needed to describe the edge gradient in both the i and j directions:

I_i = ∂I/∂i    (1)

I_j = ∂I/∂j    (2)
Also, the angle θ between I_i and I_j can be specified as:

θ = arctan(I_j / I_i)    (3)
The edge strength magnitude at each pixel (i,j) can be simply expressed as:

E(i,j) = √(I_i² + I_j²)    (4)

or

E(i,j) = |I_i| + |I_j|    (5)
To simplify image calculations and transformations, i.e. square and square root transformations,
in this work, instead of using the definition in equation (4), it is suggested that E(i,j) be expressed
as the maximum of the two directional gradients [5] and :
( , ) max ,E i j I Ii j (6)
The definition of in equation (3) is the same as before. Equation (6) has been used extensively
throughout the implementation procedure in various edge detectors. It is noticed that equation (6)
calibrates the edge strength E within its maximum allowable range, i.e. E has a value not
exceeding 255 when representing an image in 8-bit or 256 gray levels. In contrast, the standard
magnitude notation in (4) increases the value of E by a factor of √2 at equal gradients, i.e. when
|Ii| equals |Ij|, and the simple addition in (5) doubles E.
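As a concrete illustration, the maximum-of-gradients definition in equation (6) can be sketched in NumPy on a small hypothetical step image (the cropping used to align the two difference images is an implementation choice, not part of the paper):

```python
import numpy as np

# Hypothetical 8-bit grayscale image containing a vertical intensity step.
I = np.array([[10, 10, 200, 200],
              [10, 10, 200, 200],
              [10, 10, 200, 200]], dtype=np.int32)

# One-pixel differences approximating the partial derivatives.
Ii = np.abs(np.diff(I, axis=0))   # gradient in the i (row) direction
Ij = np.abs(np.diff(I, axis=1))   # gradient in the j (column) direction

# Crop to the common region so the two difference images align.
Ii, Ij = Ii[:, :-1], Ij[:-1, :]

# Eq. (6): edge strength as the maximum of the two directional gradients.
# This keeps E within 0..255 for an 8-bit image, unlike sqrt(Ii^2 + Ij^2).
E = np.maximum(Ii, Ij)
```

For this step image the maximum edge strength is 190, the size of the intensity jump, and never exceeds the 8-bit range.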
2.2 Image Edge Detectors
Various edge detectors will be evaluated according to the maximum edge map [1] that they
produce as discussed later. The robustness of the edge detectors to different levels of noise will
be investigated as well. Common edge detectors are considered in this section. The first and
second derivative approaches are discussed below.
2.2.1 First Derivative Edge Detectors
Some edge operators utilize pixel differences to approximate derivatives, for example the
Roberts operator [5]. In contrast, other edge detectors apply several rotational kernels to
approximate first derivatives, such as Sobel, Prewitt, Robinson, and Kirsch [5].
Since digital images are discrete in nature, derivatives can be approximated as differences
between pixel values. Therefore, the partial derivatives in (1) and (2) can be approximated by one
pixel difference as in (7) and (8), respectively:
Ii(i,j) = I(i+1,j) − I(i,j) (7)

Ij(i,j) = I(i,j+1) − I(i,j) (8)
The Roberts operator involves two kernels (templates) to measure the rate of change of image
intensity in two orthogonal directions as shown in the figure below.
 1   0        0   1
 0  -1       -1   0
 (a)          (b)
FIGURE 1: The Roberts edge operator kernels.
This will yield two images Ia and Ib from the kernels in Figures 1(a) and 1(b), respectively.
The absolute pixel values |Ia(i,j)| and |Ib(i,j)| are:
|Ia(i,j)| = |I(i,j) − I(i+1,j+1)| (9)

|Ib(i,j)| = |I(i,j+1) − I(i+1,j)| (10)
Consequently, the Roberts edge magnitude would be calculated by applying equation (6), using
the results from (9) and (10), as
E(i,j) = max(|Ia(i,j)|, |Ib(i,j)|) (11)
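A minimal sketch of the Roberts operator as described by equations (9)-(11), assuming a 2-D integer image (the function name and the explicit slicing are illustrative choices):

```python
import numpy as np

# The two Roberts kernels of Figure 1, shown for reference; the function
# below applies them directly as the pixel differences of eqs. (9), (10).
ka = np.array([[1, 0], [0, -1]])
kb = np.array([[0, 1], [-1, 0]])

def roberts_edge_strength(I):
    """Roberts edge magnitude via eq. (11): max of two diagonal gradients."""
    I = I.astype(np.int32)
    h, w = I.shape
    Ia = np.abs(I[:h-1, :w-1] - I[1:, 1:])   # |I(i,j) - I(i+1,j+1)|, eq. (9)
    Ib = np.abs(I[:h-1, 1:] - I[1:, :w-1])   # |I(i,j+1) - I(i+1,j)|, eq. (10)
    return np.maximum(Ia, Ib)                # eq. (11)
```

On a vertical step image the operator responds only at the step, with magnitude equal to the intensity jump.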
It is noticed that the Roberts operator uses a total of four neighbourhood pixels, or two pixels in each
orthogonal direction, to approximate the derivative, which results in high sensitivity to noise [5]. In
contrast, other first derivative edge detectors utilize up to 8-neighbourhood pixels in each 3x3
rotational kernel, for example Robinson and Kirsch [5]. For example, the first three possible
successive rotational kernels in the Robinson edge detector are:
k1 =  1  1  1     k2 =  1  1  1     k3 = -1  1  1
      1 -2  1          -1 -2  1          -1 -2  1
     -1 -1 -1          -1 -1  1          -1  1  1

The eight rotational kernels (k1 to k8) for Sobel and Prewitt are generated and listed in Appendix
A. The simple rotation [5] is applied to the 8-neighbourhood pixels by rotating them one
consistent circular step, in a clockwise or anticlockwise direction, to yield the next kernel, as shown
between k1 and k2 or k2 and k3.

An edge operator is a good high-pass filter for an image, i.e. it can detect sharp intensity changes
when the image is free of noise. However, such an operator is not considered a good edge
detector on its own, because it also detects noise pixels as edges and increases their intensity [2,7].
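The circular-step rotation that generates successive kernels can be sketched as follows; the clockwise ring ordering is an assumption, chosen because it reproduces the k1 → k2 → k3 sequence shown above:

```python
import numpy as np

def rotate_kernel(k):
    """Rotate the eight outer (8-neighbourhood) entries of a 3x3 kernel
    one circular step; the centre element stays fixed. Applying this
    repeatedly to k1 generates the full set k1..k8."""
    out = k.copy()
    # Ring positions in clockwise order starting at the top-left corner.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [k[p] for p in ring]
    # Shift every value one position around the ring.
    for p, v in zip(ring, vals[-1:] + vals[:-1]):
        out[p] = v
    return out
```

Starting from the Robinson k1 above, one application yields k2 and a second yields k3, matching the kernels listed in the text.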
A practical approach in attenuating or reducing the amount of noise is called noise smoothing or
simply smoothing [2,8]. This can be tackled with the Gaussian smoothing operator G(i,j; σ), or
Gaussian for short, which is given by [2]

G(i,j; σ) = e^(−(i² + j²) / (2σ²)) (12)
where σ is the scale parameter, the standard deviation, which defines the bandwidth of the filter
and therefore the level of smoothing [2]; the larger the parameter σ, the less the presence of
higher frequencies, and the more the blurring of the image [2]. Noise smoothing using the
Gaussian kernel is a linear filtering operation, since the image is convolved with a constant kernel
or kernel model (G) as [8]

I_G = I ∗ G (13)
The integer Gaussian filter (refer to Appendix B) is adopted in this work because it rounds the
real numbers contained in the kernel to integers. The smallest integer used in the kernel is 1.
Appendix B lists ten integer Gaussian kernels at different scales σ, which have been used
throughout this study.
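A sketch of how an integer Gaussian kernel of the Appendix B kind might be built, assuming the convention that the kernel is scaled so its smallest coefficient rounds to 1 (the exact Appendix B values may differ):

```python
import numpy as np

def integer_gaussian(sigma):
    """Integer Gaussian kernel: sample G(i,j; sigma) of eq. (12), then
    scale so the smallest coefficient rounds to 1. A sketch; the paper's
    Appendix B kernels may use different rounding."""
    half = int(np.ceil(5 * sigma / 2))   # half-width so total width ~ 5*sigma
    i, j = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(i**2 + j**2) / (2.0 * sigma**2))
    return np.rint(g / g.min()).astype(int)   # smallest entry becomes 1
```

For sigma = 1 this produces a symmetric 7x7 kernel whose corner entries are 1 and whose centre entry is the largest coefficient.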
3. METHODOLOGY
In the previous section a literature review of the topics relating to this work was presented. In this
section the generated synthetic images used in evaluating various edge detection algorithms are
introduced. Within this section some examples of edge detection techniques are implemented,
and the generated images used are also considered. Finally, the necessary comparison
criteria, which are used as an aid for arbitration, are investigated.
3.1 Gaussian Noise Deviates
Noise, in general, is defined differently according to the case it is introduced in [8]. In image and
signal processing:
• It might be considered as false contours when trying to group meaningful objects or lines.
• In edge or line detection, noise might be the spurious fluctuations of pixel values which
may be introduced by the image acquisition system [8].
In our case, the noise is chosen to be an additive amount [8] to an artificially generated [1] noise-
free image (discussed in the next section). Therefore, a generated noise image n(i,j) added to
the true image, free of noise, I(i,j) will yield In(i,j) as

In(i,j) = I(i,j) + n(i,j) (14)
The Gaussian noise deviates model is chosen in this study because:
1. The Gaussian noise values are normally distributed deviates with zero mean and a fixed
standard deviation σn, denoted n(i,j; σn). These noise values are completely independent
of each other and also of the true image I(i,j). The noise is considered to be additive.
2. The symmetrical nature of this model is a practical approximation. In contrast, impulsive noise,
otherwise known as spot or peak noise, randomly alters pixels so that their values deviate
considerably from the true values I(i,j).
Two Gaussian deviates are generated by the algorithm introduced by Press et al. [9]. This
algorithm has been developed to generate these normal deviates at the desired σn in two
dimensions i and j, yielding finally a noise image (file) of size m×n.
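The additive-noise model of equation (14) can be sketched with NumPy's normal-deviate generator standing in for the Press et al. [9] routine (the seed and clipping to the 8-bit range are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)   # NumPy normal-deviate generator
                                 # (stands in for the Press et al. routine)

def add_gaussian_noise(I, sigma_n):
    """Eq. (14): In(i,j) = I(i,j) + n(i,j), with n ~ N(0, sigma_n^2),
    independent of the true image. Result clipped to the 8-bit range."""
    n = rng.normal(loc=0.0, scale=sigma_n, size=I.shape)
    return np.clip(I.astype(float) + n, 0, 255).astype(np.uint8)
```

Adding noise at sigma_n = 10 to a flat mid-gray image leaves its mean essentially unchanged while spreading pixel values around it, which is the symmetrical behaviour argued for above.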
3.2 Generated Images
Ramalho's artificially generated images [1,10], Band A and Band B, are shown in Figures 3.1 and
3.2; these are mainly used in this study. Ramalho's Band B reference edge image, shown
in Figure 3.3, is also used as a reference for correctly and wrongly marked edge pixels; it is a binary
image, i.e. one level represents the correct edges while the other refers to wrongly marked edges.
Therefore, edge images which result from the set of edge detection algorithms (discussed in
section 3.4) are compared with the aid of this reference image to compute the percentage of
missed edges. Comparison criteria are investigated in the next section.
FIGURE 3.1: Ramalho's [1] Band A original
image.
FIGURE 3.2: Ramalho's [1] Band B original image.
Enhancement image operations are performed on the Band B reference image to yield closed
contours of one-pixel-wide edges coinciding with the centers of the corresponding pixels of the
original image [11] (the Band B original image is shown in Figure 3.2). The enhancement process
included the following three major steps:
1. Simple segmentation operation to segment the original image into a one closed contour object
and background.
2. Logically based routine to mark outside boundaries of the main object.
3. Logical and mathematical operations between Ramalho's original reference and the image
resulting from step 2.
Figure 3.4 shows the output images resulting from the steps followed to prepare this enhanced
reference.
FIGURE 3.3: Ramalho's[1] Band B Reference
image (256x256) pixels.
FIGURE 3.4: Enhanced Band B Reference image
(150x150) pixels.
3.3 Comparison Criteria
According to Pratt [12], the only performance measure of ultimate importance is the
completeness and quality of an edge map. A feasible approach introduced by Ramalho [1]
shows significant advantages of using a neural network in the comparison of edge detection
techniques, in contrast with conventional comparison techniques. Ramalho [1] developed
two ratios to compare and evaluate the quality of edges produced by a
selected set of edge detection algorithms. These ratios are designated edge quality (EQ) and
map quality (MQ), as follows:
EQ = C / (C + W) (15)

MQ = C / (C + W + M) (16)
where C refers to the correctly marked edges, W to the wrongly marked edges, and M to missed
edges. The map quality ratio is chosen in this study as the base measure for the quality of edges
produced by a set of edge detection algorithms (discussed in the next section) because (i)
edge detectors which mark a small number of correct edges are penalized [1], and (ii) it is more
accurate, since the missed points are also included; in contrast, EQ gives a misleading result in
such cases, for example when an edge operator marks a very small number of both wrong and
correct points: if W goes to zero then EQ measures a false ratio of one.
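The EQ and MQ ratios of equations (15) and (16) can be computed directly from a detected edge map and the binary reference image; a minimal sketch, assuming both maps are boolean arrays of the same size:

```python
import numpy as np

def edge_quality_ratios(detected, reference):
    """EQ = C/(C+W) and MQ = C/(C+W+M) from eqs. (15)-(16), given
    boolean edge maps: C correct, W wrong, M missed edge pixels."""
    detected = detected.astype(bool)
    reference = reference.astype(bool)
    C = np.sum(detected & reference)     # correctly marked edges
    W = np.sum(detected & ~reference)    # wrongly marked edges
    M = np.sum(~detected & reference)    # missed edges
    EQ = C / (C + W) if (C + W) else 0.0
    MQ = C / (C + W + M) if (C + W + M) else 0.0
    return EQ, MQ
```

Note how a detector that marks one correct, one wrong, and misses one edge gets EQ = 0.5 but the stricter MQ = 1/3, since MQ also penalizes the missed point.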
3.4 Edge Detection Algorithms
A diversity of edge detection algorithms and their implementations are considered in this section.
First derivative edge detectors, which are based on convolution operations, were introduced in
section two. Due to the similarities between these filters, one algorithm is introduced; the Sobel
edge detector algorithm is selected as an example from this category.
The thresholding step is commonly used in edge detection algorithms [1,8] to determine a
threshold value (τ) above which image points are considered edges [8]. Different
thresholding techniques are reviewed by Sahoo et al. [13]. The problem with this type of
traditional edge detection approach is that a low threshold produces false edges, while a high
threshold misses important edges [14]. Panda et al. [15] investigated different first and second
derivative filters to find the edge map after denoising a corrupted gray-scale image. A subjective
method was used, visually comparing the performance of the proposed derivative filter
with other existing first and second order derivative filters. For objective evaluation of the
derivative filters, the root mean square error and root mean square signal-to-noise ratio were
used.
In this research, the suggested method for noisy images requires a strategy that takes into
consideration the overall statistics of these images. The standard deviation of the Gaussian
noise should be taken into account when calculating the threshold.
The discussion above can be summarized in the following proposed edge detection algorithm:
Algorithm: Proposed Sobel edge detection based on eight Sobel rotational kernels.
{Detection}
for i = 1 to 8 do    {eight rotational kernels k1 to k8}
    I_ki = I ∗ ki    {convolve image I with kernel ki}
end for
E(i,j) = max(I_k1, ..., I_k8)    {maximum gradients}
{Thresholding}
τ = μ + λσ    {compute threshold τ; μ is the mean of E(i,j), σ is its
standard deviation, and λ is an adjustable parameter}
mark pixels with E(i,j) > τ    {mark edges which have edge strength greater than τ}
The above algorithm consists mainly of two major parts: Detection and Thresholding.
The Detection step performs eight convolution operations on the image I, applying the eight
rotational kernels k1 to k8, which model eight orientation images (at eight orientations,
i.e. 0, π/4, π/2, ..., 7π/4). Each orientation image represents a directional gradient and the
greatest gradient constitutes the edge strength E(i,j): this was discussed in section two.
In the Thresholding operation, the selection of τ is designed such that all pixels of the
edge strength E(i,j) greater than τ are marked as edges. However, it is important to determine a
method to calculate τ. A method is suggested for choosing this threshold according to a
combination of the mean μ and the standard deviation σ of the edge detector output image,
E(i,j), as discussed in the next section.
Extensive experiments have been carried out on noisy images (e.g. Band A and Band B shown in
Figures 3.1 and 3.2, respectively). It is found that a common approximation made to the
threshold value is a percentage of the highest value of E(i,j), as follows:

τ = α · max E(i,j) (17)

For example, α = 0.5 will mark all pixels of E(i,j) which are above 50% of its maximum. This
relation is misleading, especially when an image is corrupted by a large amount of noise, for
example when noise n(i,j) with large variance is added to the true image I(i,j). In
this study, five levels of noise are chosen, at σn = 1, 10, 30, 50, and 75.
The suggested method to compute τ takes into consideration the amount of additive noise.
The derived relation is given as

τ = μ + λσ (18)
where μ is the mean value of the strength image E(i,j), σ is its standard deviation, and λ is an
adjustable parameter which may give a flexible range to further control the output of an edge
detector. Here τ is taken to be the sum of μ and σ, i.e. λ = 1.
The relation in (18) has been successfully tested on different images and yields a qualitatively
good edge output. Therefore, the {Thresholding} section in the above algorithm is developed as

{Thresholding}
μ = Mean(E(i,j))    {get the mean value of image E(i,j)}
σ = StdDev(E(i,j))    {get the standard deviation of image E(i,j)}
τ = μ + σ    {compute the threshold with λ = 1}
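Putting the Detection and Thresholding parts together, the proposed algorithm can be sketched end-to-end in NumPy; this is a sketch under stated assumptions (valid-region convolution with no border padding, and a clockwise ring rotation for the kernels), not the paper's exact implementation:

```python
import numpy as np

def rotate_ring(k):
    """One circular step of the 8-neighbourhood ring; centre stays fixed."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    out = k.copy()
    vals = [k[p] for p in ring]
    for p, v in zip(ring, vals[-1:] + vals[:-1]):
        out[p] = v
    return out

def proposed_sobel(I, lam=1.0):
    """Proposed algorithm: eight rotated Sobel kernels, E = max gradient,
    threshold tau = mu + lam*sigma as in eq. (18)."""
    k = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]])   # Sobel k1
    I = I.astype(np.float64)
    h, w = I.shape
    E = np.zeros((h - 2, w - 2))
    for _ in range(8):                        # kernels k1 .. k8
        g = sum(k[a, b] * I[a:h - 2 + a, b:w - 2 + b]
                for a in range(3) for b in range(3))
        E = np.maximum(E, g)                  # keep the maximum gradient
        k = rotate_ring(k)
    tau = E.mean() + lam * E.std()            # eq. (18): tau = mu + lam*sigma
    return E > tau                            # binary edge map
```

Because each kernel's negation appears among the eight rotations, taking the maximum response automatically yields non-negative gradients; on a clean vertical step the thresholded map marks exactly the two columns straddling the step.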
In the next section the performance of different edge detectors is evaluated according to the
comparison criteria mentioned earlier in this section.
4. RESULTS AND DISCUSSION
In this section the outputs of three edge detectors are evaluated using the map quality ratio (MQ)
of section 3.3. Firstly, five levels of noise are added to the Band B image (Figure 3.2); then each
level is smoothed at gradual scales, using ten gradual smoothing steps. Finally, for each edge
detector there are five curves showing the fluctuation of the edge success rate; the percentage of
map quality MQ represents this rate.
4.1 Effect of Gaussian Noise Deviates on Edge Detectors
To achieve the foregoing objectives, the performance of each edge detector is analyzed for five
levels of Gaussian noise deviates, σn = 1, 10, 30, 50, and 75. Figure 4.1 shows the Band B
image exposed to these five levels of noise:
FIGURE 4.1: Five noise levels added to the Band B image:
(a) σn = 1, (b) σn = 10, (c) σn = 30, (d) σn = 50,
(e) σn = 75.
At each noise level there are ten gradual steps of smoothing according to the convolution kernel
width given as

W = 5σg (19)
This is an adequate width for the Gaussian filter because it subtends 98.76% of the area under the
curve [8]. The degree of smoothing is determined by the standard deviation σg of the Gaussian;
larger standard deviation Gaussians require larger convolution kernels. For clarity,
Appendix C lists in tabular form the detailed data summarizing the values of σg, kernel sizes,
percentages of correct, wrong, and missed edges, and the overall map quality MQ. Figures 4.2 to
4.4 illustrate the performance of the three edge detectors: Sobel, Prewitt, and Canny. The edge
quality decreases as the scale σg of the Gaussians increases.
In this research, the suggested method for noisy images requires a strategy that takes into
consideration the overall statistics of these images: the selection of the threshold in the
Sobel and Prewitt algorithms is improved using the sum of the mean and standard deviation,
which takes the Gaussian noise into account. The image is then run through the proposed Sobel
or Prewitt algorithm, and this process is hardly affected by noise. The MQ results indicated that
the proposed Sobel and Prewitt edge
detection algorithms outperform the Canny algorithm. The enhanced detection step in Sobel and
Prewitt performs eight convolution operations, and this enriches the detection of edge pixels.
FIGURE 4.2: Edge map quality (%) of the smoothed Sobel edge detector
corrupted by Gaussian noise, plotted against σg for
σN = 1.0, 10.08, 31.43, 50.41, and 75.41.
FIGURE 4.3: Edge map quality of smoothed Prewitt edge
detector corrupted by Gaussian noise.
FIGURE 4.4: Edge map quality of smoothed Canny edge detector corrupted by Gaussian noise.
5. CONCLUSION AND FUTURE WORK
The aims of this research have been stated and methods have been suggested to achieve the
following objectives:
• The necessary comparison criteria that assist in evaluating the performance of edge
detectors are investigated.
• Edge detector outputs ('edge marked' images) are evaluated using the map quality (MQ) ratio.
• The performance of three edge detectors is analyzed for five levels of Gaussian noise deviates,
σn = 1, 10, 30, 50, and 75. At each noise level there are ten gradual steps of smoothing
according to the convolution kernel width W = 5σg.
Future work suggests that the learning set of artificially generated images used by Ramalho can
be used to train a neural network which will (i) be utilized as an aid to the arbitration strategy,
and (ii) be used to define the ideal scale σg that corresponds to the ideal noisy image within every
level of noise. The neural network will also be used to determine the overall performance of the
edge detector itself.
6. REFERENCES
[1] M. Ramalho. "Edge detection using neural network arbitration." Ph.D. Thesis, University
of Nottingham, UK, 1996.
[2] M. García-Silvente, J.A. García, J. Fdez-Valdivia, A. Garrido. "A new edge detector
integrating scale-spectrum information." Image and Vision Computing, vol. 15, pp. 913-923,
1997.
[3] M. Juneja, P. Sandhu. "Performance Evaluation of Edge Detection Techniques for Images in
Spatial Domain." International Journal of Computer Theory and Engineering, vol. 1, no. 5,
2009.
[4] I. Singh, A. Oberoi, M. Oberoi. "Performance Evaluation of Edge Detection Techniques for
Square, Hexagon and Enhanced Hexagonal Pixel Images." International Journal of
Computer Applications, vol. 121, no. 12, 2015.
[5] M. Sonka, V. Hlavac, R. Boyle. Image Processing, Analysis, and Machine Vision. Toronto,
CA: Thomson, 2008, pp. 132-146.
[6] D. Marr, E. Hildreth. "Theory of Edge Detection." Royal Society of London B, vol. 207, no.
1167, pp. 187-217, Feb. 29, 1980.
[7] R.C. Gonzalez, P. Wintz. Digital Image Processing. Addison-Wesley Publishing Company,
1993.
[8] E. Trucco, A. Verri. Introductory Techniques for 3-D Computer Vision. Prentice Hall, 1998.
[9] W.H. Press, B.P. Flannery, S.A. Teukolsky, W.T. Vetterling. Numerical Recipes in Pascal,
Cambridge University Press, 1989.
[10] M. Ramalho, K.M. Curtis. "Neural Network Arbitration of Edge Maps," in Transputer
Applications and Systems '94, A. Gloria, M. Jane, D. Marini (Eds.), IOS Press,
Netherlands, 1994.
[11] J. Canny. "A computational approach to edge detection." IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 8, no. 6, pp. 679-698, 1986.
[12] W.K. Pratt. Digital Image Processing. New York: Wiley InterScience, 1991.
[13] P.K. Sahoo, S. Soltani, A.K.C. Wong. "A survey of thresholding techniques." Computer
Vision, Graphics, and Image Processing, vol. 41, pp. 233-260, 1988.
[14] S. Lakshmi, V. Sankaranarayanan. "A study of Edge Detection Techniques for
Segmentation Computing Approaches." IJCA Special Issue on Computer Aided Soft
Computing Techniques for Imaging and Biomedical Applications CASCT, 2010. Available:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.206.3030&rep=rep1&type=pdf
[15] C.S. Panda, S. Patnaik. "Filtering Corrupted Image and Edge Detection in Restored
Grayscale Image Using Derivative Filters." International Journal of Image Processing (IJIP),
vol. 3, no. 3, pp. 105-119, 2009. Available:
http://www.cscjournals.org/manuscript/Journals/IJIP/Volume3/Issue3/IJIP-28.pdf