Intelligent Transportation Systems (ITS) are the subject of worldwide economic competition. They apply new information and communication technologies to the transport sector to make infrastructures more efficient, more reliable, and more ecological. License Plate Recognition (LPR) is the key module of these systems, and License Plate Localization (LPL) is its most important stage, because it determines the speed and robustness of the whole module. During this step the algorithm must process the image and overcome several constraints, such as climatic and lighting conditions, the variety of sensors and viewing angles, the lack of standardization among license plates, and real-time processing requirements. This paper presents a classification and comparison of LPL algorithms and describes the advantages, disadvantages, and improvements made by each of them.
This document provides an overview of image processing techniques for traffic applications. It discusses automatic lane finding using color-based and texture-based segmentation as well as feature-driven approaches. Object detection methods such as thresholding, edge detection with the Canny operator, and background differencing are also covered. Additionally, the document proposes an emergency vehicle detection system that uses red beacon detection and frequency analysis to override normal traffic light patterns when an emergency vehicle is detected.
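Two of the building blocks named above are standard OpenCV operations; a minimal illustrative sketch (function name and threshold values are assumptions, not taken from the document):

```python
import cv2

def edges_and_motion(frame_gray, background_gray, diff_thresh=30):
    """Canny edge map plus a moving-object mask from background differencing."""
    edges = cv2.Canny(frame_gray, 100, 200)
    diff = cv2.absdiff(frame_gray, background_gray)   # background differencing
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return edges, mask
```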
Comparison of various Image Registration Techniques with the Proposed Hybrid ... (idescitation)
Image registration is the process of transforming different sets of image data into one coordinate system. Registration is an important part of image processing, used for matching pictures obtained at different time intervals or from various sensors. A broad range of registration techniques has been developed for the various types of image data; these techniques have been studied independently for many applications, resulting in a large body of results. Vision is the most advanced of the human senses, so images naturally play one of the most important roles in human perception. Image registration is one of the branches encompassed by the diverse field of digital image processing. Due to its importance in many application areas, as well as its complicated nature, image registration is the topic of much recent research. Registration algorithms compute transformations that set correspondence between the two images. In this paper a survey of various image registration techniques is presented, and the different techniques are compared with the proposed system of the project.
This document summarizes a research paper that proposes a novel approach for enhancing digital images using morphological operators. The approach aims to improve contrast in images with poor lighting conditions. It uses two morphological operators based on Weber's law - the first employs blocked analysis while the second uses opening by reconstruction to define a multi-background. The performance of the proposed operators is evaluated on images with various backgrounds and lighting conditions. Key steps include dividing images into blocks, estimating minimum/maximum intensities in each block to determine background criteria, and applying contrast enhancement transformations based on the criteria. Opening by reconstruction is also used to approximate image background without modifying structures. Experimental results demonstrate the approach enhances images with poor lighting.
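As a rough illustration of the block-wise analysis described above, the following sketch divides an image into blocks, uses each block's minimum intensity as the background criterion, and applies a Weber-law-style logarithmic stretch. The block size, gain, and exact transform are assumptions; the paper's operator definitions differ in detail:

```python
import numpy as np

def block_contrast_enhance(img, block=32, k=30.0):
    """Block-wise contrast enhancement sketch: each block's minimum
    intensity is the local background estimate, and a Weber-law-style
    logarithmic stretch is applied relative to that background."""
    out = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = img[y:y + block, x:x + block].astype(np.float64)
            bg = patch.min()                      # local background criterion
            out[y:y + block, x:x + block] = bg + k * np.log1p(patch - bg)
    out = 255.0 * (out - out.min()) / (np.ptp(out) + 1e-9)  # rescale to [0, 255]
    return out.astype(np.uint8)
```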
Leader follower formation control of ground vehicles using camshift based gui... (ijma)
Autonomous ground vehicles have been designed for formation control that relies on ranging and bearing information received from a forward-looking camera. A visual guidance control algorithm is designed in which real-time image processing provides the feedback signals. The vision subsystem and the control subsystem work in parallel to accomplish formation control. Proportional navigation and line-of-sight guidance laws are used to estimate the range and bearing of the leader vehicle from the vision subsystem. The vision detection and localization algorithms used here are similar to approaches for many computer vision tasks, such as face tracking and detection, that are based on color and texture features, together with the non-parametric Continuously Adaptive Mean-Shift (CAMShift) algorithm to keep track of the leader. This is proposed for the first time in the leader-follower framework. The algorithms are simple but effective in real time and provide an alternative to traditional approaches such as the Viola-Jones algorithm. Further, to stabilize the follower to the leader trajectory, a sliding mode controller is used to dynamically track the leader. The performance is demonstrated in simulation and in practical experiments.
A completed modeling of local binary pattern operator (Win Yu)
This document presents the completed local binary pattern (CLBP) operator for texture classification. CLBP generalizes and completes the local binary pattern (LBP) by using a local difference sign-magnitude transform to encode the missing texture information not captured by LBP. The CLBP operator fuses three codes - CLBP_C for the center pixel, CLBP_S for the signs of differences, and CLBP_M for the magnitudes. Experiments on the Outex texture database show CLBP achieves much better classification accuracy than LBP and other state-of-the-art methods.
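A minimal sketch of the sign-magnitude decomposition that underlies CLBP, for radius-1 neighbourhoods; the thresholding choices (global means for CLBP_M and CLBP_C) follow the usual CLBP formulation, but all names and details here are illustrative:

```python
import numpy as np

def clbp_components(img):
    """Sign/magnitude decomposition behind CLBP for the 8 radius-1
    neighbours of each interior pixel of a grayscale image."""
    f = img.astype(np.int32)
    h, w = f.shape
    c = f[1:-1, 1:-1]                              # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    signs, mags = [], []
    for dy, dx in offsets:
        n = f[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        d = n - c                                  # local difference
        signs.append((d >= 0).astype(np.uint8))    # sign component
        mags.append(np.abs(d))                     # magnitude component
    clbp_s = sum(s.astype(np.int32) << i for i, s in enumerate(signs))  # = classic LBP
    t = np.mean(mags)                              # global magnitude threshold
    clbp_m = sum((m >= t).astype(np.int32) << i for i, m in enumerate(mags))
    clbp_c = (c >= f.mean()).astype(np.uint8)      # centre code
    return clbp_s, clbp_m, clbp_c
```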
Performance analysis of contourlet based hyperspectral image fusion method (ijitjournal)
Recently, the contourlet transform has been widely used in hyperspectral image fusion due to its advantages, such as high directionality and anisotropy, and studies show that contourlet-based fusion methods perform better than existing conventional methods, including wavelet-based fusion methods. Few studies have comparatively analyzed the performance of contourlet-based fusion methods; furthermore, no research has analyzed contourlet-based fusion methods by focusing on their unique transform mechanisms, and no research has compared the original contourlet transform with its upgraded versions. In this paper, we investigate three different kinds of contourlet transform: i) the original contourlet transform, ii) the nonsubsampled contourlet transform, and iii) the contourlet transform with sharp frequency localization. The latter two transforms were developed to overcome the major drawbacks of the original contourlet transform, so it is necessary and beneficial to see how they perform in the context of hyperspectral image fusion. The results of our comparative analysis show that the latter two transforms perform better than the original contourlet transform in terms of increasing spatial resolution and preserving spectral information.
Enhancement performance of road recognition system of autonomous robots in sh... (sipij)
This document summarizes a research paper that proposed an algorithm to improve road recognition for autonomous vehicles in shadow scenarios. The researchers conducted experiments to test their algorithm's performance on key metrics like true positive rate and error rate. Their algorithm first converted images to HSV color space to detect shadows, then used normalized difference index and morphological operations to eliminate shadow effects before segmentation and classification. Test results showed their algorithm enhanced road recognition in the presence of shadows, advancing autonomous vehicle navigation capabilities.
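A hedged sketch of the shadow-masking step described above, assuming simple fixed thresholds in HSV space (the paper's normalized difference index and exact morphology are not reproduced):

```python
import cv2
import numpy as np

def shadow_mask(bgr, v_thresh=80, s_thresh=60):
    """Mark pixels with low brightness and relatively high saturation as
    shadow, then clean the mask with morphological opening and closing."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    _, s, v = cv2.split(hsv)
    mask = ((v < v_thresh) & (s > s_thresh)).astype(np.uint8) * 255
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    return mask
```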
A Novel Feature Extraction Scheme for Medical X-Ray Images (IJERA Editor)
X-ray images are grayscale images with almost the same textural characteristics, so conventional texture or color features cannot be used for appropriate categorization in medical X-ray image archives. This paper presents a novel combination of methods, GLCM, LBP, and HOG, for extracting distinctive invariant features from X-ray images belonging to the IRMA (Image Retrieval in Medical Applications) database that can be used to perform reliable matching between different views of an object or scene. GLCM represents the distributions of the intensities and information about the relative positions of neighboring pixels of an image. The LBP features are invariant to image scale and rotation, change in 3D viewpoint, addition of noise, and change in illumination. A HOG feature vector represents the local shape of an object, carrying edge information over plural cells. These features have been exploited in different algorithms for automatic classification of medical X-ray images. Excellent experimental results obtained in real problems of rotation invariance, at particular rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation-invariant local binary patterns.
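The three descriptors can be combined along the lines described above; a sketch using scikit-image, where all parameter choices (distances, angles, cell sizes, bin counts) are illustrative assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern, hog

def xray_descriptor(gray):
    """Concatenate GLCM statistics, an LBP histogram, and HOG into one
    feature vector; expects a uint8 grayscale image."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = np.hstack([graycoprops(glcm, p).ravel()
                            for p in ("contrast", "homogeneity",
                                      "energy", "correlation")])
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_feats = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                    cells_per_block=(2, 2))
    return np.concatenate([glcm_feats, lbp_hist, hog_feats])
```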
Multiple Ant Colony Optimizations for Stereo Matching (CSCJournals)
The stereo matching problem, which obtains the correspondence between right and left images, can be cast as a search problem. The matching of all candidates in the same line forms a 2D optimization task, and two-dimensional (2D) optimization is an NP-hard problem. There are two characteristics in stereo matching: firstly, the local optimization process along each scan-line can be done concurrently; secondly, there are relationships among adjacent scan-lines that can be explored to promote matching correctness. Although many methods such as GCPs and GGCPs have been proposed, these so-called ground control points may not actually be ground truth. The relationship among adjacent scan-lines is a posteriori, that is, it can only be discovered after every optimization is finished. Multiple Ant Colony Optimization (MACO) is efficient for solving large-scale problems. It is a proper way to settle the stereo matching task with a constructed MACO, in which the master layer evaluates the sub-solutions and propagates the reliability after every local optimization is finished. Besides, whether the ordering and uniqueness constraints should be considered during the optimization is discussed, and the proposed algorithm is proved to guarantee convergence to the optimal matched pairs.
This document outlines the implementation plan and milestones for a final year project on face recognition using local binary pattern (LBP) features. The project involves enrolling images by extracting LBP histograms and storing them in a matrix. It then tests the matching algorithm on low quality images and evaluates for false and positive matches. The conclusion reflects on lessons learned about LBP algorithms and face recognition challenges with image quality.
1) The document discusses a final year project on face recognition using local features such as Gabor and LBP. 2) It reviews literature on biometrics and common face recognition algorithms like PCA, LDA, and LBP. 3) The methodology section explains how LBP works by comparing pixel values to label images and extracting histograms to represent facial features.
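A minimal sketch of the enrolment and matching steps described in these summaries, assuming uniform LBP histograms as face signatures and chi-squared nearest-neighbour matching (`gallery_images` is a hypothetical list of grayscale arrays):

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1):
    """Uniform-LBP histogram used as a compact face signature."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def chi2(a, b, eps=1e-10):
    """Chi-squared distance between two histograms."""
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

# Enrolment: stack one histogram per gallery image into a matrix.
# gallery = np.vstack([lbp_histogram(g) for g in gallery_images])
# Matching: a probe is assigned to the gallery row with the smallest chi2.
```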
PDE BASED FEATURES FOR TEXTURE ANALYSIS USING WAVELET TRANSFORM (IJCI JOURNAL)
This document summarizes a research paper that proposes a novel method for texture analysis using wavelet transforms and partial differential equations (PDEs). The method involves applying wavelet transforms to images to obtain directional information. Anisotropic diffusion, a PDE technique, is then used on the directional information to compute a texture approximation. Various statistical features are extracted from the texture approximation. Linear discriminant analysis enhances class separability of the features before classification using k-nearest neighbors. The method is evaluated on the Brodatz texture dataset and results show it achieves better classification accuracy than other methods while having lower computational cost.
A Robust Object Recognition using LBP, LTP and RLBP (Editor IJMTER)
The document proposes two new feature sets called Discriminative Robust Local Binary Pattern (DRLBP) and Discriminative Robust Local Ternary Pattern (DRLTP) for object recognition. It summarizes the drawbacks of existing features like Local Binary Pattern (LBP), Local Ternary Pattern (LTP), and Robust LBP (RLBP) that do not differentiate between weak and strong contrast patterns or brightness reversal. The proposed DRLBP and DRLTP combine edge and texture information into a single representation to better analyze objects and address the issues with prior features. They are designed to improve object recognition performance.
IRJET-A Review on Implementation of High Dimension Colour Transform in Domain... (IRJET Journal)
This document reviews algorithms for detecting salient regions in images using high dimensional color transforms. It summarizes several existing methods that use color contrast, frequency analysis, and superpixel segmentation. A key method discussed creates a saliency map by finding the optimal linear combination of color coefficients in a high dimensional color space. This allows more accurate detection of salient objects versus methods using only RGB color. The performance of this high dimensional color transform method is improved by also utilizing relative location and color contrast between superpixels as learned features.
REGION CLASSIFICATION BASED IMAGE DENOISING USING SHEARLET AND WAVELET TRANSF... (csandit)
This paper proposes a neural network based region classification technique that classifies regions in an image into two classes: textures and homogeneous regions. The classification is based on training a neural network with statistical parameters belonging to the regions of interest. This classification method is applied to image denoising by applying different transforms to the two classes: texture is denoised by shearlets, while homogeneous regions are denoised by wavelets. The denoised results show better performance than either of the transforms applied independently. The proposed algorithm successfully reduces the mean square error of the denoised result and provides perceptually good results.
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ... (IRJET Journal)
This document proposes a method to detect digital image forgeries using local binary patterns (LBP) and histogram of oriented gradients (HOG). It extracts LBP features from the input image, then applies HOG to the LBP features. These combined features are classified using a support vector machine (SVM) as authentic or tampered. Testing on CASIA datasets achieved detection rates of 92.3% for CASIA-1 and 96.1% for CASIA-2, outperforming other existing methods. The method is effective at forgery detection while having reduced time complexity.
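A sketch of the LBP-then-HOG cascade described above, with an SVM classifier; the parameters and the training variables `X_imgs` and `y` are hypothetical:

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def forgery_features(gray):
    """LBP-then-HOG cascade: HOG is computed on the LBP-coded image
    rather than on the raw grayscale image."""
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    return hog(lbp, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2))

# Hypothetical training: X_imgs is a list of grayscale arrays and y holds
# 0 (authentic) / 1 (tampered) labels.
# clf = SVC(kernel="rbf").fit(np.vstack([forgery_features(g) for g in X_imgs]), y)
```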
Implementation of High Dimension Colour Transform in Domain of Image Processing (IRJET Journal)
This document discusses implementing a high-dimensional color transform method for salient region detection in images. It aims to extract salient regions by designing a saliency map using global and local image features. The creation of the saliency map involves mapping colors from RGB space to a high-dimensional color space to find an optimal linear combination of color coefficients. This allows composing an accurate saliency map. The performance is further improved by using relative location and color contrast between superpixels as features and resolving the saliency estimation from an initial trimap using a learning-based algorithm. The method is analyzed on a dataset of training images.
This document presents a novel fuzzy feature extraction method using Local Binary Patterns (LBP) for face recognition. It begins with an introduction to face recognition challenges and common approaches. It then describes using LBP to extract local features from divided image windows. Features are extracted by computing membership values based on pixel intensities and taking the product with the central pixel value. Local and central pixel information values are combined as the total information for classification using Support Vector Machine (SVM) or K-Nearest Neighbor classifiers. Experimental results on two databases show the proposed approach achieves better recognition rates than other methods. The conclusion is that accounting for both central and neighborhood pixel information is effective for images with expression, illumination, and pose variations.
IRJET- A Novel Gabor Feed Forward Network for Pose Invariant Face Recogni... (IRJET Journal)
The document proposes an Analytic Gabor Feed Forward Network (AGFN) for pose invariant face recognition. AGFN uses a single hidden layer to efficiently extract Gabor features from raw face images without computationally expensive convolutions. Features from multiple orientations and scales are fused at the output layer. The network is trained using total error rate minimization to find a globally optimal solution without iterations. Experiments on several public face datasets showed the AGFN approach achieved accurate face recognition while being computationally efficient.
Improved algorithm for road region segmentation based on sequential monte car... (csandit)
In recent years, many researchers and car makers have put intensive effort into the development of autonomous driving systems. Since visual information is the main modality used by a human driver, a camera mounted on a moving platform is a very important kind of sensor, and various computer vision algorithms to handle the vehicle's surrounding situation are under intensive research. Our final goal is to develop a vision-based lane detection system able to handle various types of road shapes, working on both structured and unstructured roads, ideally in the presence of shadows. This paper presents a modified road region segmentation algorithm based on sequential Monte Carlo estimation. A detailed description of the algorithm is given, and evaluation results show that the proposed algorithm outperforms the segmentation algorithm developed as part of our previous work, as well as a conventional algorithm based on colour histograms.
This document proposes a novel video scaling algorithm based on linear interpolation with quality enhancement. The algorithm generates two reference frames from the original video frame at different resolutions using Lanczos3 filtering. It then interpolates two intermediate frames from the reference frames. The scaled output frame is obtained through linear interpolation of the intermediate frames. Finally, sharpening is applied as an enhancement step to improve the quality of the scaled frame. Experimental results on various test images show the proposed algorithm achieves PSNR values over 45dB, outperforming other interpolation techniques like bilinear and b-spline interpolation in terms of image quality of the scaled frames.
Medial Axis Transformation based Skeletonization of Image Patterns using Image... (IOSR Journals)
1) The document discusses extracting the medial axis transform (MAT) of an image pattern using the Euclidean distance transform. The image is first converted to binary, then the Euclidean distance transform is used to compute the distance of each non-zero pixel to the closest zero pixel.
2) The medial axis transform represents the core or skeleton of an image pattern. There are different algorithms for extracting the skeleton or medial axis, including sequential and parallel algorithms. The skeleton provides a simple representation that preserves topological and size characteristics of the original shape.
3) The document provides background on medial axis transforms and different skeletonization algorithms. It then describes preparing the binary image and applying the Euclidean distance transform to extract the MAT and skeleton
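A minimal sketch of this pipeline using SciPy and scikit-image, assuming a simple fixed binarization threshold:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import medial_axis

def mat_from_pattern(gray, thresh=128):
    """Binarise, compute the Euclidean distance of each foreground pixel
    to the nearest background pixel, and keep distances on the skeleton."""
    binary = gray > thresh
    dist = distance_transform_edt(binary)   # distance to closest zero pixel
    skel = medial_axis(binary)              # skeleton of the pattern
    return np.where(skel, dist, 0)          # medial axis transform
```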
Image hashing is an efficient way to handle the digital data authentication problem; an image hash is a quality summarization of image features in a compact form. In this paper, a modified center-symmetric local binary pattern (CSLBP) image hashing algorithm is proposed. Unlike the 16-bin CSLBP histogram, the modified CSLBP generates an 8-bin histogram, without compromising quality, to produce a compact hash. It has been found that uniform quantization on a histogram with more bins results in more precision loss. To overcome this quantization loss, the modified CSLBP generates two histograms of four bins each: uniform quantization on a 4-bin histogram results in less precision loss than on a 16-bin histogram. The first histogram represents the nearest neighbours and the second the diagonal neighbours. To enhance discrimination power, different weight factors are used during histogram generation. For the nearest and the diagonal neighbours, two local weight factors are used: one is the Standard Deviation (SD) and the other is the Laplacian of Gaussian (LoG). Standard deviation represents a spread of data, capturing local variation from the mean; LoG is a second-order derivative edge detection operator that detects edges well in the presence of noise. The proposed algorithm is resilient to various kinds of attacks. The method is tested on a database of malicious and non-malicious images using benchmarks like NHD and ROC, which confirms the theoretical analysis. The experimental results show good performance of the proposed method under various attacks despite the short hash length.
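For reference, a sketch of the classic 16-bin CSLBP code that the modified scheme starts from; the 2x4-bin split and the SD/LoG weighting described above are not reproduced, and the threshold t is an assumption:

```python
import numpy as np

def cslbp_code(img, t=3):
    """Classic centre-symmetric LBP over the 8-neighbourhood: each of the
    4 opposing neighbour pairs contributes one bit, giving codes 0..15."""
    f = img.astype(np.float64)
    h, w = f.shape
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(pairs):
        a = f[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = f[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        code |= (a - b > t).astype(np.uint8) << bit   # one bit per pair
    return code
```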
Number Plate Recognition of Still Images in Vehicular Parking System (IRJET Journal)
This document discusses a proposed method for number plate recognition in vehicle parking systems using image processing techniques. It begins with an abstract that outlines the increasing need for automated vehicle management systems due to rising vehicle and traffic volumes. It then provides an overview of the key steps in number plate recognition systems - plate detection, character segmentation, and character recognition. The proposed method uses profile projection for segmentation and neural networks for recognition. The document reviews several existing plate detection methods and their limitations. It proposes a new method that uses edge detection and morphological operations to isolate the license plate from an image while removing noise. Finally, it discusses factors to consider for license plate detection and different image segmentation techniques used in existing automatic number plate recognition systems.
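An illustrative edge-plus-morphology plate localisation in the spirit of the method proposed above; the Canny thresholds, kernel size, and aspect-ratio gate are assumptions:

```python
import cv2

def plate_candidates(bgr):
    """Find plate-shaped regions via Canny edges, morphological closing,
    and an aspect-ratio filter on contour bounding boxes."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    # close vertical character strokes into a plate-shaped blob
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 5))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0:   # plate-like aspect ratio
            boxes.append((x, y, w, h))
    return boxes
```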
Implementation of Fuzzy Logic for the High-Resolution Remote Sensing Images w... (IOSR Journals)
This document describes an implementation of fuzzy logic for high-resolution remote sensing image classification with improved accuracy. It discusses using an object-based approach with fuzzy rules to classify urban land covers in a satellite image. The approach involves image segmentation using k-means clustering or ISODATA clustering. Features are then extracted from the image objects and fuzzy logic is applied to classify the objects based on membership functions. The method was tested on different sensor and resolution images in MATLAB and showed improved classification accuracy over other techniques, achieving lower entropy in results. Future work planned includes designing an unsupervised classification model combining k-means clustering and fuzzy-based object orientation.
With the development of multimedia technology, Content-Based Image Retrieval (CBIR) has become one of the outstanding areas for retrieving images from a large database collection. The feature vectors of the query image are compared with the feature vectors of the database images to get matching images. It is widely observed that no single algorithm is effective at extracting all the different kinds of natural images. Thus an intensive analysis of certain color, texture, and shape extraction techniques is carried out to identify an efficient CBIR technique suited to a particular type of image. The extraction of an image includes feature description and feature extraction. In this paper, we propose a Color Layout Descriptor (CLD), Gray Level Co-occurrence Matrix (GLCM), and Marker-Controlled Watershed Segmentation feature extraction technique that retrieves matching images based on the similarity of color, texture, and shape within the database. For performance analysis, the image retrieval timing of the proposed technique is calculated and compared with each of the individual features.
A New Approach for Segmentation of Fused Images using Cluster based Thresholding (IDES Editor)
This paper proposes a new segmentation technique based on a cluster-based method. Multi-source medical images such as MRI (Magnetic Resonance Imaging), CT (Computed Tomography), and PET (Positron Emission Tomography) are fused and then segmented using a cluster-based thresholding approach. Extracting the edge details of an image has become an essential technique in clinical and research-oriented applications, and more edge details of the fused image are obtainable with this method. The objective of the clustering process is to partition the fused image coefficients into a number of clusters having similar features. These features are used to generate the threshold value for further segmentation of the fused image. Finally, the segmented output is compared with the standard FCM method and a modified Otsu method. Experimental results have shown that the proposed cluster-based thresholding method is able to effectively extract important edge details of the fused image.
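A hedged sketch of deriving thresholds from clustered coefficients, assuming k-means on intensities with thresholds placed midway between sorted cluster centres (the paper's clustering and features may differ):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_thresholds(fused, k=3):
    """Cluster fused-image coefficients into k groups and place
    thresholds midway between consecutive sorted cluster centres."""
    vals = fused.reshape(-1, 1).astype(np.float64)
    km = KMeans(n_clusters=k, n_init=10).fit(vals)
    centers = np.sort(km.cluster_centers_.ravel())
    return (centers[:-1] + centers[1:]) / 2.0   # k-1 threshold values
```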
Matching algorithm performance analysis for autocalibration method of stereo ... (TELKOMNIKA JOURNAL)
Stereo vision is one of the interesting research topics in the computer vision field. Two cameras are used to generate a disparity map, resulting in a depth estimate. Camera calibration is the most important step in stereo vision: the calibration step generates the intrinsic parameters of each camera needed to obtain a better disparity map. In general, the calibration process is done manually using a chessboard pattern, but this process is an exhausting task. Self-calibration is an important ability required to overcome this problem, and it requires a robust matching algorithm to find the key features between images used as references. The purpose of this paper is to analyze the performance of three matching algorithms for the autocalibration process: SIFT, SURF, and ORB. The results show that SIFT performs better than the other methods.
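A minimal ORB-based matching sketch of the kind compared in the paper (SIFT and SURF follow the same detect/compute/match pattern in OpenCV); the ratio-test value is an assumption:

```python
import cv2

def match_pair(img_left, img_right, ratio=0.75):
    """Detect ORB keypoints in both images and keep distinctive
    correspondences via Lowe's ratio test."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp1, kp2, good
```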
This document discusses techniques for image segmentation and edge detection. It proposes a generalized boundary detection method called Gb that combines low-level and mid-level image representations in a single eigenvalue problem to detect boundaries. Gb achieves state-of-the-art results at low computational cost. Soft segmentation is also introduced to improve boundary detection accuracy with minimal extra computation. Common methods for edge detection are described, including gradient-based, texture-based, and projection profile-based approaches. Improved Harris and corner detection algorithms are presented to more accurately detect edges and corners. The output of Gb using soft segmentations as input is shown to correlate well with occlusions and whole object boundaries while capturing general boundaries.
Performance of Efficient Closed-Form Solution to Comprehensive Frontier Exposure (iosrjce)
This document discusses boundary detection techniques for images. It proposes a generalized boundary detection method (Gb) that combines low-level and mid-level image representations in a single eigenvalue problem to detect boundaries. Gb achieves state-of-the-art results at low computational cost. Soft segmentation and contour grouping methods are also introduced to further improve boundary detection accuracy with minimal extra computation. The document presents outputs of Gb on sample images and concludes that Gb effectively detects boundaries in a principled manner by jointly resolving constraints from multiple image interpretation layers in closed form.
Validation Study of Dimensionality Reduction Impact on Breast Cancer Classifi... (ijcsit)
A fundamental problem in machine learning is identifying the most representative subset of features from which we can construct a predictive model for a classification task. This paper presents a validation study of the effect of dimensionality reduction on the classification accuracy of mammographic images. The studied dimensionality reduction methods were locality-preserving projection (LPP), locally linear embedding (LLE), Isometric Mapping (ISOMAP), and spectral regression (SR). We achieved high classification rates: in some combinations the classification rate was 100%, but in most cases it is about 95%. It was also found that the classification rate increases with the size of the reduced space, and the optimal value of the space dimension is 60. We proceeded to validate the obtained results by measuring validation indices such as the Xie-Beni index, the Dunn index, and the Alternative Dunn index. The measurement of these indices confirms that the optimal value of the reduced space dimension is d=60.
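One reduction-plus-classification combination from the study can be sketched with scikit-learn; the choice of k-NN as the classifier and the neighbour count are assumptions, with d=60 taken from the reported optimum:

```python
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def isomap_knn(d=60, n_neighbors=5):
    """Reduce to d dimensions with ISOMAP, then classify with k-NN."""
    return make_pipeline(Isomap(n_components=d),
                         KNeighborsClassifier(n_neighbors=n_neighbors))

# Hypothetical usage with flattened mammographic feature matrices:
# clf = isomap_knn().fit(X_train, y_train)
# rate = clf.score(X_test, y_test)
```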
This document summarizes an article that introduces a license plate detection system. It discusses the key steps in a license plate recognition system: image capture, preprocessing, character segmentation, and character recognition. For each step, it reviews different algorithms used in existing license plate recognition systems. In particular, it discusses techniques for preprocessing like binarization, license plate detection using connected component labeling and Hough transforms, skew correction using Hough transforms, and character segmentation using projection profiles. It notes that most systems use neural networks for character recognition.
An efficient image segmentation approach through enhanced watershed algorithm (Alexander Decker)
This document proposes an efficient image segmentation approach combining an enhanced watershed algorithm and color histogram analysis. The watershed algorithm is applied to preprocessed images after merging the results with an enhanced edge detection. Over-segmentation issues are addressed through a post-processing step applying color histogram analysis to each segmented region, improving overall performance. The document provides background on image segmentation techniques, reviews related work applying watershed algorithms, and discusses challenges like over-segmentation that watershed approaches can face.
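A minimal marker-controlled watershed sketch in the spirit of the approach above; the gradient operator and the simple intensity-based marker selection are assumptions (the paper combines enhanced edge detection and histogram-based post-processing):

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_regions(gray):
    """Flood a gradient relief from intensity-based seed markers."""
    gradient = sobel(gray)                       # edge-enhanced relief
    markers = np.zeros(gray.shape, dtype=np.int32)
    markers[gray < np.percentile(gray, 20)] = 1  # dark-region seeds
    markers[gray > np.percentile(gray, 80)] = 2  # bright-region seeds
    return watershed(gradient, markers)          # labelled regions
```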
ROLE OF HYBRID LEVEL SET IN FETAL CONTOUR EXTRACTIONsipij
Image processing technologies may be employed for quicker and more accurate diagnosis in the analysis and feature extraction of medical images. Here, an existing level set algorithm is modified and employed to extract the contour of the fetus in an image. In the traditional approach, fetal parameters are extracted manually from ultrasound images. An automatic technique is highly desirable for obtaining fetal biometric measurements, due to problems in the traditional approach such as lack of consistency and accuracy. The proposed approach utilizes global and local region information for fetal contour extraction from ultrasonic images. The main goal of this research is to develop a new methodology to aid the analysis and feature extraction.
Unimodal Multi-Feature Fusion and one-dimensional Hidden Markov Models for Lo...IJECEIAES
The objective of low-resolution face recognition is to identify faces from small or poor-quality images with varying pose, illumination, expression, etc. In this work, we propose a robust low-resolution face recognition technique based on one-dimensional Hidden Markov Models. Features of each facial image are extracted in three steps: first, both Gabor filters and the Histogram of Oriented Gradients (HOG) descriptor are calculated. Second, the size of these features is reduced using the Linear Discriminant Analysis (LDA) method in order to remove redundant information. Finally, the reduced features are combined using the Canonical Correlation Analysis (CCA) method. Unlike existing techniques using HMMs, in which each state represents one facial region (eyes, nose, mouth, etc.), the proposed system employs 1D-HMMs without any prior knowledge about the localization of regions of interest in the facial image. The performance of the proposed method is evaluated on the AR database.
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research (IJCER) is an international online journal published monthly in English. The journal publishes original research work that contributes significantly to furthering scientific knowledge in engineering and technology.
There exists a plethora of algorithms for image segmentation, and several issues relate to the execution time of these algorithms. Image segmentation can be cast as a label relabeling problem under a probability framework. To estimate the label configuration, an iterative optimization scheme is implemented to alternately carry out the maximum a posteriori (MAP) and maximum likelihood (ML) estimations. In this paper the technique is modified so that it performs segmentation within a stipulated time period. Extensive experiments show that the results obtained are comparable with those of existing algorithms, while executing faster and giving automatic segmentation without any human intervention. Its results match image edges closely, in line with human perception.
NEW LOCAL BINARY PATTERN FEATURE EXTRACTOR WITH ADAPTIVE THRESHOLD FOR FACE R...gerogepatton
This paper presents a feature extraction method built on the local binary pattern (LBP) structure. The proposed method introduces a new adaptive thresholding function to the LBP method, replacing the fixed thresholding at zero. The introduced function is a variation of the Gaussian Distribution Function (GDF). The proposed technique uses the global and local information of the image and image blocks to perform the adaptation. The adaptive function enriches the image's features by preserving the amplitude of the pixel difference, rather than considering only its sign in the LBP coding process. This improves the accuracy of the face recognition system by providing additional information. The proposed method demonstrates a higher recognition rate (97.75%) than other presented techniques, and was also tested with various types and levels of noise to demonstrate its effectiveness. The Extended Yale B dataset was used for the testing, along with a Support Vector Machine (SVM) as classifier.
Image Blur Detection with 2D Haar Wavelet Transform and Its Effect on Skewed ...Vladimir Kulyukin
This document presents an algorithm for image blur detection using 2D Haar wavelet transforms. The algorithm splits images into tiles and applies the 2D Haar wavelet transform to each tile to detect regions with pronounced changes. Tiles with similar changes are grouped into clusters. Images with large clusters are classified as sharp, while images with small clusters are classified as blurred. The algorithm is integrated with a skewed barcode scanning algorithm to filter out blurred images, improving scanning success rates. Experimental results found it performed comparably or better than two other blur detection algorithms and positively impacted skewed barcode scanning when integrated.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document provides a summary of feature extraction techniques in both the spatial and transformed domains. In the spatial domain, traditional methods like Canny, Sobel, and Harris detectors are discussed. The paper then focuses on the Scale Invariant Feature Transform (SIFT) technique developed by David Lowe, which extracts local features that are invariant to scale, rotation, and illumination changes. Several variants of SIFT are also examined, including PCA-SIFT, ASIFT, and A2SIFT, which improve robustness, distinctiveness, and performance. In the transformed domain section, the document explores techniques that use Fourier and wavelet transforms for feature extraction.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document provides a summary and analysis of various feature extraction techniques in both the spatial and transformed domains. In the spatial domain, traditional methods like Canny, Sobel, and Harris detectors are discussed. The paper then focuses on the Scale Invariant Feature Transform (SIFT) algorithm developed by David Lowe, which extracts key local image features that are invariant to scale, rotation, and illumination changes. Several variants of SIFT are also examined, including PCA-SIFT, ASIFT, and A2SIFT, which improve upon SIFT's robustness and performance. In the transformed domain section, the document explores feature extraction methods that use transforms like the Fourier and wavelet transforms to extract image features.
IJERD (www.ijerd.com) International Journal of Engineering Research and Devel...IJERD Editor
This document provides a summary of feature extraction techniques in both the spatial and transformed domains. In the spatial domain, traditional methods like Canny, Sobel, and Harris detectors are discussed. The paper finds these methods extract too much irrelevant information or lose some prominent features. The Scale Invariant Feature Transform (SIFT) is presented as addressing these issues by detecting local features in a scale and rotation invariant manner. In the transformed domain, techniques using Fourier and wavelet transforms are explored. The paper provides an overview of various feature extraction methods and their advantages and limitations.
Importance of Mean Shift in Remote Sensing SegmentationIOSR Journals
1) Mean shift is a non-parametric clustering technique that can segment remote sensing images into homogeneous regions without prior knowledge of the number of clusters or constraints on cluster shape.
2) The document presents a case study demonstrating mean shift can segment an image containing oil storage tanks into distinct regions faster than level set segmentation.
3) Mean shift is shown to be well-suited for remote sensing image segmentation tasks like forest mapping and land cover classification due to its ability to handle noise, gradients, and texture variations common in real-world images.
A Survey on: Hyper Spectral Image Segmentation and Classification Using FODPSOrahulmonikasharma
The spatial analysis of an image sensed and captured from a satellite provides less accurate information about a remote location; hence, analyzing spectral information becomes essential. Hyperspectral images are remotely sensed images that are superior to multispectral images in providing spectral information. Target detection is a significant requirement in many areas such as the military and agriculture. This paper presents an analysis of hyperspectral image segmentation using the fuzzy C-Means (FCM) clustering technique with the FODPSO classifier algorithm. A 2D adaptive log filter is proposed to denoise the sensed and captured hyperspectral image in order to remove speckle noise.
A Comparative Case Study on Compression Algorithm for Remote Sensing ImagesDR.P.S.JAGADEESH KUMAR
This document summarizes research on compression algorithms for remote sensing images. It begins with an abstract describing the challenges of transmitting large remote sensing images from sensors to networks. The document then reviews 18 different research papers on various compression algorithms for remote sensing images, including wavelet-based algorithms, fractal coding methods, and region-based approaches. It evaluates each algorithm's performance in compressing remote sensing images while maintaining quality. The document aims to perform a comparative case study of these different compression algorithms.
CLASSIFICATION AND COMPARISON OF LICENSE PLATES LOCALIZATION ALGORITHMS
Signal & Image Processing: An International Journal (SIPIJ) Vol.12, No.2, April 2021
DOI: 10.5121/sipij.2021.12201
CLASSIFICATION AND COMPARISON OF LICENSE
PLATES LOCALIZATION ALGORITHMS
Mustapha Saidallah, Fatimazahra Taki, Abdelbaki El Belrhiti El Alaoui and
Abdeslam El Fergougui
Computer Networks and Systems Laboratory, Faculty of Sciences, Moulay Ismail
University, Meknes, Morocco
ABSTRACT
The Intelligent Transportation Systems (ITS) are the subject of a worldwide economic competition. They are the application of new information and communication technologies in the transport sector, to make infrastructures more efficient, more reliable and more ecological. License Plate Recognition (LPR) is the key module of these systems, in which License Plate Localization (LPL) is the most important stage, because it determines the speed and robustness of this module. During this step the algorithm must process the image and overcome several constraints, such as climatic and lighting conditions, sensor and angle variety, the non-standardization of LPs, and real-time processing. This paper presents a classification and comparison of License Plate Localization (LPL) algorithms and describes the advantages, disadvantages and improvements made by each of them.
KEYWORDS
License Plate Localization, LPL constraints, performance, real time, multi-localization, convolutional neural
network, deep learning.
1. INTRODUCTION
The mobility needs of people and goods make road traffic increasingly congested. ITS are designed to address the challenges posed by this problem, and LPR is the centerpiece of this domain. It is based on three phases: the LPL, the character segmentation of the LP, and the Optical Character Recognition (OCR).
An efficient LPL algorithm should provide accurate results in real time. Therefore, it must be able to handle the constraints of the localization phase: the multi-localization of all LP types; the noise due to climatic conditions, dirty LPs or complex acquisition scenes; the lighting conditions that cause low contrast or multiple color appearances in the image; the non-standardization of plates; and the sensor/vehicle distance and capture angle, which significantly influence the colors, sizes and shapes of the LPs.
In our study, we classify LPL algorithms into three families: the family of algorithms based on mathematical processing is presented in section 2, the family of algorithms based on plate features in section 3, and the family of algorithms based on deep learning in section 4. A comparative mapping of the studied algorithms' performance against the constraints of this phase is presented in section 5. Section 6 concludes our work.
2. FAMILY BASED ON MATHEMATICAL PROCESSING
The mathematical processing of images is a very active field, where advanced theoretical tools are applied to manipulate image content. It consists in transforming one image into another to extract information useful for the application, and it involves partial differential equations, geometric tools, and optimization concepts. The LP properties are then exploited in the validation phase. This family can be divided into five classes of algorithms, based on: edge extraction, morphological operations, hybrid methods, transforms, and Sliding Concentric Windows (SCW).
2.1. Class of Algorithms based on Edge Extraction
These algorithms convolve the original image with a high-pass filter: they highlight the high frequencies and attenuate the rest. These techniques transform the image into a set of edges, not necessarily closed, that form the significant borders of the image. The characters and the LP contours serve as reference points for localization. The methods proceed essentially in four steps (sketched in code below):
• Application of a filter to extract the edge points.
• Binarization of the contour image by choosing a threshold.
• Analysis of the result to extract the candidates.
• Validation by geometrical, textural or density properties.
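As a concrete illustration of these four steps, here is a minimal sketch assuming OpenCV (none of the cited works names a library); the vertical Sobel kernel, the Otsu binarization and the geometric bounds are illustrative choices, not the parameters of [1-6].

```python
import cv2

def edge_based_candidates(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Step 1: high-pass filtering; a vertical Sobel highlights character strokes.
    edges = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3))
    # Step 2: binarize the contour image (Otsu picks the threshold from the histogram).
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 3: analyze the result to extract connected candidates.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Step 4: validate by geometric properties (aspect ratio and area here).
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.0 < w / h < 6.0 and w * h > 1000:
            candidates.append((x, y, w, h))
    return candidates
```

Note that the two thresholds criticized below appear here as the binarization threshold (delegated to Otsu) and the area bound, both of which depend on lighting and on the sensor-vehicle distance.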
The authors of [1] use the Sobel filter to extract edges from the color image and the histogram to select the binarization threshold. The logical AND of the binary and HSV images is exploited to extract the yellow candidates, while the other types are extracted from the integral image. The validation phase is based on the analysis of the geometrical properties of Connected Components (CC). In [2], the regions with a high density of vertical gradients are the candidates; the validation phase is based on aspect ratio, shape and a tracking criterion.
In [3,4] the edge extraction is performed on the grayscale image to reduce the processing time, and texture is the criterion used to isolate the LP exactly. The edge extraction in [5,6] is performed by the Canny filter, i.e., noise reduction by a Gaussian filter before applying the Sobel filter; the contour density is the property that confirms the LP region.
Simplicity and speed are the qualities of these methods, but these qualities become defects in the case of noisy images, complex acquisition scenes, or when the LP borders do not present a great variation compared to the rest of the image. Moreover, these algorithms use two thresholds: the edge extraction threshold, which is difficult to select automatically under various lighting conditions, and the region contour density threshold of the validation phase, which makes these algorithms very dependent on the sensor-vehicle distance.
2.2. Class of Algorithms based on Morphological Operations
This is a set of nonlinear operators intended to probe the image with small geometries called Structuring Elements (SE) in order to highlight certain shapes in the image. These algorithms proceed in four steps (illustrated below):
• Choice of an adequate structuring element.
• Application of morphological operations based on dilation and erosion.
• A binary representation of the pixels (white presents the objects and black the background).
• CC analysis based on the histogram of the results.
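A minimal sketch of these steps, assuming OpenCV; the top-hat operation (the image minus its opening, where the opening is a dilation of the eroded image) exposes bright details smaller than the SE, whose size is an assumption tied to a fixed sensor-vehicle distance.

```python
import cv2

def morphological_candidates(gray):
    # Step 1: choose an SE wider than it is tall, matching the elongated plate shape.
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (20, 5))
    # Step 2: erosion followed by dilation is an opening; subtracting it from the
    # image (top-hat) keeps bright details smaller than the SE.
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, se)
    # Step 3: binary representation (white = objects, black = background).
    _, binary = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 4: CC analysis on the result.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    return [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 500]
```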
The authors of [7] describe a localization algorithm based on opening, i.e., a dilation of the eroded grayscale image. The validation phase is based on the horizontal and vertical projections of the result and on prior knowledge of the LP location. Therefore, multi-localization is not supported by this algorithm.
In [8] a combination of the hat transform and the morphological gradient is used to extract the candidates. The density, the location and the length/width ratio are used to confirm the LP region.
The strength of this class is its robustness to noise, since it emphasizes the image shapes while denoising at the same time. Nevertheless, its processing takes considerable time, which makes these methods unsuitable for real-time systems. Moreover, morphological operations do not give good results on poorly contrasted images, hence the sensitivity of these methods to lighting conditions. Finally, the choice of an adequate SE is an important parameter, influenced and limited by a fixed sensor-vehicle distance.
2.3. Class of Hybrid Algorithms
These algorithms combine edge detection and morphological operations. They simplify the image by keeping just the edges, then apply morphological operations to denoise the picture and complete the shapes in order to define the CC; the LPs are then validated based on geometrical and/or textural criteria.
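A compact sketch of this edges-then-morphology pipeline, assuming OpenCV; all SE sizes and ratio bounds are illustrative, not those of [9-13].

```python
import cv2

def hybrid_candidates(gray):
    # Keep just the edges (vertical gradients, in the spirit of [9]).
    edges = cv2.convertScaleAbs(cv2.Sobel(gray, cv2.CV_16S, 1, 0))
    _, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Morphology: closing completes the shapes, opening removes small noise.
    se_close = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 3))
    se_open = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    blob = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, se_close)
    blob = cv2.morphologyEx(blob, cv2.MORPH_OPEN, se_open)
    # Geometric validation of the connected components.
    contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    plates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if h > 0 and 2.5 < w / h < 7.0:
            plates.append((x, y, w, h))
    return plates
```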
In [9] the magnitude of the vertical gradients is used to detect candidate regions. Opening is the morphological operation applied to reduce noise and remove non-LP objects. The length/width ratio, the size and the orientation of the candidates are the geometrical criteria used to validate the LP.
The treatment is reversed in [10] to address the sensitivity of this class to lighting conditions. The algorithm starts with Top-hat and Bottom-hat transforms, which yield a highly contrasted grayscale image; the edges are then extracted via the Sobel filter. The validation step consists in CC analysis according to geometric criteria.
The authors of [11,12] propose locating candidates by extracting the contours of the binary image using the Canny filter. They add a dilation operation to the opening in order to connect close objects. A region that satisfies the geometric criteria, namely a rectangular shape and a suitable width/height ratio, is an LP. A rotation of the plate is carried out in [10] to correct its deviation.
In [13] the extraction of the image contours is performed by the second derivative via the Laplacian of Gaussian (LoG) filter, followed by a dilation to refine the contours and connect objects; by classifying candidates as "text" or "non-text", the LP is located exactly.
These algorithms show a great improvement in the localization rate, and thus cope well with the non-standardization and multi-localization constraints. However, the problems related to edge extraction (identification of thresholds under different lighting conditions) and to morphological operations (significant processing time) remain.
2.4. Class of Algorithms based on Transforms
In this class we study the sub-classes based on the Hough Transform (HT), the Wavelet Transform (WT) and the Generalized Symmetry Transform (GST).
The Hough Transform is a simple pattern recognition technique that links pixels in an image; its major disadvantage is its long processing time. To solve this problem, the authors of [14] proposed a preprocessing step that binarizes the contour map of the original image, and applied the Hough Transform to the result to extract parallelogram-shaped objects. The validation phase consists in assessing the length/width ratio of the candidates, then counting the objects cut by two horizontal lines drawn at 1/3 and 2/3 of each candidate's height.
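As a hedged sketch of this sub-class, the code below uses OpenCV's probabilistic Hough transform to recover the near-horizontal segments that could bound a parallelogram-shaped plate; all thresholds are assumptions, and the ratio test of [14] is only indicated.

```python
import cv2
import numpy as np

def hough_line_candidates(gray):
    edges = cv2.Canny(gray, 100, 200)
    # The probabilistic Hough variant keeps the processing time manageable.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=5)
    horizontal = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            if abs(y2 - y1) < 5:  # nearly horizontal segment
                horizontal.append((x1, y1, x2, y2))
    # Pairs of horizontal segments with plate-like spacing bound candidates;
    # the length/width ratio test of [14] would be applied next.
    return horizontal
```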
The HT does not give good results when the image is noisy. The authors of [15] proposed denoising the image with a median filter to solve this problem, then applied a second filter to isolate the area where the plate is probably located, in order to reduce the processing time.
The results of this subclass are accurate, and it can locate any type of plate at once, because it focuses on the shape of the plate and not on its type. However, these algorithms cannot be implemented in real-time systems or applied to images where the rectangular shape of the LP is not clear (because of an obscuring object, a continuity of intensity, or physical damage). Furthermore, this type of algorithm cannot locate strongly tilted plates.
The Wavelet Transform decomposes the image into a set of maps, each presenting one characteristic. The authors of [16,17] started from the idea that the great difference in intensity between the LP and its characters is a key element for locating it. Therefore, they decompose the grayscale image into four sub-bands via the WT, of which they use two. On the first, containing the horizontal details, they search for the area with the maximum variation to locate a reference line. By analyzing the horizontal projection curve of the second sub-band, containing the vertical details, they determine the sizes of the LPs present in the image. Candidates are then determined by scanning the image, and the validation phase is based on the analysis of geometric properties.
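A small sketch of the decomposition step, assuming the PyWavelets package; it only shows how the horizontal and vertical detail sub-bands and the projections analyzed in [16,17] can be obtained.

```python
import numpy as np
import pywt

def wavelet_projections(gray):
    # One-level 2D DWT: approximation plus horizontal, vertical and
    # diagonal detail sub-bands.
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), 'haar')
    # Row of maximum energy in the horizontal details approximates
    # the reference line searched for in [16,17].
    ref_row = int(np.argmax(np.abs(cH).sum(axis=1)))
    # Horizontal projection of the vertical details, whose curve is
    # analyzed to estimate the plate sizes.
    v_projection = np.abs(cV).sum(axis=1)
    return ref_row, v_projection
```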
These algorithms are independent of the sensor-vehicle distance and capable of detecting several LPs at the same time. However, they give false results when the contrast between the characters and the background is not clear enough; their sensitivity to noise is thus high.
The GST in [18] evaluates evidence of local symmetry; it extracts a map of the symmetrical contours of the image in all directions. In [19], the authors used the symmetry of the LP as a key element for locating it. To reduce the processing time, they focused on extracting the four corners of the plate; consequently, they scanned the image in only the two diagonal directions (45° and -45°). The multiplication of the two resulting maps and the connection of the extracted corners give the LP candidates. Finally, a neural network is used to validate the LP.
To adapt this subclass to real-time systems, a limited number of scans must be fixed; to achieve accurate results, the lighting conditions and noise constraints must be considered. Moreover, the algorithm weakens in the case of damaged LPs or when one of the corners is hidden.
2.5. Class of Algorithms based on Sliding Concentric Windows
This method is based on pixel intensities to extract the regions of interest. The LP localization in [20] goes through two stages. The first consists in scanning the image with two SCW using the standard deviation: if the values exceed predefined thresholds, the pixel belongs to a region of interest and is set to 1; otherwise it is set to 0. The result is a binary image of candidates. The second stage uses geometrical criteria to validate the LP. The method is suited to multi-localization applications. However, since it is based on pixel intensities, low resolution or poor lighting conditions are its blocking points, and its simplicity makes it unable to process noisy images or complex scenes.
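A loose sketch of the concentric-window idea, assuming SciPy; the two window sizes and the threshold are guesses, and the exact two-threshold rule of [20] is simplified here to a single ratio test.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(img, size):
    # Local standard deviation from local mean and local mean of squares.
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def scw_mask(gray, inner=(4, 10), outer=(8, 20), t=1.3):
    img = gray.astype(float)
    # A pixel is set to 1 when the inner-window statistic stands out
    # against the outer window, in the spirit of the first stage of [20].
    ratio = local_std(img, inner) / (local_std(img, outer) + 1e-6)
    return (ratio > t).astype(np.uint8)
```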
3. FAMILY BASED ON LP FEATURES
These methods assume that good prior knowledge of the LP properties helps to implement a robust localization model. They consider the LP as an object with defining properties in the image: a specific texture, a unique color composition, or a combination of both. Identifying the LP properties is the first step of these algorithms. Each pixel is then represented by a vector of these properties, and classification methods classify and label pixels to indicate whether or not they belong to a region. Finally, the validation phase is based on geometrical criteria or histogram analysis. The localization rate of these methods is generally high, but they are specific to the treated LP types.
3.1. Class of Algorithms based on LP Texture
The algorithms of this class focus on the analysis of the LP texture. They assume that the LP and its characters have a specific texture that distinguishes them from other image objects. Based on the following four characteristics:
• a small region contains more than one character,
• the characters and the background are in sharp contrast,
• the size of the plate region is relatively fixed, with a fixed length/width ratio,
• the tilt angle of the license plate lies within a certain range,
the algorithm in [21] constructs attribute vectors to classify the pixels. The validation phase is based on the analysis of the result histogram.
The idea of [22] is that the LP lies in a textured region of the image. The authors therefore decompose the image into blocks; the density of each block is calculated to classify it as "textural" or "not textural". The validation phase is based on geometrical criteria.
The authors of [23] implement four LPL algorithms based on the following LP features:
• The length/width ratio of the connected region,
• The background area,
• The character/plate area density,
• The number of edges in the plate center.
These features are exploited by deterministic, probabilistic and combined methods, taking several thresholds for each one to validate the results.
This class uses values expressing the degree to which the properties are present in a region. These values are directly affected by noise, the sensor-vehicle distance, and the image resolution.
3.2. Class of Algorithms based on LP Colors
A unique color combination occurring almost exclusively in the LP area is the basic idea of these algorithms. In [24] the pixel classification is performed in the HSV (Hue, Saturation, Value) space to solve the problem of the wide variation of RGB (Red, Green, Blue) values under different illuminations. The validation phase is based on the evaluation of the length/width ratio and the analysis of the projection curve.
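A hedged sketch of HSV-based pixel classification, assuming OpenCV; the hue/saturation bounds below are illustrative values for a yellow plate, and would differ per jurisdiction and from the actual ranges used in [24].

```python
import cv2
import numpy as np

def hsv_candidates(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Classify pixels by color range (OpenCV hue spans 0-179; ~30 is yellow).
    mask = cv2.inRange(hsv, np.array([15, 80, 80]), np.array([35, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    plates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        # Length/width ratio check, mirroring the validation phase of [24].
        if h > 0 and 2.0 < w / h < 6.0:
            plates.append((x, y, w, h))
    return plates
```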
The authors of [25] base their method on fuzzy logic. They decompose the color image into four maps: E denotes the edge map, computed using a color edge detector implemented by the authors; H, S and V are the maps preserving the hue, saturation and intensity components of the transformed image. E, H, S and V are the inputs of the fuzzy system that classifies the pixels; the result represents the degree to which each pixel belongs to an LP. The validation phase is delegated to the OCR. The richness of the test base confirmed the algorithm's localization performance; however, its processing time makes it inapplicable in real-time systems.
The algorithms of this class reach high localization rates, but at a significant processing cost. Moreover, sensitivity to lighting conditions is their major disadvantage. Furthermore, because of non-standardization, LPs have different colors and shapes, which makes these algorithms specific to the treated LP types.
3.3. Class of Algorithms based on Vector Quantization (VQ)
This concept is described in [26]. It consists in defining a set of vectors forming a dictionary that encodes information about image blocks. These blocks are obtained by selecting the highly contrasted strips in a quad-tree decomposition phase. The strips are validated by sorting them according to the expected LP size and the code words that describe the block contents. The authors of [26] assume that the LP generally corresponds to the first two strips of the sorted list. Testing this method in a real system showed its strong dependence on the sensor-vehicle distance (a considerable change in this factor requires updating the entire system). Moreover, it cannot process very noisy images or localize several plates at a time.
4. FAMILY BASED ON DEEP LEARNING
Deep learning is a recent technique in the area of machine learning that attempts to model high-level abstractions in data. There are various deep learning architectures, such as the deep convolutional neural network, the deep belief network, and so on. Thanks to their deep network structures, they have been widely and successfully used in many applications. Compared with other architectures, a deep convolutional neural network has fewer weights and less complexity. In addition, it allows one to use original images directly as inputs, which enables a hierarchical learning of features.
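To make this family concrete, the toy sketch below shows a small CNN that classifies candidate regions as plate or non-plate, the pattern used as a second-stage verifier in several of the works reviewed next; PyTorch and all layer sizes are assumptions, not the architecture of any cited paper.

```python
import torch
import torch.nn as nn

class PlateClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Hierarchical feature learning directly from raw image patches.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A 32x64 input patch yields 32 feature maps of 8x16 after two poolings.
        self.classifier = nn.Linear(32 * 8 * 16, 2)  # plate / non-plate logits

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: score a batch of candidate regions resized to 32x64 (H x W).
model = PlateClassifier()
scores = model(torch.randn(4, 3, 32, 64))  # shape (4, 2)
```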
The authors of [27] proposed a workflow and methods for segmentation- and annotation-free Automated LPR (ALPR) with improved plate localization and automatic failure identification.
Their proposed workflow first localizes the license plate region in the captured image using a two-
stage approach where a set of candidate regions are first extracted using a weak SNoW classifier
and then scrutinized by a strong convolutional neural network (CNN) classifier in the second stage.
Images that fail a primary confidence test for plate localization are further classified to identify
reasons for failure, such as license plate not present, LP too bright, LP too dark or no vehicle found.
In the localized plate region, they perform segmentation and OCR jointly by using a probabilistic
inference method based on hidden Markov models where the most likely code sequence is
determined by applying the Viterbi algorithm. In order to reduce manual annotation required for
training classifiers for OCR, they propose to use either artificially generated synthetic license plate
images or character samples acquired by trained ALPR systems already operating in other sites.
The performance gap due to differences between training and target domain distributions is
minimized using an unsupervised domain adaptation. They evaluated the performance of their
proposed methods on license plate images captured in several US jurisdictions under realistic
conditions.
To avoid the large accumulated error of typical three-step LPR methods and to improve recognition performance in challenging, complex environments, an end-to-end recognition method based on a deep convolutional neural network, named LPR-Net, is proposed in [28]. The input of LPR-Net is a gray or color image of any size and the output is the license plate number of the input image; if there is no license plate in the input image, LPR-Net outputs "no license". The proposed LPR-Net is a hybrid deep architecture that consists of a basic net, a multi-scale net, a regression net, and a classification net. Firstly, LPR-Net resizes the input image to 500 × 500 pixels. Then the resized image is fed to the basic net to get basic features. After that, the basic features are fed to the multi-scale net to obtain multi-scale features. Finally, the multi-scale features are fed to the regression net and the classification net to locate the plate and identify the characters.
The authors of [29] developed a deep learning application to detect and recognize Korean car license plates from images. It is an advanced application that aims to provide a deep learning solution applicable in many areas, including Intelligent Transportation Systems, the Internet of Things and Smart Cities. Their method combines a scene text recognition technique with a Geometrical Image Transformation (GIT) to recognize number plates with combined neural networks, achieving 99.8% detection and 95.7% recognition accuracy, respectively.
The authors of [30] proposed a novel machine learning approach to recognizing the license plate number. They used one of the most successful deep learning methods, the convolutional neural network, to extract the visual features automatically. Suitable localization and segmentation techniques are employed before the CNN model to enhance the accuracy of the proposed model. In addition, the Deep License Plate Number Recognition (D-PNR) model also handles proper identification from images that are hazy, tilted or noisy. The efficiency of the license plate recognition system is improved using a deep learning framework that keeps the trade-off between accuracy and time complexity. The resulting efficient, real-time D-PNR model achieves 98.02% accuracy after setting suitable learning parameters.
The authors of [31] presented an embedded-platform-based Italian license plate detection and recognition system using deep neural classifiers. In their work, the trained parameters of a highly precise ALPR system are imported and used to replicate the same neural classifiers on an NVidia Shield K1 tablet. A CUDA-based framework is used to realize these neural networks, and the flow of the trained architecture is simplified to perform license plate recognition in real time. Results show that the tasks of plate and character detection and localization can be performed in real time on a mobile platform by simplifying the flow of the trained architecture; however, the accuracy of the simplified architecture decreases accordingly.
The authors of [32] used a deep learning method for real-time traffic images. They first extract license plate candidates from each frame using edge information and geometrical properties, ensuring high recall using a one-class Support Vector Machine (SVM). The verified candidates are then used for Number Plate (NP) recognition, along with a spatial transformer network (STN) for character recognition. Their system is evaluated on traffic images of vehicles with license plate formats varying in tilt, distance, color, illumination, character size, thickness, etc., against very challenging backgrounds. The results demonstrate robustness to such variations and impressive performance in both localization and recognition.
5. COMPARISON AND SYNTHESIS
The accuracy of an LPR system depends on the localization phase: to be efficient, its localization rate should be close to 100%. The experimental tests of localization algorithms are not performed on unified databases. Indeed, a database rich enough to test the performance of the algorithms must contain many plate images, taken under different lighting conditions, from various distances, angles and acquisition scenes, and these images must cover all LP types. In addition, a localization algorithm should not simply isolate the LP; it must also isolate it in real time.
To qualify the algorithms, we distinguish four levels of the "performance" parameter, which binds the localization rate and the database richness, as Table 1 shows. Indeed, if the database contains a significant number of images covering the various constraints of this phase, its richness is very high, and if the localization rate is close to 100%, then the algorithm is qualified as "very powerful: AAA".
Table 1. Values of the performance parameter.

Database richness | Localization rate | Performance
------------------|-------------------|------------
Very high         | Very high         | AAA
Very high         | Medium            | AA
Medium            | High              | AA
Medium            | Medium            | A
Low               | High              | A
High              | Low               | A
Low               | Medium            | B
Medium            | Low               | B
Low               | Low               | B
In addition to the performance of the LPL algorithms, Table 2 shows their sensitivity to the localization phase constraints. This sensitivity is deduced from the analysis of the algorithms and is not necessarily reflected in the database. For example, suppose the database does not contain images of vehicles covering two constraints:
• LP type: the database contains only single-line car LPs,
• Non-standardization: the database contains only plates respecting a given standard.
If the analysis shows that the algorithm can support only one LP type but different standards, the answer in Table 2 will be "No" for the LP type and "Yes" for non-standardization.
From Table 2 we deduce, on the one hand, the absence of an ideal LPL algorithm, i.e., one that is very powerful and solves the various constraints of this phase; the most efficient algorithms are the deep learning ones, because they solve most constraints. On the other hand, the constraint least overcome by the different algorithms is sensitivity to lighting conditions: this constraint is circumvented by only nine algorithms, three of them based on LP features and four on deep learning.
Table 2. Performance of LPL algorithms and their sensitivity to localization constraints.
6. CONCLUSIONS
We have presented the functioning principles of different LP localization methods, analyzing the strengths and weaknesses of each. Our study has demonstrated the lack of an ideal localization algorithm and confirmed that a meaningful comparison of the performance of these methods requires their evaluation on the same test database, containing enough scenes with the various constraints of this phase. Finally, we provide a table showing the main properties of the considered algorithms, which could help a researcher select the most appropriate one for a specific task and conditions.
REFERENCES
[1] L. Luo, H. Sun, W. Zhou, and L. Luo, “An Efficient Method of License Plate Location,” in 2009 First
International Conference on Information Science and Engineering, Dec. 2009, pp. 770–773, doi:
10.1109/ICISE.2009.250.
[2] N. Thome, A. Vacavant, L. Robinault, and S. Miguet, “A cognitive and video-based approach for
multinational License Plate Recognition,” Machine Vision and Applications, vol. 22, no. 2, pp. 389–
407, Mar. 2011, doi: 10.1007/s00138-010-0246-3.
[3] K. Parasuraman and P. V. Kumar, “An efficient method for indian vehicle license plate extraction and
character segmentation,” in IEEE International conference on computational intelligence and
computing research, 2010, vol. 18, pp. 1–4.
[4] P. Tarabek, “A real-time license plate localization method based on vertical edge analysis,” in 2012
Federated Conference on Computer Science and Information Systems (FedCSIS), Sep. 2012, pp. 149–
154.
[5] S. Draghici, “A neural network based artificial vision system for licence plate recognition,”
International Journal of Neural Systems, vol. 8, no. 01, pp. 113–126, 1997.
[6] K. P. Thooyamani, V. Khanaa, and R. Udayakumar, “Application of pattern recognition for farsi license plate recognition,” Middle-East Journal of Scientific Research, vol. 18, no. 12, pp. 1768–1774, 2013.
[7] C. Wu, L. C. On, C. H. Weng, T. S. Kuan, and K. Ng, “A Macao license plate recognition system,” in 2005 International Conference on Machine Learning and Cybernetics, 2005, vol. 7, pp. 4506–4510.
[8] H. K. Sulehria, Y. Zhang, and D. Irfan, “Mathematical morphology methodology for extraction of
vehicle number plates,” International journal of computers, vol. 1, no. 3, pp. 69–73, 2007.
[9] Shen-Zheng Wang and Hsi-Jian Lee, “Detection and recognition of license plate characters with
different appearances,” in Proceedings of the 2003 IEEE International Conference on Intelligent
Transportation Systems, Oct. 2003, vol. 2, pp. 979–984 vol.2, doi: 10.1109/ITSC.2003.1252632.
[10] A. Rabee and I. Barhumi, “License plate detection and recognition in complex scenes using
mathematical morphology and support vector machines,” in IWSSIP 2014 Proceedings, May 2014, pp.
59–62.
[11] V. D. Mai, D. Miao, R. Wang, and H. Zhang, “An Improved Method for Vietnam License Plate
Location, Segmentation and Recognition,” in 2011 International Conference on Computational and
Information Sciences, Oct. 2011, pp. 212–215, doi: 10.1109/ICCIS.2011.79.
[12] Bai Hongliang and Liu Changping, “A hybrid license plate extraction method based on edge statistics and morphology,” in Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), Aug. 2004, vol. 2, pp. 831–834, doi: 10.1109/ICPR.2004.1334387.
[13] R. Radha and C. P. Sumathi, “A novel approach to extract text from license plate of vehicles,” Signal
& Image Processing, vol. 3, no. 4, p. 181, 2012.
[14] T. D. Duan, T. H. Du, T. V. Phuoc, and N. V. Hoang, “Building an automatic vehicle license plate
recognition system,” in Proc. Int. Conf. Comput. Sci. RIVF, 2005, vol. 1, pp. 59–63.
[15] P. Prabhakar, P. Anupama, and S. R. Resmi, “Automatic vehicle number plate detection and
recognition,” in 2014 International Conference on Control, Instrumentation, Communication and
Computational Technologies (ICCICCT), Jul. 2014, pp. 185–190, doi:
10.1109/ICCICCT.2014.6992954.
[16] Ching-Tang Hsieh, Yu-Shan Juan, and Kuo-Ming Hung, “Multiple license plate detection for complex
background,” in 19th International Conference on Advanced Information Networking and Applications
(AINA’05) Volume 1 (AINA papers), Mar. 2005, vol. 2, pp. 389–392 vol.2, doi:
10.1109/AINA.2005.257.
[17] A. Boudjella, B. B. Samir, H. B. Daud, and R. Syahira, “License plate recognition part II: Wavelet
transform and Euclidean distance method,” in 2012 4th International Conference on Intelligent and
Advanced Systems (ICIAS2012), Jun. 2012, vol. 2, pp. 695–700, doi: 10.1109/ICIAS.2012.6306103.
[18] D. Reisfeld, H. Wolfson, and Y. Yeshurun, “Context-free attentional operators: The generalized
symmetry transform,” International Journal of Computer Vision, vol. 14, no. 2, pp. 119–130, Mar.
1995, doi: 10.1007/BF01418978.
[19] Dong-Su Kim and Sung-Il Chien, “Automatic car license plate extraction using modified generalized
symmetry transform and image warping,” in ISIE 2001. 2001 IEEE International Symposium on
Industrial Electronics Proceedings (Cat. No.01TH8570), Jun. 2001, vol. 3, pp. 2022–2027 vol.3, doi:
10.1109/ISIE.2001.932025.
[20] C. Anagnostopoulos, T. Alexandropoulos, S. Boutas, V. Loumos, and E. Kayafas, “A template-guided approach to vehicle surveillance and access control,” in IEEE Conference on Advanced Video and Signal Based Surveillance, 2005, Sep. 2005, pp. 534–539, doi: 10.1109/AVSS.2005.1577325.
[21] F. Jun and D. Shuguang, “A vehicle license plate location and correction method based the
characteristics of license plate,” in Proceedings of the 10th World Congress on Intelligent Control and
Automation, Jul. 2012, pp. 42–46, doi: 10.1109/WCICA.2012.6357836.
[22] H. Anoual, S. El Fkihi, A. Jilbab, and D. Aboutajdine, “A novel texture-based algorithm for localizing vehicle license plates,” Journal of Theoretical & Applied Information Technology, vol. 45, no. 1, 2012.
[23] R. Al-Hmouz and S. Challa, “License plate localization based on a probabilistic model,” Machine
Vision and Applications, vol. 21, no. 3, pp. 319–330, Apr. 2010, doi: 10.1007/s00138-008-0164-9.
[24] X. Shi, W. Zhao, and Y. Shen, “Automatic License Plate Recognition System Based on Color Image
Processing,” in Computational Science and Its Applications – ICCSA 2005, Berlin, Heidelberg, 2005,
pp. 1159–1168, doi: 10.1007/11424925_121.
[25] Shyang-Lih Chang, Li-Shien Chen, Yun-Chung Chung, and Sei-Wan Chen, “Automatic license plate
recognition,” IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 1, pp. 42–53, Mar.
2004, doi: 10.1109/TITS.2004.825086.
[26] R. Zunino and S. Rovetta, “Vector quantization for license-plate location and image coding,” IEEE
Transactions on Industrial Electronics, vol. 47, no. 1, pp. 159–167, Feb. 2000, doi: 10.1109/41.824138.
[27] O. Bulan, V. Kozitsky, P. Ramesh, and M. Shreve, “Segmentation- and Annotation-Free License Plate
Recognition With Deep Localization and Failure Identification,” IEEE Transactions on Intelligent
Transportation Systems, vol. 18, no. 9, pp. 2351–2363, Sep. 2017, doi: 10.1109/TITS.2016.2639020.
[28] D. Wang, Y. Tian, W. Geng, L. Zhao, and C. Gong, “LPR-Net: Recognizing Chinese license plate in
complex environments,” Pattern Recognition Letters, vol. 130, pp. 148–156, Feb. 2020, doi:
10.1016/j.patrec.2018.09.026.
[29] S. Usmankhujaev, S. Lee, and J. Kwon, “Korean License Plate Recognition System Using Combined
Neural Networks,” in Distributed Computing and Artificial Intelligence, 16th International Conference,
Cham, 2020, pp. 10–17, doi: 10.1007/978-3-030-23887-2_2.
[30] K. Kumar, S. Sinha, and P. Manupriya, “D-PNR: Deep License Plate Number Recognition,” in
Proceedings of 2nd International Conference on Computer Vision & Image Processing, Singapore,
2018, pp. 37–46, doi: 10.1007/978-981-10-7898-9_4.
[31] S. T. H. Rizvi, D. Patti, T. Björklund, G. Cabodi, and G. Francini, “Deep Classifiers-Based License
Plate Detection, Localization and Recognition on GPU-Powered Mobile Platform,” Future Internet,
vol. 9, no. 4, Art. no. 4, Dec. 2017, doi: 10.3390/fi9040066.
[32] S. Sharma, P. Kumar, and K. Kumar, “A-PNR: Automatic Plate Number Recognition,” in Proceedings
of the 7th International Conference on Computer and Communication Technology, New York, NY,
USA, Nov. 2017, pp. 106–110, doi: 10.1145/3154979.3154999.