ABSTRACT
This paper introduces an effective method of fingerprint classification based on discriminative features gathered from the orientation field. A nonlinear support vector machine (SVM) is adopted for the classification. The orientation field is estimated through a pixel-wise gradient descent method, and the percentage of each directional block class is computed. These percentages form a four-dimensional vector, a discriminative feature that can be combined with accurately detected singular points to classify the fingerprint into one of five classes. This method shows high classification accuracy relative to other spatial-domain classifiers.
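The direction-percentage feature described above can be sketched roughly as follows. This is an illustrative NumPy approximation: the block size, the doubled-angle orientation estimate, and the four-class quantization are assumptions for the sketch, not the paper's exact procedure.

```python
import numpy as np

def direction_percentages(image, block=8):
    """Quantize per-block ridge orientations into 4 direction classes and
    return the fraction of blocks in each class (a 4-D feature vector).
    Rough sketch of an orientation-field feature, not the paper's method."""
    gy, gx = np.gradient(image.astype(float))
    h, w = image.shape
    counts = np.zeros(4)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            gxb = gx[i:i+block, j:j+block]
            gyb = gy[i:i+block, j:j+block]
            # dominant block orientation via the doubled-angle average
            theta = 0.5 * np.arctan2(2 * (gxb * gyb).sum(),
                                     ((gxb ** 2) - (gyb ** 2)).sum())
            # map orientation in (-pi/2, pi/2] to one of 4 classes
            cls = int((theta + np.pi / 2) / (np.pi / 4)) % 4
            counts[cls] += 1
    return counts / max(counts.sum(), 1)  # percentages sum to 1
```

The resulting four percentages would then be fed, together with singular-point information, to an SVM classifier.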
This document discusses tracking multiple objects in video using probabilistic distributions. It proposes using particle filters to represent object positions with random particles. The method initializes particles randomly, updates their positions each frame based on probabilistic distributions, and uses maximum likelihood estimation to compute the distribution parameters. It models object motion using a beta distribution and estimates the distribution's alpha and beta parameters from each frame to predict object positions. The results show this approach can effectively track multiple moving objects, especially when there are occlusions.
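The parameter-estimation step can be illustrated with a small sketch. Here moment matching stands in for the maximum-likelihood estimation the summary mentions; it is a common closed-form approximation for fitting a beta distribution to samples in [0, 1].

```python
import random

def beta_moments(samples):
    """Estimate beta-distribution parameters (alpha, beta) by moment
    matching -- a closed-form stand-in for the MLE step described above."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    # invert the beta mean/variance formulas
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common
```

In a particle-filter loop, the fitted alpha and beta would parameterize the motion model used to propagate each particle's (normalized) position to the next frame.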
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
This project report describes the implementation of Otsu's method for image segmentation. Otsu's method is a global thresholding technique that automatically performs image thresholding. It finds the optimal threshold to separate foreground objects from the background by minimizing intra-class variance. The report provides an overview of image segmentation and thresholding techniques. It explains Otsu's algorithm and how it maximizes between-class variance. Results of applying Otsu's method on sample images using histogram analysis, the graythresh function, and adaptive thresholding are presented. The report concludes that Otsu's method is a simple and effective approach for automatic image thresholding and segmentation.
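Otsu's algorithm is short enough to sketch directly. This minimal NumPy version maximizes the between-class variance over all candidate thresholds, which is equivalent to minimizing the intra-class variance described above.

```python
import numpy as np

def otsu_threshold(image):
    """Return the Otsu threshold of an 8-bit image by exhaustively
    maximizing between-class variance over the 256-bin histogram."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()  # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0       # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels at or above the returned threshold are treated as foreground, the rest as background.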
Automatic rectification of perspective distortion from a single image using p... (ijcsa)
Perspective distortion occurs due to the perspective projection of a 3D scene onto a 2D surface. Correcting the distortion of a single image without losing any desired information is one of the challenging tasks in the field of Computer Vision. We consider the problem of estimating perspective distortion from a single still image of an unstructured environment and of making a perspective correction that is both quantitatively accurate and visually pleasing. Corners are detected based on the orientation of the image. A method based on plane homography and transformation is used to perform the perspective correction. The algorithm infers frontier information directly from the images, without any reference objects or prior knowledge of the camera parameters. The frontiers are detected using geometric-context-based segmentation. The goal of this paper is to present a framework providing fully automatic and fast perspective correction.
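The plane-homography step can be illustrated with a generic Direct Linear Transform sketch. This is the standard textbook estimator from four or more point correspondences, not the paper's specific pipeline.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 plane homography from >= 4 point pairs via the
    Direct Linear Transform (SVD null-space of the stacked constraints)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, float))
    H = vt[-1].reshape(3, 3)       # right singular vector of smallest value
    return H / H[2, 2]             # normalize so H[2,2] == 1

def apply_h(H, pt):
    """Apply a homography to a 2-D point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

Warping the whole image with the inverse of such a homography is what produces the rectified, perspective-corrected result.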
Fuzzy c-means clustering for image segmentation (Dharmesh Patel)
1. The document discusses fuzzy c-means clustering, an image segmentation technique that allows pixels to belong to multiple clusters, unlike k-means clustering.
2. The fuzzy c-means algorithm initializes membership values and centroid values, then iteratively updates these values until convergence.
3. Experimental results on sample images show the output segmentation for varying numbers of clusters, demonstrating both capabilities and limitations of fuzzy c-means clustering.
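The iterative membership/centroid updates in step 2 can be sketched as follows: a minimal 1-D NumPy version using the standard FCM update rules. The fuzzifier m and the fixed iteration count are arbitrary choices for the sketch.

```python
import numpy as np

def fuzzy_cmeans(data, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on 1-D data: alternate centroid and
    membership updates for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    u = rng.random((c, len(data)))
    u /= u.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m
        centers = (um @ data) / um.sum(axis=1)        # weighted centroids
        d = np.abs(data[None, :] - centers[:, None]) + 1e-12
        inv = d ** (-2.0 / (m - 1))
        u = inv / inv.sum(axis=0)            # standard FCM membership update
    return centers, u
```

For image segmentation, `data` would be pixel intensities (or feature vectors) and each pixel would be assigned to the cluster with its highest membership.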
Developing 3D Viewing Model from 2D Stereo Pair with its Occlusion Ratio (CSCJournals)
We intend to make a 3D model from a stereo pair of images using a novel method of local matching in the pixel domain for calculating horizontal disparities. We also find the occlusion ratio using the stereo pair, followed by the use of the Edge Detection and Image SegmentatiON (EDISON) system on one of the images, which provides a complete toolbox for discontinuity-preserving filtering, segmentation and edge detection. Instead of assigning a disparity value to each pixel, a disparity plane is assigned to each segment. We then warp the segment disparities to the original image to get our final 3D viewing model.
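Local matching in the pixel domain is typically done with a block-based cost such as the Sum of Absolute Differences; here is a generic sketch. The window size and search range are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def sad_disparity(left, right, block=5, max_disp=8):
    """Per-pixel horizontal disparity by minimizing the Sum of Absolute
    Differences (SAD) over a search range; a basic local-matching sketch."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1].astype(int)
            best, best_d = None, 0
            for d in range(max_disp + 1):
                cand = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(int)
                cost = np.abs(patch - cand).sum()   # SAD matching cost
                if best is None or cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Segment-wise disparity planes, as in the paper, would then be fitted to these raw per-pixel estimates.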
This document proposes a method for change detection in images that combines Change Vector Analysis, K-Means clustering, Otsu thresholding, and mathematical morphology. It involves detecting intensity changes using CVA, segmenting the difference image using K-Means, calculating a threshold with Otsu's method, applying the threshold and morphological operations, and comparing results to other change detection techniques. Experimental results on medical and other images show the proposed method achieves satisfactory change detection with fewer errors compared to other methods.
This summarizes an academic paper that proposes an unsupervised algorithm to detect regions of interest (ROIs) in images using fast feature detectors. It detects keypoints using Speeded-Up Robust Features (SURF) and Features from Accelerated Segment Test (FAST) to maximize interest points. It categorizes keypoints as foreground or background using k-nearest neighbors classification on texture descriptors. ROIs are identified as groups of foreground keypoints. Preliminary experiments showed this approach can efficiently detect ROIs without computationally expensive comparisons between images.
Ech a novel multilevel thresholding technique for minutiae based fingerprint ... (eSAT Publishing House)
This document discusses different techniques for image segmentation, which is the process of partitioning an image into meaningful regions or objects. It covers several main methods of region segmentation, including region growing, clustering, and split-and-merge. It also discusses techniques for finding line and curve segments in an image, such as using the Hough transform or edge tracking procedures. Finally, it provides examples of applying these segmentation techniques to extract regions, straight lines, and circles from images.
Template matching is a technique used in computer vision to find sub-images in a target image that match a template image. It involves moving the template over the target image and calculating a measure of similarity at each position. This is computationally expensive. Template matching can be done at the pixel level or on higher-level features and regions. Various measures are used to quantify the similarity or dissimilarity between images during the matching process. Template matching has applications in areas like object detection but faces challenges with noise, occlusions, and variations in scale and rotation.
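The sliding-window similarity computation can be sketched with normalized cross-correlation, one of the common measures the paragraph alludes to. This is a direct, unoptimized version for illustration.

```python
import numpy as np

def match_template(image, tmpl):
    """Slide tmpl over image and return the (row, col) with the best
    normalized cross-correlation score, plus that score."""
    ih, iw = image.shape
    th, tw = tmpl.shape
    t = tmpl.astype(float)
    t = t - t.mean()                      # zero-mean template
    tnorm = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            win = image[r:r+th, c:c+tw].astype(float)
            w = win - win.mean()          # zero-mean window
            denom = np.sqrt((w * w).sum()) * tnorm
            score = (w * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

The nested loops make the expense mentioned above concrete: cost grows with image area times template area, which is why practical systems use FFT-based correlation or coarse-to-fine search.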
SEGMENTATION USING ‘NEW’ TEXTURE FEATURE (acijjournal)
This document summarizes a research paper that proposes a new texture feature descriptor called "NEW" for image segmentation. The NEW descriptor labels neighboring pixels and forms eight-component binary vectors to represent texture. Fuzzy c-means clustering is then used to segment images into regions based on texture. Experimental results on texture images from the Brodatz dataset show the NEW descriptor can successfully segment images into the correct number of texture regions. Accuracy, precision, and recall metrics are used to evaluate the segmentation performance.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
The document proposes a new algorithm to reduce blocking artifacts in compressed images using a combination of the SAWS technique, Fuzzy Impulse Artifact Detection and Reduction Method (FIDRM), and Noise Adaptive Fuzzy Switching Median Filter (NAFSM). FIDRM uses fuzzy rules to detect noisy pixels, while NAFSM uses a median filter to correct pixels based on local information. Experimental results on test images show the proposed approach achieves better PSNR than other deblocking methods.
Lec11: Active Contour and Level Set for Medical Image Segmentation (Ulaş Bağcı)
Active Contour (Snake) • Level Set • Applications
Enhancement, Noise Reduction, and Signal Processing • Medical Image Registration • Medical Image Segmentation • Medical Image Visualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images • Deep Learning in Radiology
Fuzzy Connectivity (FC) – Affinity functions • Absolute FC • Relative FC (and Iterative Relative FC) • Successful example applications of FC in medical imaging • Segmentation of Airway and Airway Walls using an RFC-based method
Energy functional – Data and Smoothness terms • Graph Cut – Min Cut / Max Flow • Applications in Radiology Images
This document discusses various techniques for image segmentation. It describes two main approaches to segmentation: discontinuity-based methods that detect edges or boundaries, and region-based methods that partition an image into uniform regions. Specific techniques discussed include thresholding, gradient operators, edge detection, the Hough transform, region growing, region splitting and merging, and morphological watershed transforms. Motion can also be used for segmentation by analyzing differences between frames in a video.
Paper 58 disparity-of_stereo_images_by_self_adaptive_algorithm (MDABDULMANNANMONDAL)
This document summarizes a research paper that proposes a new stereo matching algorithm called Self Adaptive Algorithm (SAA) to efficiently compute stereo correspondence or disparity maps from stereo images. SAA aims to improve matching speed by reducing the search zone and avoiding false matches through an adaptive search approach. It dynamically selects the search range based on previous matching results, reducing the range by 50% with each iteration. Experimental results on standard stereo datasets show that SAA outperforms other methods in terms of speed while maintaining accuracy, with processing speeds of 535 fps and 377 fps for different image pairs. SAA reduces computational time by 70.53-99.93% compared to other state-of-the-art methods.
International Journal of Engineering Research and Development (IJERD) (IJERD Editor)
This paper proposes a new method for visual segmentation based on fixation points. The method segments the region of interest in two steps: (1) generating a probabilistic boundary edge map combining multiple visual cues, and (2) finding the optimal closed contour around the fixation point in the transformed polar edge map. The paper shows this fixation-based segmentation approach improves accuracy over previous methods, especially when incorporating motion and stereo cues. It also introduces a region merging algorithm to further refine segmentation results. Evaluation on video and stereo image datasets demonstrates mean F-measures of 0.95 and 0.96 respectively when combining cues, compared to 0.62 and 0.65 without.
Image restoration based on morphological operations (ijcseit)
This document discusses image restoration using morphological operations. It begins with an abstract describing mathematical morphology and its applications to tasks like noise suppression, feature extraction, and image restoration. It then covers 6 morphological operations (erosion, dilation, opening, closing, boundary extraction, and region filling) and provides mathematical definitions and illustrations of their effects. Examples of applying these operations to grayscale images using different structuring element shapes are shown. The document concludes that morphological operations are effective for image restoration by applying dilation and erosion with the same factor to remove noise while retaining object shapes.
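Dilation and erosion, the two primitives the other four operations are built from, can be sketched for binary images as follows. The sketch assumes a symmetric structuring element, so reflection of the element can be ignored.

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: output is 1 wherever the structuring element
    overlaps any foreground pixel (assumes a symmetric SE)."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)))      # zero-pad the borders
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i+se.shape[0], j:j+se.shape[1]] & se).any()
    return out

def erode(img, se):
    """Binary erosion: output is 1 only where the SE fits entirely
    inside the foreground."""
    ph, pw = se.shape[0] // 2, se.shape[1] // 2
    pad = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (pad[i:i+se.shape[0], j:j+se.shape[1]] >= se).all()
    return out
```

Opening (erode then dilate) removes small noise specks while roughly retaining object shape, which is the restoration effect the document's conclusion describes.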
Nose Tip Detection Using Shape index and Energy Effective for 3d Face Recogni... (IJMER)
International Journal of Modern Engineering Research (IJMER) is a peer-reviewed, online journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Combining Generative And Discriminative Classifiers For Semantic Automatic Im... (CSCJournals)
The object image annotation problem is basically a classification problem, and there are many different modeling approaches for its solution. These approaches fall into two main categories: generative and discriminative. An ideal classifier should combine these two complementary approaches. In this paper, we present a method achieving this combination by using the discriminative power of neural networks and the generative nature of Bayesian networks. The evaluation of the proposed method on three typical image databases has shown some success in automatic image annotation.
FUZZY IMAGE SEGMENTATION USING VALIDITY INDEXES CORRELATION (ijcsit)
This paper introduces an algorithm for image segmentation using a clustering technique based on the fuzzy c-means (FCM) algorithm, executed iteratively with different numbers of clusters. Simultaneously, five validity indexes are calculated and their information is correlated to determine the optimal number of clusters for segmenting an image; results and simulations are shown in the paper.
A Review Paper on Stereo Vision Based Depth Estimation (IJSRD)
Stereo vision is a challenging problem and a wide research topic in computer vision. It has attracted much interest because it is a cost-efficient alternative to expensive sensors. Stereo vision has found great importance in many fields and applications in today’s world, including robotics, 3-D scanning, 3-D reconstruction, driver assistance systems, forensics and 3-D tracking. The main challenge of stereo vision is to generate an accurate disparity map. Stereo vision algorithms usually perform four steps: first, matching cost computation; second, cost aggregation; third, disparity computation or optimization; and fourth, disparity refinement. Stereo matching problems are also discussed. A large number of algorithms have been developed for stereo vision, but characterization of their performance has received less attention. This paper gives a brief overview of the existing stereo vision algorithms. After evaluating the papers, we can say that the focus has been on cost aggregation and multi-step refinement processes. Segment-based methods have also attracted attention due to their good performance, and using an improved filter for cost aggregation in stereo matching achieves better results.
This paper proposes a facial expression recognition approach based on the Gabor wavelet transform. A Gabor wavelet filter is first used as a pre-processing stage for extraction of the feature vector representation. Dimensionality of the feature vector is reduced using Principal Component Analysis and Local Binary Pattern (LBP) algorithms. Experiments were carried out on the Japanese Female Facial Expression (JAFFE) database. In all experiments conducted on the JAFFE database, the results reveal that GW+LBP outperformed the other approaches in this paper, with an average recognition rate of 90% under the same experimental setting.
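The LBP step can be illustrated with a minimal 8-neighbour implementation. This is the basic radius-1 variant; the paper's exact LBP configuration is not stated in the summary.

```python
import numpy as np

def lbp_image(gray):
    """8-neighbour Local Binary Pattern codes for each interior pixel:
    each neighbour >= centre contributes one bit of the 8-bit code."""
    g = gray.astype(int)
    # neighbours enumerated clockwise from the top-left corner
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = g.shape
    out = np.zeros((h - 2, w - 2), int)
    centre = g[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offs):
        nb = g[1+dy:h-1+dy, 1+dx:w-1+dx]       # shifted neighbour view
        out |= (nb >= centre).astype(int) << bit
    return out
```

A histogram of these codes over the face (or over face patches) would serve as the texture descriptor fed to the classifier.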
The document discusses image representation and feature extraction techniques. It describes how representation makes image information more accessible for computer interpretation using either boundaries or pixel regions. Feature extraction quantifies these representations by extracting descriptors like geometric properties, statistical moments, and textures. Desirable properties for descriptors include being invariant to transformations, compact, robust to noise, and having low complexity. Various boundary and regional descriptors are defined, such as chain codes, shape numbers, and moments.
Extraction of texture features by using gabor filter in wheat crop disease de... (eSAT Journals)
This document discusses a method for detecting diseases in wheat crops using image processing and artificial neural networks. It involves taking digital images of wheat crop leaves and preprocessing the images by applying Gaussian and median filters to reduce noise. The images are then segmented using CIELAB color space. Texture features like area, perimeter, contrast, and energy are extracted from the images using Gabor filters. These features are then fed into an artificial neural network classifier to identify the type of disease present in the wheat crop. The method aims to help farmers more quickly and accurately detect diseases so they can better manage their crops and increase agricultural productivity.
Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet... (CSCJournals)
This document presents a fusion-based method for removing Gaussian noise from images using curvelets and wavelets with a Gaussian filter. The proposed method aims to address artifacts that appear when using curvelets alone. It first applies Gaussian filtering, wavelet denoising, and curvelet denoising separately. It then fuses the results of these three approaches to obtain a better denoised image with fewer artifacts. The method is tested on various standard test images and medical images corrupted with white Gaussian noise. Results are evaluated using peak signal-to-noise ratio and weighted peak signal-to-noise ratio, which accounts for human visual sensitivity.
Segmentation and recognition of handwritten digit numeral string using a mult... (ijfcstjournal)
In this paper, the use of a Multi-Layer Perceptron (MLP) neural network model is proposed for recognizing unconstrained offline handwritten numeral strings. The numeral strings are segmented and isolated numerals are obtained using a connected component labeling (CCL) algorithm. The structural part of the models has been modeled using a Multilayer Perceptron neural network. This paper also presents a new technique to remove slope and slant from a handwritten numeral string, to normalize the size of the text images, and to classify them with supervised learning methods. Experimental results on a database of 102 numeral-string patterns written by 3 different people show that a recognition rate of 99.7% is obtained on the independent digits contained in the numeral strings, including both skewed and slanted data.
This document summarizes a research paper that proposes recognizing handwritten Odia (an Indian language) numerals using a single layer perceptron neural network. It first extracts gradient and curvature features from images of handwritten digits. These features are used to train a single layer perceptron classifier. The system achieves 85% accuracy on a dataset of 100 handwritten digit patterns written by 100 people. It aims to provide an efficient way to recognize Odia numerals using a non-linear classifier with reduced complexity compared to other methods.
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol... (ZaidHussein6)
This document summarizes a research paper that proposes a stereo vision algorithm called the Canny Block Matching Algorithm (CBMA) to estimate distance from stereo images. CBMA uses the Canny edge detector to extract edges from images and block matching with Sum of Absolute Difference (SAD) to determine disparity maps and reduce processing time. The algorithm was tested on stereo image pairs and achieved an error reduction of about 2% and processing time reduction compared to other methods. Interpolation techniques including bilinear, 1st order polynomial and 2nd order polynomial were also evaluated to enhance the output images and further reduce errors.
Ech a novel multilevel thresholding technique for minutiae based fingerprint ...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This document discusses different techniques for image segmentation, which is the process of partitioning an image into meaningful regions or objects. It covers several main methods of region segmentation, including region growing, clustering, and split-and-merge. It also discusses techniques for finding line and curve segments in an image, such as using the Hough transform or edge tracking procedures. Finally, it provides examples of applying these segmentation techniques to extract regions, straight lines, and circles from images.
Template matching is a technique used in computer vision to find sub-images in a target image that match a template image. It involves moving the template over the target image and calculating a measure of similarity at each position. This is computationally expensive. Template matching can be done at the pixel level or on higher-level features and regions. Various measures are used to quantify the similarity or dissimilarity between images during the matching process. Template matching has applications in areas like object detection but faces challenges with noise, occlusions, and variations in scale and rotation.
SEGMENTATION USING ‘NEW’ TEXTURE FEATUREacijjournal
This document summarizes a research paper that proposes a new texture feature descriptor called "NEW" for image segmentation. The NEW descriptor labels neighboring pixels and forms eight-component binary vectors to represent texture. Fuzzy c-means clustering is then used to segment images into regions based on texture. Experimental results on texture images from the Brodatz dataset show the NEW descriptor can successfully segment images into the correct number of texture regions. Accuracy, precision, and recall metrics are used to evaluate the segmentation performance.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
The document proposes a new algorithm to reduce blocking artifacts in compressed images using a combination of the SAWS technique, Fuzzy Impulse Artifact Detection and Reduction Method (FIDRM), and Noise Adaptive Fuzzy Switching Median Filter (NAFSM). FIDRM uses fuzzy rules to detect noisy pixels, while NAFSM uses a median filter to correct pixels based on local information. Experimental results on test images show the proposed approach achieves better PSNR than other deblocking methods.
Lec11: Active Contour and Level Set for Medical Image SegmentationUlaş Bağcı
ActiveContour(Snake) • LevelSet
• Applications
Enhancement, Noise Reduction, and Signal Processing • MedicalImageRegistration • MedicalImageSegmentation • MedicalImageVisualization • Machine Learning in Medical Imaging • Shape Modeling/Analysis of Medical Images Deep Learning in Radiology Fuzzy Connectivity (FC) – Affinity functions • Absolute FC • Relative FC (and Iterative Relative FC) • Successful example applications of FC in medical imaging • Segmentation of Airway and Airway Walls using RFC based method Energyfunctional – Data and Smoothness terms • GraphCut – Min cut – Max Flow • ApplicationsinRadiologyImages
This document discusses various techniques for image segmentation. It describes two main approaches to segmentation: discontinuity-based methods that detect edges or boundaries, and region-based methods that partition an image into uniform regions. Specific techniques discussed include thresholding, gradient operators, edge detection, the Hough transform, region growing, region splitting and merging, and morphological watershed transforms. Motion can also be used for segmentation by analyzing differences between frames in a video.
Paper 58 disparity-of_stereo_images_by_self_adaptive_algorithmMDABDULMANNANMONDAL
This document summarizes a research paper that proposes a new stereo matching algorithm called Self Adaptive Algorithm (SAA) to efficiently compute stereo correspondence or disparity maps from stereo images. SAA aims to improve matching speed by reducing the search zone and avoiding false matches through an adaptive search approach. It dynamically selects the search range based on previous matching results, reducing the range by 50% with each iteration. Experimental results on standard stereo datasets show that SAA outperforms other methods in terms of speed while maintaining accuracy, with processing speeds of 535 fps and 377 fps for different image pairs. SAA reduces computational time by 70.53-99.93% compared to other state-of-the-art methods.
International Journal of Engineering Research and Development (IJERD)IJERD Editor
This paper proposes a new method for visual segmentation based on fixation points. The method segments the region of interest in two steps: (1) generating a probabilistic boundary edge map combining multiple visual cues, and (2) finding the optimal closed contour around the fixation point in the transformed polar edge map. The paper shows this fixation-based segmentation approach improves accuracy over previous methods, especially when incorporating motion and stereo cues. It also introduces a region merging algorithm to further refine segmentation results. Evaluation on video and stereo image datasets demonstrates mean F-measures of 0.95 and 0.96 respectively when combining cues, compared to 0.62 and 0.65 without.
Image restoration based on morphological operationsijcseit
This document discusses image restoration using morphological operations. It begins with an abstract describing mathematical morphology and its applications to tasks like noise suppression, feature extraction, and image restoration. It then covers 6 morphological operations (erosion, dilation, opening, closing, boundary extraction, and region filling) and provides mathematical definitions and illustrations of their effects. Examples of applying these operations to grayscale images using different structuring element shapes are shown. The document concludes that morphological operations are effective for image restoration by applying dilation and erosion with the same factor to remove noise while retaining object shapes.
Nose Tip Detection Using Shape index and Energy Effective for 3d Face Recogni...IJMER
International Journal of Modern Engineering Research (IJMER) is Peer reviewed, online Journal. It serves as an international archival forum of scholarly research related to engineering and science education.
Combining Generative And Discriminative Classifiers For Semantic Automatic Im...CSCJournals
The object image annotation problem is basically a classification problem, and there are many different modeling approaches to its solution. These approaches fall into two main categories: generative and discriminative. An ideal classifier should combine these two complementary approaches. In this paper, we present a method that achieves this combination by using the discriminative power of neural networks and the generative nature of Bayesian networks. Evaluation of the proposed method on three typical image databases has shown some success in automatic image annotation.
FUZZY IMAGE SEGMENTATION USING VALIDITY INDEXES CORRELATIONijcsit
This paper introduces an algorithm for image segmentation using a clustering technique based on the fuzzy c-means algorithm (FCM), executed iteratively with different numbers of clusters. Simultaneously, five validity indexes are calculated and their information is correlated to determine the optimal number of clusters for segmenting an image; results and simulations are shown in the paper.
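The core loop of such a method — running fuzzy c-means for a candidate cluster count and obtaining memberships to score — can be sketched with a minimal FCM on 1-D data such as grey levels (a hypothetical simplification; the paper's five validity indexes are not reproduced here):

```python
import numpy as np

def fcm(data, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means on 1-D data. Returns cluster centres and
    the fuzzy membership matrix u (clusters x points)."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(data)))
    u /= u.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        um = u ** m                         # fuzzified memberships
        centres = (um @ data) / um.sum(axis=1)
        dist = np.abs(data[None, :] - centres[:, None]) + 1e-9
        u = 1.0 / (dist ** (2 / (m - 1)))   # standard FCM membership update
        u /= u.sum(axis=0)                  # renormalise memberships
    return centres, u
```

A validity-index correlation scheme like the paper's would call `fcm` for each candidate `c` and score the resulting memberships before picking the best cluster count.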
A Review Paper on Stereo Vision Based Depth EstimationIJSRD
Stereo vision is a challenging problem and a wide research topic in computer vision. It has attracted much attention because it is a cost-efficient alternative to expensive sensors. Stereo vision has great importance in many fields and applications in today's world, including robotics, 3-D scanning, 3-D reconstruction, driver assistance systems, forensics and 3-D tracking. The main challenge of stereo vision is to generate an accurate disparity map. Stereo vision algorithms usually perform four steps: first, matching cost computation; second, cost aggregation; third, disparity computation or optimization; and fourth, disparity refinement. Stereo matching problems are also discussed. A large number of algorithms have been developed for stereo vision, but characterization of their performance has received less attention. This paper gives a brief overview of existing stereo vision algorithms. After evaluating the papers, we find that the focus has been on cost aggregation and multi-step refinement. Segment-based methods have also attracted attention due to their good performance, and using an improved filter for cost aggregation in stereo matching achieves better results.
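The four steps listed above can be sketched with the simplest local method — per-pixel absolute-difference costs, box-window aggregation and winner-take-all disparity selection (a minimal illustration, not any particular surveyed algorithm; the refinement step is omitted):

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=1):
    """Winner-take-all disparity from SAD costs, aggregated over a
    (2*win+1)-wide horizontal window. left/right: 2-D grey images."""
    h, w = left.shape
    costs = np.full((max_disp + 1, h, w), np.inf)
    kernel = np.ones(2 * win + 1)
    for d in range(max_disp + 1):
        # step 1: matching cost — absolute difference at disparity d
        diff = np.abs(left[:, d:] - right[:, :w - d])
        # step 2: aggregate costs over a horizontal window
        agg = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, 'same'), 1, diff)
        costs[d, :, d:] = agg
    # step 3: disparity computation by winner-take-all
    return np.argmin(costs, axis=0)
```

Real algorithms replace the box window with edge-preserving filters and add a refinement pass; the survey's observation is that most recent gains come from those two stages.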
This paper proposes a facial expression recognition approach based on the Gabor wavelet transform. A Gabor wavelet filter is first used as a pre-processing stage to extract the feature vector representation. The dimensionality of the feature vector is reduced using Principal Component Analysis and Local Binary Pattern (LBP) algorithms. Experiments were carried out on the Japanese Female Facial Expression (JAFFE) database. In all experiments conducted on the JAFFE database, the results reveal that GW+LBP outperformed the other approaches in this paper, with an average recognition rate of 90% under the same experimental setting.
The document discusses image representation and feature extraction techniques. It describes how representation makes image information more accessible for computer interpretation using either boundaries or pixel regions. Feature extraction quantifies these representations by extracting descriptors like geometric properties, statistical moments, and textures. Desirable properties for descriptors include being invariant to transformations, compact, robust to noise, and having low complexity. Various boundary and regional descriptors are defined, such as chain codes, shape numbers, and moments.
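Chain codes and shape numbers, two of the boundary descriptors mentioned above, are simple enough to sketch (a minimal example; the 8-direction convention follows the usual Freeman coding, direction 0 = east, counted counter-clockwise):

```python
# 8-direction Freeman chain code for a boundary given as an ordered
# list of (row, col) points.
STEPS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
         (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Code each step between consecutive boundary points."""
    code = []
    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:]):
        code.append(STEPS[(r1 - r0, c1 - c0)])
    return code

def shape_number(code):
    """First difference of the chain code, rotated to its minimum —
    a rotation-invariant descriptor (the 'shape number')."""
    diff = [(b - a) % 8 for a, b in zip(code, code[1:] + code[:1])]
    rotations = [diff[i:] + diff[:i] for i in range(len(diff))]
    return min(rotations)
```

The first difference makes the descriptor independent of the starting direction, and taking the minimal rotation makes it independent of the starting point — the two invariances the text asks of good descriptors.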
Extraction of texture features by using gabor filter in wheat crop disease de...eSAT Journals
This document discusses a method for detecting diseases in wheat crops using image processing and artificial neural networks. It involves taking digital images of wheat crop leaves and preprocessing the images by applying Gaussian and median filters to reduce noise. The images are then segmented using CIELAB color space. Texture features like area, perimeter, contrast, and energy are extracted from the images using Gabor filters. These features are then fed into an artificial neural network classifier to identify the type of disease present in the wheat crop. The method aims to help farmers more quickly and accurately detect diseases so they can better manage their crops and increase agricultural productivity.
Fusion Based Gaussian noise Removal in the Images using Curvelets and Wavelet...CSCJournals
This document presents a fusion-based method for removing Gaussian noise from images using curvelets and wavelets with a Gaussian filter. The proposed method aims to address artifacts that appear when using curvelets alone. It first applies Gaussian filtering, wavelet denoising, and curvelet denoising separately. It then fuses the results of these three approaches to obtain a better denoised image with fewer artifacts. The method is tested on various standard test images and medical images corrupted with white Gaussian noise. Results are evaluated using peak signal-to-noise ratio and weighted peak signal-to-noise ratio, which accounts for human visual sensitivity.
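The fusion step — combining the Gaussian-filtered, wavelet-denoised and curvelet-denoised outputs into one image — could in its simplest form be a pixel-wise weighted average (an assumption for illustration; the paper's exact fusion rule is not specified here):

```python
import numpy as np

def fuse_denoised(*denoised, weights=None):
    """Pixel-wise weighted fusion of independently denoised images.
    Equal weights are assumed when none are given."""
    stack = np.stack(denoised).astype(float)
    if weights is None:
        weights = np.ones(len(denoised)) / len(denoised)
    # weighted sum over the first (method) axis
    return np.tensordot(np.asarray(weights), stack, axes=1)
```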
Segmentation and recognition of handwritten digit numeral string using a mult...ijfcstjournal
In this paper, the use of a Multi-Layer Perceptron (MLP) neural network model is proposed for recognizing unconstrained offline handwritten numeral strings. The numeral strings are segmented and isolated numerals are obtained using a connected component labeling (CCL) algorithm. The structural part of the models is modeled with a multilayer perceptron neural network. The paper also presents a new technique to remove slope and slant from handwritten numeral strings, normalize the size of text images, and classify them with supervised learning methods. Experimental results on a database of 102 numeral string patterns written by 3 different people show a recognition rate of 99.7% on the independent digits contained in the numeral strings, including both skewed and slanted data.
This document summarizes a research paper that proposes recognizing handwritten Odia (an Indian language) numerals using a single layer perceptron neural network. It first extracts gradient and curvature features from images of handwritten digits. These features are used to train a single layer perceptron classifier. The system achieves 85% accuracy on a dataset of 100 handwritten digit patterns written by 100 people. It aims to provide an efficient way to recognize Odia numerals using a non-linear classifier with reduced complexity compared to other methods.
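The classifier itself, a single-layer perceptron, can be sketched in a few lines (trained here on toy 2-D data; the paper's gradient and curvature features are not reproduced):

```python
# Classic single-layer perceptron: update the weights only on
# misclassified samples until the data are separated.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):       # y in {-1, +1}
            act = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * act <= 0:                    # misclassified: update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
```

For the ten Odia digit classes, ten such units would be trained one-versus-rest on the extracted feature vectors.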
Stereo Vision Distance Estimation Employing Canny Edge Detector with Interpol...ZaidHussein6
This document summarizes a research paper that proposes a stereo vision algorithm called the Canny Block Matching Algorithm (CBMA) to estimate distance from stereo images. CBMA uses the Canny edge detector to extract edges from images and block matching with Sum of Absolute Difference (SAD) to determine disparity maps and reduce processing time. The algorithm was tested on stereo image pairs and achieved an error reduction of about 2% and processing time reduction compared to other methods. Interpolation techniques including bilinear, 1st order polynomial and 2nd order polynomial were also evaluated to enhance the output images and further reduce errors.
Gait Based Person Recognition Using Partial Least Squares Selection Scheme ijcisjournal
The document summarizes a research paper on gait-based person recognition using partial least squares selection. It presents an Arbitrary View Transformation Model (AVTM) that uses gait energy images and partial least squares (PLS) feature selection to improve gait recognition accuracy under varying viewing angles, clothing, and other conditions. The proposed AVTM PLS method is evaluated on the CASIA gait database and shown to achieve higher recognition rates compared to other existing methods, especially when there are changes in viewing angle, clothing, or whether the person is carrying something. Tables of results demonstrate the proposed method outperforms alternatives across different test conditions and ranges of gallery and probe viewing angles.
This document summarizes a research paper on fingerprint verification using steerable filters. It discusses how steerable filters can be used to extract texture features from fingerprints, without requiring minutiae extraction. The key points are:
1) A set of steerable filters at different orientations are applied to the fingerprint image to extract texture features.
2) The fingerprint image is divided into blocks and the mean response of each block for each filter provides the feature vector.
3) Experiments on two fingerprint databases achieved a genuine acceptance rate of 94% for verification.
4) The method extracts features directly from the image, without pre-processing steps required for minutiae-based matching, and with fewer computations.
This document summarizes a research paper on fingerprint verification using steerable filters. It discusses how existing fingerprint recognition systems rely on minutiae matching but have limitations related to image quality and minutiae extraction. The paper proposes using steerable filters to extract texture features from fingerprints. Steerable filters can selectively detect texture orientations and frequencies. The method applies steerable filters at different orientations to a fingerprint image to extract features, divides the image into blocks, and computes mean values to form a feature matrix for classification. Experimental results on two databases achieved a genuine acceptance rate of 94% for verification.
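The block-mean feature extraction both summaries describe — divide the filtered image into blocks and take the mean response per block, once per filter orientation — can be sketched as follows (block size and names are assumptions):

```python
import numpy as np

def block_mean_features(response, block=8):
    """Mean filter response per non-overlapping block: one row of the
    texture feature matrix, computed per filter orientation.
    response: 2-D filtered image with sides divisible by `block`."""
    h, w = response.shape
    blocks = response.reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3)).ravel()
```

Concatenating these vectors across all filter orientations gives the feature matrix used for matching, with no minutiae extraction required.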
Segmentation of medical images using metric topology – a region growing approachIjrdt Journal
A metric topological approach to region-growing segmentation is presented in this article. Region-growing techniques have gained significant importance in medical image processing for fine segregation of the detected tumor region in an image. Conventional algorithms concentrated on segmentation at a coarser level and failed to produce enough evidence for their validity. This article proposes a novel technique based on a metric topological neighbourhood, and introduces entropy as a new objective measure alongside the traditional validity measures of accuracy, PSNR and MSE. This measure shows that the amount of information lost after segmentation is greatly reduced, which elucidates the effectiveness of the algorithm. The algorithm is tested against well-known benchmark ground-truth images in parallel with the proposed region-growing segmented images, and the results validate its effectiveness.
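The underlying region-growing primitive (not the paper's metric topological variant) can be sketched as a 4-connected flood fill that absorbs neighbours within an intensity tolerance of the seed:

```python
from collections import deque

def region_grow(img, seed, tol):
    """4-connected region growing from `seed`: absorb neighbours whose
    intensity is within `tol` of the seed value. img: list of rows."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

The paper's contribution replaces this fixed 4-neighbourhood with a metric topological neighbourhood and scores the result with entropy in addition to accuracy, PSNR and MSE.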
Improvement of the recognition rate by random forestYoussef Rachidi
In this paper, we introduce a system for automatic character recognition based on the random forest method, applied to unconstrained pictures taken with mobile-phone terminals. After some preprocessing of the picture, the text is segmented into lines and then into characters. In the feature-extraction stage, the input data are represented as a vector of primitives of the zoning, diagonal, horizontal and Zernike-moment types. These features are linked to pixel densities and are extracted from binary pictures. In the classification stage, we examine four classification methods with two different classifier types, namely the multi-layer perceptron (MLP) and the random forest. After verification tests, the learning and recognition system based on the random forest showed good performance on a basis of 100 picture models.
FINGERPRINT MATCHING USING HYBRID SHAPE AND ORIENTATION DESCRIPTOR -AN IMPROV...IJCI JOURNAL
Fingerprint recognition is a promising factor for biometric identification and authentication. Fingerprints are broadly used for personal identification due to their feasibility, distinctiveness, permanence, accuracy and acceptability. This paper proposes a way to improve the Equal Error Rate (EER) of fingerprint matching techniques in the domain of the hybrid shape-and-orientation descriptor. This matching domain is popular for its capability of filtering out false and spurious minutiae pairings. The EER is calculated from the FMR and FNMR to check the performance of the proposed technique.
Geoid height determination is one of the major problems of geodesy, as the use of satellite techniques in geodesy keeps increasing. Geoid heights can be determined by different methods according to the available data. Soft-computing methods such as fuzzy logic and neural networks have become so popular that they are used to solve many engineering problems. Fuzzy logic theory and later developments in uncertainty assessment have enabled us to develop more precise models for our requirements. This study examines how to construct the best fuzzy model. For this purpose, three different data sets were taken and two kinds of fuzzy model (two inputs, one output; and three inputs, one output) were formed for the calculation of geoid heights in Istanbul (Turkey). The results of these fuzzy models were compared with geoid heights obtained by GPS/levelling, and the fuzzy approximation models were tested on the test points.
Efficient 3D stereo vision stabilization for multi-camera viewpointsjournalBEEI
In this paper, an algorithm is developed in 3D stereo vision to improve the image stabilization process for multi-camera viewpoints. Accurate unique matching key-points are found using the Harris-Laplace corner detection method under different photometric changes and geometric transformations of the images. The connectivity of correct matching pairs is then improved by minimizing the global error with a spanning-tree algorithm, which helps stabilize randomly positioned camera viewpoints in linear order. With our method, the unique matching key-points are calculated only once, and the calculated planar transformation is then applied for real-time video rendering. The proposed algorithm can process more than 200 camera viewpoints within two seconds.
The Gabor filter is a powerful way to enhance biometric images such as fingerprint images in order to extract correct features from them. The Gabor filter is also used to extract features directly, as in iris images, and sometimes for texture analysis. In fingerprint images, the even-symmetric Gabor filter, a contextual (multi-resolution) filter, is used to enhance the fingerprint image by filling small gaps (a low-pass effect) along the ridge direction (black regions) and by increasing the discrimination between ridge and valley (black and white regions) in the direction orthogonal to the ridge. The proposed method applies the Gabor filter to a fingerprint image that has been translated into a binary image after some simple enhancement steps, to partially overcome the time-consuming nature of the Gabor filter.
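An even-symmetric Gabor kernel of the kind described — a cosine wave along the ridge orientation under a Gaussian envelope — can be generated as follows (a common formulation with assumed parameter names, not necessarily the paper's exact one):

```python
import numpy as np

def even_gabor_kernel(size, theta, freq, sigma):
    """Even-symmetric Gabor kernel: cosine of spatial frequency `freq`
    along orientation `theta`, under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotate into ridge frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gauss = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return gauss * np.cos(2 * np.pi * freq * xr)
```

In fingerprint enhancement, `theta` is taken from the local orientation field and `freq` from the local ridge frequency, so each block is convolved with a kernel tuned to its own ridge flow.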
SHORT LISTING LIKELY IMAGES USING PROPOSED MODIFIED-SIFT TOGETHER WITH CONVEN...ijfcstjournal
The paper proposes the modified-SIFT algorithm, a modified form of the scale-invariant feature transform. The modification consists of considering successive groups of 8 rows of pixels along the height of the image, which are used to construct 8-bin histograms for magnitude and orientation individually. As a result, the number of feature descriptors is significantly lower (by 95%) than in the standard SIFT approach. Fewer feature descriptors lead to reduced accuracy; the reduction is quite drastic when searching for a single (rank-1) image match, but accuracy improves if a band of likely images (say, within a 10% tolerance) is returned. The paper therefore proposes a two-stage approach: first, modified-SIFT is used to obtain a shortlisted band of likely images; subsequently, SIFT is applied within this band to find a perfect match. This process may appear tedious, but it provides a significant reduction in search time compared to applying SIFT to the entire database, and the minor reduction in accuracy is offset by the considerable time gained when searching a large database. The modified-SIFT algorithm, when used in conjunction with a face-cropping algorithm, can also be used to find a match against disguised images.
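The row-group histogram idea can be sketched as follows (a guess at the construction from the description above; the gradient operator and bin ranges are assumptions):

```python
import numpy as np

def row_group_histograms(img, group=8, bins=8):
    """For each successive group of `group` rows, build `bins`-bin
    histograms of gradient magnitude and orientation — one compact
    descriptor pair per row group instead of per keypoint."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                 # orientation in [-pi, pi]
    feats = []
    for r in range(0, img.shape[0] - group + 1, group):
        m = mag[r:r + group].ravel()
        a = ang[r:r + group].ravel()
        mh, _ = np.histogram(m, bins=bins)
        ah, _ = np.histogram(a, bins=bins, range=(-np.pi, np.pi))
        feats.append((mh, ah))
    return feats
```

An image of height H yields only H/8 descriptor pairs, which is what makes the first-stage shortlist search so much cheaper than full SIFT matching.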
Efficient fingerprint image enhancement algorithm based on gabor filtereSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
SVM Based Identification of Psychological Personality Using Handwritten Text IJERA Editor
This document describes a study that uses handwriting analysis to identify psychological personality traits using support vector machines (SVM). Handwriting samples were collected and preprocessed by removing noise and segmenting lines. Features like slope, shape, and edge histograms were extracted. SVM with radial basis function kernel was used for classification. Analysis of single lines achieved 95% accuracy while multiple lines achieved 91% accuracy in identifying traits like cheerfulness and weariness. The methodology was also applied to analyze handwriting of celebrities and compare the results to analyses by graphologists. The study aims to automate handwriting analysis using machine learning techniques.
SLIC Superpixel Based Self Organizing Maps Algorithm for Segmentation of Micr...IJAAS Team
Microarray technology enables the simultaneous monitoring of thousands of genes in parallel. Based on these measurements, microarray technology has proven powerful in gene expression profiling for discovering new types of diseases and for predicting the type of a disease. Gridding, intensity extraction, enhancement and segmentation are important steps in microarray image analysis. This paper presents a simple linear iterative clustering (SLIC) based self-organizing map (SOM) algorithm for segmentation of microarray images. Clusters of pixels which share similar features are called superpixels; they can be used as mid-level units to decrease the computational cost in many vision applications. The proposed algorithm uses superpixels instead of pixels as clustering objects. Qualitative and quantitative analysis shows that the proposed method produces better segmentation quality than k-means, fuzzy c-means and self-organizing map clustering.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Enhanced Latent Fingerprint Segmentation through Dictionary Based ApproachEditor IJMTER
The accuracy of latent fingerprint matching is significantly lower than that of rolled and plain fingerprint matching due to background noise, poor ridge quality and overlapping structured noise in latent images. This paper proposes a dictionary-based algorithm for automatic segmentation and enhancement, towards the goal of a "lights-out" latent identification system. A total-variation decomposition model with L1 fidelity regularization removes background noise from the latent fingerprint image. A coarse-to-fine strategy is used to improve robustness and accuracy, and it improves the computational efficiency of the algorithm.
Similar to FINGERPRINT CLASSIFICATION BASED ON ORIENTATION FIELD (20)
DESIGN OF AN EMBEDDED SYSTEM: BEDSIDE PATIENT MONITORijesajournal
Embedded systems, ranging from tiny microcontroller-based sensor devices to mobile smart phones, have a vast variety of applications. However, the literature contains no up-to-date system-level design of embedded hardware and software; academic publications mainly focus on improving specific features of embedded software and hardware, and on embedded system designs for specific applications. Moreover, commercially available embedded systems are not disclosed to researchers in the literature. Therefore, in this paper we first present how to design a state-of-the-art embedded system using emerging hardware and software technologies. Bedside patient monitor devices used in intensive care units of hospitals are also classified as embedded systems, and they run sophisticated software and algorithms for better diagnosis of diseases. We reveal the architecture of our commercially available bedside patient monitor to provide a design example of embedded systems relating to emerging technologies.
PIP-MPU: FORMAL VERIFICATION OF AN MPU-BASED SEPARATION KERNEL FOR CONSTRAINED...ijesajournal
Pip-MPU is a minimalist separation kernel for constrained devices (scarce memory and power resources). In this work, we demonstrate high assurance of Pip-MPU's isolation property through formal verification. Pip-MPU offers user-defined, on-demand, multiple isolation levels guarded by the Memory Protection Unit (MPU). Pip-MPU derives from the Pip protokernel, with a full code refactoring to adapt to the constrained environment, and targets equivalent security properties. The proofs verify that the memory blocks loaded in the MPU adhere to the global partition tree model. We provide the basis of the MPU formalisation and a demonstration of the formal verification strategy on two representative kernel services. The publicly released proofs have been implemented and checked using the Coq Proof Assistant for three kernel services, representing around 10,000 lines of proof. To our knowledge, this is the first formal verification of an MPU-based separation kernel. The verification process helped discover a critical isolation-related bug.
International Journal of Embedded Systems and Applications (IJESA)ijesajournal
International Journal of Embedded Systems and Applications (IJESA) is a quarterly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Embedded Systems and applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding Embedded Systems and establishing new collaborations in these areas.
Authors are solicited to contribute to the journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of Embedded Systems & applications.
Call for papers -15th International Conference on Wireless & Mobile Network (...ijesajournal
15th International Conference on Wireless & Mobile Network (WiMo 2023) is dedicated to addressing the challenges in the areas of wireless & mobile networks. The Conference looks for significant contributions to the Wireless and Mobile computing in theoretical and practical aspects. The Wireless and Mobile computing domain emerges from the integration among personal computing, networks, communication technologies, cellular technology, and the Internet Technology. The modern applications are emerging in the area of mobile ad hoc networks and sensor networks. This Conference is intended to cover contributions in both the design and analysis in the context of mobile, wireless, ad-hoc, and sensor networks. The goal of this Conference is to bring together researchers and practitioners from academia and industry to focus on advanced wireless and Mobile computing concepts and establishing new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to.
Call for Papers -International Conference on NLP & Signal (NLPSIG 2023)ijesajournal
Scope & Topics
International Conference on NLP & Signal (NLPSIG 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Signal and Natural Language Processing (NLP).
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to:
Topics of interest include, but are not limited to, the following
Chunking/Shallow Parsing
Dialogue and Interactive Systems
Deep learning and NLP
Discourse and Pragmatics
Information Extraction, Retrieval, Text Mining
Interpretability and Analysis of Models for NLP
Language Grounding to Vision, Robotics and Beyond
Lexical Semantics
Linguistic Resources
Machine Learning for NLP
Machine Translation
NLP and Signal Processing
NLP Applications
Ontology
Paraphrasing/Entailment/Generation
Parsing/Grammatical Formalisms
Phonology, Morphology
POS tagging
Question Answering
Resources and Evaluation
Semantic Processing
Sentiment Analysis, Stylistic Analysis, and Argument Mining
Speech and Multimodality
Speech Recognition and Synthesis
Spoken Language Processing
Statistical and Knowledge based methods
Summarization
Theory and Formalism in NLP
Signal Processing & NLP
Computer Vision, Image Processing & NLP
NLP, AI & Signal
Paper Submission
Authors are invited to submit papers through the conference Submission System by May 06, 2023. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this conference. The proceedings of the conference will be published by International Journal on Cybernetics & Informatics (IJCI) (Confirmed).
Selected papers from NLPSIG 2023, after further revisions, will be published in the special issue of the following journals.
International Journal on Natural Language Computing (IJNLC)
International Journal of Ubiquitous Computing (IJU)
International Journal of Data Mining & Knowledge Management Process (IJDKP)
Signal & Image Processing : An International Journal (SIPIJ)
International Journal of Ambient Systems and Applications (IJASA)
International Journal of Grid Computing & Applications (IJGCA)
Important Dates
Submission Deadline : May 06, 2023
Authors Notification : May 25, 2023
Final Manuscript Due : June 08, 2023
International Conference on NLP & Signal (NLPSIG 2023)ijesajournal
International Conference on NLP & Signal (NLPSIG 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Signal and Natural Language Processing (NLP).
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to:
11th International Conference on Software Engineering & Trends (SE 2023)ijesajournal
11th International Conference on Software Engineering & Trends (SE 2023)
May 27 ~ 28, 2023, Vancouver, Canada
https://acsit2023.org/se/index
Scope & Topics
11th International Conference on Software Engineering & Trends (SE 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Software Engineering. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Modern software engineering concepts and establishing new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of software engineering & applications. Topics of interest include, but are not limited to, the following.
Topics of interest include, but are not limited to, the following
The Software Process
Software Engineering Practice
Web Engineering
Quality Management
Managing Software Projects
Advanced Topics in Software Engineering
Multimedia and Visual Software Engineering
Software Maintenance and Testing
Languages and Formal Methods
Web-based Education Systems and Learning Applications
Software Engineering Decision Making
Knowledge-based Systems and Formal Methods
Search Engines and Information Retrieval
Paper Submission
Authors are invited to submit papers through the conference Submission System by April 08, 2023. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this conference. The proceedings of the conference will be published by Computer Science Conference Proceedings (H index 35) in Computer Science & Information Technology (CS & IT) series (Confirmed).
Selected papers from SE 2023, after further revisions, will be published in the special issue of the following journals.
The International Journal of Software Engineering & Applications (IJSEA) -ERA indexed
International Journal of Computer Science, Engineering and Applications (IJCSEA)
Important Dates
Submission Deadline : April 08, 2023
Authors Notification : April 29, 2023
Final Manuscript Due : May 06, 2023
11th International Conference on Software Engineering & Trends (SE 2023)ijesajournal
11th International Conference on Software Engineering & Trends (SE 2023) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of Software Engineering. The goal of this conference is to bring together researchers and practitioners from academia and industry to focus on understanding Modern software engineering concepts and establishing new collaborations in these areas.
Authors are solicited to contribute to the conference by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the areas of software engineering & applications. Topics of interest include, but are not limited to, the following.
PERFORMING AN EXPERIMENTAL PLATFORM TO OPTIMIZE DATA MULTIPLEXINGijesajournal
This article is based on preliminary work on the OSI model management layers to optimized industrial
wired data transfer on low data rate wireless technology. Our previous contribution deal with the
development of a demonstrator providing CAN bus transfer frames (1Mbps) on a low rate wireless channel
provided by Zigbee technology. In order to be compatible with all the other industrial protocols, we
describe in this paper our contribution to design an innovative Wireless Device (WD) and a software tool,
which will aim to determine the best architecture (hardware/software) and wireless technology to be used
taking in account of the wired protocol requirements. To validate the proper functioning of this WD, we
will develop an experimental platform to test different strategies provided by our software tool. We can
consequently prove which is the best configuration (hardware/software) compared to the others by the
inclusion (inputs) of the required parameters of the wired protocol (load, binary rate, acknowledge
timeout) and the analysis of the WD architecture characteristics proposed (outputs) as the delay introduced
by system, buffer size needed, CPU speed, power consumption, meeting the input requirement. It will be
important to know whether gain comes from a hardware strategy with hardware accelerator e.g or a
software strategy with a more perf
GENERIC SOPC PLATFORM FOR VIDEO INTERACTIVE SYSTEM WITH MPMC CONTROLLERijesajournal
Today, a significant number of embedded systems focus on multimedia applications with almost insatiable demand for low-cost, high performance, and low power hardware cosumption. In this paper, we present a re-configurable and generic hardware platform for image and video processing. The proposed platform uses the benefits offered by the Field Programmable Gate Array (FPGA) to attain this goal. In this context,
a prototype system is developed based on the Xilinx Virtex-5 FPGA with the integration of embedded processors, embedded memory, DDR, interface technologies, Digital Clock Managers (DCM) and MPMC.
The MPMC is an essential component for design performance tuning and real time video processing. We demonstrate the importance role of this interface in multi video applications. In fact, to successful the
deployment of DRAM it is mandatory to use a flexible and scalable interface. Our system introduces diverse modules, such as cut video detection, video zoom-in and out. This provides the utility of using this architecture as a universal video processing platform according to different application requirements. This platform facilitates the development of video and image processing applications.
This document summarizes the design challenges of an inverting buck-boost DC-DC converter that generates a negative output voltage from a positive input voltage. Key challenges discussed include the transition between continuous and discontinuous conduction modes, handling negative feedback and protection circuits, and improving transient load response. The proposed converter design addresses these challenges through its control topology, use of external components, and detection of conduction modes.
A Case Study: Task Scheduling Methodologies for High Speed Computing Systems ijesajournal
High Speed computing meets ever increasing real-time computational demands through the leveraging of
flexibility and parallelism. The flexibility is achieved when computing platform designed with
heterogeneous resources to support multifarious tasks of an application where as task scheduling brings
parallel processing. The efficient task scheduling is critical to obtain optimized performance in
heterogeneous computing Systems (HCS). In this paper, we brought a review of various application
scheduling models which provide parallelism for homogeneous and heterogeneous computing systems. In
this paper, we made a review of various scheduling methodologies targeted to high speed computing
systems and also prepared summary chart. The comparative study of scheduling methodologies for high
speed computing systems has been carried out based on the attributes of platform & application as well.
The attributes are execution time, nature of task, task handling capability, type of host & computing
platform. Finally a summary chart has been prepared and it demonstrates that the need of developing
scheduling methodologies for Heterogeneous Reconfigurable Computing Systems (HRCS) which is an
emerging high speed computing platform for real time applications.
A NOVEL METHODOLOGY FOR TASK DISTRIBUTION IN HETEROGENEOUS RECONFIGURABLE COM...ijesajournal
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems
(HRCS) where Reconfigurable Hardware i.e. Field Programmable Gate Array (FPGA) and soft core
processors acts as computing elements. So, an efficient task distribution methodology is essential for
obtaining high performance in modern embedded systems. In this paper, we present a novel methodology
for task distribution called Minimum Laxity First (MLF) algorithm that takes the advantage of runtime
reconfiguration of FPGA in order to effectively utilize the available resources. The MLF algorithm is a list
based dynamic scheduling algorithm that uses attributes of tasks as well computing resources as cost
function to distribute the tasks of an application to HRCS. In this paper, an on chip HRCS computing
platform is configured on Virtex 5 FPGA using Xilinx EDK. The real time applications JPEG, OFDM
transmitters are represented as task graph and then the task are distributed, statically as well dynamically,
to the platform HRCS in order to evaluate the performance of the designed task distribution model. Finally,
the performance of MLF algorithm is compared with existing static scheduling algorithms. The comparison
shows that the MLF algorithm outperforms in terms of efficient utilization of resources on chip and also
speedup an application execution.
Payment industry is largely aligned in their desire to create embedded payment systems ready for the
modern digital age. The trend to embed payments into a software platform is often regarded as first step
towards a broader trend of embedded finance based on digital representation of fiat currencies. Since it
became clear to our research team that there are no technologies and protocols that are protected against
attacks of quantum computing, and that enable automatic embedded payments, online or offline with no
fear of counterfeit, P2P or device-to-device to be made in real time without intermediaries, in any
denomination, even continuous payments per time or service, while preserving the privacy of all parties,
without enabling illicit activities, we decided to utilize the Generic Innovation Engine [1] that is based on
the Artificial Intelligence Assistance Innovation acceleration methodologies and tools in order to boost the
progress of innovation of the necessary solutions. These methodologies accelerate innovation across the
board. It proposes a framework for natural and artificial intelligence collaboration in pursuit of an
innovative (R&D) objective The outcome of deploying these Artificial Innovation Assistant (AIA)
methodologies was tens of patents that yield solutions, that a few of them are described in this paper. We
argue that a promising avenue for automated embedded payment systems to fulfil people’s desire for
privacy when conducting payments, and national security agencies demand for quantum-safe security,
could be based on DeFi and digital currencies platforms that does not suffer from flaws of DLT-based
solutions, while introducing real advantages, in all aspects, including being quantum-resilient, enabling
users to decide with whom, if at all, to share information, identity, transactions details, etc., all without
trade-offs, complying with AML measures, and accommodating the potential for high transaction volumes.
It is not legacy bank accounts, and it is not peer-dependent, nor a self-organizing network.
A NOVEL METHODOLOGY FOR TASK DISTRIBUTION IN HETEROGENEOUS RECONFIGURABLE COM...ijesajournal
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems
(HRCS) where Reconfigurable Hardware i.e. Field Programmable Gate Array (FPGA) and soft core
processors acts as computing elements. So, an efficient task distribution methodology is essential for
obtaining high performance in modern embedded systems. In this paper, we present a novel methodology
for task distribution called Minimum Laxity First (MLF) algorithm that takes the advantage of runtime
reconfiguration of FPGA in order to effectively utilize the available resources. The MLF algorithm is a list
based dynamic scheduling algorithm that uses attributes of tasks as well computing resources as cost
function to distribute the tasks of an application to HRCS. In this paper, an on chip HRCS computing
platform is configured on Virtex 5 FPGA using Xilinx EDK. The real time applications JPEG, OFDM
transmitters are represented as task graph and then the task are distributed, statically as well dynamically,
to the platform HRCS in order to evaluate the performance of the designed task distribution model. Finally,
the performance of MLF algorithm is compared with existing static scheduling algorithms. The comparison
shows that the MLF algorithm outperforms in terms of efficient utilization of resources on chip and also
speedup an application execution.
2 nd International Conference on Computing and Information Technology ijesajournal
2
nd International Conference on Computing and Information Technology Trends
(CCITT 2023) will provide an excellent international forum for sharing knowledge and
results in theory, methodology and applications of Computing and Information Technology
Trends. The Conference looks for significant contributions to all major fields of the
Computer Science, Compute Engineering, Information Technology and Trends in theoretical
and practical aspects.
A NOVEL METHODOLOGY FOR TASK DISTRIBUTION IN HETEROGENEOUS RECONFIGURABLE COM...ijesajournal
Modern embedded systems are being modeled as Heterogeneous Reconfigurable Computing Systems
(HRCS) where Reconfigurable Hardware i.e. Field Programmable Gate Array (FPGA) and soft core
processors acts as computing elements. So, an efficient task distribution methodology is essential for
obtaining high performance in modern embedded systems. In this paper, we present a novel methodology
for task distribution called Minimum Laxity First (MLF) algorithm that takes the advantage of runtime
reconfiguration of FPGA in order to effectively utilize the available resources. The MLF algorithm is a list
based dynamic scheduling algorithm that uses attributes of tasks as well computing resources as cost
function to distribute the tasks of an application to HRCS. In this paper, an on chip HRCS computing
platform is configured on Virtex 5 FPGA using Xilinx EDK. The real time applications JPEG, OFDM
transmitters are represented as task graph and then the task are distributed, statically as well dynamically,
to the platform HRCS in order to evaluate the performance of the designed task distribution model. Finally,
the performance of MLF algorithm is compared with existing static scheduling algorithms. The comparison
shows that the MLF algorithm outperforms in terms of efficient utilization of resources on chip and also
speedup an application execution.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Gas agency management system project report.pdfKamal Acharya
The project entitled "Gas Agency" is done to make the manual process easier by making it a computerized system for billing and maintaining stock. The Gas Agencies get the order request through phone calls or by personal from their customers and deliver the gas cylinders to their address based on their demand and previous delivery date. This process is made computerized and the customer's name, address and stock details are stored in a database. Based on this the billing for a customer is made simple and easier, since a customer order for gas can be accepted only after completing a certain period from the previous delivery. This can be calculated and billed easily through this. There are two types of delivery like domestic purpose use delivery and commercial purpose use delivery. The bill rate and capacity differs for both. This can be easily maintained and charged accordingly.
Embedded machine learning-based road conditions and driving behavior monitoringIJECEIAES
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Rainfall intensity duration frequency curve statistical analysis and modeling...bijceesjournal
Using data from 41 years in Patna’ India’ the study’s goal is to analyze the trends of how often it rains on a weekly, seasonal, and annual basis (1981−2020). First, utilizing the intensity-duration-frequency (IDF) curve and the relationship by statistically analyzing rainfall’ the historical rainfall data set for Patna’ India’ during a 41 year period (1981−2020), was evaluated for its quality. Changes in the hydrologic cycle as a result of increased greenhouse gas emissions are expected to induce variations in the intensity, length, and frequency of precipitation events. One strategy to lessen vulnerability is to quantify probable changes and adapt to them. Techniques such as log-normal, normal, and Gumbel are used (EV-I). Distributions were created with durations of 1, 2, 3, 6, and 24 h and return times of 2, 5, 10, 25, and 100 years. There were also mathematical correlations discovered between rainfall and recurrence interval.
Findings: Based on findings, the Gumbel approach produced the highest intensity values, whereas the other approaches produced values that were close to each other. The data indicates that 461.9 mm of rain fell during the monsoon season’s 301st week. However, it was found that the 29th week had the greatest average rainfall, 92.6 mm. With 952.6 mm on average, the monsoon season saw the highest rainfall. Calculations revealed that the yearly rainfall averaged 1171.1 mm. Using Weibull’s method, the study was subsequently expanded to examine rainfall distribution at different recurrence intervals of 2, 5, 10, and 25 years. Rainfall and recurrence interval mathematical correlations were also developed. Further regression analysis revealed that short wave irrigation, wind direction, wind speed, pressure, relative humidity, and temperature all had a substantial influence on rainfall.
Originality and value: The results of the rainfall IDF curves can provide useful information to policymakers in making appropriate decisions in managing and minimizing floods in the study area.
Software Engineering and Project Management - Software Testing + Agile Method...Prakhyath Rai
Software Testing: A Strategic Approach to Software Testing, Strategic Issues, Test Strategies for Conventional Software, Test Strategies for Object -Oriented Software, Validation Testing, System Testing, The Art of Debugging.
Agile Methodology: Before Agile – Waterfall, Agile Development.
Build the Next Generation of Apps with the Einstein 1 Platform.
Rejoignez Philippe Ozil pour une session de workshops qui vous guidera à travers les détails de la plateforme Einstein 1, l'importance des données pour la création d'applications d'intelligence artificielle et les différents outils et technologies que Salesforce propose pour vous apporter tous les bénéfices de l'IA.
Prediction of Electrical Energy Efficiency Using Information on Consumer's Ac...PriyankaKilaniya
Energy efficiency has been important since the latter part of the last century. The main object of this survey is to determine the energy efficiency knowledge among consumers. Two separate districts in Bangladesh are selected to conduct the survey on households and showrooms about the energy and seller also. The survey uses the data to find some regression equations from which it is easy to predict energy efficiency knowledge. The data is analyzed and calculated based on five important criteria. The initial target was to find some factors that help predict a person's energy efficiency knowledge. From the survey, it is found that the energy efficiency awareness among the people of our country is very low. Relationships between household energy use behaviors are estimated using a unique dataset of about 40 households and 20 showrooms in Bangladesh's Chapainawabganj and Bagerhat districts. Knowledge of energy consumption and energy efficiency technology options is found to be associated with household use of energy conservation practices. Household characteristics also influence household energy use behavior. Younger household cohorts are more likely to adopt energy-efficient technologies and energy conservation practices and place primary importance on energy saving for environmental reasons. Education also influences attitudes toward energy conservation in Bangladesh. Low-education households indicate they primarily save electricity for the environment while high-education households indicate they are motivated by environmental concerns.
Design and optimization of ion propulsion dronebjmsejournal
Electric propulsion technology is widely used in many kinds of vehicles in recent years, and aircrafts are no exception. Technically, UAVs are electrically propelled but tend to produce a significant amount of noise and vibrations. Ion propulsion technology for drones is a potential solution to this problem. Ion propulsion technology is proven to be feasible in the earth’s atmosphere. The study presented in this article shows the design of EHD thrusters and power supply for ion propulsion drones along with performance optimization of high-voltage power supply for endurance in earth’s atmosphere.
FINGERPRINT CLASSIFICATION BASED ON ORIENTATION FIELD

International Journal of Embedded Systems and Applications (IJESA), Vol 8, No. 4, December 2018
DOI: 10.5121/ijesa.2018.8403

Zahraa Hadi Khazaal and Safaa S. Mahdi
Department of Computer Engineering, Al Nahrain University, Baghdad, Iraq
ABSTRACT
This paper introduces an effective method of fingerprint classification based on discriminative features gathered from the orientation field. A nonlinear support vector machine (SVM) is adopted for the classification. The orientation field is estimated through a pixel-wise gradient-based method, and the percentage of directional block classes is computed. These percentages form a four-dimensional vector that serves as a good feature, which, combined with accurately detected singular points, classifies the fingerprint into one of five classes. The method shows high classification accuracy relative to other spatial-domain classifiers.
KEYWORDS
Orientation Field, Singular point, SVMs Classifier, Feature Vector.
1. INTRODUCTION
With the maturing of fingerprint identification systems, fingerprint databases have expanded; identification therefore becomes a problem, because comparing an input image against a large set of fingerprint images takes a long time, especially in real-time applications. Classification thus plays a vital role in increasing the speed and accuracy of the system [1] by reducing the effective size of the database through dividing it into classes. The SVM classifier provides an elegant solution to various problems such as classification, regression, face detection, and handwriting recognition. SVM can be considered a convenient classifier due to its direct determination of the decision boundary from the training data; this maximizes the margin between classes in feature space, which minimizes the generalization error, and the risk of overfitting is also lower in SVMs. Finally, SVM is able to learn from few training examples. There are many applications of fingerprints that increase the privacy of individuals, e.g., payments with an electronic wallet and electoral voting. Fingerprints are classified into five classes, named Whorl, Left Loop, Arch, Right Loop, and Tented Arch according to the Henry classification, as shown in Figure (1). The distribution of classes is uneven: the class probabilities are 0.279, 0.317, 0.338, 0.029, and 0.037 for the whorl, right loop, left loop, tented arch, and arch, respectively [2], as approximately depicted in Figure (2). A significant problem in classification is the similarity of structure and shape between images, especially in the whorl class, where prints sharing the same characteristic cover a large spread; this problem is called large intra-class variation, as shown in Figure (3). Classification algorithms can be divided into four types: (i) rule-based methods, which depend on the number and locations of singularities and the geometrical shape of the ridge lines [3]; (ii) syntactic approaches, in which the image is described by small groups of directional components in the orientation image and classification depends on a set of grammars; (iii) structural approaches, based on a relational graph model describing the features; and (iv) cognitive methods, for example neural networks, fuzzy systems, and support vector machines (SVMs), which rely on a feature vector from either the singularity region or the directional image and process it to obtain the final classification based on a pyramidal architecture [4].
Figure 1. Five samples of a fingerprint from the NIST-DB4 database: (a) Right Loop, (b) Whorl, (c) Left Loop, (e) Arch, (f) Tented Arch.
Figure 2. The distribution of the Henry classification [2].
Different classification techniques have been proposed, such as a hierarchical classifier based on K-Nearest Neighbors (KNN) and SVM for features extracted from the orientation field and complex filters [5]; a convolutional neural network used for classification [6]; and Fuzzy C-Means (FCM) with a Naive Bayes classifier demonstrated for classifying fingerprints into 4 classes [7].
In this paper, an efficient classification method with low time consumption and high accuracy is investigated; it is uncomplicated yet robust against noise, based on extracting a feature that consists of the directional block percentages of an image together with the locations and number of accurate singular points. We use SVM as the classifier and compare it with a KNN classifier.
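As an illustrative sketch of this comparison (not the paper's trained model or dataset), the classification stage can be set up with scikit-learn. The six-dimensional feature layout assumed here (four directional-block percentages plus core and delta counts), the hyperparameters, and the synthetic data are all assumptions:

```python
# Hypothetical sketch of the classification stage: a nonlinear (RBF) SVM
# versus KNN on synthetic feature vectors. The 6-dimensional feature
# layout (4 directional-block percentages + core and delta counts) and
# the hyperparameters are assumptions, not values taken from the paper.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 6))            # synthetic stand-in feature vectors
y = rng.integers(0, 5, size=200)    # labels for the 5 Henry classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)            # nonlinear SVM
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(svm.score(X_te, y_te), knn.score(X_te, y_te))
```

On real NIST-DB4 features these two accuracies would be the quantities compared in the experimental section; here the data is random, so both scores sit near chance level.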
[Figure 2 pie chart: Right Loop 31%, Left Loop 34%, Whorl 28%, Arch 4%, Tented Arch 3%]
The paper is organized as follows. Section 2 introduces the proposed method. Sections 3 and 4 describe the features to be extracted, Section 5 presents the classifier that was applied, and Section 6 displays the experimental results for two databases; finally, conclusions are drawn based on the results.
Figure 3. Three whorl images with different characteristics
2. THE PROPOSED METHOD
The basic idea of the proposed method is to extract a sufficient feature to improve time
performance and to provide high accuracy for fingerprint image classification in different quality,
and that makes recognition of fingerprint to become more easier in large database. Figure (4)
demonstrates the block diagram of the proposed method. Support vector machines used for
classification.
Figure 4. Proposed method block diagram: Input Image from Database → Orientation Field Estimation → (Directional Percentage Vector; Core and Delta Detection) → Classification Vector → SVM Classifier.
3. DIRECTIONAL FIELD ESTIMATION BY LEAST MEAN SQUARE ALGORITHM
A generic step in singular point determination is estimating the orientation field, which gives the direction and location of the ridges in the fingerprint image. A gradient-based method is used for calculating the orientation θ through the following steps [8]:
1- Compute the gradients $\partial_x(i,j)$ and $\partial_y(i,j)$ of each pixel in the fingerprint image along the horizontal and vertical directions using the Sobel operator, which consists of the 3×3 gradient filters:

$$s_x = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix} \qquad (1)$$

$$s_y = \begin{bmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{bmatrix} \qquad (2)$$
2- Calculate the orientation $\theta(i,j)$ of the $w \times w$ non-overlapping block centered at pixel $(i,j)$, which can be computed as:

$$V_x(i,j) = \sum_{u=i-\frac{W}{2}}^{i+\frac{W}{2}} \sum_{v=j-\frac{W}{2}}^{j+\frac{W}{2}} 2\,G_x(u,v)\,G_y(u,v) \qquad (3)$$

$$V_y(i,j) = \sum_{u=i-\frac{W}{2}}^{i+\frac{W}{2}} \sum_{v=j-\frac{W}{2}}^{j+\frac{W}{2}} \left(G_x^2(u,v) - G_y^2(u,v)\right) \qquad (4)$$

Accordingly,

$$\theta(i,j) = \frac{1}{2}\tan^{-1}\!\left(\frac{V_x}{V_y}\right) + \frac{\pi}{2} \qquad (5)$$
3- In order to remove broken ridges in the orientation field, which varies gradually in a local neighborhood but where non-genuine singular points appear due to noise, the orientation field is converted to a continuous vector field to improve accuracy, and a low-pass filter is then applied. The continuous vector field in its x and y components is given by:

$$\phi_x(i,j) = \cos(2\theta(i,j)) \qquad (6)$$

$$\phi_y(i,j) = \sin(2\theta(i,j)) \qquad (7)$$
4- Smooth the orientation with a low-pass filter as follows:

$$\phi'_x(i,j) = \sum_{u=-\frac{w_\phi}{2}}^{\frac{w_\phi}{2}} \sum_{v=-\frac{w_\phi}{2}}^{\frac{w_\phi}{2}} G(u,v)\,\phi_x(i-uw,\, j-vw) \quad (8)$$
$$\phi'_y(i,j) = \sum_{u=-\frac{w_\phi}{2}}^{\frac{w_\phi}{2}} \sum_{v=-\frac{w_\phi}{2}}^{\frac{w_\phi}{2}} G(u,v)\,\phi_y(i-uw,\, j-vw) \quad (9)$$
5- Estimate the smoothed orientation field centred at pixel $(i, j)$ to obtain a reliable directional field, expressed as:

$$\theta'(i,j) \approx \frac{1}{2}\tan^{-1}\frac{\phi'_y(i,j)}{\phi'_x(i,j)} \quad (10)$$

where $\theta' \in \{0, \pi/4, \pi/2, \pi\}$.
6- Based on Eq. (10), calculate the directional block percentage (DBP) for each approximated value of $\theta'$ over the range $0$, $\pi/4$, $\pi/2$ and $\pi$ by dividing $N_d$ by $N_T$, where $N_d$ is the number of blocks with orientation $\theta'$ and $N_T$ is the total number of fingerprint blocks in the orientation field [9]:

$$DP(\theta') = \frac{N_d}{N_T} \times 100\% \quad (11)$$
The percentages for all $\theta'$ are then combined to form the feature vector used later in classification. Figure (5) presents a sample orientation field of a fingerprint image.
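The six steps above can be sketched in Python (a hedged sketch, not the authors' MATLAB code): the block size `w`, the Sobel kernels of Eqs. (1)-(2) and the quantisation bins are assumptions, the smoothing of Eqs. (6)-(9) is omitted for brevity, and the fourth directional class is taken as 3π/4 since orientations lie in [0, π].

```python
import numpy as np

def orientation_field(img, w=16):
    """Block-wise ridge orientation via the gradient (least mean square)
    method of Eqs. (1)-(5). `w` is the block size (an assumed value)."""
    img = img.astype(float)
    # Sobel kernels of Eqs. (1)-(2).
    sy = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], float)
    sx = sy.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * sx).sum()
            gy[i, j] = (win * sy).sum()
    h, wd = img.shape
    theta = np.zeros((h // w, wd // w))
    for bi in range(h // w):
        for bj in range(wd // w):
            bx = gx[bi * w:(bi + 1) * w, bj * w:(bj + 1) * w]
            by = gy[bi * w:(bi + 1) * w, bj * w:(bj + 1) * w]
            vx = (2 * bx * by).sum()          # Eq. (3)
            vy = (bx ** 2 - by ** 2).sum()    # Eq. (4)
            theta[bi, bj] = 0.5 * np.arctan2(vx, vy) + np.pi / 2  # Eq. (5)
    return theta

def directional_block_percentage(theta):
    """Directional block percentage of Eq. (11): quantise each block
    orientation to the nearest of four directional classes and return the
    percentage of blocks per class. The fourth bin is 3*pi/4 here (an
    assumption), since orientations lie in [0, pi]."""
    bins = np.array([0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4])
    t = np.mod(theta, np.pi)
    idx = np.argmin(np.abs(t[..., None] - bins[None, None, :]), axis=-1)
    return np.array([(idx == d).sum() / idx.size * 100.0 for d in range(4)])
```

By construction the four percentages sum to 100, which is what makes them usable as a normalised feature vector.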
Figure 5. (a) Original Image (b) Orientation Field
4. MODIFIED SINGULAR POINT DETECTION
The Poincaré index algorithm is a well-known method for localizing singular points, which include core and delta points. The Poincaré index (PI) is computed around a closed curve traversed in the counterclockwise direction. Its value is the accumulated difference between each pair of consecutive orientations along the curve [10]:

$$poincare(i,j) = \frac{1}{2\pi}\sum_{k=0}^{N-1}\Delta(k) \quad (12)$$
$$\Delta(k) = \begin{cases} \delta(k) & \text{if } |\delta(k)| < \frac{\pi}{2} \\ \pi + \delta(k) & \text{if } \delta(k) \le -\frac{\pi}{2} \\ \pi - \delta(k) & \text{otherwise} \end{cases} \quad (13)$$

where

$$\delta(k) = \theta'\!\left(x_{(k+1)\bmod N},\, y_{(k+1)\bmod N}\right) - \theta'(x_k, y_k) \quad (14)$$
The PI of a core point is 0.5, while that of a delta point is −0.5. Postprocessing is used to avoid spurious singular points in regions of scars, creases, blurred prints and other noise-affected areas, and to obtain accurate and stable core and delta points. The method is therefore modified as follows [11]:
1- If the distance between two singular points is less than 8 pixels, both are eliminated.
2- If N core or delta points occur within a small circular region, they are resolved by averaging the cores and deltas.
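Eqs. (12)-(14) can be sketched as a short Python function; the 8-neighbour closed curve around the block is an assumption of this sketch (any counterclockwise ring works).

```python
import numpy as np

def poincare_index(theta, i, j):
    """Poincare index (Eqs. 12-14) of the block-orientation field `theta`
    on a closed counterclockwise curve around block (i, j)."""
    # 8-neighbourhood traversed counterclockwise (an assumed curve).
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    angles = [theta[i + di, j + dj] for di, dj in ring]
    n = len(angles)
    total = 0.0
    for k in range(n):
        d = angles[(k + 1) % n] - angles[k]   # Eq. (14)
        if abs(d) < np.pi / 2:                # Eq. (13)
            delta = d
        elif d <= -np.pi / 2:
            delta = np.pi + d
        else:
            delta = np.pi - d
        total += delta
    return total / (2 * np.pi)                # Eq. (12)
```

For a synthetic core-like field, where the orientation rotates by π around the centre block, this returns 0.5, matching the PI value the text gives for a core point.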
Figure 6. Singular Point for Different Image in NIST SD4 and Second Database
5. CLASSIFICATION VECTOR
The classification vector consists of the directional block percentages for the four directions together with the positions and numbers of the cores and deltas in the fingerprint image, as given in Table 1.
Table 1. Classification vector (feature vector)

1. DBP between 0 and π/4
2. DBP between π/4 and π/2
3. DBP between π/2 and 3π/4
4. DBP between 3π/4 and π
5. Core x position
6. Core y position
7. Delta x position
8. Delta y position
9. Number of cores
10. Number of deltas
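Assembling Table 1's entries into a single vector can be sketched as below; encoding an absent singular point as position (0, 0) is an assumption of this sketch, not something the paper specifies.

```python
import numpy as np

def classification_vector(dbp, cores, deltas):
    """Build the classification vector of Table 1 from the four DBP
    values and the detected singular points. `cores` and `deltas` are
    lists of (x, y) positions; an absent point is encoded as (0, 0),
    an assumption of this sketch."""
    core = cores[0] if cores else (0, 0)
    delta = deltas[0] if deltas else (0, 0)
    return np.array([*dbp,               # 4 directional percentages
                     core[0], core[1],   # core x, y position
                     delta[0], delta[1], # delta x, y position
                     len(cores),         # number of cores
                     len(deltas)])       # number of deltas
```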
6. SVM CLASSIFIER
SVM is a supervised learning technique that originated from statistical learning theory. SVM achieves good classification by determining the optimal hyperplane in a high-dimensional feature space with maximal margin to the training points, which minimizes the generalization error. For data points that are not linearly separable, a nonlinear function φ(·) maps the input features into a high-dimensional feature space ℋ where the hyperplane classifier is applied [12].
For given training data:

$$\mathcal{D} = \{(x_i, y_i) \mid x_i \in \mathbb{R}^p,\ y_i \in \{-1, 1\}\}_{i=1}^{n}$$
the SVM is trained with a learning algorithm from optimization theory based on Lagrange multipliers. It depends only on inner products of feature vectors in ℋ, which are expressed through a kernel function K, i.e. $K(x, y) = \phi(x)^T \phi(y)$. The discriminant function of the SVM is given as [13]:

$$g(x) = w^T \phi(x) + w_0 \quad (15)$$
The SVM determines a saddle point of the Lagrangian to obtain the largest possible margin to the nearest training points. The dual form of the Lagrangian, $\max L_D$, is [13]:

$$L_D = \sum_{i=1}^{n}\alpha_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i \alpha_j y_i y_j\, \varphi^T(x_i)\varphi(x_j) \quad (16)$$
where $y_i = \pm 1$, $i = 1, \ldots, n$ are the class labels and $\alpha_i$ are Lagrange multipliers satisfying:

$$\sum_{i=1}^{n}\alpha_i y_i = 0, \qquad 0 \le \alpha_i \le C \quad (17)$$
where C is a regularization parameter that controls the trade-off between margin and training error. A kernel function used in a nonlinear SVM must be symmetric and must satisfy Mercer's theorem; one of the most important admissible kernel functions is the Gaussian kernel [14]:

$$K(X, Y) = \exp\left(-\frac{\|X - Y\|^2}{2\sigma^2}\right) \quad (18)$$
where σ is the width of the Gaussian kernel. The decision function of the SVM is described as follows [13]:

$$f(x) = \mathrm{sgn}\left[\sum_{i=1}^{n}\alpha_i y_i\, \phi(x_i)^T \phi(x) + w_0\right] \quad (19)$$
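Eqs. (18)-(19) together give the kernel form of the decision rule, which can be sketched directly; the support vectors, multipliers and bias below are hand-set placeholders for illustration, not a trained model.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    """RBF kernel of Eq. (18): exp(-||x - y||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def svm_decide(x, support, alpha, y, w0, sigma):
    """Decision function of Eq. (19) in kernel form:
    f(x) = sgn(sum_i alpha_i * y_i * K(x_i, x) + w0)."""
    s = sum(a * yi * gaussian_kernel(xi, x, sigma)
            for xi, a, yi in zip(support, alpha, y))
    return int(np.sign(s + w0))
```

With two placeholder support vectors of opposite labels, a query point lands on the side of whichever support vector it is closer to, which is exactly the kernel-weighted vote Eq. (19) describes.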
For fingerprint classification we use a feature vector with 14 dimensions and classify fingerprints with nonlinear binary classification. The whorl class is first separated from the dataset containing the remaining classes (Left Loop, Arch, Right Loop, Tented Arch); the SVM is then applied repeatedly in a hierarchical manner to separate the remaining fingerprint images, as illustrated in Figure (7).
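The hierarchical one-vs-all scheme can be sketched as a chain of binary decisions; the peel-off order (Whorl, then Arch, then Tented Arch, then Left vs. Right Loop) is read from Figure 7 and is an assumption of this sketch, and the predicates stand in for trained binary SVMs.

```python
def hierarchical_classify(x, binary_classifiers):
    """Hierarchical one-vs-all classification in the spirit of Figure 7:
    peel off one class at a time, then separate Left from Right Loop with
    a final binary decision. `binary_classifiers` maps a class label to a
    predicate returning True when x belongs to that class; these stand in
    for trained SVMs in this sketch."""
    for label in ("Whorl", "Arch", "Tented Arch"):
        if binary_classifiers[label](x):
            return label
    # Last stage: a single binary classifier separates the two loop types.
    return "Left Loop" if binary_classifiers["Left Loop"](x) else "Right Loop"
```

A sample only reaches a later classifier if every earlier one rejected it, so each binary SVM is trained on a progressively smaller subset of classes.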
Figure 7. The block Diagram of Five class one VS all SVM Classifier [9]
7. EXPERIMENTAL RESULTS
In our work the nonlinear SVM classifier was applied according to Figure (7), with parameters chosen as C = 1000 and σ = 0.04. The performance of the proposed method was evaluated on two databases. All programs were simulated in MATLAB 2017b.
7.1 FIRST DATABASE: NIST SPECIAL DATABASE 4 (NIST SD4)
NIST Special Database 4 (NIST SD4) is composed of 4000 grayscale images of size 512×512, divided into 5 classes with 400 images per class and two images per finger. 500 fingerprints for training and 500 fingerprints for testing were randomly selected. Table (2) gives the confusion matrices on NIST SD4.
Table (2) Confusion matrices for NIST SD4

Whorl vs. negative:

| Actual \ Predicted | Whorl | Negative |
|--------------------|-------|----------|
| Whorl              | 96    | 4        |
| Negative           | 0     | 400      |

Arch vs. negative:

| Actual \ Predicted | Arch | Negative |
|--------------------|------|----------|
| Arch               | 94   | 6        |
| Negative           | 0    | 300      |

Tented Arch vs. negative:

| Actual \ Predicted | Tented Arch | Negative |
|--------------------|-------------|----------|
| Tented Arch        | 100         | 0        |
| Negative           | 5           | 195      |

Left Loop vs. Right Loop:

| Actual \ Predicted | Left Loop | Right Loop |
|--------------------|-----------|------------|
| Left Loop          | 100       | 0          |
| Right Loop         | 0         | 95         |
Moreover, the performance is evaluated using metrics named precision, sensitivity, specificity and F-measure, which are calculated from the confusion matrix as follows and shown in Figure (8):

$$Precision = \frac{true\ positive}{true\ positive + false\ positive} \quad (20)$$

$$Recall\ (Sensitivity) = \frac{true\ positive}{true\ positive + false\ negative} \quad (21)$$

$$Specificity = \frac{true\ negative}{true\ negative + false\ positive} \quad (22)$$

$$F\text{-}Measure = 2 \times \frac{recall \times precision}{recall + precision} \quad (23)$$
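Eqs. (20)-(23) map directly onto the entries of one binary confusion matrix, as in this small sketch:

```python
def binary_metrics(tp, fp, fn, tn):
    """Precision, recall (sensitivity), specificity and F-measure
    (Eqs. 20-23) from one binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f_measure = 2 * recall * precision / (recall + precision)
    return precision, recall, specificity, f_measure
```

For example, the Whorl matrix of Table (2) gives tp = 96, fn = 4, fp = 0, tn = 400, i.e. precision 1.0 and recall 0.96.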
Figure 8. Performance of NIST(SD4)
The results in Table (2) show that the SVM achieves 96% accuracy for Whorl, 94% for Arch, 100% for Tented Arch, and 100% and 95% for Left and Right Loop respectively. The total mean accuracy is thus 97%, with an average error percentage of 2.2% and an average processing time of 0.45 second. Figure (8) also shows the high ability of the classifier to identify positive and negative labels.
7.2 SECOND DATABASE FOR 100 PERSONS
We also tested on a second database that contains 1000 fingerprint images of size 640×480 for 100 people. Each person has 10 images of the same fingerprint with different orientation, illumination and quality; half of the images were used for training and the other half for testing. This database was classified into 4 classes: Whorl, Left Loop, Right Loop and Arch. Table (3) and Figure (9) show the confusion matrices and the performance measures for this database, respectively.
Table (3) Confusion matrices for the database of 100 persons, each with 10 fingerprint images

Whorl vs. negative:

| Actual \ Predicted | Whorl | Negative |
|--------------------|-------|----------|
| Whorl              | 177   | 8        |
| Negative           | 0     | 315      |

Arch vs. negative:

| Actual \ Predicted | Arch | Negative |
|--------------------|------|----------|
| Arch               | 45   | 0        |
| Negative           | 0    | 270      |

Left Loop vs. Right Loop:

| Actual \ Predicted | Left Loop | Right Loop |
|--------------------|-----------|------------|
| Left Loop          | 103       | 2          |
| Right Loop         | 1         | 164        |
Figure (9) Performance of second database for 100 persons
From Table (3) we observe that the confusion matrices for this database of 1000 images give an accuracy of 95.67% for Whorl and 100%, 98% and 98.7% for Arch, Left Loop and Right Loop respectively, i.e. a total average accuracy of 97.6% with an average processing time of 0.46 second.
When KNN is applied hierarchically in the same way as the SVM classifier, the resulting accuracies are given in Table (4), and the overall accuracy of both classifiers is shown in Figure (10). KNN does not classify as well as SVM on the same databases: its learning phase is minimal, which makes it less robust to noise.
Table 4. Accuracy of KNN classifier

| Class       | NIST (SD4), 5 classes | 2nd database, 4 classes |
|-------------|-----------------------|-------------------------|
| Whorl       | 88%                   | 67%                     |
| Arch        | 96%                   | 75%                     |
| Right Loop  | 87%                   | 87%                     |
| Left Loop   | 83%                   | 90%                     |
| Tented Arch | 57%                   | -                       |
Figure 10. Overall accuracy of KNN and SVM
Table (5) shows a comparison with other researchers across different databases and methods.
Table 5. Comparison of our result with similar works

| Algorithm | Classifier | Database | Feature | Accuracy |
|-----------|-----------|----------|---------|----------|
| Wang et al. [15], 2006 | Fuzzy Wavelet Neural Network (FWNN) | NIST-SD4, 2500 images (1000 train, 1500 test) | Directional image and a singular point on ROI image | 92.4% |
| Ji and Yi [9], 2007 | SVM | FVC 2004, 800 images (400 train, 400 test) | Directional block percentage | 96.38% |
| Jeon and Rhee [16], 2017 | CNN | FVC2000, 2002, 2004 (788 train, 100 test) | - | 97.2% |
| Hu and Xie [17], 2010 | BP network with SVM | FVC 2004 | Primitive feature extracted by a genetic algorithm from orientation field | 93.6% |
| Javed [18], 2011 | Rule-based method | FVC 2004, 320 images | Singular point | 95% |
| Our proposed | Nonlinear SVM | NIST-SD4, 1000 images (5 classes); database of 100 persons, 10 images each (4 classes) | Directional percentage and singular point position and number | 97%, 97.6% |
It can be seen from Table (5) that classification efficiency is improved for both four and five classes, with less processing time, owing to the hierarchical classification procedure, which gives better results.
8. CONCLUSIONS
In order to facilitate fingerprint recognition in large databases, reduce the required processing time and increase efficiency, it is necessary to classify fingerprints first. Our method performs classification based on a robust classification vector that contains the percentages of the directional image and accurate singular points. A nonlinear SVM classifier with RBF kernel and a KNN classifier were tested on two databases: NIST-SD4 and another database of 100 people, each with 10 impressions of the same finger. The experimental results show that the classification accuracy of the SVM is 97% for 5 classes on the NIST database and 97.6% for 4 classes on the other database, while the KNN classifier achieves 82% on NIST and 80% on the second database. In summary, the algorithm with the SVM classifier used for fingerprint image classification is more suitable and attractive, with low computation time for classification into four and five classes thanks to the hierarchical procedure.
ACKNOWLEDGEMENTS
The author wishes to thank the mother who gave all tenderness, God's mercy upon her, and also the father and sisters who give safety and strength, may God protect them.
REFERENCES
[1] R. Wang, C. Han, Y. Wu, and T. Guo, “Fingerprint Classification Based on Depth Neural Network,”
arXiv, pp. 1–14, 2014.
[2] A. A. Abbood and G. Sulong, “Fingerprint Classification Techniques : A Review,” IJCSI Int. J.
Comput. Sci. Issues, vol. 11, no. 1, pp. 111–122, 2014.
[3] F. Mirzaei, M. Biglari, and H. Ebrahimpour-komleh, “A Novel Rule-based Fingerprint
Classification Approach,” Int. J. Digit. Inf. Wirel. Commun., vol. 3, no. 4, pp. 51–55, 2013.
[4] R. Wang, C. Han, Y. Wu, and T. Guo, “Fingerprint Classification Based on Depth Neural Network,”
Drug Des. Open Access, vol. 5, no. 2, pp. 1–7, 2014.
[5] K. Cao, L. Pang, J. Liang, and J. Tian, “Fingerprint Classification by a Hierarchical Classifier,”
Pattern Recognit., vol. 46, no. 12, pp. 3186–3197, 2013.
[6] J. M. Shrein, “Fingerprint Classification Using Convolutional Neural Networks and Ridge
Orientation Images,” in IEEE Symposium Series on Computational Intelligence (SSCI), 2017, pp.
1–8.
[7] G. Vitello, F. Sorbello, G. I. M. Migliore, V. Conti, and S. Vitabile, “A Novel Technique for
Fingerprint Classification Based on Fuzzy C-Means and Naive Bayes Classifier,” 2014 Eighth Int.
Conf. Complex, Intell. Softw. Intensive Syst. A, pp. 155–161, 2014.
[8] G. A. Bahgat, A. H. Khalil, N. S. A. Kader, and S. Mashali, “Fast and Accurate Algorithm for Core
Point Detection in Fingerprint Images,” Egypt. Informatics J., vol. 14, no. 1, pp. 15–25, 2013.
[9] L. Ji and Z. Yi, “SVM-based Fingerprint Classification Using Orientation Field,” Third Int. Conf.
Nat. Comput. IEEE, vol. 2, no. Icnc, pp. 724–727, 2007.
[10] D. Weng, Y. Y. Ã, and D. Yang, “Singular Points Detection based on Multi-resolution in Fingerprint
Images,” Neurocomputing, vol. 74, no. 17, pp. 3376–3388, 2011.
[11] G. B. Iwasokun and O. C. Akinyokun, “Fingerprint Singular Point Detection Based on Modified
Poincare Index Method,” vol. 7, no. 5, pp. 259–272, 2014.
[12] L. X. W. L. C.-C. J. Kuo, Visual Quality Assessment by Machine Learning. Springer, 2015.
[13] A. R. W. K. D. Copsey, Statistical Pattern Recognition, 3rd Editio. Wiley, 2011.
[14] P. Wankhede and D. Doye, “Support Vector Machines for Fingerprint Classification,” no. 4, pp. 1–
5.
[15] W. Wang, J. Li, and W. Chen, “Fingerprint Classification Using Improved Directional Field and
Fuzzy Wavelet Neural Network,” in IEEE, 2006, vol. 2, pp. 9961–9964.
[16] W. Jeon and S. Rhee, “Fingerprint Pattern Classification Using Convolution Neural Network,” Int.
J. Fuzzy Log. Intell. Syst., vol. 17, no. 3, pp. 170–176, 2017.
[17] J. Hu and M. Xie, “Fingerprint Classification Based on Genetic Programming,” IEEE, vol. 6, pp.
193–196, 2010.
[18] S. Javed, “Computerized System for Fingerprint Classification using Singular Points,” IEEE, vol. 1,
pp. 4577–657, 2011.