This document describes a system called "Analysis of Intersections" for recognizing electronic signatures. The system analyzes signature images by finding the paths and intersections within the signature, registering each path by storing its pixels in an array. Path features such as shape, direction, size, and slope are then determined, and signatures are identified by comparing these features between signatures. The system was tested on 2,250 signatures and achieved a recognition accuracy of 98.66%.
This document discusses various tree-related concepts in Java Swing including the JTree component, tree models, tree rendering, and tree traversal. It explains that JTree uses a tree model to represent hierarchical data and tree paths to identify nodes. It also covers customizing node rendering, implementing tree models, and using progress indicators like JProgressBar and ProgressMonitor.
ZERNIKE MOMENT-BASED FEATURE EXTRACTION FOR FACIAL RECOGNITION OF IDENTICAL T...ijcseit
Face recognition is one of the most challenging problems in the domain of image processing and machine vision. A face recognition system is critical when individuals have very similar biometric signatures, such as identical twins. In this paper, a new, efficient facial recognition method for identical twins is proposed based on geometric moments. The utilized geometric moment is the Zernike Moment (ZM), used as a feature extractor inside the facial area of identical-twin images. The facial area in an image is detected using the AdaBoost approach. The proposed method is evaluated on two datasets, Twins Days Festival and Iranian Twin Society, which contain scaled and rotated facial images of identical twins under different illuminations. The results prove the ability of the proposed method to recognize a pair of identical twins and show that it is robust to rotation, scaling, and changing illumination.
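The rotation invariance claimed above comes from the structure of Zernike moments: rotating the image only changes the phase of Z_nm, so the magnitude |Z_nm| is unchanged. A minimal stdlib-only sketch of the moment computation (my own simplification, not the paper's implementation; the unit-disc mapping and normalization are assumptions):

```python
import math

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho); requires n - |m| even, |m| <= n."""
    m = abs(m)
    s = 0.0
    for k in range((n - m) // 2 + 1):
        s += ((-1) ** k * math.factorial(n - k)
              / (math.factorial(k)
                 * math.factorial((n + m) // 2 - k)
                 * math.factorial((n - m) // 2 - k))) * rho ** (n - 2 * k)
    return s

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a square grayscale image (list of rows),
    with pixel centres mapped onto the unit disc; |Z_nm| is rotation
    invariant, which is what makes ZM features robust to rotation."""
    N = len(img)
    total = 0j
    for y in range(N):
        for x in range(N):
            xc = (2 * x - N + 1) / N   # map pixel centre to [-1, 1]
            yc = (2 * y - N + 1) / N
            rho = math.hypot(xc, yc)
            if rho > 1:                # discard pixels outside the disc
                continue
            theta = math.atan2(yc, xc)
            total += img[y][x] * radial_poly(n, m, rho) * complex(
                math.cos(m * theta), -math.sin(m * theta))
    return (n + 1) / math.pi * total * (2 / N) ** 2
```

Because a 90-degree rotation maps the centred sample grid onto itself, |Z_nm| of an image and of its 90-degree rotation agree up to floating-point error.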
Vehicle License Plate Recognition (VLPR) is an important system for smooth traffic management. The system is also helpful in many fields and places, such as private and public entrances, parking lots, border control, and theft control. This paper presents a new framework for a Sudanese VLPR system. The proposed framework uses Multi-Objective Particle Swarm Optimization (MOPSO) and Connected Component Analysis (CCA) to extract the license plate. Horizontal and vertical projections are used for character segmentation, and the final recognition stage is based on the Artificial Immune System (AIS). A new dataset containing samples of the current shape of Sudanese license plates is used for training and testing the proposed framework.
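The projection-based segmentation step can be illustrated with a small sketch (my own simplification, not the paper's code): in a binarized plate image, characters are separated wherever the vertical projection of ink drops to zero.

```python
def segment_columns(img):
    """Segment characters in a binary plate image (list of rows of
    0/1 values, 1 = ink) by vertical projection: each maximal run of
    columns with nonzero ink is one character. Returns (start, end)
    column spans, inclusive."""
    cols = [sum(row[c] for row in img) for c in range(len(img[0]))]
    spans, start = [], None
    for c, v in enumerate(cols):
        if v and start is None:        # run of ink begins
            start = c
        elif not v and start is not None:  # run of ink ends
            spans.append((start, c - 1))
            start = None
    if start is not None:              # run reaches the right edge
        spans.append((start, len(cols) - 1))
    return spans
```

A horizontal projection over rows works the same way and is typically used first, to isolate the text band of the plate.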
The Framework of Image Recognition based on Modified Freeman Chain CodeCSCJournals
Image recognition of line drawings involves feature extraction and feature comparison, and extraction requires a representation of the image to be compared. Combining these two requirements, a framework built around a new extraction algorithm for a chain-code representation is presented. In addition, a new corner-detection step is applied as pre-processing to the line-drawing input in order to derive the chain code. The framework consists of five steps: pre-processing and image processing, the new corner-detection algorithm, a chain-code generator, a feature-extraction algorithm, and the recognition process. The heuristic corner-detection algorithm accepts a thinned binary image as input and produces a modified thinned binary image in which the character J marks corners. Using this modified image, a new chain-code scheme based on the Freeman chain code is proposed, and an algorithm is developed to generate a single chain-code series representing the line-drawing input. The feature-extraction algorithm then extracts three pre-defined features of the chain code for recognition: corner properties, the distance between corners, and the angle from a corner to the connected corner. The steps of the framework are illustrated with two line drawings. The results show that the framework successfully classifies line drawings into five categories: not-similar line drawings, and four categories of similar drawings distinguished by rotation angle and scaling ratio.
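For reference, the classical 8-direction Freeman chain code that the scheme above modifies can be sketched as a simple tracer for an open, thinned curve (the paper's J-character corner marking is not reproduced here):

```python
# 8-connected Freeman directions in (row, col) image coordinates:
# 0 = east, then counter-clockwise (1 = north-east, ..., 7 = south-east).
DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain(pixels, start):
    """Trace an open, thinned curve given as a set of (row, col)
    pixels, returning the Freeman chain code from `start`."""
    code, visited, cur = [], {start}, start
    while True:
        for d, (dr, dc) in enumerate(DIRS):
            nxt = (cur[0] + dr, cur[1] + dc)
            if nxt in pixels and nxt not in visited:
                code.append(d)        # record the step direction
                visited.add(nxt)
                cur = nxt
                break
        else:                         # no unvisited neighbour: curve ends
            return code
```

A horizontal three-pixel line yields the code [0, 0]; a single down-right diagonal step yields [7].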
One-Sample Face Recognition Using HMM Model of Fiducial AreasCSCJournals
In most real-world applications, multiple image samples of individuals are not easy to collect for direct implementation of recognition or verification systems, so there is a need to perform these tasks even if only one training sample per person is available. This paper describes an effective algorithm for recognition and verification with one sample image per class. It uses the two-dimensional discrete wavelet transform (2D DWT) to extract features from images, and a hidden Markov model (HMM) for training, recognition, and classification. Tested on a subset of the AT&T database, it achieved up to 90% correct classification (hit rate) with a false acceptance rate (FAR) of 0.02%.
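To give a rough idea of the 2D DWT feature-extraction step: one analysis level splits the image into subbands, and the low-frequency approximation serves as a compact feature map. A stdlib-only sketch using the Haar wavelet with plain averaging (unnormalized, and certainly simpler than the paper's transform):

```python
def haar_approx(img):
    """One unnormalized level of the 2-D Haar approximation subband:
    average 2x2 blocks, halving each dimension (assumes even sizes).
    The result is a quarter-size, smoothed feature map."""
    # average adjacent pairs along each row
    rows = [[(r[2 * i] + r[2 * i + 1]) / 2 for i in range(len(r) // 2)]
            for r in img]
    # then average adjacent pairs down each column
    return [[(rows[2 * j][i] + rows[2 * j + 1][i]) / 2
             for i in range(len(rows[0]))]
            for j in range(len(rows) // 2)]
```

Applying this once or twice and flattening the result gives a small, noise-robust vector of the kind typically fed to an HMM observation sequence.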
AN EFFICIENT FEATURE EXTRACTION METHOD WITH LOCAL REGION ZERNIKE MOMENT FOR F...ieijjournal
Face recognition is one of the most challenging problems in the domain of image processing and machine vision. A face recognition system is critical when individuals have very similar biometric signatures, such as identical twins. In this paper, the facial area in an image is detected using the AdaBoost approach, and the facial area is then divided into local regions. Finally, a new, efficient feature extractor for identical twins, based on a geometric moment, is applied to the local regions of the face image. The utilized geometric moment is the Zernike Moment (ZM), used as a feature extractor inside the local regions of the facial area of identical-twin images. The proposed method is evaluated on two datasets, Twins Days Festival and Iranian Twin Society, which contain scaled and rotated facial images of identical twins under different illuminations. The results prove the ability of the proposed method to recognize a pair of identical twins and show that it is robust to rotation, scaling, and changing illumination.
Error entropy minimization for brain image registration using hilbert huang t...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Independent Component Analysis of Edge Information for Face RecognitionCSCJournals
In this paper we address the problem of face recognition using edge information as independent components. The edge information is obtained using the Laplacian of Gaussian (LoG) and Canny edge-detection methods; preprocessing is then done with Principal Component Analysis (PCA) before applying the Independent Component Analysis (ICA) algorithm for training. The independent components obtained by the ICA algorithm are used as feature vectors for classification. Euclidean-distance and Mahalanobis-distance classifiers are used for testing. The algorithm is tested on two different face-image databases with variations in illumination and facial pose up to a 180-degree rotation angle.
This document summarizes a research paper that proposes a novel method for separating clumped particles in microscopic images. The method uses an iterative hypothesis and verification technique. It generates hypotheses about particle boundaries and colors, then verifies the hypotheses using measures like boundary distance. This allows it to detect non-circular particle shapes, unlike previous methods using circle/ellipse detection. The technique is tested on blood cell images and achieves 98% accuracy in particle counting, higher than other methods.
On comprehensive analysis of learning algorithms on pedestrian detection usin...UniversitasGadjahMada
Despite the surge of deep learning, deploying deep-learning-based pedestrian detection in real systems faces hurdles, mainly due to its large resource usage, so classical feature-based detection remains a feasible option. There have been many efforts to improve the performance of pedestrian detection systems; among the many feature sets, the Histogram of Oriented Gradients appears very effective for person detection. In this research, various machine learning algorithms are investigated and evaluated for person detection to obtain the optimal accuracy and speed of the system.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Comparative Analysis of Hand Gesture Recognition TechniquesIJERA Editor
During the past few years, human hand gestures for interaction with computing devices have continued to be an active area of research. This paper provides a survey of hand gesture recognition. Hand gesture recognition comprises three stages: pre-processing, feature extraction or matching, and classification or recognition, and each stage can use different methods and techniques. The paper gives a short description of the different methods used for hand gesture recognition in existing systems, with a comparative analysis of each method's benefits and drawbacks.
OCR stands for Optical Character Recognition, the process of recognizing characters (printed or handwritten) from a digital image of a document through an optical mechanism. Characters are formed from various combinations of lines and curves. Human beings can recognize characters with very high accuracy, but the same task is very difficult for an OCR system. The wide usage of touch-screen mobile devices has led many users to prefer touch-based interaction with the machine over traditional input via keyboards and mice. To exploit this, we focus on the Android platform to design a personalized handwriting recognition system that is acceptably fast and lightweight, with a user-friendly interface and minimally intrusive correction and auto-personalization mechanisms.
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...INFOGAIN PUBLICATION
This document summarizes a research paper about developing a computer program that can take a 2D photograph as input, analyze it to determine the objects and their 3D structure, and output a 3D representation that can be viewed from any angle. The program makes assumptions about the objects, such as they are constructed from transformations of known 3D models and are supported by other visible objects or a ground plane. It develops processes for 2D to 3D construction and 3D to 2D display that can handle most arrangements of objects with planar surfaces.
An Iot Based Smart Manifold Attendance SystemIJERDJOURNAL
ABSTRACT: Attendance has been an age-old procedure employed in different disciplines of educational institutions. While attendance systems have grown from manual techniques to biometrics, the plight of taking attendance is undeniable. In fingerprint-based attendance monitoring, roughened or scratched fingers lead to misreads; with face recognition, students must queue and each must wait until their face is recognised. Our proposed system employs "manifold attendance", meaning passive attendance in which the attendance of multiple people is captured at once. We have eliminated the need for a queue or paper-and-pen attendance: with a single click the attendance is not only captured but also monitored, without any human intervention. In the proposed system, database creation and face detection use the concept of a bounding box, whereas face recognition employs histogram equalization and a matching technique.
Security System based on Sclera RecognitionIRJET Journal
This document summarizes a research paper on a new human identification method called sclera recognition. Sclera recognition uses the unique blood vessel patterns on the white outer part of the eye called the sclera. The document outlines the four main parts of the proposed sclera recognition system: 1) sclera segmentation to isolate the sclera region of an eye image, 2) feature extraction of the blood vessel patterns using algorithms like SURF, 3) feature matching between images, and 4) matching decision based on metrics like false acceptance and rejection rates. The sclera recognition system is described as a promising new biometric identification technique due to the uniqueness of individual sclera patterns.
Finding the shortest path in a graph and its visualization using C# and WPF IJECEIAES
This document summarizes a study that implemented Dijkstra's algorithm to find the shortest path between two vertices in an undirected graph using C# and WPF. It describes Dijkstra's algorithm and how it was programmed using .NET 4.0, Visual Studio 2010, and WPF. The program allows drawing graphs, finding the shortest path between two vertices by highlighting the path, and displaying the path length. Screenshots show example outputs of the program finding shortest paths in sample graphs.
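The shortest-path computation itself is standard; below is a compact sketch in Python rather than the study's C#/WPF code (graph layout and node names are illustrative):

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path in a weighted undirected graph given as
    {node: [(neighbor, weight), ...]}. Returns (path, length)."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]          # priority queue of (distance, node)
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue              # stale queue entry
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # found a shorter route to v
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if target not in dist:
        return None, float("inf")
    path = [target]               # reconstruct path by walking back
    while path[-1] != source:
        path.append(prev[path[-1]])
    path.reverse()
    return path, dist[target]
```

Highlighting the returned path and displaying its length, as the program does, is then a matter of rendering `path` and `length`.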
New approach to calculating the fundamental matrix IJECEIAES
Estimating the fundamental matrix (F) determines the epipolar geometry and establishes a geometric relation between two images of the same scene or between video frames. The literature offers many techniques for robust estimation, such as RANSAC (random sample consensus), least median of squares (LMedS), and M-estimators. This article compares different detectors (Harris, FAST, SIFT, and SURF) in terms of the number of detected points, the number of correct matches, and the computation speed of F. Our method first extracts descriptors with SURF, chosen over the others for its robustness; we then set a uniqueness threshold to obtain the best points, normalize them, and rank them according to a weighting function over the different regions; finally, F is estimated with the eight-point M-estimator technique, and the average error and computation speed of F are measured. Experimental simulations on real images with different viewpoint changes (for example rotation, lighting, and moving objects) show good agreement in terms of the computation speed of the fundamental matrix and an acceptable average error, suggesting the technique is usable in real-time applications.
Detection of crossover & bifurcation points on a retinal fundus image by ...eSAT Journals
Abstract: Over the last few decades, analysis of retinal vascular images has gained popularity among researchers. The retina is unique in nature because of its most important features, bifurcation and crossover points, which serve as a reliable basis for authentication; using these two kinds of points, many authentication problems can be handled easily. The literature has shown that in a retinal vascular structure, bifurcation and crossover points need to be separated for the purpose of authentication. With this motivation in mind, we propose a novel method to segregate vascular bifurcation points from crossover points in any retinal image by analyzing the neighborhood connectivity of non-vascular regions around a junction point on the retinal blood vessels. Keywords: Retinal vessel analysis, retinal blood vessels, bifurcation and crossover point detection.
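A standard way to distinguish the two junction types on a thinned vessel map is the crossing number: half the sum of absolute differences between consecutive neighbours taken in circular order. This is a common textbook formulation, sketched here for illustration, not necessarily the neighbourhood-connectivity analysis the authors propose:

```python
# the 8 neighbours of a pixel, in circular (clockwise) order
NBRS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
        (1, 0), (1, -1), (0, -1), (-1, -1)]

def crossing_number(skel, r, c):
    """Crossing number of pixel (r, c) on a thinned (1-pixel-wide)
    binary vessel map given as {(row, col): 1}:
    1 = endpoint, 2 = ordinary vessel point,
    3 = bifurcation, >= 4 = candidate crossover."""
    p = [skel.get((r + dr, c + dc), 0) for dr, dc in NBRS]
    # each 0->1 or 1->0 transition around the circle counts once
    return sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
```

A plus-shaped junction (four branches) scores 4, a T-shaped junction (three branches) scores 3, which is exactly the bifurcation-versus-crossover split the abstract describes.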
This is a full report of my project in Level 3 Term 1. The project was basically a self-driven vehicle capable of localizing itself in a grid and planning a path between two nodes. It can avoid particular nodes and plan a path between two allowed nodes; the Flood Fill Algorithm is used to find that path. The vehicle is also capable of transferring blocks from one node to another. In fact, this vehicle is a prototype of a self-driven vehicle capable of transporting passengers, and it can also be used in industries to transfer items from one place to another.
Sensitivity analysis in a lidar camera calibrationcsandit
In this paper, variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR laser sensor (Light Detection and Ranging). Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in a local and global way. The local approach analyses the output variance with respect to the input; only one parameter is varied at once. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated on the total variation range of the input parameters. We quantify the uncertainty behaviour in the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes by two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, the sensitivity ratio in laser-camera calibration data.
Authentication of a person is a major concern in this era for security purposes. In biometric systems, the signature is one of the behavioural features used for authentication. In this paper we work on offline signatures collected from different persons. Morphological operations are applied to these signature images along with the Hough transform to determine regular shapes that assist in the authentication process. The values extracted from the Hough space are fed to a feed-forward neural network trained using the back-propagation algorithm. After the different training stages, an efficiency of more than 95% was found. Applications of this system lie in security-related fields: defence security, biometric authentication, biometric computer protection, or as a method for analysing changes in a person's behaviour.
Two Methods for Recognition of Hand Written Farsi CharactersCSCJournals
This document describes two methods for recognizing handwritten Farsi characters using neural networks and machine learning techniques. The first method uses wavelet transforms to extract features from character borders and trains a neural network classifier on these features. It achieves 86.3% accuracy on test data. The second method divides characters into groups based on visual properties, extracts moment features for each group, and uses Bayesian classification with a decision tree post-processing step. It achieves an overall recognition rate of 90.64% according to the results presented. Experimental evaluations of both methods on different datasets of handwritten Farsi characters are discussed.
Comparative study of two methods for Handwritten Devanagari Numeral RecognitionIOSR Journals
Abstract: In this paper two different methods for numeral recognition are proposed and their results are compared. The objective is to provide an efficient and reliable method for the recognition of handwritten numerals. The first method employs a grid-based feature-extraction and recognition algorithm: features of the image are extracted using the grid technique, and this feature set is then compared with the feature set of the database image for classification. The second method uses the Image Centroid Zone and Zone Centroid Zone algorithms for feature extraction, and the features are fed to an Artificial Neural Network for recognition of the input image. Machine text recognition is an important research area because of its applications in areas such as banks, post offices, and hospitals.
Keywords: Handwritten Numeral Recognition, Grid Technique, ANN, Feature Extraction, Classification.
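The grid/zoning idea behind the first method can be sketched as a generic zoning density feature (the zone count is an assumed parameter; the paper's exact grid and the centroid-zone variants are not reproduced):

```python
def zone_features(img, zones=4):
    """Zoning features for a binary numeral image (list of 0/1 rows):
    split the image into zones x zones cells and use the ink density
    of each cell as one feature of the vector."""
    h, w = len(img), len(img[0])
    feats = []
    for zr in range(zones):
        for zc in range(zones):
            # integer cell bounds; handles sizes not divisible by zones
            r0, r1 = zr * h // zones, (zr + 1) * h // zones
            c0, c1 = zc * w // zones, (zc + 1) * w // zones
            cells = [(r, c) for r in range(r0, r1) for c in range(c0, c1)]
            ink = sum(img[r][c] for r, c in cells)
            feats.append(ink / len(cells) if cells else 0.0)
    return feats
```

Classification then reduces to comparing this vector with the stored vectors of database images (first method) or feeding it to a neural network (second method).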
Tracking number plate from vehicle usingijfcstjournal
This document presents a new algorithm in MATLAB to extract vehicle number plates from images in various lighting conditions. The algorithm uses preprocessing techniques like grayscale conversion, dilation, and edge detection. It then segments the region of interest containing the number plate and extracts it. Individual characters are then segmented and recognized using template matching. The algorithm achieves 99% accuracy on images taken from a fixed angle and distance under controlled conditions. It is less accurate for images with problematic backgrounds or lighting. The algorithm provides an automated way to extract number plates for applications like traffic monitoring, parking management, and stolen vehicle identification.
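The template-matching recognition step can be sketched as a pixel-agreement score over binary images (a deliberately simple stand-in for the algorithm's matcher; the template set is illustrative):

```python
def match_score(window, template):
    """Fraction of pixels where a binary window equals the template
    (both are lists of 0/1 rows with identical dimensions)."""
    n = len(template) * len(template[0])
    same = sum(window[r][c] == template[r][c]
               for r in range(len(template))
               for c in range(len(template[0])))
    return same / n

def best_match(char_img, templates):
    """Classify a segmented character as the key of the
    highest-scoring template in the {label: image} dictionary."""
    return max(templates, key=lambda k: match_score(char_img, templates[k]))
```

In practice each segmented character is first resized to the template dimensions; ties and low best scores are typically rejected rather than guessed.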
This document describes a system for offline signature verification. It begins with an introduction to the problem and approaches for online and offline verification. It then outlines the proposed approach, which involves preprocessing the signature image, extracting seven geometric features, training the system using sample signatures, and verifying new signatures by comparing their features to the training data. The key steps are preprocessing to remove noise, feature extraction of attributes like slant angle, aspect ratio, area, center of gravity, edge and cross points, and verification by calculating the Euclidean distance between a test signature's features and the trained template. Implementation details and results are also mentioned.
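A few of the geometric attributes listed above can be sketched directly, along with the Euclidean comparison against a trained template (my own minimal selection of features, not the paper's full set of seven):

```python
import math

def geometric_features(img):
    """Aspect ratio, normalized ink area, and centre of gravity of a
    binary signature image (list of 0/1 rows with at least one ink
    pixel), measured inside the bounding box."""
    pts = [(r, c) for r, row in enumerate(img)
           for c, v in enumerate(row) if v]
    rs = [r for r, _ in pts]
    cs = [c for _, c in pts]
    h = max(rs) - min(rs) + 1
    w = max(cs) - min(cs) + 1
    aspect = w / h                      # width-to-height ratio
    area = len(pts) / (h * w)           # ink density in bounding box
    cog = (sum(rs) / len(pts), sum(cs) / len(pts))
    return aspect, area, cog

def feature_distance(f1, f2):
    """Euclidean distance between two flattened feature tuples."""
    v1 = [f1[0], f1[1], *f1[2]]
    v2 = [f2[0], f2[1], *f2[2]]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
```

Verification then accepts a test signature when its distance to the trained template falls below a threshold chosen from the training samples.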
Neural Network based Vehicle Classification for Intelligent Traffic Controlijseajournal
Nowadays, the number of vehicles has increased and traditional traffic-control systems cannot meet current needs, which has led to the emergence of Intelligent Traffic Control Systems. These improve control and urban management and increase the confidence index on roads and highways. The goal of this article is vehicle classification based on neural networks. In this research, a fixed camera located fairly close to the road surface is used to detect and classify the vehicles. The algorithm comprises two general phases: first, moving vehicles are extracted from traffic scenes using image-processing techniques, including background removal, edge detection, and morphology operations. In the second phase, vehicles near the camera are selected and specific features are extracted. These features are fed to the neural network as a vector, and the outputs determine the type of vehicle. The presented model is able to classify vehicles into three classes: heavy vehicles, light vehicles, and motorcycles. Results demonstrate the accuracy of the algorithm and its high functional level.
This document presents a simple signature recognition system that uses invariant central moment and modified Zernike moment for feature extraction. The system is divided into preprocessing, feature extraction, and recognition/verification stages. In preprocessing, the input signature image is converted to grayscale and binary, and the region of interest is extracted. Feature extraction uses invariant central moments and Zernike moments to extract shape features. Recognition and verification is performed using a backpropagation neural network for its high accuracy and low computational complexity. The system was tested on a database of 500 signatures from 50 individuals and achieved suitable performance for signature verification.
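As a rough illustration of moment-based shape features, the sketch below computes standard central moments and the first Hu invariant in NumPy. This is a generic stand-in, not the paper's modified Zernike moment; the invariant shown is translation-invariant by construction:

```python
import numpy as np

def central_moment(img, p, q):
    """mu_pq = sum over pixels of (x - xbar)^p * (y - ybar)^q * I(x, y)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    return ((x - xbar) ** p * (y - ybar) ** q * img).sum()

def hu_phi1(img):
    """First Hu invariant, eta20 + eta02 (translation/scale invariant)."""
    mu00 = central_moment(img, 0, 0)
    eta20 = central_moment(img, 2, 0) / mu00 ** 2   # normalised: exponent 1+(p+q)/2
    eta02 = central_moment(img, 0, 2) / mu00 ** 2
    return eta20 + eta02
```

Because the moments are taken about the shape's centroid, the same signature shifted elsewhere in the image yields the same feature value, which is exactly the invariance property such systems rely on.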
This document summarizes a research paper that proposes a novel method for separating clumped particles in microscopic images. The method uses an iterative hypothesis and verification technique. It generates hypotheses about particle boundaries and colors, then verifies the hypotheses using measures like boundary distance. This allows it to detect non-circular particle shapes, unlike previous methods using circle/ellipse detection. The technique is tested on blood cell images and achieves 98% accuracy in particle counting, higher than other methods.
On comprehensive analysis of learning algorithms on pedestrian detection usin...UniversitasGadjahMada
Despite the surge of deep learning, deploying deep learning-based pedestrian detection in real systems faces hurdles, mainly due to heavy resource usage, so classical feature-based detection remains a feasible option. There have been many efforts to improve the performance of pedestrian detection systems. Among the many feature sets, the Histogram of Oriented Gradients has proven very effective for person detection. In this research, various machine learning algorithms are investigated for person detection and evaluated to obtain the optimal accuracy and speed of the system.
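A heavily simplified sketch of the gradient-orientation histogram at the heart of HOG features (one histogram over the whole patch; a real HOG descriptor tiles the image into cells and normalises over blocks):

```python
import numpy as np

def orientation_histogram(img, n_bins=9):
    """One HOG-style cell: histogram of unsigned gradient orientations,
    weighted by gradient magnitude, L2-normalised."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned, 0..180 degrees
    hist = np.zeros(n_bins)
    bin_idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    np.add.at(hist, bin_idx.ravel(), mag.ravel())      # magnitude-weighted votes
    return hist / (np.linalg.norm(hist) + 1e-9)

# A vertical edge produces horizontal gradients, i.e. orientation ~0 degrees.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
h = orientation_histogram(img)
```

A person detector would concatenate many such cell histograms and feed the resulting vector to a classifier such as a linear SVM.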
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Comparative Analysis of Hand Gesture Recognition TechniquesIJERA Editor
Over the past few years, human hand gestures for interaction with computing devices have remained an active area of research. This paper surveys hand gesture recognition, which comprises three stages: pre-processing, feature extraction or matching, and classification or recognition. Each stage can be realised by different methods and techniques. The paper gives a short description of the methods used for hand gesture recognition in existing systems, together with a comparative analysis of each method's benefits and drawbacks.
OCR stands for Optical Character Recognition, the process of recognising characters (printed or handwritten) from a digital image of a document by optical means. Characters are formed from various combinations of lines and curves. Human beings recognise characters with very high accuracy, but the same task is difficult for an OCR system. The wide use of touch-screen mobile devices means a large volume of users now prefer touch-based interaction over traditional keyboard/mouse input. To exploit this, we focus on the Android platform and design a personalised handwriting recognition system that is acceptably fast and light-weight, with a user-friendly interface and minimally intrusive correction and auto-personalisation mechanisms.
Enhancing the Design pattern Framework of Robots Object Selection Mechanism -...INFOGAIN PUBLICATION
This document summarizes a research paper about developing a computer program that can take a 2D photograph as input, analyze it to determine the objects and their 3D structure, and output a 3D representation that can be viewed from any angle. The program makes assumptions about the objects, such as they are constructed from transformations of known 3D models and are supported by other visible objects or a ground plane. It develops processes for 2D to 3D construction and 3D to 2D display that can handle most arrangements of objects with planar surfaces.
An Iot Based Smart Manifold Attendance SystemIJERDJOURNAL
ABSTRACT:- Attendance has been an age old procedure employed in different disciplines of educational institutions. While attendance systems have witnessed growth right from manual techniques to biometrics, plight of taking attendance is undeniable. In fingerprint based attendance monitoring, if fingers get roughed / scratched, it leads to misreading. Also for face recognition, students will have to make a queue and each one will have to wait until their face gets recognised. Our proposed system is employing “manifold attendance” that means employing passive attendance, where at a time, the attendance of multiple people can get captured. We have eliminated the need of queue system / paper-pen system of attendance, and just with a single click the attendance is not only captured, but monitored as well, that too without any human intervention. In the proposed system, creation of database and face detection is done by using the concepts of bounding box, whereas for face recognition we employ histogram equalization and matching technique.
Security System based on Sclera RecognitionIRJET Journal
This document summarizes a research paper on a new human identification method called sclera recognition. Sclera recognition uses the unique blood vessel patterns on the white outer part of the eye called the sclera. The document outlines the four main parts of the proposed sclera recognition system: 1) sclera segmentation to isolate the sclera region of an eye image, 2) feature extraction of the blood vessel patterns using algorithms like SURF, 3) feature matching between images, and 4) matching decision based on metrics like false acceptance and rejection rates. The sclera recognition system is described as a promising new biometric identification technique due to the uniqueness of individual sclera patterns.
Finding the shortest path in a graph and its visualization using C# and WPF IJECEIAES
This document summarizes a study that implemented Dijkstra's algorithm to find the shortest path between two vertices in an undirected graph using C# and WPF. It describes Dijkstra's algorithm and how it was programmed using .NET 4.0, Visual Studio 2010, and WPF. The program allows drawing graphs, finding the shortest path between two vertices by highlighting the path, and displaying the path length. Screenshots show example outputs of the program finding shortest paths in sample graphs.
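The algorithm the study implements can be sketched in a few lines. This is a standard heap-based Dijkstra in Python rather than the article's C#/WPF program, with an illustrative toy graph:

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path in a weighted undirected graph.
    graph: {vertex: [(neighbour, weight), ...]} with edges listed both ways."""
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the path by walking predecessors back from the target.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[target], path[::-1]

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
print(dijkstra(graph, "A", "D"))   # -> (4, ['A', 'B', 'C', 'D'])
```

The predecessor map is what lets a GUI like the one described highlight the path, not just report its length.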
New approach to calculating the fundamental matrix IJECEIAES
Estimating the fundamental matrix (F) determines the epipolar geometry and establishes a geometric relation between two images of the same scene or between video frames. The literature offers many techniques for robust estimation, such as RANSAC (random sample consensus), least median of squares (LMedS), and M-estimators. This article compares different detectors (Harris, FAST, SIFT, and SURF) in terms of the number of detected points, the number of correct matches, and the speed of computing F. Our method first extracts descriptors with SURF, chosen over the alternatives for its robustness; it then sets a uniqueness threshold to keep the best points, normalises them, and ranks them according to a weighting function over the image regions; finally F is estimated with an eight-point M-estimator, and the average error and computation speed are measured. Experiments on real images with different viewpoint changes (rotation, lighting, moving objects) show good results in terms of the speed of computing the fundamental matrix and an acceptable average error, suggesting the technique is suitable for real-time applications.
Detection of crossover & bifurcation points on a retinal fundus image by ...eSAT Journals
Abstract: Over the last few decades, analysis of retinal vascular images has gained popularity among researchers. The retina is unique in nature because of its most important features, bifurcation and crossover points, which serve as a reliable basis for authentication. Using these two kinds of points, many authentication problems can be handled easily. The literature shows that in a retinal vascular structure, bifurcation and crossover points need to be separated for the purpose of authentication. With this motivation, we propose a novel method to segregate vascular bifurcation points from crossover points in a retinal image by analysing the neighbourhood connectivity of non-vascular regions around a junction point on the retinal blood vessels. Keywords: retinal vessel analysis, retinal blood vessels, bifurcation and crossover point detection.
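The paper's own method analyses non-vascular regions around a junction; as a closely related standard technique (a stand-in, not the authors' algorithm), the crossing number over a skeleton pixel's 8-neighbourhood shows how branch counts separate bifurcations (3 branches) from crossovers (4):

```python
import numpy as np

# 8-neighbour offsets in circular order around the centre pixel.
_RING = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def crossing_number(skel, r, c):
    """Half the number of 0<->1 transitions around pixel (r, c) of a binary
    vessel skeleton: 3 branches => bifurcation, 4 branches => crossover."""
    ring = [skel[r + dr, c + dc] for dr, dc in _RING]
    return sum(abs(ring[i] - ring[(i + 1) % 8]) for i in range(8)) // 2

skel = np.zeros((7, 7), dtype=int)
skel[3, :] = 1          # a horizontal vessel
skel[:4, 3] = 1         # a vertical vessel ending on it from above
print(crossing_number(skel, 3, 3))   # -> 3: three branches, a bifurcation
```

Extending the vertical vessel straight through the junction turns the count into 4, i.e. a crossover.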
Detection of crossover & bifurcation points on a retinal fundus image by anal...eSAT Publishing House
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
This is the full report of my project in Level 3 Term 1. The project was basically a self-driven vehicle capable of localizing itself in a grid and planning a path between two nodes. It can avoid particular nodes and plan a path between two allowed nodes; the Flood Fill algorithm is used to find that path. The vehicle is also capable of transferring blocks from one node to another. In fact, this vehicle is a prototype of a self-driven vehicle capable of transporting passengers, and it could also be used in industry to transfer items from one place to another.
Sensitivity analysis in a lidar camera calibrationcsandit
In this paper, variability analysis was performed on the model calibration methodology between a multi-camera system and a LiDAR laser sensor (Light Detection and Ranging). Both sensors are used to digitize urban environments. A practical and complete methodology is presented to predict the error propagation inside the LiDAR-camera calibration. We perform a sensitivity analysis in a local and global way. The local approach analyses the output variance with respect to the input; only one parameter is varied at once. In the global sensitivity approach, all parameters are varied simultaneously and sensitivity indexes are calculated on the total variation range of the input parameters. We quantify the uncertainty behaviour in the intrinsic camera parameters and the relationship between the noisy data of both sensors and their calibration. We calculated the sensitivity indexes by two techniques, Sobol and FAST (Fourier amplitude sensitivity test). Statistics of the sensitivity analysis are displayed for each sensor, the sensitivity ratio in laser-camera calibration data
Authentication of a person is a major security concern in this era. Among biometric systems, the signature is one of the behavioural features used for authentication. In this paper we work on offline signatures collected from different persons. Morphological operations are applied to these signature images together with the Hough transform to detect regular shapes that assist in the authentication process. The values extracted from the Hough space are fed to a feed-forward neural network trained with the back-propagation algorithm. After the various training stages, an efficiency above 95% was found. Applications of this system lie in security-related fields: defence, biometric authentication, biometric computer protection, or as a method for analysing changes in a person's behaviour.
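A minimal sketch of the Hough voting step for straight strokes (a plain line accumulator over (rho, theta) space; the paper's exact shape set and parameterisation are not specified here):

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Vote each foreground point into a (rho, theta) accumulator;
    peaks correspond to straight strokes in the signature image."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in points:
        # rho = x*cos(theta) + y*sin(theta), offset by diag to stay non-negative
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# A perfectly vertical stroke: every point shares x = 5.
pts = [(y, 5) for y in range(10)]
acc, diag = hough_lines(pts, (10, 10))
```

The accumulator peak (here 10 votes at theta = 0 degrees, rho = 5) is the kind of value a recogniser can then feed to a classifier.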
Two Methods for Recognition of Hand Written Farsi CharactersCSCJournals
This document describes two methods for recognizing handwritten Farsi characters using neural networks and machine learning techniques. The first method uses wavelet transforms to extract features from character borders and trains a neural network classifier on these features. It achieves 86.3% accuracy on test data. The second method divides characters into groups based on visual properties, extracts moment features for each group, and uses Bayesian classification with a decision tree post-processing step. It achieves an overall recognition rate of 90.64% according to the results presented. Experimental evaluations of both methods on different datasets of handwritten Farsi characters are discussed.
Comparative study of two methods for Handwritten Devanagari Numeral RecognitionIOSR Journals
Abstract: In this paper two different methods for numeral recognition are proposed and their results are compared. The objective is to provide an efficient and reliable method for the recognition of handwritten numerals. The first method employs a grid-based feature extraction and recognition algorithm: the features of the image are extracted using the grid technique, and this feature set is compared with the feature set of the database image for classification. The second method uses the Image Centroid Zone and Zone Centroid Zone algorithms for feature extraction, and the features are applied to an Artificial Neural Network for recognition of the input image. Machine text recognition is an important research area because of its applications in banks, post offices, hospitals, etc.
Keywords: Handwritten Numeral Recognition, Grid Technique, ANN, Feature Extraction, Classification.
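The grid-based feature extraction of the first method can be sketched as follows. The zone grid size and the toy "digit" are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def zone_densities(img, grid=(4, 4)):
    """Split a binary character image into grid zones and use the
    foreground-pixel density of each zone as one feature."""
    gh, gw = grid
    h, w = img.shape
    feats = []
    for i in range(gh):
        for j in range(gw):
            zone = img[i * h // gh:(i + 1) * h // gh,
                       j * w // gw:(j + 1) * w // gw]
            feats.append(zone.mean())          # fraction of ink in this zone
    return np.array(feats)

digit = np.zeros((16, 16))
digit[:, 7:9] = 1.0            # a crude vertical stroke, like a "1"
f = zone_densities(digit)
print(f.round(2))
```

Classification then reduces to comparing such fixed-length feature vectors against those stored for the database images.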
Tracking number plate from vehicle usingijfcstjournal
This document presents a new algorithm in MATLAB to extract vehicle number plates from images in various lighting conditions. The algorithm uses preprocessing techniques like grayscale conversion, dilation, and edge detection. It then segments the region of interest containing the number plate and extracts it. Individual characters are then segmented and recognized using template matching. The algorithm achieves 99% accuracy on images taken from a fixed angle and distance under controlled conditions. It is less accurate for images with problematic backgrounds or lighting. The algorithm provides an automated way to extract number plates for applications like traffic monitoring, parking management, and stolen vehicle identification.
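The character-recognition step via template matching can be sketched with normalised cross-correlation. The 5x5 toy glyphs below are illustrative, not the algorithm's real templates:

```python
import numpy as np

def match_score(patch, template):
    """Normalised cross-correlation between a segmented character patch
    and a stored template (both zero-meaned, same shape)."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float((p * t).sum() / denom) if denom else 0.0

def recognise(patch, templates):
    """Return the label of the best-matching template."""
    return max(templates, key=lambda name: match_score(patch, templates[name]))

t_i = np.zeros((5, 5)); t_i[:, 2] = 1.0                              # toy "I"
t_o = np.zeros((5, 5)); t_o[[0, 4], :] = 1.0; t_o[:, [0, 4]] = 1.0  # toy "O"
noisy = t_i.copy(); noisy[0, 0] = 1.0        # segmented character plus noise
print(recognise(noisy, {"I": t_i, "O": t_o}))   # -> I
```

Normalising by the patch and template energies is what makes the score tolerant to the brightness and contrast variation the abstract mentions.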
This document summarizes a research paper about a simple signature recognition system designed using MATLAB. The system extracts features from signatures using invariant central moment and modified Zernike moment for invariant feature extraction. It is divided into preprocessing, feature extraction, and recognition/verification. Preprocessing prepares the signature image for processing. Feature extraction uses invariant central moments and Zernike moments. Recognition uses a backpropagation neural network for classification. The system was tested on a database of 500 signatures from 50 individuals, achieving high accuracy and low computational complexity.
IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of mechanical and civil engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in mechanical and civil engineering. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document describes an artificial neural network based offline signature recognition system that uses local texture features. It begins with an introduction to signature recognition and motivation for the system. The system objectives are to develop preprocessing, feature extraction, and recognition phases. In preprocessing, signatures are converted to grayscale, binary, noise is reduced, and images are thinned and resized. Feature extraction extracts texture features like entropy, homogeneity, contrast, correlation and energy. Recognition is done using an artificial neural network classifier that compares test signature features to trained features. The system was tested on a database of 95 individuals with 10 signatures each, achieving 85-90% identification accuracy. Local texture features and neural network classification provide an effective approach to offline signature recognition.
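A sketch of two of the listed texture features, energy and entropy, computed from a grey-level co-occurrence matrix. Horizontal neighbours and 4 grey levels are assumptions here; the system's actual GLCM offsets are not stated:

```python
import numpy as np

def glcm(img, levels=4):
    """Grey-level co-occurrence matrix for horizontally adjacent pixels,
    normalised to a joint probability table."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
    return m / m.sum()

def texture_features(img, levels=4):
    p = glcm(img, levels)
    nz = p[p > 0]
    energy  = float((p ** 2).sum())            # high for uniform texture
    entropy = float(-(nz * np.log2(nz)).sum()) # high for varied texture
    return energy, entropy

flat   = np.zeros((8, 8), dtype=int)           # perfectly uniform patch
checks = np.indices((8, 8)).sum(0) % 2         # alternating checkerboard
print(texture_features(flat))                  # -> (1.0, 0.0)
print(texture_features(checks))
```

Homogeneity, contrast, and correlation are further weighted sums over the same co-occurrence table, so the one matrix feeds all five features.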
Numeral recognition is an important research direction in the field of pattern recognition, with broad application prospects. Aiming at the four arithmetic operations in common printed formats, this article adopts a hybrid recognition method applied to automatic calculation. The method mainly uses a BP neural network and template matching to distinguish numerals and operators, in order to increase operation speed and recognition accuracy. Sample images of four arithmetic operations were extracted from printed books and used to test the performance of the proposed recognition method. Experiments show that the method achieves a correct recognition rate of 96% and a correct calculation rate of 89%.
This document summarizes a research paper that proposes an automated fingerprint recognition system using minutiae matching. It involves three main phases: image processing to enhance the fingerprint image and remove noise, minutiae extraction to detect ridge endings and bifurcations, and matching of minutiae points between two fingerprints to determine a match. The proposed system is evaluated on a database of fingerprint images. Results show the percentage of minutiae points correctly detected by the system ranges from 59% to 70%, demonstrating the potential of the minutiae matching technique for large-scale fingerprint recognition.
Face Detection and Recognition using Back Propagation Neural Network (BPNN)IRJET Journal
1) The document discusses face detection and recognition using a back propagation neural network. It aims to recognize faces from images and determine if individuals are authorized.
2) Face detection is used to locate and crop face areas from images. Principal component analysis extracts features for dimension reduction. A back propagation neural network and radial basis function network are then used for classification.
3) The system was tested and achieved high recognition rates. Individual information was stored in a database. The document reviews related work on neural networks and previous implementations of face recognition.
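The PCA dimension-reduction step can be sketched with an SVD; synthetic data stands in for the face features here, and the two-direction structure is an assumption made so the reduction is visible:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                      # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                          # top-k variance directions
    return Xc @ components.T, components

rng = np.random.default_rng(0)
# 100 feature vectors that really live on 2 directions, plus small noise.
basis = rng.normal(size=(2, 50))
X = rng.normal(size=(100, 2)) @ basis + 0.01 * rng.normal(size=(100, 50))
Z, comps = pca_project(X, k=2)
print(Z.shape)        # (100, 2): 50-D vectors reduced to 2-D
```

The reduced vectors Z, rather than raw pixels, are what would be fed to the back-propagation network for classification.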
International Journal of Computational Engineering Research(IJCER)ijceronline
International Journal of Computational Engineering Research(IJCER) is an intentional online Journal in English monthly publishing journal. This Journal publish original research work that contributes significantly to further the scientific knowledge in engineering and Technology.
ONLINE BANGLA HANDWRITTEN COMPOUND WORD RECOGNITION BASED ON SEGMENTATIONcscpconf
In this paper I propose a scheme for online Bangla handwritten compound word recognition based on segmenting a word into its constituent characters with high accuracy. The goal is to develop a system that segments a Bengali compound word into its constituent characters or basic strokes and then recognises each character individually based on stroke generation, so that the recognizer can recognise the entire word. I achieved a correct segmentation rate of 87% and an overall recognition rate of 73% on a dataset of 4,200 Bangla compound words.
Segmentation and recognition of handwritten digit numeral string using a mult...ijfcstjournal
In this paper, the use of a Multi-Layer Perceptron (MLP) neural network model is proposed for recognizing unconstrained offline handwritten numeral strings. The numeral strings are segmented and isolated numerals are obtained using a connected component labeling (CCL) algorithm. The structural part of the models is based on a multilayer perceptron neural network. This paper also presents a new technique to remove slope and slant from a handwritten numeral string, to normalize the size of the text images, and to classify them with supervised learning methods. Experimental results on a database of 102 numeral-string patterns written by 3 different people show that a recognition rate of 99.7% is obtained on the independent digits contained in the numeral strings, including both skewed and slanted data.
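The connected-component labelling step used to isolate individual numerals can be sketched with a BFS flood fill (4-connectivity is assumed; the paper does not state which connectivity it uses):

```python
from collections import deque

def label_components(grid):
    """4-connected component labelling of a binary image via BFS,
    as used to split a numeral string into isolated digits."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for r in range(h):
        for c in range(w):
            if grid[r][c] and not labels[r][c]:
                current += 1                      # start a new component
                q = deque([(r, c)])
                labels[r][c] = current
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return current, labels

# Two separate digit-like blobs in one "numeral string" image.
img = [[1, 1, 0, 0, 1],
       [1, 0, 0, 0, 1],
       [0, 0, 0, 0, 1]]
n, lab = label_components(img)
print(n)   # -> 2: two isolated digits found
```

Each labelled component can then be cropped, size-normalised, and passed to the MLP individually.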
Hybrid fingerprint matching algorithm for high accuracy and reliabilityeSAT Publishing House
Faster Training Algorithms in Neural Network Based Approach For Handwritten T...CSCJournals
Handwritten text and character recognition is a challenging task compared to recognition of handwritten numerals or computer-printed text because of its large natural variety. Practical pattern recognition problems use bulk data, and in principle a one-step deterministic solution exists: compute the inverse of the Hessian matrix and multiply it by the first-order local gradient vector. In practice, when the neural network is large, inverting the Hessian matrix is not manageable, and the Hessian must also be positive definite, which may not hold; in these cases iterative, recursive models are used instead. Research over the past decade has shown that neural network-based approaches give the most reliable performance in handwritten character and text recognition, but performance depends on factors such as the number of training samples, reliable features and the number of features per character, training time, and the variety of handwriting. Important features from different types of handwriting are collected and fed to the neural network for training. More features increase test efficiency, but they lengthen the time needed for the error curve to converge. To reduce this training time, a proper training algorithm should be chosen so that the system provides the best training and test efficiency in the least possible time, that is, the fastest intelligence. We used several second-order conjugate gradient algorithms to train the neural network and found the Scaled Conjugate Gradient (SCG) algorithm, a second-order training algorithm, to be the fastest for our application: training with SCG takes minimum time with excellent test efficiency. A scanned handwritten text is taken as input and character-level segmentation is performed. Important and reliable features are extracted from each character and used as input to a neural network for training. When the error reaches a satisfactory level (10^-12), the weights are accepted and used to test a script. Finally, a lexicon matching algorithm resolves minor misclassifications.
"FingerPrint Recognition Using Principle Component Analysis(PCA)”Er. Arpit Sharma
Fingerprint recognition is one of the oldest and most popular biometric technologies and it is used in criminal investigations, civilian, commercial applications, and so on. Fingerprint matching is the process used to determine whether the two sets of fingerprints details come from the same finger or not. This work focuses on feature extraction and minutiae matching stage. There are many matching techniques used for fingerprint recognition systems such as minutiae based matching, pattern based matching, Correlation based matching, and image based matching.
A new method based on Principal Component Analysis (PCA) for fingerprint enhancement is proposed in this paper. PCA is a useful statistical technique that has found application in fields such as face recognition and image compression, and is a common technique for finding patterns in high-dimensional data. In the proposed method the image is first decomposed into directional images using a decimation-free directional filter bank (DDFB). PCA is then applied to these directional fingerprint images, giving PCA-filtered images, which are themselves directional images. These directional images are then reconstructed into one image, which is the enhanced result. Simulation results are included illustrating the capability of the proposed method.
- The document proposes a novel background subtraction algorithm for urban surveillance systems using big data techniques.
- The algorithm aims to automatically update the background image when no objects are detected, making it robust to changes in lighting conditions. It does this by filtering images to identify high frequency areas where objects are likely to be located.
- A key contribution is a new "grate filter" that is computationally more efficient than existing filters while still effectively identifying object areas. This helps address the high computational demands of processing large volumes of surveillance data.
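A minimal sketch of the selective background update the algorithm describes: an exponential running average, gated by a motion mask so detected objects do not bleed into the model. The "grate filter" itself is not reproduced here, and the learning rate and threshold are illustrative assumptions:

```python
import numpy as np

def foreground(bg, frame, thresh=30):
    """Pixels that differ strongly from the background model."""
    return np.abs(frame.astype(int) - bg.astype(int)) > thresh

def update_background(bg, frame, alpha=0.05, motion_mask=None):
    """Drift the stored background toward the current frame,
    but only where no object was detected."""
    if motion_mask is None:
        motion_mask = np.zeros(frame.shape, dtype=bool)
    out = bg.copy()
    static = ~motion_mask
    out[static] = (1 - alpha) * bg[static] + alpha * frame[static]
    return out

background = np.full((4, 4), 100.0)
frame = background.copy(); frame[1:3, 1:3] = 200.0   # an object enters the scene
mask = foreground(background, frame)
updated = update_background(background, frame, motion_mask=mask)
```

Gradual lighting changes fall under the threshold, so they are absorbed into the model over successive frames, while object pixels are held out, which is the robustness property the summary highlights.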
Similar to System of “Analysis of Intersections Paths” for Signature Recognition (20)
This document provides an overview of wound healing, its functions, stages, mechanisms, factors affecting it, and complications.
A wound is a break in the integrity of the skin or tissues, which may be associated with disruption of the structure and function.
Healing is the body’s response to injury in an attempt to restore normal structure and functions.
Healing can occur in two ways: regeneration and repair. There are 4 phases of wound healing: hemostasis, inflammation, proliferation, and remodeling. This document also describes the mechanism of wound healing. Factors that affect healing include infection, uncontrolled diabetes, poor nutrition, age, anemia, the presence of foreign bodies, etc. Complications of wound healing include infection, hyperpigmentation of the scar, contractures, and keloid formation.
A review of the growth of the Israel Genealogy Research Association Database Collection over the last 12 months. Our collection has now passed the 3 million mark and is still growing. See which archives have contributed the most, the different types of records we have, and which years have had records added. You can also see what we have planned for the future.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
Chapter wise All Notes of First year Basic Civil Engineering.pptxDenish Jangid
Chapter wise All Notes of First year Basic Civil Engineering
Syllabus
Chapter-1
Introduction to objective, scope and outcome the subject
Chapter 2
Introduction: Scope and Specialization of Civil Engineering, Role of civil Engineer in Society, Impact of infrastructural development on economy of country.
Chapter 3
Surveying: Object Principles & Types of Surveying; Site Plans, Plans & Maps; Scales & Unit of different Measurements.
Linear Measurements: Instruments used. Linear Measurement by Tape, Ranging out Survey Lines and overcoming Obstructions; Measurements on sloping ground; Tape corrections, conventional symbols. Angular Measurements: Instruments used; Introduction to Compass Surveying, Bearings and Longitude & Latitude of a Line, Introduction to total station.
Levelling: Instrument used Object of levelling, Methods of levelling in brief, and Contour maps.
Chapter 4
Buildings: Selection of site for Buildings, Layout of Building Plan, Types of buildings, Plinth area, carpet area, floor space index, Introduction to building byelaws, concept of sun light & ventilation. Components of Buildings & their functions, Basic concept of R.C.C., Introduction to types of foundation
Chapter 5
Transportation: Introduction to Transportation Engineering; Traffic and Road Safety: Types and Characteristics of Various Modes of Transportation; Various Road Traffic Signs, Causes of Accidents and Road Safety Measures.
Chapter 6
Environmental Engineering: Environmental Pollution, Environmental Acts and Regulations, Functional Concepts of Ecology, Basics of Species, Biodiversity, Ecosystem, Hydrological Cycle; Chemical Cycles: Carbon, Nitrogen & Phosphorus; Energy Flow in Ecosystems.
Water Pollution: Water Quality standards, Introduction to Treatment & Disposal of Waste Water. Reuse and Saving of Water, Rain Water Harvesting. Solid Waste Management: Classification of Solid Waste, Collection, Transportation and Disposal of Solid. Recycling of Solid Waste: Energy Recovery, Sanitary Landfill, On-Site Sanitation. Air & Noise Pollution: Primary and Secondary air pollutants, Harmful effects of Air Pollution, Control of Air Pollution. . Noise Pollution Harmful Effects of noise pollution, control of noise pollution, Global warming & Climate Change, Ozone depletion, Greenhouse effect
Text Books:
1. Palancharmy, Basic Civil Engineering, McGraw Hill publishers.
2. Satheesh Gopi, Basic Civil Engineering, Pearson Publishers.
3. Ketki Rangwala Dalal, Essentials of Civil Engineering, Charotar Publishing House.
4. BCP, Surveying volume 1
Exploiting Artificial Intelligence for Empowering Researchers and Faculty, In...Dr. Vinod Kumar Kanvaria
Exploiting Artificial Intelligence for Empowering Researchers and Faculty,
International FDP on Fundamentals of Research in Social Sciences
at Integral University, Lucknow, 06.06.2024
By Dr. Vinod Kumar Kanvaria
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
How to Make a Field Mandatory in Odoo 17Celine George
In Odoo, making a field required can be done through both Python code and XML views. When you set the required attribute to True in Python code, it makes the field required across all views where it's used. Conversely, when you set the required attribute in XML views, it makes the field required only in the context of that particular view.
it describes the bony anatomy including the femoral head , acetabulum, labrum . also discusses the capsule , ligaments . muscle that act on the hip joint and the range of motion are outlined. factors affecting hip joint stability and weight transmission through the joint are summarized.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
How to Build a Module in Odoo 17 Using the Scaffold Method
System of “Analysis of Intersections Paths” for Signature Recognition

Farhad Shamsfakhr farhad_sm@ymail.com
Faculty of Engineering / Computer Engineering Department
Islamic Azad University, Hamedan Branch
Hamedan, 65138-734, Iran

International Journal of Image Processing (IJIP), Volume (5) : Issue (5) : 2011
Abstract

In today's world, the electronic city, an offspring of the development of the information world, paves the way for round-the-clock interaction among computers and networks. The planners of these electronic cities are mostly concerned about the accuracy and security of the exchanged information. In order to raise security, speed, and accuracy in reviewing network performance and dependably identifying the persons involved in electronic operations, verifying the authenticity of the electronic signature is deemed absolutely essential. In this article, a system named "Analysis of Intersections" is used for the accurate recognition of electronic signatures. Important features of this system include the use of simple data structures such as arrays, stacks, and lists, and the ability to set the sensitivity level of signature verification by specifying an error percentage for size and shape recognition. An accuracy recognition test was performed on 15 samples each of 150 types of signatures using "Analysis of Intersections". The system correctly recognized 2,220 out of 2,250 signatures, a recognition rate of 98.66 percent.
Keywords: Analysis of Intersections, Intersection, Threshold, Rotational Routing Algorithm,
Intersection Recognition Samples, Adaptation of Paths.
1. INTRODUCTION
With the ever-increasing advancement of information technology over recent years, electronic operations have gained momentum. Meanwhile, the dissemination of personal and organizational information over the insecure worldwide web, and the easy access of individuals and organizations to internet resources, raise concerns about unauthorized access to users' personal information and breaches of privacy. To safeguard the privacy of their users, planners of electronic cities have taken measures to define an identity for each user. This can take the form of a username and password, or of identifying unique characteristics of individuals such as the fingerprint or the face. The question is which method is the fastest, most precise, and most cost-effective. In fact, in order to increase the accuracy and precision of information on the one hand and the speed of completing electronic operations on the other, a sensible, optimal, intelligent, and easy-to-use method is needed. A proper option for securing users' entrance to the web is the accurate and intelligent identification of individuals' signatures. Using intelligent systems, both the speed of signature recognition and its accuracy are augmented, which is crucial when performing important monetary, information, and security operations [2]. The method introduced in this article concerns offline electronic signatures. In electronic signatures there is no noise or halo of colors. In this study, we analyze the signature image as a collection of paths and introduce the main algorithms as follows: 1) the path-finding algorithm, implemented by the "Around_Perceive(x,y)" and "Way_Finder(x,y,z)" functions; 2) the intersection recognition algorithm, implemented by the "Inter_Section(x,y)" function; 3) the end-checking algorithm, implemented by "sensing_opr_End_checker"; and 4) a unique algorithm for comparing and matching samples, explained later along with the other algorithms. We refer to this system as "Analysis of Intersections" [3]. By using electronic signatures, we do not need pre-processing operations such as elimination
of noises. In some systems, however, the image of the signature contains a halo of colors instead of a single signature color. In such a case, a pre-processing operation is needed to reduce the halo of colors to a single color (e.g. black). After receiving the signature as a bitmap file, we separate the signature from the background, changing the color of the background to white and the color of the signature to black.
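As a minimal Python sketch of this pre-processing step, a grayscale image can be split into white background and black signature by thresholding. The list-of-rows representation and the threshold value 128 are illustrative assumptions, not from the paper:

```python
def binarize(image, threshold=128):
    """image: list of rows of grayscale values 0..255.
    Pixels darker than the threshold become signature (0, black);
    everything else becomes background (255, white)."""
    return [[0 if px < threshold else 255 for px in row] for row in image]
```

For example, `binarize([[10, 200], [130, 90]])` yields a two-tone image in which only the dark pixels remain black.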
FIGURE 1: shows the main image of the signature.
FIGURE 2: the pre-processed image.
The program starts by calling the "Image_Analysis()" function. This function scans the image array to find the first pixel of the signature image (the first black pixel). After the first pixel is found, the "Around_Perceive(x,y)" function is called. The task of this function is to search the neighborhood of the received pixel in order to find the path to navigate next. It finds the right path, navigates it, and continues until it reaches the first intersection. The design of this function prevents the registration of repeated paths and being trapped in loops known as "wrong paths". We take the pixel indicated in the following figure to be the first pixel of the signature image, as found by the "Image_Analysis()" function.

FIGURE 3: shows the magnified image of the signature.

After receiving the x and y of the first pixel as input parameters, the "Around_Perceive(x,y)" function examines the 8 pixels around the current pixel in order to find one pixel with which to continue the path. In this way the path is navigated, and this continues until the first intersection in the signature image is reached. Further explanations are given in the Appendix. Each time this function finds a pixel of the path, we must examine the position of that pixel in order to detect the intersection. To this end, we call the "Inter_Section" function. This function receives the path pixel and decides whether it is at an intersection or not. If yes, the intersection is found and "Around_Perceive" has completed its job successfully. If not, "Around_Perceive" is called again with the same pixel.
2. INTERSECTION
An intersection is defined as a pixel where at least 4 different paths meet [4]. In the Inter_Section function, various configurations of the intersection, known as "intersection recognition samples", are defined [1]. Comparing the position of the path pixel (the input pixel of the Inter_Section function) with the pixels around it indicates whether it can be a candidate for the intersection center or not.
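A simplified Python sketch of this neighbourhood test follows. Instead of the stored recognition samples, it merely counts black neighbours, which is an assumed approximation: a black pixel touched by at least 4 black neighbours becomes a candidate centre (0 = black, 255 = white, image as a list of rows):

```python
def black_neighbors(img, x, y):
    # count black (0) pixels among the 8 neighbours of (x, y)
    return sum(1 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dx, dy) != (0, 0) and img[y + dy][x + dx] == 0)

def is_candidate_center(img, x, y):
    # a candidate only; whether it is a definite centre is decided
    # later by Rotational Routing
    return img[y][x] == 0 and black_neighbors(img, x, y) >= 4
```

On a plus-shaped stroke, the crossing pixel qualifies as a candidate while a blank pixel does not.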
FIGURE 4: shows some of the intersection recognition samples.
2.1 Intersection Recognition
After a pixel becomes a candidate for the center of an intersection, the function decides whether this pixel is a definite center or only a candidate [5]. Pixel A in Figure 5 is only a candidate center, as it is not the meeting point of at least 4 distinct paths but just a path pixel. In Figure 6, however, pixel A is a definite center, as it is the meeting point of at least 4 distinct paths. If a pixel is the meeting point of 3 distinct paths, it is known as the center of a three-way junction, which may become a definite intersection center after a virtual path is added. To decide whether an intersection center is definite or not, the "Inter_Section" function uses an algorithm known as Rotational Routing, described fully in the Appendix.
FIGURE 5: shows the candidate intersection center.
FIGURE 6: depicts the definite one.
After the intersection is recognized, all paths ending at the intersection are registered temporarily in the array, together with the direction of every path, and then the paths are navigated. Navigating these paths is done by the "Way_Finder(x,y,z)" function. As said before, every path has a direction. In order to optimize searches, the method of navigating each path is based on the direction of that path; therefore, for navigating two paths with two different directions, we use two different methods, described fully in the Appendix. Navigation continues to the end of the path or until an intersection is met. If the path is not a repeated one, it is registered in the array. The "Way_Finder(x,y,z)" function examines the position of the pixel at every stage of navigation in order to detect the end of the path. To find out whether the path has terminated, the "sensing_opr_End_checker" function is used. This function receives a pixel from "Way_Finder" and decides whether the current path has ended. If it has not, it calls "Way_Finder" again. In this way, all the paths ending at the first intersection of the signature are registered in the array. Then all the paths of this intersection are removed from the signature image, and all stages of the operation are repeated until no unnavigated path remains in the signature image. If a path does not end at an intersection, it is called a separated path; such paths are removed without being registered in the array. Figure 8 shows the signature analysis stages of Figure 7. In every stage of signature processing, the navigated paths of one intersection are shown, along with the signature image after that stage of processing. The red path is a virtual path that we add to three-way junctions so that they are turned into intersections.
FIGURE 7: shows a signature sample.
FIGURE 8: shows the stages of processing the paths of the signature, with the processed paths and the resulting signature image at each stage.
3. ADAPTATION OF PATHS
After storing all the pixels of every path in the array, we use these paths to identify the signature. To identify the signature, a "Path Information Table" must first be formed for each path of the signature. This table includes: 1) the starting point, containing the features of the first pixel of the path; 2) the ending point, containing the features of the last pixel of the path; 3) the x range, Dx = Max(x) - Min(x), the range of x values in the path; 4) the y range, Dy = Max(y) - Min(y), the range of y values in the path; and 5) the whole path length, the number of pixels in the path. The "Path_Receipt_Data_Process" function creates the Path Information Table, fully described in the Appendix. After such a table is created for all paths, we decide on the shape, direction, size, and slope of every path based on the information in this table, as follows. First, a Measure is defined for every signature. The Measure is the unit in which the length of a path is expressed. For instance, if the length of a path is 30 pixels and its measure is 3, the length of the path is 10 measures.

Measure = Path Length / n

By using the Measure, size is no longer a limiting factor; therefore, a signature of any size can easily
be recognized. Suppose that Harry Hilton signed his signature in two different sizes.

FIGURE 9: shows a path of his first signature.
3.1) path length = 180 pixels, Measure = 180/10 = 18, path length = 180/18 = 10 units.

FIGURE 10: a path of his second signature.

3.2) path length = 78 pixels, Measure = 78/10 = 7.8, path length = 78/7.8 = 10 units.

The lengths of both paths are 10 units.
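The size normalization above can be sketched in Python; with n = 10 assumed as the per-signature constant (as in the paper's example), every path comes out at the same length in measures regardless of scale:

```python
def length_in_measures(path_length, n=10):
    """Express a path length in scale-invariant 'measure' units,
    per Measure = Path Length / n."""
    measure = path_length / n        # pixels per measure unit
    return path_length / measure     # always n units, whatever the scale
```

Both the 180-pixel and the 78-pixel paths from the example evaluate to 10 units.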
Threshold
The threshold is the amount we use as the error limit. If, for the rectangle in Figure 11, the threshold is 4, the rectangle is considered complete if we ignore 4 pixels (error limit = 4).

FIGURE 11: with a threshold of 4, the rectangle is considered complete if 4 pixels are ignored (error limit = 4).
In this article we calculate Span_Limit = Path Length / 7; a different divisor can also be used. The smaller the Span_Limit, the smaller the error with which shapes are identified relative to the real shapes; the bigger the Span_Limit, the bigger the adaptation errors that are ignored. The slope of each path is calculated as follows:

y = mx + b ,  m = dy/dx = (y2 - y1)/(x2 - x1)
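These two quantities are straightforward to compute; a small Python sketch (vertical paths, where x2 = x1, would need separate handling by the caller):

```python
def span_limit(path_length):
    # error tolerance used when matching shapes; the divisor 7 follows the
    # paper, but another divisor may be substituted
    return path_length / 7

def slope(x1, y1, x2, y2):
    # m = (y2 - y1) / (x2 - x1), from y = m*x + b
    return (y2 - y1) / (x2 - x1)
```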
We apply our own definitions for the shape, direction, and size of paths. For example, the path shown in Figure 12 is called a big quasi-rectangle.

FIGURE 12: big quasi-rectangle.
Our definition of shapes and directions is given in the Appendix.
The shape, direction, slope, and size of a path are referred to as the features of the path. To verify a signature, the following operation is performed: the number of paths in signature A is compared with the number of paths in signature B. If they are equal, the features of every path in one signature are compared with those of the corresponding path in the other signature. If the features of the paths in both signatures are identical, the signature is verified. To improve identification performance, a certain percentage of size difference must be ignored. For instance, if the features of every single path in one signature are exactly identical to those of the corresponding path in the other signature, but the sizes or lengths of the paths differ by 1 or 2 units, we may consider the lengths of
paths as identical after ignoring 2 units of discrepancy. An accuracy recognition test was performed on 15 samples each of 150 types of signatures using "Analysis of Intersections". Findings indicated that this system correctly recognized 2,220 out of a total of 2,250 signatures, a recognition rate of 98.66 percent.
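The matching rule described above can be sketched in Python. The feature-tuple layout (shape, direction, slope, length) and the tolerance of 2 units are assumptions taken from the paper's example, not its exact data structures:

```python
def signatures_match(paths_a, paths_b, length_tolerance=2):
    """Two signatures match when they have the same number of paths and
    every corresponding pair of paths agrees on shape, direction and
    slope, with lengths allowed to differ by a small tolerance."""
    if len(paths_a) != len(paths_b):
        return False
    for (sh_a, dir_a, sl_a, len_a), (sh_b, dir_b, sl_b, len_b) in zip(paths_a, paths_b):
        if (sh_a, dir_a, sl_a) != (sh_b, dir_b, sl_b):
            return False
        if abs(len_a - len_b) > length_tolerance:
            return False
    return True
```

For instance, two single-path signatures whose lengths differ by one unit still match, while a four-unit difference fails.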
Authors                  Method                  Results
Ammar, M., 1991          Distance Threshold      85.94%
Ammar et al., 1990       Distance Statistics     88.15%
Quek & Zhou, 2002        Neuro-Fuzzy Network     96%

TABLE 2: Comparative analysis of different off-line signature verification systems.
4. REFERENCES
[1] S. E. Umbaugh, Computer Imaging: Digital Image Analysis and Processing, CRC Press, 2005.
[2] M. Savov and G. Gluhchev, "Automated Signature Detection from Hand Movement," International Conference on Computer Systems and Technologies (CompSysTech), 2004.
[3] A. Zimmer and L. Luan Ling, "A Hybrid On/Off Line Handwritten Signature Verification System," Proceedings of the Seventh International Conference on Document Analysis and Recognition (ICDAR 2003), IEEE, 2003.
[4] "A recognition algorithm for the intersection graphs of paths in trees," Department of Mathematical Sciences, Tel-Aviv University, Ramat-Aviv, Tel-Aviv, Israel, 1977.
[5] "A recognition algorithm for the intersection graphs of directed paths in directed trees," Department of Computer Science, University of Illinois, Urbana, IL 61801, USA.
Annex
1. "Around_Perceive(x,y)" Function
The task of this function is to search around the pixel it receives in order to find the right route ahead, and to keep doing so until it reaches the first intersection. To this end, the eight pixels around the current pixel are surveyed, and the first black pixel found is taken as the pixel with which the route continues. The adjacent pixels are surveyed line by line, from the first line and first column to the last line and last column, in the following way.
FIGURE 1: pixels around the current pixel. (Suppose that each pixel block measures 15x15.)

For j = y - 15 To y + 15 Step 15
    For i = x - 15 To x + 15 Step 15
        If i <> x Or j <> y Then
            color = pic3.Point(i, j)
            If color = 0 Then
                ' black neighbour found: continue the route from here
                ...
            End If
        End If
    Next i
Next j
The current pixel, i.e. the input of the Around_Perceive(x,y) function in each phase, is referred to as the parent pixel, and the candidate pixels of the same phase are called child pixels. When a candidate pixel is taken as the route, its parent pixel is stored in an array named Pixels_Memory. After the candidate pixels have been found in each phase, they are checked for being repeated pixels, and two cases can occur. First: there is at least one unrepeated candidate pixel. The absence of a candidate pixel from the Pixels_Memory array indicates that the route has been found and that it passes through this candidate pixel; its parent pixel is then stored in the array, provided it is not already there. Second: all candidate pixels are repeated pixels. If all candidate pixels are already present in the Pixels_Memory array, the route has not been found, and a stack is used; in this case, too, the parent pixel is stored in the array provided it is not a repeated pixel. Failure to find a route indicates that the route covered so far is wrong and that we must go back and take another way. A pointer points at the end of the Pixels_Memory array; we therefore decrease the pointer by one unit per phase and consider as new candidate pixels the child pixels of the entry the pointer indicates, continuing until a route appears. In effect, we turn the Pixels_Memory array into a stack by using the pointer, and call the function with the values available from the stack in the following way:

Around_Perceive(Pixels_memory(pointer).x, Pixels_memory(pointer).y)
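The backtracking behaviour of that pointer can be sketched in Python. The callback `has_unvisited_neighbor` and the list-of-pixels representation are assumptions standing in for the paper's visited-pixel checks:

```python
def backtrack(pixels_memory, pointer, has_unvisited_neighbor):
    """Walk the pointer back through Pixels_Memory (used as a stack),
    one entry per phase, until an entry with an unvisited neighbour
    is found; return -1 when no route remains on this path."""
    while pointer >= 0 and not has_unvisited_neighbor(pixels_memory[pointer]):
        pointer -= 1
    return pointer
```

Navigation would then resume by calling Around_Perceive with the coordinates at the returned pointer.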
2. Intersection Identification Function
How many patterns are required to identify an intersection? Can intersection candidates in a signature be identified using these patterns? It may appear at first that, to improve intersection identification performance, we must create all combinations of N = 4 black pixels in the M - 1 = 8 neighbouring locations; in other words, that we should need 70 different intersection identification patterns:

C(8,4) = 8! / (4!(8-4)!) = 70
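The combination count can be checked directly with Python's standard library:

```python
import math

# C(8,4): the number of ways to choose 4 black pixels among 8 neighbouring cells
patterns = math.comb(8, 4)
```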
It is evident that creating this number of patterns would be time-consuming, and it is an unnecessary effort; there is no requirement to take every 4-of-8 combination as a pattern. We take as patterns only the configurations occurring most frequently at the intersections of a signature, and these do not exceed 40 in number. Since every pixel adjacent to the input pixel of the Inter_Section function is compared against the intersection patterns in turn, any pixel conforming to none of our patterns is skipped and gives its place to the next adjacent pixel in the order of comparison. As an example, suppose that pattern 1 has not been defined in the function, that pattern 2 has been defined, and that we find the following configuration in the signature. Evidently we cannot identify the position of pixel A in the intersection, because we have not defined pattern 1; however, we can identify the position of pixel B, as pattern 2 has been defined in the function.
FIGURE 2: position of pixel A in the intersection
FIGURE 3: position of pixel B in the intersection
FIGURE 4: pattern1 FIGURE 5: pattern2
As observed, the intersection was finally identified with the definition of one pattern.
3. ROTATIONAL ROUTING
To determine whether a pixel can be the center of a definite intersection, the Inter_Section function makes use of a method we call Rotational Routing. Because of the congestion of pixels at an intersection, the directions of the routes must be distinguished at a point where the route is free of the pixel congestion. Therefore, after the intersection center candidate has been identified, all possible routes on the eight sides of the intersection are counted clockwise by the rotational routing method. Keep in mind that the starting point of these routes is taken at a clearance of three (3) pixels from the intersection center, so as to separate it from the congested area of pixels around the center. Rotational routing at a depth of 3 is considered here; however, rotational routing at a depth of 4 may be used as an alternative, with the advantages of greater flexibility and error prevention.
FIGURE 6: Rotational Routing at a depth of 3.
FIGURE 7: Rotational Routing at a depth of 4.
The following figure illustrates the functioning of rotational routing at a depth of 3.
FIGURE 8: functioning of rotational routing at a depth of 3.
(x,y) is the coordinate of the intersection center (pixel blocks measure 15x15 in this example), and A is the center of the intersection. Searches 1, 2, 3, 5, 6, and 7 are made at a clearance of 3 pixel blocks from the intersection center (x-45 or y-45). Searches 4 and 8 are made at an optional clearance from the center; however, this clearance should be neither so small that it falls within the congested area of pixels, nor so large that it makes the discovered route excessively short. Each search starts at the beginning of its search area and continues to the end, terminating upon reaching the first black pixel. The pixel so encountered is recorded in the array as the first pixel of the discovered route. Each intersection has a maximum of eight routes, hence eight searches are required to discover them. Search No. 1 is dedicated to the discovery of route No. 1 (the northwestern route); its search length covers the distance from an optional clearance, say x-225, to the specified clearance x-45. Search No. 2 is dedicated to route No. 2 (the northern route); its search length covers the distance from x-30 to x+30. Search No. 3 is dedicated to route No. 3 (the northeastern route); its search length covers the distance from an optional clearance, say x+225, to the specified clearance x+45. Search No. 4 is dedicated to route No. 4 (the eastern route); its search length covers the distance from y-30 to y+30. Search No. 5 is dedicated to route No. 5 (the southeastern route); its search length covers the distance from an optional clearance, say x+225, to the known clearance x+45. Search No. 6 is dedicated to route No. 6 (the southern route); its search length covers the distance from x-30 to x+30. Search No. 7 is dedicated to route No. 7 (the southwestern route); its search length covers the distance from an optional clearance, say x-225, to the specified clearance x-45. Search No. 8 is dedicated to route No. 8 (the western route); its search length covers the distance from y-30 to y+30. Thus all the routes, their numbers, and their directions are stored in an array named Inter_Section_Ways_Array by the rotational routing algorithm. If the number of routes is equal to or exceeds 4, the candidate pixel is a definite intersection; otherwise, the Around_Perceive function is called with a candidate pixel and the remainder of the route is traversed until an intersection appears again.
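A loose Python sketch of this decision rule follows. It samples the eight compass directions clockwise at the given depth and counts the directions that hit a black pixel; a unit pixel grid is assumed here instead of the paper's 15x15 blocks, and the per-direction sweep along each search area is simplified to a single probe:

```python
# the eight directions, clockwise starting from northwest
DIRECTIONS = [(-1, -1), (0, -1), (1, -1), (1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0)]

def count_routes(img, cx, cy, depth=3):
    """Count routes around candidate centre (cx, cy), probing `depth`
    pixels out from the centre in each direction (0 = black)."""
    routes = 0
    for dx, dy in DIRECTIONS:
        x, y = cx + dx * depth, cy + dy * depth
        if 0 <= y < len(img) and 0 <= x < len(img[0]) and img[y][x] == 0:
            routes += 1
    return routes

def is_definite_intersection(img, cx, cy, depth=3):
    # 4 or more routes make the candidate a definite intersection
    return count_routes(img, cx, cy, depth) >= 4
```

On a plus-shaped stroke with arms at least three pixels long, the crossing point is classified as definite while a pixel on an arm is not.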
4. OPTIMIZATION OF SEARCHES
Every pixel searched by the Way_Finder function for the continuation of a route should have a specific search method of its own. A uniform order of searches applied to all routes would result in an increased number of comparisons, the emergence of errors, and a drop in the processing rate of the program. Suppose that the current pixel is pixel A and the purpose is to find pixel B for the continuation of the route.
' Search order 1:
For j = y - 15 To y + 15 Step 15
    For i = x - 15 To x + 15 Step 15

' Search order 2:
For j = y - 15 To y + 15 Step 15
    For i = x + 15 To x - 15 Step -15

' Search order 3:
For j = y + 15 To y - 15 Step -15
    For i = x - 15 To x + 15 Step 15

' Search order 4:
For j = y + 15 To y - 15 Step -15
    For i = x + 15 To x - 15 Step -15
FIGURE 9: every route along with its proper search order (each pixel block measures 15x15).
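The idea behind the four orders above can be sketched in Python: the scan direction of the two nested loops is flipped depending on which way the route is headed, so the neighbour most likely to continue the route is examined first. A step of 1 is assumed here instead of the paper's 15-pixel blocks, and the direction flags are illustrative names:

```python
def neighbor_order(x, y, left_to_right, top_to_bottom):
    """Return the 8 neighbours of (x, y) in the scan order selected
    by the two direction flags."""
    xs = range(x - 1, x + 2) if left_to_right else range(x + 1, x - 2, -1)
    ys = range(y - 1, y + 2) if top_to_bottom else range(y + 1, y - 2, -1)
    return [(i, j) for j in ys for i in xs if (i, j) != (x, y)]
```

Flipping both flags reverses which corner neighbour is visited first.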
5. RULES OF FORM AND ROUTE DIRECTION IDENTIFICATION
The following are the rules for identifying the form and direction of a route. In each line, the identification rule, the name of the route form, an example of the form, and the direction of the route are given.
FIGURE 10: sample
Law                                                      Shape                  Direction
1.1) If Abs(DX - DY) < Span_Limit                        Half square            horizontal
1.2) If ((DX - DY) >= Span_Limit) & (DX <= 2 * DY)       Half rectangular       horizontal
1.3) If (DX > Span_Limit) & ((DX + Span_Limit) <= DY)    small Half square      horizontal
1.4) If (DX > 2 * DY)                                    big Half rectangular   horizontal

TABLE 1: Horizontal form identification rules. (Gate rule 1.: If Abs(yn - y1) > Span_Limit & Abs(xn - x1) <= Span_Limit)
If none of the aforesaid rules applies to a form, the following rules are surveyed:
Law                                                      Shape                  Direction
2.1) If Abs(DX - DY) < Span_Limit                        Half square            vertical
2.2) If ((DY - DX) >= Span_Limit) & (DY <= 2 * DX)       Half rectangular       vertical
2.3) If (DY > Span_Limit) & ((DY + Span_Limit) <= DX)    small Half square      vertical
2.4) If (DY > 2 * DX)                                    big Half rectangular   vertical

TABLE 2: Vertical form identification rules. (Gate rule 2.: If Abs(yn - y1) <= Span_Limit & Abs(xn - x1) > Span_Limit)
If none of the above rules applies to a form, the following rules are considered.
Law                          Shape             Direction
3) If (DY <= Span_Limit)     Horizontal line   horizontal
4) If (DX <= Span_Limit)     Vertical line     vertical

TABLE 3: Secondary rules.
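Tables 1 to 3 can be strung together as a single Python sketch. The gate rules select the horizontal or vertical family from the path's endpoints, the inner rules pick the shape from DX, DY and Span_Limit, and the line rules apply when neither family matches; the exact rule precedence is an interpretation of the tables:

```python
def classify(x1, y1, xn, yn, dx, dy, span):
    """Return (shape, direction) for a path with endpoints (x1, y1)-(xn, yn),
    ranges dx = Max(x)-Min(x), dy = Max(y)-Min(y), and tolerance span."""
    if abs(yn - y1) > span and abs(xn - x1) <= span:      # gate rule (1.), Table 1
        if abs(dx - dy) < span:                  return ("Half square", "horizontal")
        if (dx - dy) >= span and dx <= 2 * dy:   return ("Half rectangular", "horizontal")
        if dx > span and dx + span <= dy:        return ("small Half square", "horizontal")
        if dx > 2 * dy:                          return ("big Half rectangular", "horizontal")
    if abs(yn - y1) <= span and abs(xn - x1) > span:      # gate rule (2.), Table 2
        if abs(dx - dy) < span:                  return ("Half square", "vertical")
        if (dy - dx) >= span and dy <= 2 * dx:   return ("Half rectangular", "vertical")
        if dy > span and dy + span <= dx:        return ("small Half square", "vertical")
        if dy > 2 * dx:                          return ("big Half rectangular", "vertical")
    if dy <= span:                               return ("Horizontal line", "horizontal")  # rule 3
    if dx <= span:                               return ("Vertical line", "vertical")      # rule 4
    return ("unclassified", None)
```

A tall path with nearly equal ranges classifies as a horizontal Half square, while a long flat path falls through to the Horizontal line rule.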